Core Manual
Release 4.8
core-dev
CONTENTS

1   Introduction
    1.1   What's New?
    1.2   Architecture
    1.3   How Does it Work?
    1.4   Prior Work
    1.5   Open Source Project and Resources

2   Installation
    2.1   Prerequisites
    2.2   Installing from Packages
    2.3   Installing from Source
    2.4   Quagga Routing Software
    2.5   VCORE

3   Using the CORE GUI

4   Python Scripting

5   Machine Types
    5.1   netns
    5.2   physical
    5.3   xen

6   Control Network
    6.1   Activating the Primary Control Network
    6.2   Control Network in Distributed Sessions
    6.3   Auxiliary Control Networks

7   EMANE
    7.1   What is EMANE?
    7.2   EMANE Configuration
    7.3   Single PC with EMANE
    7.4   Distributed EMANE

8   ns-3
    8.1   What is ns-3?
    8.2   ns-3 Scripting
    8.3   Integration details
    8.4   Mobility
    8.5   Under Development

9   Performance

10  Developer's Guide
    10.1  Coding Standard
    10.2  Source Code Guide
    10.3  The CORE API
    10.4  Linux network namespace Commands
    10.5  FreeBSD Commands

11  Acknowledgments

Index
CHAPTER ONE
INTRODUCTION
The Common Open Research Emulator (CORE) is a tool for building virtual networks. As an emulator, CORE builds
a representation of a real computer network that runs in real time, as opposed to simulation, where abstract models
are used. The live-running emulation can be connected to physical networks and routers. It provides an environment
for running real applications and protocols, taking advantage of virtualization provided by the Linux or FreeBSD
operating systems.
Some of its key features are:
- efficient and scalable
- runs applications and protocols without modification
- easy-to-use GUI
- highly customizable
CORE is typically used for network and protocol research, demonstrations, application and platform testing, evaluating
networking scenarios, security studies, and increasing the size of physical test networks.
1.2 Architecture
The main components of CORE are shown in CORE Architecture. A CORE daemon (backend) manages emulation
sessions. It builds emulated networks using kernel virtualization for virtual nodes and some form of bridging and
packet manipulation for virtual networks. The nodes and networks come together via interfaces installed on nodes.
The daemon is controlled via the graphical user interface, the CORE GUI (frontend). The daemon uses Python
modules that can be imported directly by Python scripts. The GUI and the daemon communicate using a custom,
asynchronous, sockets-based API, known as the CORE API. The dashed line in the figure notionally depicts the
user-space and kernel-space separation. The components the user interacts with are colored blue: GUI, scripts, or
command-line tools.
The system is modular to allow mixing different components. The virtual networks component, for example, can be
realized with other network simulators and emulators, such as ns-3 and EMANE. Different types of kernel virtualization are supported. Another example is how a session can be designed and started using the GUI, and continue to
run in headless operation with the GUI closed. The CORE API is sockets based, to allow the possibility of running
different components on different physical machines.
1.3 How Does it Work?
FreeBSD CORE uses jails with a network stack virtualization kernel option to build virtual nodes, and ties them together with virtual networks using BSD's Netgraph system.
1.3.1 Linux
Linux network namespaces (also known as netns, LXC, or Linux containers) are the primary virtualization technique used by CORE. LXC has been part of the mainline Linux kernel since 2.6.24. Recent Linux distributions such as Fedora and Ubuntu have namespaces-enabled kernels out of the box, so the kernel does not need to be patched or recompiled. A namespace is created using the clone() system call. Similar to BSD jails, each namespace has its own process environment and private network stack. Network namespaces share the same filesystem in CORE.
CORE combines these namespaces with Linux Ethernet bridging to form networks. Link characteristics are applied using Linux Netem queuing disciplines. Ebtables provides Ethernet frame filtering on Linux bridges. Wireless networks are emulated by controlling which interfaces can send and receive with ebtables rules.
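To illustrate these building blocks, the following manual sketch (not the commands CORE itself issues; the veth, bridge, and namespace names are made up) creates a namespace, attaches it to a bridge with a veth pair, and applies a netem queuing discipline:

# create a namespace and a veth pair; move one end into the namespace
sudo ip netns add n1
sudo ip link add veth0 type veth peer name veth0.1
sudo ip link set veth0.1 netns n1
# bridge the host-side end to form a virtual network
sudo brctl addbr b.test
sudo brctl addif b.test veth0
sudo ip link set veth0 up
sudo ip link set b.test up
# apply link characteristics with a netem queuing discipline
sudo tc qdisc add dev veth0 root netem delay 10ms loss 1%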
1.3.2 FreeBSD
FreeBSD jails provide an isolated process space, a virtual environment for running programs. Starting with FreeBSD 8.0, a new vimage kernel option extends BSD jails so that each jail can have its own virtual network stack, with its own networking variables such as addresses, interfaces, routes, counters, protocol state, socket information, etc. The existing networking algorithms and code paths are intact but operate on this virtualized state.
Each jail plus network stack forms a lightweight virtual machine. These are named jails or virtual images (or vimages) and are created using the jail or vimage command. Unlike traditional virtual machines, vimages do not feature entire operating systems running on emulated hardware. All of the vimages share the same processor, memory, clock, and other system resources. Because the actual hardware is not emulated and network packets can be passed by reference through the in-kernel Netgraph system, vimages are quite lightweight and a single system can accommodate numerous instances.
Virtual network stacks in FreeBSD were historically available as a patch to the FreeBSD 4.11 and 7.0 kernels, and the VirtNet project [1] [2] added this functionality to the mainline 8.0-RELEASE and newer kernels.
The FreeBSD Operating System kernel features a graph-based networking subsystem named Netgraph. The netgraph(4) manual page quoted below best defines this system:
The netgraph system provides a uniform and modular system for the implementation of kernel objects
which perform various networking functions. The objects, known as nodes, can be arranged into arbitrarily complicated graphs. Nodes have hooks which are used to connect two nodes together, forming the
edges in the graph. Nodes communicate along the edges to process data, implement protocols, etc.
The aim of netgraph is to supplement rather than replace the existing kernel networking infrastructure.
[1] http://www.nlnet.nl/project/virtnet/
[2] http://www.imunes.net/virtnet/
1.5.1 Goals
These are the Goals of the CORE project; they are similar to what we consider to be the key features.
1. Ease of use - In a few clicks the user should have a running network.
2. Efficiency and scalability - A node is more lightweight than a full virtual machine. Tens of nodes should be
possible on a standard laptop computer.
3. Software re-use - Re-use real implementation code, protocols, networking stacks.
4. Networking - CORE is focused on emulating networks and offers various ways to connect the running emulation
with real or simulated networks.
5. Hackable - The source code is available and easy to understand and modify.
1.5.2 Non-Goals
This is a list of Non-Goals, specific things that people may be interested in but are not areas that we will pursue.
1. Reinventing the wheel - Where possible, CORE reuses existing open source components such as virtualization,
Netgraph, netem, bridging, Quagga, etc.
2. 1,000,000 nodes - While the goal of CORE is to provide efficient, scalable network emulation, there is no set
goal of N number of nodes. There are realistic limits on what a machine can handle as its resources are divided
amongst virtual nodes. We will continue to make things more efficient and let the user determine the right
number of nodes based on available hardware and the activities each node is performing.
3. Solves every problem - CORE is about emulating networking layers 3-7 using virtual network stacks in the
Linux or FreeBSD operating systems.
4. Hardware-specific - CORE itself is not an instantiation of hardware, a testbed, or a specific laboratory setup; it
should run on commodity laptop and desktop PCs, in addition to high-end server hardware.
CHAPTER TWO
INSTALLATION
This chapter describes how to set up a CORE machine. Note that the easiest way to install CORE is using a binary package on Ubuntu or Fedora (deb or rpm), using the distribution's package manager to automatically install dependencies; see Installing from Packages.
Ubuntu and Fedora Linux are the recommended distributions for running CORE. Ubuntu 12.04 or 14.04 and Fedora
19 or 20 ship with kernels with support for namespaces built-in. They support the latest hardware. However, these
distributions are not strictly required. CORE will likely work on other flavors of Linux, see Installing from Source.
The primary dependencies are Tcl/Tk (8.5 or newer) for the GUI, and Python 2.6 or 2.7 for the CORE daemon.
CORE files are installed to the following directories. When installing from source, the /usr/local prefix is used
in place of /usr by default.
Install Path                               Description
/usr/bin/core-gui                          GUI startup command
/usr/sbin/core-daemon                      Daemon startup command
/usr/sbin/                                 Misc. helper commands/scripts
/usr/lib/core                              GUI files
/usr/lib/python2.7/dist-packages/core      Python modules for daemon/scripts
/etc/core/                                 Daemon configuration files
~/.core/                                   User-specific GUI preferences and scenario files
/usr/share/core/                           Example scripts and scenarios
/usr/share/man/man1/                       Command man pages
/etc/init.d/core-daemon                    System startup script for daemon
Under Fedora, /site-packages/ is used instead of /dist-packages/ for the Python modules, and
/etc/systemd/system/core-daemon.service instead of /etc/init.d/core-daemon for the system startup script.
2.1 Prerequisites
The Linux or FreeBSD operating system is required. The GUI uses the Tcl/Tk scripting toolkit, and the CORE daemon requires Python. Details of the individual software packages required can be found in the installation steps.
Proceed to the Install Quagga for routing step below to install Quagga. The other commands shown in this section apply to binary packages downloaded from the CORE website instead of using the Debian/Ubuntu repositories.
Note: Linux package managers (e.g. software-center, yum) will take care of installing the dependencies for you when
you use the CORE packages. You do not need to manually use these installation lines. You do need to select which
Quagga package to use.
Optional: install the prerequisite packages (otherwise skip this step and have the package manager install them
for you.)
# make sure the system is up to date; you can also use synaptic or
# update-manager instead of apt-get update/dist-upgrade
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install bash bridge-utils ebtables iproute libev-dev python tcl8.5 tk8.5 libtk-img
Install Quagga for routing. If you plan on working with wireless networks, we recommend installing OSPF
MDR (replace amd64 below with i386 if needed to match your architecture):
export URL=http://downloads.pf.itd.nrl.navy.mil/ospf-manet
wget $URL/quagga-0.99.21mr2.2/quagga-mr_0.99.21mr2.2_amd64.deb
sudo dpkg -i quagga-mr_0.99.21mr2.2_amd64.deb
Install the CORE deb packages for Ubuntu, using a GUI that automatically resolves dependencies (note that the
absolute path to the deb file must be used with software-center):
software-center /home/user/Downloads/core-daemon_4.8-0ubuntu1_precise_amd64.deb
software-center /home/user/Downloads/core-gui_4.8-0ubuntu1_precise_all.deb
After running the core-gui command, a GUI should appear with a canvas for drawing topologies. Messages will
print out on the console about connecting to the CORE daemon.
CentOS 7.x only: as of this writing, the tkimg prerequisite package is missing from EPEL 7.x, but the EPEL 6.x package can be installed manually:
wget http://dl.fedoraproject.org/pub/epel/6/x86_64/tkimg-1.4-1.el6.x86_64.rpm
yum localinstall tkimg-1.4-1.el6.x86_64.rpm
Optional: install the prerequisite packages (otherwise skip this step and have the package manager install them
for you.)
# make sure the system is up to date; you can also use the
# update applet instead of yum update
yum update
yum install bash bridge-utils ebtables iproute libev python procps-ng net-tools tcl tk tkimg
Optional (Fedora 17+): Fedora 17 and newer have an additional prerequisite providing the required netem
kernel modules (otherwise skip this step and have the package manager install it for you.)
yum install kernel-modules-extra
Install Quagga for routing. If you plan on working with wireless networks, we recommend installing OSPF
MDR:
export URL=http://downloads.pf.itd.nrl.navy.mil/ospf-manet
wget $URL/quagga-0.99.21mr2.2/quagga-0.99.21mr2.2-1.fc16.x86_64.rpm
yum localinstall quagga-0.99.21mr2.2-1.fc16.x86_64.rpm
Install the CORE RPM packages for Fedora and automatically resolve dependencies:
yum localinstall core-daemon-4.8-1.fc20.x86_64.rpm --nogpgcheck
yum localinstall core-gui-4.8-1.fc20.noarch.rpm --nogpgcheck
Turn off SELINUX by setting SELINUX=disabled in the /etc/sysconfig/selinux file, and adding
selinux=0 to the kernel line in your /etc/grub.conf file; on Fedora 15 and newer, disable sandboxd
using chkconfig sandbox off; you need to reboot in order for this change to take effect
Turn off firewalls with systemctl disable firewalld, systemctl disable iptables.service, systemctl disable ip6tables.service (or chkconfig iptables off, chkconfig ip6tables off), or configure them with permissive rules for CORE virtual networks; you need to reboot after making this change, or flush the firewall using iptables -F, ip6tables -F.
Start the CORE daemon as root. Fedora uses the systemd start-up daemon instead of traditional init scripts.
CentOS uses the init script.
# for Fedora using systemd:
systemctl daemon-reload
systemctl start core-daemon.service
# or for CentOS:
/etc/init.d/core-daemon start
After running the core-gui command, a GUI should appear with a canvas for drawing topologies. Messages will
print out on the console about connecting to the CORE daemon.
You can obtain the CORE source from the CORE source page. Choose either a stable release version or the development snapshot available in the nightly_snapshots directory. The -j8 argument to make will run eight simultaneous
jobs, to speed up builds on multi-core systems.
tar xzf core-4.8.tar.gz
cd core-4.8
./bootstrap.sh
./configure
make -j8
sudo make install
The CORE Manual documentation is built separately from the doc/ sub-directory in the source. It requires Sphinx:
sudo apt-get install python-sphinx
cd core-4.8/doc
make html
make latexpdf
You can obtain the CORE source from the CORE source page. Choose either a stable release version or the development snapshot available in the nightly_snapshots directory. The -j8 argument to make will run eight
simultaneous jobs, to speed up builds on multi-core systems. Notice the configure flag to tell the build system that
a systemd service file should be installed under Fedora.
tar xzf core-4.8.tar.gz
cd core-4.8
./bootstrap.sh
./configure --with-startup=systemd
make -j8
sudo make install
Note that the Linux RPM and Debian packages do not use the /usr/local prefix, and files are instead installed to
/usr/sbin, and /usr/lib. This difference is a result of aligning with the directory structure of Linux packaging
systems and FreeBSD ports packaging.
Another note is that the Python distutils in Fedora Linux will install the CORE Python modules to
/usr/lib/python2.7/site-packages/core, instead of using the dist-packages directory.
The CORE Manual documentation is built separately from the doc/ sub-directory in the source. It requires Sphinx:
sudo yum install python-sphinx
cd core-4.8/doc
make html
make latexpdf
Now use the same instructions shown in Installing from Source on Fedora. CentOS/EL6 does not use the systemd
service file, so the configure option with-startup=systemd should be omitted:
./configure
On OpenSUSE, use the same source installation instructions, but give ./configure a SUSE-specific --with-startup value; this causes a separate init script to be installed that is tailored towards SUSE systems. The zypper command is used instead of yum.
For OpenSUSE/Xen based installations, refer to the README-Xen file included in the CORE source.
The kernel patch is available from the CORE source tarball under core-4.8/kernel/symlinks-8.1-RELEASE.diff. This
patch applies to the FreeBSD 8.x or 9.x kernels.
cd /usr/src/sys
# first you can check if the patch applies cleanly using the -C option
patch -p1 -C < ~/core-4.8/kernel/symlinks-8.1-RELEASE.diff
# without -C applies the patch
patch -p1 < ~/core-4.8/kernel/symlinks-8.1-RELEASE.diff
A kernel configuration file named CORE can be found within the source tarball: core-4.8/kernel/freebsd8-config-CORE. The config is valid for FreeBSD 8.x or 9.x kernels.
The contents of this configuration file are shown below; you can edit it to suit your needs.
# this is the FreeBSD 9.x kernel configuration file for CORE
include     GENERIC
ident       CORE

options     VIMAGE
nooptions   SCTP
options     IPSEC
device      crypto

options     IPFIREWALL
options     IPFIREWALL_DEFAULT_TO_ACCEPT
The kernel configuration file can be linked or copied to the kernel source directory. Use it to configure and build the
kernel:
cd /usr/src/sys/amd64/conf
cp ~/core-4.8/kernel/freebsd8-config-CORE CORE
config CORE
cd ../compile/CORE
make cleandepend && make depend
make -j8 && make install
Change the number 8 above to match the number of CPU cores you have times two. Note that the make install step will move your existing kernel to /boot/kernel.old, removing that directory if it already exists. Reboot to enable this new patched kernel.
Building CORE from Source on FreeBSD
Here are the prerequisite packages from the FreeBSD ports system:
pkg_add -r tk85
pkg_add -r libimg
pkg_add -r bash
pkg_add -r libev
pkg_add -r sudo
pkg_add -r python
pkg_add -r autotools
pkg_add -r gmake
Note that if you are installing to a bare FreeBSD system and want to SSH with X11 forwarding to that system, these
packages will help:
pkg_add -r xauth
pkg_add -r xorg-fonts
The sudo package needs to be configured so a normal user can run the CORE GUI using the command core-gui
(opening a shell window on a node uses a command such as sudo vimage n1.)
On FreeBSD, the CORE source is built using autotools and gmake:
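The build itself would look something like the following (a sketch mirroring the Linux source build shown earlier, with gmake substituted; the exact steps may differ slightly for your release):

tar xzf core-4.8.tar.gz
cd core-4.8
./bootstrap.sh
./configure
gmake -j8
sudo gmake install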
Build and install the vimage utility for controlling virtual images. The source can be obtained from FreeBSD SVN,
or it is included with the CORE source for convenience:
cd core-4.8/kernel/vimage
make
make install
On FreeBSD you should also install the CORE kernel modules for wireless emulation. Perform this step after you have recompiled and installed the FreeBSD kernel.
cd core-4.8/kernel/ng_pipe
make
sudo make install
cd ../ng_wlan
make
sudo make install
The ng_wlan kernel module allows for the creation of WLAN nodes. This is a modified ng_hub Netgraph module.
Instead of packets being copied to every connected node, the WLAN maintains a hash table of connected node pairs.
Furthermore, link parameters can be specified for node pairs, in addition to the on/off connectivity. The parameters
are tagged to each packet and sent to the connected ng_pipe module. The ng_pipe has been modified to read any
tagged parameters and apply them instead of its default link effects.
The ng_wlan also supports linking together multiple WLANs across different machines using the ng_ksocket
Netgraph node, for distributed emulation.
The Quagga routing suite is recommended for routing; see Quagga Routing Software for installation instructions.
Fedora users:
yum install quagga
FreeBSD users:
pkg_add -r quagga
To install the Quagga variant having OSPFv3 MDR, first download the appropriate package, and install using the
package manager.
Ubuntu users:
export URL=http://downloads.pf.itd.nrl.navy.mil/ospf-manet
wget $URL/quagga-0.99.21mr2.2/quagga-mr_0.99.21mr2.2_amd64.deb
sudo dpkg -i quagga-mr_0.99.21mr2.2_amd64.deb
Note that the configuration directory /usr/local/etc/quagga shown for Quagga above could be /etc/quagga, if you create a symbolic link from /etc/quagga/Quagga.conf -> /usr/local/etc/quagga/Quagga.conf on the host. The quaggaboot.sh script in a Linux network namespace will try to do this for you if needed.
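For example, such a link could be created manually on the host (shown here only as an illustration of the note above):

# point /etc/quagga/Quagga.conf at a source-installed Quagga configuration
sudo mkdir -p /etc/quagga
sudo ln -s /usr/local/etc/quagga/Quagga.conf /etc/quagga/Quagga.conf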
If you try to run quagga after installing from source and get an error such as:
error while loading shared libraries libzebra.so.0
this is usually a sign that you have to run sudo ldconfig to refresh the cache file.
To compile Quagga to work with CORE on FreeBSD:
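A representative build sequence is sketched below; the configure flags are standard Quagga options chosen to match the paths mentioned in this section, and may need adjusting for your system:

./configure --enable-vtysh --sysconfdir=/usr/local/etc/quagga --localstatedir=/var/run/quagga
gmake
sudo gmake install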
On FreeBSD 9.0 you can use make or gmake. You probably want to compile Quagga from the ports system in
/usr/ports/net/quagga.
2.5 VCORE
CORE is capable of running inside of a virtual machine, using software such as VirtualBox, VMware Server or
QEMU. However, CORE itself is performing machine virtualization in order to realize multiple emulated nodes, and
running CORE virtually adds additional contention for the physical resources. For performance reasons, this is not
recommended. Timing inside of a VM often has problems. If you do run CORE from within a VM, it is recommended
that you view the GUI with remote X11 over SSH, so the virtual machine does not need to emulate the video card with
the X11 application.
A CORE virtual machine is provided for download, named VCORE. This is perhaps the easiest way to get CORE
up and running as the machine is already set up for you. This may be adequate for initially evaluating the tool but keep
in mind the performance limitations of running within VirtualBox or VMware. To install the virtual machine, you first
need to obtain VirtualBox from http://www.virtualbox.org, or VMware Server or Player from http://www.vmware.com
(this commercial software is distributed for free.) Once virtualization software has been installed, you can import the
virtual machine appliance using the vbox file for VirtualBox or the vmx file for VMware. See the documentation that
comes with VCORE for login information.
CHAPTER THREE
USING THE CORE GUI
Once the emulation is running, the GUI can be closed, and a prompt will appear asking if the emulation should be
terminated. The emulation may be left running and the GUI can reconnect to an existing session at a later time.
There is also a Batch mode where CORE runs without the GUI and will instantiate a topology from a given file. This
is similar to the --start option, except that the GUI is not used:
core-gui --batch ~/.core/configs/myfile.imn
A session running in batch mode can be accessed using the vcmd command (or vimage on FreeBSD), or the GUI
can connect to the session.
The session number is printed in the terminal when batch mode is started. This session number can later be used to
stop the batch mode session:
core-gui --closebatch 12345
Tip: If you forget the session number, you can always start the CORE GUI and use the CORE sessions dialog box from the Session Menu.
Note: It is quite easy to have overlapping sessions when running in batch mode. This may become a problem when
control networks are employed in these sessions as there could be addressing conflicts. See Control Network for
remedies.
Note: If you like to use batch mode, consider writing a CORE Python script directly. This enables access to the
full power of the Python API. The File Menu has a basic Export Python Script option for getting started with a GUIdesigned topology. There is also an Execute Python script option for later connecting the GUI to such scripts.
The GUI can be run as a normal user on Linux. For FreeBSD, the GUI should be run as root in order to start an
emulation.
The GUI can be connected to a different address or TCP port using the --address and/or --port options. The
defaults are shown below.
core-gui --address 127.0.0.1 --port 4038
3.2 Toolbar
The toolbar is a row of buttons that runs vertically along the left side of the CORE GUI window. The toolbar changes
depending on the mode of operation.
Link - the Link Tool allows network links to be drawn between two nodes by clicking and dragging the
mouse
Host - emulated server machine having a default route, runs SSH server
Edit - edit node types button invokes the CORE Node Types dialog. New types of nodes may be
created having different icons and names. The default services that are started with each node type can be
changed here.
Link-layer nodes
Hub - the Ethernet hub forwards incoming packets to every connected node
Switch - the Ethernet switch intelligently forwards incoming packets to attached hosts using an
Ethernet address hash table
Wireless LAN - when routers are connected to this WLAN node, they join a wireless network and
an antenna is drawn instead of a connecting line; the WLAN node typically controls connectivity between
attached wireless nodes based on the distance between them
RJ45 - with the RJ45 Physical Interface Tool, emulated nodes can be linked to real physical
interfaces on the Linux or FreeBSD machine; using this tool, real networks and devices can be physically
connected to the live-running emulation (RJ45 Tool)
Tunnel - the Tunnel Tool allows connecting together more than one CORE emulation using GRE
tunnels (Tunnel Tool)
Annotation Tools
Oval - for drawing circles on the canvas that appear in the background
Rectangle - for drawing rectangles on the canvas that appear in the background
Selection Tool - in Execute mode, the Selection Tool can be used for moving nodes around the canvas,
and double-clicking on a node will open a shell window for that node; right-clicking on a node invokes a pop-up
menu of run-time options for that node
Stop button - stops Execute mode, terminates the emulation, returns CORE to edit mode.
Observer Widgets Tool - clicking on this magnifying glass icon invokes a menu for easily selecting an
Observer Widget. The icon has a darker gray background when an Observer Widget is active, during which
time moving the mouse over a node will pop up an information display for that node (Observer Widgets).
Plot Tool - with this tool enabled, clicking on any link will activate the Throughput Widget and draw a
small, scrolling throughput plot on the canvas. The plot shows the real-time kbps traffic for that link. The plots
may be dragged around the canvas; right-click on a plot to remove it.
Marker - for drawing freehand lines on the canvas, useful during demonstrations; markings are not
saved
Two-node Tool - click to choose a starting and ending node, and run a one-time traceroute between
those nodes or a continuous ping -R between nodes. The output is displayed in real time in a results box, while
the IP addresses are parsed and the complete network path is highlighted on the CORE display.
Run Tool - this tool allows easily running a command on all or a subset of all nodes. A list box
allows selecting any of the nodes. A text entry box allows entering any command. The command should return
immediately; otherwise the display will block while awaiting a response. The ping command, for example, with no parameters, is not a good idea. The result of each command is displayed in a results box. The first occurrence of the special text NODE will be replaced with the node name. The command will not be attempted on nodes that are not routers, PCs, or hosts, even if they are selected.
3.3 Menubar
The menubar runs along the top of the CORE GUI window and provides access to a variety of features. Some of the
menus are detachable, such as the Widgets menu, by clicking the dashed line at the top.
Cut, Copy, Paste - used to cut, copy, and paste a selection. When nodes are pasted, their node numbers are
automatically incremented, and existing links are preserved with new IP addresses assigned. Services and their
customizations are copied to the new node, but care should be taken: the new node's IP addresses will have changed, and old addresses may remain in any custom service configurations. Annotations may also be copied and
pasted.
Select All - selects all items on the canvas. Selected items can be moved as a group.
Select Adjacent - select all nodes that are linked to the already selected node(s). For wireless nodes this simply
selects the WLAN node(s) that the wireless node belongs to. You can use this by clicking on a node and pressing
CTRL+N to select the adjacent nodes.
Find... - invokes the Find dialog box. The Find dialog can be used to search for nodes by name or number.
Results are listed in a table that includes the node or link location and details such as IP addresses or link
parameters. Clicking on a result will focus the canvas on that node or link, switching canvases if necessary.
Clear marker - clears any annotations drawn with the marker tool. Also clears any markings used to indicate a node's status.
Preferences... - invokes the Preferences dialog box.
3D GUI... - launches a 3D GUI by running the command defined under Preferences, 3D GUI command. This
is typically a script that runs the SDT3D display. SDT is the Scripted Display Tool from NRL that is based on
NASA's Java-based WorldWind virtual globe software.
Zoom In - magnifies the display. You can also zoom in by clicking the zoom 100% label in the status bar, or by pressing the + (plus) key.
Zoom Out - reduces the size of the display. You can also zoom out by right-clicking the zoom 100% label in the status bar or by pressing the - (minus) key.
Wheel - the wheel pattern links nodes in a combination of both Star and Cycle patterns.
Cube - generate a cube graph of nodes
Clique - creates a clique graph of nodes, where every node is connected to every other node
Bipartite - creates a bipartite graph of nodes, having two disjoint sets of vertices.
Debugger... - opens the CORE Debugger window for executing arbitrary Tcl/Tk commands.
3.4 Connecting with Physical Networks
The Tunnel Node is configured with the IP address of the tunnel peer. This is the IP address of the other CORE machine or physical machine, not an IP address of another virtual node.
Note: Be aware of possible MTU issues with GRE devices. The gretap device has an interface MTU of 1,458 bytes; when joined to a Linux bridge, the bridge's MTU becomes 1,458 bytes. The Linux bridge will not perform fragmentation for large packets if other bridge ports have a higher MTU such as 1,500 bytes.
The GRE key is used to identify flows with GRE tunneling. This allows multiple GRE tunnels to exist between the same pair of tunnel peers. A unique number should be used when multiple tunnels are used with the same peer. When configuring the peer side of the tunnel, ensure that matching keys are used.
Here are example commands for building the other end of a tunnel on a Linux machine. In this example,
a router in CORE has the virtual address 10.0.0.1/24 and the CORE host machine has the (real) address
198.51.100.34/24. The Linux box that will connect with the CORE machine is reachable over the (real) network
at 198.51.100.76/24. The emulated router is linked with the Tunnel Node. In the Tunnel Node configuration
dialog, the address 198.51.100.76 is entered, with the key set to 1. The gretap interface on the Linux box will be
assigned an address from the subnet of the virtual router node, 10.0.0.2/24.
# these commands are run on the tunnel peer (the interface name gt0 is an example)
sudo ip link add gt0 type gretap remote 198.51.100.34 local 198.51.100.76 key 1
sudo ip addr add 10.0.0.2/24 dev gt0
sudo ip link set dev gt0 up
Now the virtual router should be able to ping the Linux machine:
# from the CORE router node
ping 10.0.0.2
And the Linux machine should be able to ping inside the CORE emulation:
# from the tunnel peer
ping 10.0.0.1
To debug this configuration, tcpdump can be run on the gretap devices, or on the physical interfaces on the CORE or
Linux machines. Make sure that a firewall is not blocking the GRE traffic.
Note that the coresendmsg utility can be used for a node to send messages to the CORE daemon running on the host (if listenaddr = 0.0.0.0 is set in the /etc/core/core.conf file) to interact with the running emulation. For example, a node may move itself or other nodes, or change its icon based on some node state.
Other Methods
There are still other ways to connect a host with a node. The RJ45 Tool can be used in conjunction with a dummy
interface to access a node:
sudo modprobe dummy numdummies=1
A dummy0 interface should appear on the host. Use the RJ45 tool assigned to dummy0, and link this to a node in your
scenario. After starting the session, configure an address on the host.
sudo brctl show
# determine bridge name from the above command
# assign an IP address on the same network as the linked node
sudo ip addr add 10.0.1.2/24 dev b.48304.34658
In the example shown above, the host will have the address 10.0.1.2 and the node linked to the RJ45 may have the
address 10.0.1.1.
Model            Supported Platform(s)   Fidelity   Description
Basic on/off     Linux, FreeBSD          Low        Linux Ethernet bridging with ebtables (Linux) or ng_wlan (FreeBSD)
EMANE Plug-in    Linux                   High       TAP device connected to EMANE emulator with pluggable MAC and PHY radio types
To quickly build a wireless network, you can first place several router nodes onto the canvas. If you have the Quagga
MDR software installed, it is recommended that you use the mdr node type for reduced routing overhead. Next choose
the wireless LAN from the Link-layer nodes submenu. First set the desired WLAN parameters by double-clicking the
cloud icon. Then you can link all of the routers by right-clicking on the WLAN and choosing Link to all routers.
Linking a router to the WLAN causes a small antenna to appear, but no red link line is drawn. Routers can have multiple
wireless links and both wireless and wired links (however, you will need to manually configure route redistribution.)
The mdr node type will generate a routing configuration that enables OSPFv3 with MANET extensions. This is a Boeing-developed extension to Quagga's OSPFv3 that reduces flooding overhead and optimizes the flooding procedure for mobile ad-hoc (MANET) networks.
The default configuration of the WLAN is set to use the basic range model, using the Basic tab in the WLAN configuration dialog. Having this model selected causes core-daemon to calculate the distance between nodes based
on screen pixels. A numeric range in screen pixels is set for the wireless network using the Range slider. When two
wireless nodes are within range of each other, a green line is drawn between them and they are linked. Two wireless
nodes that are farther than the range pixels apart are not linked. During Execute mode, users may move wireless nodes
around by clicking and dragging them, and wireless links will be dynamically made or broken.
The EMANE tab lists available EMANE models to use for wireless networking. See the EMANE chapter for details
on using EMANE.
On FreeBSD, the WLAN node is realized using the ng_wlan Netgraph node.
When the Execute mode is started and one of the WLAN nodes has a mobility script, a mobility script window will
appear. This window contains controls for starting, stopping, and resetting the running time for the mobility script.
The loop checkbox causes the script to play continuously. The resolution text box contains the number of milliseconds between each timer event; lower values cause the mobility to appear smoother but consume greater CPU time.
The format of an ns-2 mobility script looks like:
# nodes: 3, max time: 35.000000, max x: 600.00, max y: 600.00
$node_(2) set X_ 144.0
$node_(2) set Y_ 240.0
$node_(2) set Z_ 0.00
$ns_ at 1.00 "$node_(2) setdest 130.0 280.0 15.0"
The first three lines set an initial position for node 2. The last line in the above example causes node 2 to move towards
the destination (130, 280) at speed 15. All units are screen coordinates, with speed in units per second. The total script
time is learned after all nodes have reached their waypoints. Initially, the time slider in the mobility script dialog will
not be accurate.
Example mobility scripts (and their associated topology files) can be found in the configs/ directory (see Configuration Files).
The listenaddr should be set to the address of the interface that should receive CORE API control commands from
the other servers; setting listenaddr = 0.0.0.0 causes the Python daemon to listen on all interfaces. CORE
uses TCP port 4038 by default to communicate from the controlling machine (with GUI) to the emulation servers.
Make sure that firewall rules are configured as necessary to allow this traffic.
In order to easily open shells on the emulation servers, the servers should be running an SSH server, and public key
login should be enabled. This is accomplished by generating an SSH key for your user if you do not already have one
(use ssh-keygen -t rsa), and then copying your public key to the authorized_keys file on the server (for example, ssh-copy-id user@server or scp ~/.ssh/id_rsa.pub server:.ssh/authorized_keys.)
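For example, using the commands mentioned above (replace user@server with your own account and emulation server):

# generate a key pair if you do not already have one
ssh-keygen -t rsa
# install the public key on each emulation server
ssh-copy-id user@server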
When double-clicking on a node during runtime, instead of opening a local shell, the GUI will attempt to SSH to the
emulation server to run an interactive shell. The user name used for these remote shells is the same user that is running
the CORE GUI.
Hint: Here is a quick distributed emulation checklist.
1. Install the CORE daemon on all servers.
2. Configure public-key SSH access to all servers (if you want to use double-click shells or Widgets.)
3. Set listenaddr=0.0.0.0 in all of the servers core.conf files, then start (or restart) the daemon.
4. Select nodes, right-click them, and choose Assign to... to assign them to servers (add servers through Session, Emulation Servers...)
5. Press the Start button to launch the distributed emulation.
Servers are configured by choosing Emulation servers... from the Session menu. Server parameters are configured in the list below and stored in a servers.conf file for use in different scenarios. The IP address and port of the server must be specified. The name of each server will be saved in the topology file as each node's location.
Note: The server that the GUI connects with is referred to as the master server.
The user needs to assign nodes to emulation servers in the scenario. Making no assignment means the node will be emulated on the master server. In the configuration window of every node, a drop-down box located between the Node
name and the Image button will select the name of the emulation server. By default, this menu shows (none), indicating
that the node will be emulated locally on the master. When entering Execute mode, the CORE GUI will deploy the
node on its assigned emulation server.
Another way to assign emulation servers is to select one or more nodes using the select tool (shift-click to select
multiple), and right-click one of the nodes and choose Assign to....
The CORE emulation servers dialog box may also be used to assign nodes to servers. The assigned server name appears in parentheses next to the node name. To assign all nodes to one of the servers, click on the server name and
then the all nodes button. Servers that have assigned nodes are shown in blue in the server list. Another option is to
first select a subset of nodes, then open the CORE emulation servers box and use the selected nodes button.
Important: Leave the nodes unassigned if they are to be run on the master server. Do not explicitly assign the nodes
to the master server.
The emulation server machines should be reachable on the specified port and via SSH. SSH is used when double-clicking a node to open a shell: the GUI will open an SSH prompt to that node's emulation server. Public-key authentication should be configured so that SSH passwords are not needed.
If there is a link between two nodes residing on different servers, the GUI will draw the link with a dashed line, and
automatically create necessary tunnels between the nodes when executed. Care should be taken to arrange the topology
such that the number of tunnels is minimized. The tunnels carry data between servers to connect nodes as specified in
the topology. These tunnels are created using GRE tunneling, similar to the Tunnel Tool.
Wireless nodes, i.e. those connected to a WLAN node, can be assigned to different emulation servers and participate
in the same wireless network only if an EMANE model is used for the WLAN. See Distributed EMANE for more
details. The basic range model does not work across multiple servers due to the Linux bridging and ebtables rules that
are used.
Note: The basic range wireless model does not support distributed emulation, but EMANE does.
3.6 Services
CORE uses the concept of services to specify what processes or scripts run on a node when it is started. Layer-3
nodes such as routers and PCs are defined by the services that they run. The Quagga Routing Software, for example,
transforms a node into a router.
Services may be customized for each node, or new custom services can be created. New node types can be created each
having a different name, icon, and set of default services. Each service defines the per-node directories, configuration
files, startup index, starting commands, validation commands, shutdown commands, and meta-data associated with a
node.
Note: Network namespace nodes do not undergo the normal Linux boot process using the init, upstart, or
systemd frameworks. These lightweight nodes use configured CORE services.
The Files tab is used to display or edit the configuration files or scripts that are used for this service. Files can be
selected from a drop-down list, and their contents are displayed in a text entry below. The file contents are generated
by the CORE daemon based on the network topology that exists at the time the customization dialog is invoked.
The Directories tab shows the per-node directories for this service. For the default types, CORE nodes share the same
filesystem tree, except for these per-node directories that are defined by the services. For example, the /var/run/quagga
directory needs to be unique for each node running the Zebra service, because Quagga running on each node needs to
write separate PID files to that directory.
Note: The /var/log and /var/run directories are mounted uniquely per-node by default. Per-node mount
targets can be found in /tmp/pycore.nnnnn/nN.conf/ (where nnnnn is the session number and N is the node
number.)
The Startup/shutdown tab lists commands that are used to start and stop this service. The startup index allows configuring when this service starts relative to the other services enabled for this node; a service with a lower startup index
value is started before those with higher values. Because shell scripts generated by the Files tab will not have execute
permissions set, the startup commands should include the shell name, with something like sh script.sh.
Shutdown commands optionally terminate the process(es) associated with this service. Generally they send a kill
signal to the running process using the kill or killall commands. If the service does not terminate the running processes
using a shutdown command, the processes will be killed when the vnoded daemon is terminated (with kill -9) and
the namespace destroyed. It is a good practice to specify shutdown commands, which will allow for proper process
termination, and for run-time control of stopping and restarting services.
Validate commands are executed following the startup commands. A validate command can execute a process or script
that should return zero if the service has started successfully, and have a non-zero return value for services that have
had a problem starting. For example, the pidof command will check if a process is running and return zero when
found. When a validate command produces a non-zero return value, an exception is generated, which will cause an
error to be displayed in the Check Emulation Light.
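As a concrete illustration (the service and file names here are hypothetical), the three command types for a service wrapping a program called myapp might be:

# startup command: generated scripts lack execute permission, so invoke the shell
sh myapp-start.sh
# validate command: pidof returns zero only if the process is running
pidof myapp
# shutdown command: terminate the process so it can be cleanly stopped or restarted
killall myapp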
Tip: To start, stop, and restart services during run-time, right-click a node and use the Services... menu.
Tip: When using the .imn file format, file paths for things like custom icons may contain the special variables
$CORE_DATA_DIR or $CONFDIR which will be substituted with /usr/share/core or ~/.core/configs.
Tip: Feel free to edit the files directly using your favorite text editor.
3.10 Preferences
The Preferences Dialog can be accessed from the Edit Menu. There are numerous defaults that can be set with this
dialog, which are stored in the ~/.core/prefs.conf preferences file.
CHAPTER FOUR
PYTHON SCRIPTING
CORE can be used via the GUI or Python scripting. Writing your own Python scripts offers a rich programming
environment with complete control over all aspects of the emulation. This chapter provides a brief introduction to
scripting. Most of the documentation is available from sample scripts, or online via interactive Python.
The best starting point is the sample scripts that are included with CORE. If you have a CORE source tree, the example
script files can be found under core/daemon/examples/netns/. When CORE is installed from packages, the
example script files will be in /usr/share/core/examples/netns/ (or the /usr/local/... prefix when
installed from source.) For the most part, the example scripts are self-documenting; see the comments contained
within the Python code.
The scripts should be run with root privileges because they create new network namespaces. In general, a CORE
Python script does not connect to the CORE daemon, core-daemon; in fact, core-daemon is just another Python
script that uses the CORE Python modules and exchanges messages with the GUI. To connect the GUI to your scripts,
see the included sample scripts that allow for GUI connections.
Here are the basic elements of a CORE Python script:
#!/usr/bin/python
from core import pycore
session = pycore.Session(persistent=True)
node1 = session.addobj(cls=pycore.nodes.CoreNode, name="n1")
node2 = session.addobj(cls=pycore.nodes.CoreNode, name="n2")
hub1 = session.addobj(cls=pycore.nodes.HubNode, name="hub1")
node1.newnetif(hub1, ["10.0.0.1/24"])
node2.newnetif(hub1, ["10.0.0.2/24"])
node1.icmd(["ping", "-c", "5", "10.0.0.2"])
session.shutdown()
The above script creates a CORE session having two nodes connected with a hub. The first node pings the second
node with 5 ping packets; the result is displayed on screen.
A good way to learn about the CORE Python modules is via interactive Python. Scripts can be run using python -i.
Cut and paste the simple script above and you will have two nodes connected by a hub, with one node running a test
ping to the other.
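For example, an interactive run along these lines (the path assumes a package install, as noted above) leaves the Python prompt open for exploring the modules after the script finishes:

# run an example script interactively; namespaces require root privileges
sudo python -i /usr/share/core/examples/netns/switch.py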
The CORE Python modules are documented with comments in the code. From an interactive Python shell,
you can retrieve online help about the various classes and methods; for example help(pycore.nodes.CoreNode) or
help(pycore.Session).
An interactive development environment (IDE) is available for browsing the CORE source, the Eric Python IDE.
CORE has a project file that can be opened by Eric, in the source under core/daemon/CORE.e4p. This IDE has
a class browser for viewing a tree of classes and methods. It features syntax highlighting, auto-completion, indenting,
and more. One feature that is helpful with learning the CORE Python modules is the ability to generate class diagrams;
right-click on a class, choose Diagrams, and Class Diagram.
Note: The CORE daemon core-daemon manages a list of sessions and allows the GUI to connect and control
sessions. Your Python script uses the same CORE modules but runs independently of the daemon. The daemon does
not need to be running for your script to work.
The session created by a Python script may be viewed in the GUI if certain steps are followed. The GUI has a File
Menu, Execute Python script... option for running a script and automatically connecting to it. Once connected, normal
GUI interaction is possible, such as moving and double-clicking nodes, activating Widgets, etc.
The script should have a line such as the following for running it from the GUI.
if __name__ == "__main__" or __name__ == "__builtin__":
    main()
Also, the script should add its session to the session list after creating it. A global server variable is exposed to the
script pointing to the CoreServer object in the core-daemon.
def add_to_server(session):
    ''' Add this session to the server's list if this script is executed from
        the core-daemon server.
    '''
    global server
    try:
        server.addsession(session)
        return True
    except NameError:
        return False

session = pycore.Session(persistent=True)
add_to_server(session)
Finally, nodes and networks need to have their coordinates set to something, otherwise they will be grouped at the
coordinates <0, 0>. First sketching the topology in the GUI and then using the Export Python script option may
help here.
switch.setposition(x=80,y=50)
A fully-worked example script that you can launch from the GUI is available in the file switch.py in the examples
directory.
CHAPTER FIVE
MACHINE TYPES
Different node types can be configured in CORE, and each node type has a machine type that indicates how the node
will be represented at run time. Different machine types allow for different virtualization options.
5.1 netns
The netns machine type is the default. This is for nodes that will be backed by Linux network namespaces. See
Linux for a brief explanation of netns. This default machine type is very lightweight, providing a minimum amount of
virtualization in order to emulate a network. Another reason this is designated as the default machine type is because
this virtualization technology typically requires no changes to the kernel; it is available out-of-the-box from the latest
mainstream Linux distributions.
5.2 physical
The physical machine type is used for nodes that represent a real Linux-based machine that will participate in the
emulated network scenario. This is typically used, for example, to incorporate racks of server machines from an
emulation testbed. A physical node is one that is running the CORE daemon (core-daemon), but will not be further
partitioned into virtual machines. Services that are run on the physical node do not run in an isolated or virtualized
environment, but directly on the operating system.
Physical nodes must be assigned to servers, the same way nodes are assigned to emulation servers with Distributed
Emulation. The list of available physical nodes currently shares the same dialog box and list as the emulation servers,
accessed using the Emulation Servers... entry from the Session menu.
Support for physical nodes is under development and may be improved in future releases. Currently, when any node
is linked to a physical node, a dashed line is drawn to indicate network tunneling. A GRE tunneling interface will be
created on the physical node and used to tunnel traffic to and from the emulated world.
Double-clicking on a physical node during runtime opens a terminal with an SSH shell to that node. Users should
configure public-key SSH login as done with emulation servers.
5.3 xen
The xen machine type is an experimental new type in CORE for managing Xen domUs from within CORE. After
further development, it may be documented here.
Current limitations include only supporting ISO-based filesystems, and lack of integration with node services,
EMANE, and possibly other features of CORE.
There is a README-Xen file available in the CORE source that contains further instructions for setting up Xen-based
nodes.
CHAPTER SIX
CONTROL NETWORK
The CORE control network allows the virtual nodes to communicate with their host environment. There are two
types: the primary control network and auxiliary control networks. The primary control network is used mainly for
communicating with the virtual nodes from host machines and for master-slave communications in a multi-server
distributed environment. Auxiliary control networks have been introduced for routing namespace-hosted emulation software traffic to the test network.
Important: Running a session with a control network can fail if a previous session has set up a control network and its bridge is still up. Close the previous session first or wait for it to complete. If that is not possible, the core-daemon may need to be restarted and the lingering bridge(s) removed manually:
# Restart the CORE Daemon
sudo /etc/init.d/core-daemon restart
# Remove lingering control network bridges
ctrlbridges=`brctl show | grep b.ctrl | awk '{print $1}'`
for cb in $ctrlbridges; do
    sudo ifconfig $cb down
    sudo brctl delbr $cb
done
Tip: If adjustments to the primary control network configuration made in /etc/core/core.conf do not seem to take effect, check if there is anything set in the Session Menu, the Options... dialog. They may need to be cleared. These per-session settings override the defaults in /etc/core/core.conf.
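If, for example, the controlnet entry in /etc/core/core.conf on the master assigns a separate prefix to each distributed server (a sketch; the prefixes are illustrative and the exact syntax is documented in the comments of core.conf), such as:

controlnet = core1:172.16.1.0/24 core2:172.16.2.0/24 core3:172.16.3.0/24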
then, the control network bridges will be assigned as follows: core1 = 172.16.1.254 (assuming it is the master
server), core2 = 172.16.2.254, and core3 = 172.16.3.254.
Tunnels back to the master server will still be built, but it is up to the user to add appropriate routes if networking
between control network prefixes is desired. The control network script may help with this.
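As an example (the prefixes here are purely illustrative), adding lines such as the following to /etc/core/core.conf:

controlnet  = 172.17.1.0/24
controlnet1 = 172.17.2.0/24
controlnet2 = 172.17.3.0/24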
will activate the primary and two auxiliary control networks and add interfaces ctrl0, ctrl1, ctrl2 to each node.
One use case would be to assign ctrl1 to the OTA manager device and ctrl2 to the Event Service device in the
EMANE Options dialog box and leave ctrl0 for CORE control traffic.
Note: controlnet0 may be used in place of controlnet to configure the primary control network.
Unlike the primary control network, the auxiliary control networks will not employ tunneling since their primary
purpose is for efficiently transporting multicast EMANE OTA and event traffic. Note that there is no per-session
configuration for auxiliary control networks.
To extend the auxiliary control networks across a distributed test environment, host network interfaces need to be
added to them. The following lines in /etc/core/core.conf will add host devices eth1, eth2 and eth3 to
controlnet1, controlnet2, controlnet3:
controlnetif1 = eth1
controlnetif2 = eth2
controlnetif3 = eth3
Note: There is no need to assign an interface to the primary control network because tunnels are formed between the
master and the slaves using IP addresses that are provided in servers.conf. (See Distributed Emulation.)
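For orientation, a servers.conf entry lists each emulation server's name, address, and port; a hypothetical sketch (the exact format may differ in your release):
# /etc/core/servers.conf   (name address port)
core2 192.168.0.2 4038
core3 192.168.0.3 4038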
Shown below is a representative diagram of the configuration above.
CHAPTER SEVEN
EMANE
This chapter describes running CORE with the EMANE emulator.
CORE can also subscribe to EMANE location events and move the nodes on the canvas as they are moved in the
EMANE emulation. This would occur when an Emulation Script Generator, for example, is running a mobility script.
EMANE can be installed from deb or RPM packages or from source. See the EMANE website for full details.
Here are quick instructions for installing all EMANE packages:
# install dependencies
sudo apt-get install libssl-dev libxml-libxml-perl libxml-simple-perl
# download and install EMANE 0.8.1
export URL=http://downloads.pf.itd.nrl.navy.mil/emane/0.8.1-r2
wget $URL/emane-0.8.1-release-2.ubuntu-12_04.amd64.tgz
tar xzf emane-0.8.1-release-2.ubuntu-12_04.amd64.tgz
sudo dpkg -i emane-0.8.1-release-2/deb/ubuntu-12_04/amd64/*.deb
If you have an EMANE event generator (e.g. mobility or pathloss scripts) and want to have CORE subscribe to
EMANE location events, set the following line in the /etc/core/core.conf configuration file:
emane_event_monitor = True
Do not set the above option to True if you want to manually drag nodes around on the canvas to update their location
in EMANE.
Another common issue arises when installing EMANE from source: the default configure prefix places the DTD files in /usr/local/share/emane/dtd, while CORE expects them in /usr/share/emane/dtd. A symbolic link will fix this:
sudo ln -s /usr/local/share/emane /usr/share/emane
EMANE is configured through a WLAN node, because EMANE emulates wireless radio networks. Once a node is linked to a WLAN cloud configured with an EMANE model, the radio interface on that node may also be configured separately (apart from the cloud).
Double-click on a WLAN node to invoke the WLAN configuration dialog. Click the EMANE tab; when EMANE
has been properly installed, EMANE wireless modules should be listed in the EMANE Models list. (You may need to
restart the CORE daemon if it was running prior to installing the EMANE Python bindings.) Click on a model name
to enable it.
When an EMANE model is selected in the EMANE Models list, clicking on the model options button causes the GUI to query the CORE daemon for configuration items. Each model has different parameters; refer to the EMANE documentation for an explanation of each item. The default values are presented in the dialog. Clicking Apply and Apply again will store the EMANE model selections.
The EMANE options button allows specifying some global parameters for EMANE, some of which are necessary for distributed operation; see Distributed EMANE.
The RF-PIPE and IEEE 802.11abg models use a Universal PHY that supports geographic location information for determining pathloss between nodes. A default latitude and longitude location is provided by CORE and this location-based pathloss is enabled by default; this is the pathloss mode setting for the Universal PHY. Moving a node on the canvas while the emulation is running generates location events for EMANE. To view or change the geographic location or scale of the canvas, use the Canvas Size and Scale dialog available from the Canvas menu.
Note that conversion between geographic and Cartesian coordinate systems is done using UTM (Universal Transverse
Mercator) projection, where different zones of 6 degree longitude bands are defined. The location events generated by
CORE may become inaccurate near the zone boundaries for very large scenarios that span multiple UTM zones. It is
recommended that EMANE location scripts be used to achieve geo-location accuracy in this situation.
Clicking the green Start button launches the emulation and causes TAP devices to be created in the virtual nodes that
are linked to the EMANE WLAN. These devices appear with interface names such as eth0, eth1, etc. The EMANE
processes should now be running in each namespace. For a four node scenario:
> ps -aef | grep emane
root  1063   969  0 11:46 ?  00:00:00 emane -d --logl 3 -r -f /tmp/pycore.59992/emane4.log
root  1117   959  0 11:46 ?  00:00:00 emane -d --logl 3 -r -f /tmp/pycore.59992/emane2.log
root  1179   942  0 11:46 ?  00:00:00 emane -d --logl 3 -r -f /tmp/pycore.59992/emane1.log
root  1239   979  0 11:46 ?  00:00:00 emane -d --logl 3 -r -f /tmp/pycore.59992/emane5.log
The example above shows the EMANE processes started by CORE. To view the configuration generated by CORE,
look in the /tmp/pycore.nnnnn/ session directory for a platform.xml file and other XML files. One easy
way to view this information is by double-clicking one of the virtual nodes, and typing cd .. in the shell to go up to the
session directory.
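For example, from a node's shell the generated configuration can be inspected like this (a sketch; the exact set of XML files varies by scenario):
# inside a virtual node's shell (double-click the node to open it)
cd ..
ls *.xml
cat platform.xml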
CHAPTER EIGHT
NS-3
This chapter describes running CORE with the ns-3 network simulator.
http://www.nsnam.org
Open a waf shell as root, so that network namespaces may be instantiated by the script with root permissions. As an example, run the ns3wifi.py program, which simply instantiates 10 nodes (by default) and places them on an ns-3 WiFi channel. That is, the script will instantiate 10 namespace nodes and create a special tap device that sends packets between each namespace node and a special ns-3 simulation node, where the tap device is bridged to an ns-3 WiFi network device and attached to an ns-3 WiFi channel.
> cd ns-allinone-3.16/ns-3.16
> sudo ./waf shell
# # use /usr/local below if installed from source
# cd /usr/share/core/examples/corens3/
# python -i ns3wifi.py
running ns-3 simulation for 600 seconds
>>> print session
<corens3.obj.Ns3Session object at 0x1963e50>
>>>
The interactive Python shell allows some interaction with the Python objects for the emulation.
In another terminal, nodes can be accessed using vcmd:
vcmd -c /tmp/pycore.10781/n1 -- bash
root@n1:/tmp/pycore.10781/n1.conf#
root@n1:/tmp/pycore.10781/n1.conf# ping 10.0.0.3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=7.99 ms
64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=3.73 ms
64 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=3.60 ms
^C
--- 10.0.0.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 3.603/5.111/7.993/2.038 ms
root@n1:/tmp/pycore.10781/n1.conf#
The ping packets shown above are traversing an ns-3 ad-hoc WiFi simulated network.
To clean up the session, use the Session.shutdown() method from the Python terminal.
>>> print session
<corens3.obj.Ns3Session object at 0x1963e50>
>>>
>>> session.shutdown()
>>>
A CORE/ns-3 Python script will instantiate an Ns3Session, which is a CORE Session having CoreNs3Nodes, an ns-3 MobilityHelper, and a fixed duration. The CoreNs3Node inherits from both the CoreNode and the ns-3 Node classes; it is a network namespace having an associated simulator object. The CORE TunTap interface is used, represented by an ns-3 TapBridge in CONFIGURE_LOCAL mode, where ns-3 creates and configures the tap device. An event is scheduled to install the taps at time 0.
Note: The GUI can be used to run the ns3wifi.py and ns3wifirandomwalk.py scripts directly. First,
core-daemon must be stopped and run within the waf root shell. Then the GUI may be run as a normal user, and
the Execute Python Script... option may be used from the File menu. Dragging nodes around in the ns3wifi.py
example will cause their ns-3 positions to be updated.
Users may find the files ns3wimax.py and ns3lte.py in that example directory; those files were similarly configured, but the underlying ns-3 support is not present as of ns-3.16, so they will not work. Specifically, ns-3 has to be extended to support bridging the Tap device to an LTE and a WiMAX device.
8.3.1 Ns3Session
The Ns3Session class is a CORE Session that starts an ns-3 simulation thread. ns-3 actually runs as a separate process
on the same host as the CORE daemon, and the control of starting and stopping this process is performed by the
Ns3Session class.
Example:
session = Ns3Session(persistent=True, duration=opt.duration)
Note the use of the duration attribute to control how long the ns-3 simulation should run. By default, the duration is
600 seconds.
Typically, the session keeps track of the ns-3 nodes (holding a node container for references to the nodes). This is
accomplished via the addnode() method, e.g.:
for i in xrange(1, opt.numnodes + 1):
    node = session.addnode(name = "n%d" % i)
8.3.2 CoreNs3Node
A CoreNs3Node is both a CoreNode and an ns-3 node:
class CoreNs3Node(CoreNode, ns.network.Node):
    ''' The CoreNs3Node is both a CoreNode backed by a network namespace and
        an ns-3 Node simulator object. When linked to simulated networks, the
        TunTap device will be used.
    '''
8.3.3 CoreNs3Net
A CoreNs3Net derives from PyCoreNet. This network exists entirely in simulation, using the TunTap device to interact
between the emulated and the simulated realm. Ns3WifiNet is a specialization of this.
As an example, this type of code would be typically used to add a WiFi network to a session:
wifi = session.addobj(cls=Ns3WifiNet, name="wlan1", rate="OfdmRate12Mbps")
wifi.setposition(30, 30, 0)
The above two lines will create a wlan1 object and set its initial canvas position. Later in the code, the newnetif method
of the CoreNs3Node can be used to add interfaces on particular nodes to this network; e.g.:
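A sketch of such a call, following the corens3 example scripts (the prefix object holding the WLAN's IPv4 network is an assumption):
for i in xrange(1, opt.numnodes + 1):
    node = session.addnode(name = "n%d" % i)
    node.newnetif(wifi, ["%s/%s" % (prefix.addr(i), prefix.prefixlen)])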
8.4 Mobility
Mobility in ns-3 is handled by an object (a MobilityModel) aggregated to an ns-3 node. The MobilityModel is able
to report the position of the object in the ns-3 space. This is a slightly different model from, for instance, EMANE,
where location is associated with an interface, and the CORE GUI, where mobility is configured by right-clicking on
a WiFi cloud.
The CORE GUI supports the ability to render the underlying ns-3 mobility model, if one is configured, on the CORE canvas. The example program ns3wifirandomwalk.py, for instance, uses five nodes (by default) in a random walk mobility model. This can be executed by starting the core daemon from an ns-3 waf shell:
# sudo bash
# cd /path/to/ns-3
# ./waf shell
# core-daemon
and, in a separate window, starting the CORE GUI (not from a waf shell), selecting the Execute Python script... option from the File menu, and choosing the ns3wifirandomwalk.py script.
The program invokes ns-3 mobility through the following statement:
session.setuprandomwalkmobility(bounds=(1000.0, 750.0, 0))
This can be replaced by a different mode of mobility, in which nodes are placed according to a constant mobility
model, and a special API call to the CoreNs3Net object is made to use the CORE canvas positions.
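A sketch of that replacement, based on the corens3 examples (the method names setupconstantmobility and usecorepositions are assumptions):
session.setupconstantmobility()
wifi.usecorepositions()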
In this mode, dragging nodes around on the canvas will cause CORE to update the positions of the underlying ns-3 nodes.
CHAPTER NINE
PERFORMANCE
The top question about the performance of CORE is often "how many nodes can it handle?" The answer depends on several factors:
- Hardware - the number and speed of processors in the computer, the available processor cache, RAM memory, and front-side bus speed may greatly affect overall performance.
- Operating system version - Linux or FreeBSD, and the specific kernel versions used, will affect overall performance.
- Active processes - all nodes share the same CPU resources, so if one or more nodes is performing a CPU-intensive task, overall performance will suffer.
- Network traffic - the more packets that are sent around the virtual network, the greater the CPU usage.
- GUI usage - widgets that run periodically, mobility scenarios, and other GUI interactions generally consume CPU cycles that may be needed for emulation.
On a typical single-CPU Xeon 3.0GHz server machine with 2GB RAM running FreeBSD 9.0, we have found it reasonable to run 30-75 nodes running OSPFv2 and OSPFv3 routing. On this hardware CORE can instantiate 100 or more nodes, but at that point it becomes critical what each of the nodes is doing.
Because this software is primarily a network emulator, the more appropriate question is "how much network traffic can it handle?" On the same 3.0GHz server described above, running FreeBSD 4.11, about 300,000 packets-per-second can be pushed through the system. The number of hops and the size of the packets is less important. The limiting factor is the number of times that the operating system needs to handle a packet. The 300,000 pps figure represents the number of times the system as a whole needed to deal with a packet. As more network hops are added, the number of context switches increases and the throughput seen on the full length of the network path decreases.
Note: The right question to ask is "how much traffic?", not "how many nodes?"
For a more detailed study of performance in CORE, refer to the following publications:
J. Ahrenholz, T. Goff, and B. Adamson, Integration of the CORE and EMANE Network Emulators, Proceedings of the IEEE Military Communications Conference 2011, November 2011.
J. Ahrenholz, Comparison of CORE Network Emulation Platforms, Proceedings of the IEEE Military Communications Conference 2010, pp. 864-869, November 2010.
J. Ahrenholz, C. Danilov, T. Henderson, and J.H. Kim, CORE: A real-time network emulator, Proceedings of the IEEE Military Communications Conference 2008, 2008.
CHAPTER TEN
DEVELOPERS GUIDE
This section contains advanced usage information, intended for developers and others who are comfortable with the
command line.
The CORE API is currently specified in a separate document, available from the CORE website.
Similarly, the IPv4 routes Observer Widget runs a command such as the following to display the routing table:
vcmd -c /tmp/pycore.50160/n1 -- /sbin/ip -4 ro
A script named core-cleanup is provided to clean up any running CORE emulations. It will attempt to kill any
remaining vnoded processes, kill any EMANE processes, remove the /tmp/pycore.* session directories, and
remove any bridges or ebtables rules. With a -d option, it will also kill any running CORE daemon.
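For example, a typical invocation that also stops the daemon:
sudo core-cleanup -d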
The netns command is not used by CORE directly. This utility can be used to run a command in a new network
namespace for testing purposes. It does not open a control channel for receiving further commands.
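A minimal sketch of its use (the exact invocation may vary; see the netns man page):
# run a shell inside a fresh, unnamed network namespace
sudo netns /bin/bash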
Here are some other Linux commands that are useful for managing the Linux network namespace emulation.
# view the Linux bridging setup
brctl show
# view the netem rules used for applying link effects
tc qdisc show
# view the rules that make the wireless LAN work
ebtables -L
Below is a transcript of creating two emulated nodes and connecting them together with a wired link:
# create node 1 namespace container
vnoded -c /tmp/n1.ctl -l /tmp/n1.log -p /tmp/n1.pid
# create a virtual Ethernet (veth) pair, installing one end into node 1
ip link add name n1.0.1 type veth peer name n1.0
ip link set n1.0 netns `cat /tmp/n1.pid`
vcmd -c /tmp/n1.ctl -- ip link set lo up
vcmd -c /tmp/n1.ctl -- ip link set n1.0 name eth0 up
vcmd -c /tmp/n1.ctl -- ip addr add 10.0.0.1/24 dev eth0
# create node 2 namespace container
vnoded -c /tmp/n2.ctl -l /tmp/n2.log -p /tmp/n2.pid
# create a virtual Ethernet (veth) pair, installing one end into node 2
ip link add name n2.0.1 type veth peer name n2.0
ip link set n2.0 netns `cat /tmp/n2.pid`
vcmd -c /tmp/n2.ctl -- ip link set lo up
vcmd -c /tmp/n2.ctl -- ip link set n2.0 name eth0 up
vcmd -c /tmp/n2.ctl -- ip addr add 10.0.0.2/24 dev eth0
# bridge together nodes 1 and 2 using the other end of each veth pair
brctl addbr b.1.1
brctl setfd b.1.1 0
brctl addif b.1.1 n1.0.1
brctl addif b.1.1 n2.0.1
ip link set n1.0.1 up
ip link set n2.0.1 up
ip link set b.1.1 up
# display connectivity and ping from node 1 to node 2
brctl show
vcmd -c /tmp/n1.ctl -- ping 10.0.0.2
The above example script can be found as twonodes.sh in the examples/netns directory. Use core-cleanup to
clean up after the script.
The ngctl command is more complex, due to the variety of Netgraph nodes available and each of their options.
ngctl l                       # list active Netgraph nodes
ngctl show e0_n8:             # display node hook information
ngctl msg e0_n0-n1: getstats  # get pkt count statistics from a pipe node
ngctl shutdown \[0x0da3\]:    # shut down unnamed node using hex node ID
There are many other combinations of commands not shown here. See the online manual (man) pages for complete
details.
Below is a transcript of creating two emulated nodes, router0 and router1, and connecting them together with a link:
# create node 0
vimage -c e0_n0
vimage e0_n0 hostname router0
ngctl mkpeer eiface ether ether
vimage -i e0_n0 ngeth0 eth0
vimage e0_n0 ifconfig eth0 link 40:00:aa:aa:00:00
vimage e0_n0 ifconfig lo0 inet localhost
vimage e0_n0 sysctl net.inet.ip.forwarding=1
vimage e0_n0 sysctl net.inet6.ip6.forwarding=1
vimage e0_n0 ifconfig eth0 mtu 1500
# create node 1
vimage -c e0_n1
vimage e0_n1 hostname router1
ngctl mkpeer eiface ether ether
vimage -i e0_n1 ngeth1 eth0
vimage e0_n1 ifconfig eth0 link 40:00:aa:aa:0:1
vimage e0_n1 ifconfig lo0 inet localhost
vimage e0_n1 sysctl net.inet.ip.forwarding=1
vimage e0_n1 sysctl net.inet6.ip6.forwarding=1
vimage e0_n1 ifconfig eth0 mtu 1500
# create a link between n0 and n1
ngctl mkpeer eth0@e0_n0: pipe ether upper
ngctl name eth0@e0_n0:ether e0_n0-n1
ngctl connect e0_n0-n1: eth0@e0_n1: lower ether
ngctl msg e0_n0-n1: setcfg \
    {{ bandwidth=100000000 delay=0 upstream={ BER=0 duplicate=0 } downstream={ BER=0 duplicate=0 } }}
ngctl msg e0_n0-n1: setcfg {{ downstream={ fifo=1 } }}
ngctl msg e0_n0-n1: setcfg {{ downstream={ droptail=1 } }}
ngctl msg e0_n0-n1: setcfg {{ downstream={ queuelen=50 } }}
ngctl msg e0_n0-n1: setcfg {{ upstream={ fifo=1 } }}
ngctl msg e0_n0-n1: setcfg {{ upstream={ droptail=1 } }}
ngctl msg e0_n0-n1: setcfg {{ upstream={ queuelen=50 } }}
CHAPTER ELEVEN
ACKNOWLEDGMENTS
The CORE project was derived from the open source IMUNES project from the University of Zagreb in 2004. In 2006,
changes for CORE were released back to that project, some items of which were adopted. Marko Zec <zec@fer.hr> is
the primary developer from the University of Zagreb responsible for the IMUNES (GUI) and VirtNet (kernel) projects.
Ana Kukec and Miljenko Mikuc are known contributors.
Jeff Ahrenholz has been the primary Boeing developer of CORE, and has written this manual. Tom Goff designed the
Python framework and has made significant contributions. Claudiu Danilov, Rod Santiago, Kevin Larson, Gary Pei,
Phil Spagnolo, and Ian Chakeres have contributed code to CORE. Dan Mackley helped develop the CORE API, originally to interface with a simulator. Jae Kim and Tom Henderson have supervised the project and provided direction.