In VINI Veritas: Realistic and Controlled Network
Experimentation
Andy Bavier, Nick Feamster, Mark Huang, Larry Peterson, and Jennifer Rexford
Princeton University and Georgia Tech
ABSTRACT
This paper describes VINI, a virtual network infrastructure that al-
lows network researchers to evaluate their protocols and services
in a realistic environment that also provides a high degree of con-
trol over network conditions. VINI allows researchers to deploy
and evaluate their ideas with real routing software, traffic loads,
and network events. To provide researchers flexibility in designing
their experiments, VINI supports simultaneous experiments with
arbitrary network topologies on a shared physical infrastructure.
This paper tackles the following important design question: What
set of concepts and techniques facilitate flexible, realistic, and con-
trolled experimentation (e.g., multiple topologies and the ability to
tweak routing algorithms) on a fixed physical infrastructure? We
first present VINI’s high-level design and the challenges of virtual-
izing a single network. We then present PL-VINI, an implementa-
tion of VINI on PlanetLab, running the “Internet In a Slice”. Our
evaluation of PL-VINI shows that it provides a realistic and con-
trolled environment for evaluating new protocols and services.
Categories and Subject Descriptors
C.2.6 [Computer Communication Networks]: Internetworking;
C.2.1 [Computer Communication Networks]: Network Archi-
tecture and Design
General Terms
Design, Experimentation, Measurement, Performance
Keywords
Internet, architecture, virtualization, routing, experimentation
1. Introduction
Researchers continually propose new protocols and services de-
signed to improve the Internet’s performance, reliability, and scal-
ability. Testing these new ideas under realistic network conditions
is a critical step for evaluating their merits and, ultimately, for de-
ploying them in practice. Unfortunately, evaluating new ideas in
operational networks is difficult, because of the need to convince
equipment vendors and network operators to deploy the solution.
Accordingly, researchers are faced with the option of evaluating
their proposals via simulations, driven either by synthetic models
of topology and workloads or by measurements of the existing pro-
tocols, or evaluating their proposals in a small-scale testbed. Ide-
ally, researchers should be able to conduct experiments that are both
realistic and controlled (we define the terms “realism” and “control” precisely in Section 2).
Even services that operate above the network layer are difficult
to evaluate without some level of visibility into and control over
network events at lower layers. Consider a Resilient Overlay Net-
work (RON) that circumvents performance and reachability prob-
lems in the underlying network by directing traffic through interme-
diate hosts [1]. RON can offer service to real users without mod-
ifying the underlying infrastructure; unfortunately, evaluating its
effectiveness requires waiting for network failures to occur “natu-
rally”. Additionally, determining when and why a system like RON
works—and how well it works under various failure scenarios—is
challenging (if not impossible) without access to either information
about failures in the underlying network or the ability to inject such
failures [2].
Researchers evaluating new protocols and services should not be
forced to choose between realistic conditions and controlled exper-
iments. Instead, we believe that the research community needs an
experimental infrastructure that satisfies the following four goals:
Running real routing software: Researchers should be able
to run conventional routing software in their experiments, to
evaluate the effects of extensions to the protocols and to eval-
uate new services over commodity network components.
Exposing realistic network conditions: Researchers should
be able to construct experiments on realistic topologies and
routing configurations. The experiments should be able to
examine system behavior in response to exogenous events,
such as routing-protocol messages from the “real” Internet.
Controlling network events: Researchers should be able to
inject network events (e.g., link failures and flash crowds)
that do not occur often in practice, to enable controlled ex-
periments and fine-grained measurements of these events.
Carrying real traffic: Researchers should be able to eval-
uate their protocols and services carrying application traffic
between real end hosts, to enable measurements of end-to-
end performance and effects of feedback at the end systems.
Satisfying these four goals requires both the tools for building
virtual networks and the infrastructure for deploying them. On the
one hand, PlanetLab is an infrastructure that supports multiple dis-
tributed services running on hundreds of machines throughout the
world [3, 4]. However, conducting controlled and realistic network-
ing experiments on PlanetLab is quite challenging, especially con-
sidering the first three goals above. On the other hand, toolkits like
X-Bone [5] and Violin [6] automate the creation of overlay net-
works using tunnels between hosts, allowing researchers to evaluate
new protocols and services. However, these tools are not integrated
with a fixed, wide-area physical infrastructure that reflects a real
network deployment. Instead, we believe the community needs a
shared infrastructure (like PlanetLab) that can support virtual net-
works (like X-Bone and Violin), in a controlled and realistic envi-
ronment.
To that end, we are building VINI, a Virtual Network Infras-
tructure, for evaluating new protocols and services. We are work-
ing with the National Lambda Rail (NLR) and Abilene Internet2
backbones to deploy VINI nodes that have direct connections to
the routers in these networks and dedicated bandwidth between the
sites. VINI will have its own globally visible IP address blocks, and
it will participate in routing with neighboring domains (we are in discussions with
service providers about dedicated upstream connectivity to the commercial Internet
at a few exchange points). Our goal is
for VINI to become shared infrastructure that enables researchers to
simultaneously evaluate new protocols and services using real traf-
fic from distributed services that are also sharing VINI resources.
The nodes at each site will initially be high-end servers, but may
eventually be programmable hardware devices that can better han-
dle a large number of simultaneous experiments, each running its
own protocols and carrying large volumes of real traffic.
Rather than presenting a complete design and implementation of
VINI, this paper addresses the following important prerequisite de-
sign question: What set of concepts and techniques facilitate flexi-
ble, realistic, and controlled experimentation (e.g., multiple topolo-
gies, ability to tweak routing algorithms, etc.) on a fixed physical
infrastructure? The answer to this question and other insights we
glean from the design and implementation of VINI will provide im-
portant lessons for the design of experimental infrastructures such
as the National Science Foundation’s Global Environment for Net-
work Innovations (GENI) [7, 8] and similar efforts in other coun-
tries. Toward this end, our paper makes three main contributions:
Proposed design of VINI: In designing VINI, we grapple with
the challenges of representing every component in the network:
routers, interfaces, links, routing, and forwarding, as well as the
failures of these components, as discussed in Section 3. In addition
to facing similar challenges as testbeds like PlanetLab, we must
confront additional issues such as sharing routing-protocol port
numbers across experiments, supporting multiple network topolo-
gies, numbering the ends of a virtual link from a common subnet,
forwarding data packets quickly, diverting user traffic into the in-
frastructure, performing network address translation to receive re-
turn traffic from the Internet, and allowing multiple experiments to
share a routing adjacency with a neighboring domain.
Initial prototype of VINI on PlanetLab: In prototyping VINI,
we focus first on the significant challenges of supporting one ex-
periment on the infrastructure at a time, as discussed in Sec-
tion 4. We synthesize many of the software components created
by the networking research community—from software routers to
configuration-management tools—into a single functioning infras-
tructure. We use XORP for routing [9], Click for packet forwarding
and network address translation [10], OpenVPN servers to connect
with end users [11], and rcc for parsing router configuration data
from operational networks to drive our experiments [12]. We use
the PlanetLab nodes in Abilene for prototyping and experimenting,
while working in parallel on deploying equipment for VINI.
Evaluation of PL-VINI: We evaluate our prototype to demon-
strate its suitability for evaluating network architectures and sys-
tems in a realistic and controlled setting, as discussed in Section 5.
We first use microbenchmarks to show that VINI efficiently for-
wards data packets. Our second set of experiments validates VINI’s
behavior in the wide-area. We mirror the Abilene backbone—with
the real topology and the same OSPF configuration—on PlanetLab
nodes co-located at Abilene PoPs. We inject a link failure into our
network and observe the effects of OSPF route convergence on traf-
fic running between two of the nodes.
As we continue to build VINI, we hope to provide the research
community with not only a suitable environment for testing new
network protocols and architectures, but also a credible path to real-
world deployment.
2. VINI Usage Model
Researchers who design, implement, and deploy new network
protocols and architectures may demand different amounts of con-
trol and realism over many features of the network—topology
(including link bandwidths), failure modes, and traffic load—
depending on the aspects of the new protocol or architecture under
test. Although VINI bears resemblance to many existing tools for
evaluating network protocols and architectures, we believe that one
of VINI’s unique strengths is that it provides the experimenter con-
siderably more latitude in introducing various amounts of control
and realism into an experiment. This latitude makes VINI an en-
vironment that is suitable both for running controlled experiments
and for conducting long-running deployment studies.
Control refers to the researcher’s ability to introduce exogenous
events (e.g., failures, changes in traffic volume, etc.) into the sys-
tem. Researchers and protocol designers often need control over
an experiment to study the behavior of a protocol or system under
a wide variety of network conditions. For example, an experiment
that studies how link or node failures affect end-to-end performance
in the context of a protocol modification requires the ability to inject
failures into the routing system (rather than simply waiting for links
or nodes to fail). VINI offers levels of control that are comparable
to those provided in simulators such as ns-2 [13] or SSFNet [14],
or in emulation testbeds (e.g., Emulab [15], DETER [16], Mod-
elnet [17], WAIL [18], or ONL [19]), which allow researchers to
evaluate real prototypes of network protocols and architectures in a
controlled environment.
Realism refers to the ability of a network researcher to subject
a prototype network protocol or architecture to network conditions
(i.e., topology, failures, and traffic) that resemble those of an actual
deployment as closely as possible. Although it is certainly possible
to synthetically generate these features of the network according to
various “realistic models”, VINI’s philosophy is that the best way
to test a prototype under realistic conditions is to actually deploy
the prototype in a real network. For example, as we will see in
Section 5, VINI allows a researcher to deploy and test protocols
on virtual networks that physically mirror the Abilene network, a
capability that is not provided by any existing simulator or testbed.
VINI is most useful for experiments that ultimately require some
level of both realism and control. These experiments fall into two
broad classes: controlled experiments and long-running deploy-
ment studies. Although VINI can support controlled experiments
involving synthetic traffic and network events, these experiments
could arguably run in an existing testbed such as Emulab. How-
ever, a controlled experiment that requires some level of control
over traffic, network events, or topology but eventually wants to
incorporate realistic features can benefit tremendously from VINI.
Once a controlled experiment demonstrates the value of a new
idea, the protocol might be deployed as a long-running study. Real
end hosts—either users or servers—could “opt in” to the prototype
system, to achieve better performance, reliability, or security, or to
access services that are not available elsewhere. In fact, one system
running in VINI might even provide services for another, where end
hosts subscribe to some service that, in turn, runs over a new net-
work architecture deployed on VINI. For example, end hosts want-
ing certain guarantees about the integrity of content might subscribe
to a content delivery service deployed over a secure routing infras-
tructure.
3. VINI Design Requirements
This section outlines the design requirements for a virtual net-
work infrastructure. We focus on the general requirements of such
an infrastructure—and why we believe the infrastructure should
provide those requirements—independent of how any particular in-
stantiation of VINI would meet these requirements.
VINI’s design requirements are motivated by the desire for real-
ism (of traffic, routing software, and network conditions) and con-
trol (over network events), as well as the need to provide sufficient
flexibility for embedding different experimental topologies on a sin-
gle, fixed physical infrastructure. Generally speaking, virtualization
provides much of the machinery for solving this problem; indeed,
virtualization is a common solution to many problems in computer
architecture, operating systems, and even in networked distributed
systems. Still, despite the promises of virtualization, its application
to building communication networks is not straightforward.
As Table 1 shows, constructing a virtual network involves solv-
ing four main problems. First, the infrastructure must provide sup-
port for virtualizing network devices and attachment points because
a network researcher may wish to use the physical infrastructure to
build an arbitrary topology (Section 3.1). Second, once the basic
topology is established, the infrastructure must facilitate running
routing protocols over this virtual topology. This goal is challeng-
ing because each virtual node may have characteristics that are dis-
tinct from physical reality. Section 3.2 discusses these requirements
in more detail. Third, once the virtual network can establish its own
routing and forwarding tables, it must be able to transport traffic to
and from real networks (Section 3.3). Finally, the virtual network
infrastructure should allow multiple network researchers to perform
the above three steps using the same physical infrastructure, which
presents complications that we discuss in Section 3.4.
3.1 Flexible Network Topology
To allow researchers (and practitioners) to evaluate new routing
protocols, architectures, and management systems, VINI must offer
the ability to configure a wide variety of nodes and links. Enabling
this type of flexible network configuration requires satisfying two
main challenges: the ability to configure each of these nodes with
an arbitrary number of interfaces (i.e., the flexibility to give each
node an arbitrary degree), and the ability to provide the appearance
of a physical link between any two virtual nodes (i.e., the flexibil-
ity to establish arbitrary edges in the topology). Neither of these
problems is straightforward: indeed, each problem involves some-
how abstracting (“virtualizing”) physical network components in
new and interesting ways.
Problem: Unique interfaces per experiment. Routing protocols
such as OSPF and IS-IS have configurable parameters for each in-
terface (e.g., weights and areas). To run these protocols, VINI must
enable an experiment to have multiple virtual interfaces on the same
node, but most commodity physical nodes typically have a fixed
(and typically small) number of physical interfaces. Limiting the
flexibility of interface configuration to the physical constraints of
each node is not acceptable: because different experiments may
need more (or fewer) interfaces for each node, massively overpro-
visioning each node with a large number of physical devices would
be prohibitively expensive and may be physically impossible.
Even if a node could be deployed with a plethora of physical
interfaces, we ultimately envision VINI as an infrastructure that is
shared among multiple experiments. Many experiments, each of
which may configure a different number of virtual interfaces for
each node, must be able to share a fixed (and likely small) number
of physical interfaces.
Problem: Virtual point-to-point connectivity. To allow construc-
tion of arbitrary network topologies, VINI must also provide a fa-
cility for constructing virtual “links” (i.e., the appearance of di-
rect physical connectivity between any two virtual nodes). At first
glance, providing this capability might seem simple: VINI can sim-
ply allow an experimenter to create the appearance of a link be-
tween any two arbitrary nodes by building an overlay network of
tunnels. In principle, this approach is the essence of our solution,
but our desire to make VINI look and feel like a “real” network—
not just an overlay—presents additional complications.
Each virtual link must create the illusion of a physical link not
only in terms of providing connectivity (i.e., all physical nodes in
between two endpoints of any virtual link must know how to for-
ward traffic along that link) but also from the standpoint of resource
control (i.e., the performance of any virtual link should ideally be
independent of the other traffic that is traversing that physical link).
A primary concern is that the topology that an experimenter estab-
lishes in VINI should reflect to a reasonable degree the properties
of the corresponding links in the underlying network. Virtual links
in a VINI experiment will, in many cases, not consist of a single
point-to-point physical connection, but may instead be overlaid on
a sequence of physical links.
Providing this type of guarantee is challenging. First, some of
these “links” may bear very little correspondence to how a layer-
two link between the same nodes might actually behave, since each
IP link comprising a single virtual link may experience network
events such as congestion and failures independently. Ultimately,
as we discuss in Section 3.4, the underlying links in the network
may be shared by multiple topologies, and the traffic from one ex-
periment may affect the network conditions seen in another virtual
network. The challenges we face in solving these problems are
similar in spirit to those faced by Emulab [20], but we are grap-
pling with these issues over the Internet, rather than in a controlled
testbed environment.
Problem: Exposure of underlying topology changes. A physical
component and its associated virtual components should share fate.
Topology changes in the physical network should manifest them-
selves in the virtual topology. If a physical link fails, for example,
VINI should guarantee that the virtual links that use that physical
link should see that failure. For example, VINI should not allow
the underlying IP network to mask the failure by dynamically re-
routing around it. Without this requirement, experiments on VINI
would be subject to properties of the underlying network substrate
(e.g., IP routing), and the designer of a new network protocol, archi-
tecture, or management system would have trouble distinguishing
properties of the new system from artifacts of the substrate.
Flexible Network Topology (Section 3.1)
    Virtual point-to-point connectivity: Virtual network devices from common subnets in UML (4.1.3); Tunnels and encapsulation in Click (4.2.1)
    Unique interfaces per experiment: Virtual network devices in UML (4.2.2)
    Exposure of underlying topology changes: Upcalls of layer-3 alarms to virtual nodes
Flexible Routing and Forwarding (Section 3.2)
    Distinct forwarding tables per virtual node: Separate instance of Click on each virtual node (4.2.1)
    Distinct routing processes per virtual node: Separate instance of XORP on each virtual node (4.2.2)
Connectivity to External Hosts (Section 3.3)
    Allowing end hosts to direct traffic through VINI: End-host connection to an OpenVPN server (4.2.3)
    Ensuring return traffic flows back through VINI: Network address translation in Click on egress (4.2.3)
Support for Simultaneous Experiments (Section 3.4)
    Resource isolation between experiments: Virtual servers and network isolation in PlanetLab (4.1.1); Extensions for CPU reservations and priorities (4.1.2)
    Distinct external routing adjacencies: BGP multiplexer to share external BGP sessions
Table 1: Design requirements for VINI. This table also discusses how our prototype implementation of VINI tackles each of these challenges; these solutions are discussed in more detail in Section 4.

3.2 Flexible Forwarding and Routing
VINI must not only provide the flexibility for constructing flexible
network topologies, but it must also carry traffic over these
topologies. This requirement implies that VINI must support capa-
bilities for forwarding (i.e., directing traffic along a particular path)
and routing (i.e., distributing the information that dictates how traf-
fic is forwarded). VINI must provide its users the flexibility to arbi-
trarily control how routing and forwarding over the virtual topolo-
gies is done. Forwarding must be flexible because different exper-
iments may require different virtual topologies. Routing must be
flexible because each experiment may implement entirely different
routing mechanisms and protocols. In this section, we describe how
VINI’s design facilitates node-specific forwarding and routing.
Problem: Distinct forwarding tables per virtual node. As we de-
scribed in Section 3.1, different experiments may require different
topologies: Any given virtual node may connect to a different set
of neighboring nodes. For example, one experiment may use a
topology where every node has a direct point-to-point connection
with every other node, while another experiment may wish to set
up a topology with significantly fewer edges. Supporting flexible
topology construction not only requires supporting flexible inter-
face configuration, but it also implies that each topology will
require different forwarding tables. In addition, VINI must also
allow experimenters to implement completely different forwarding
paradigms than those based on today’s IPv4 destination-based for-
warding. This implies that VINI must allow network experiments
to specify different forwarding mechanisms (e.g., forwarding based
on source and destination, forwarding on tags or flat identifiers,
etc.).
Problem: Distinct routing processes per virtual node. For simi-
lar reasons of flexible experimentation, VINI must enable each ex-
periment to construct its own routing table and implement its own
routing policies. Thus, in addition to giving each slice the abil-
ity to configure its own network topology and forwarding tables,
VINI must also allow each experiment to run its own distinct rout-
ing protocols and processes. These routing processes must
each handle two cases: (1) discovering routes to destinations within
VINI; and (2) discovering routes to external destinations.
3.3 Connectivity to External Hosts
A cornerstone of VINI is the ability to carry traffic to and from
real end hosts, to allow researchers to evaluate their protocols and
services under realistic conditions. This enables closed-loop exper-
iments that capture how network behavior affects end-to-end per-
formance and, in turn, how adaptation at the end system affects the
offered traffic. Supporting real traffic requires the VINI design to
address the following two problems.
Problem: Allowing end hosts to direct traffic through VINI. End
hosts should be able to “opt in” to having their traffic traverse an ex-
periment running on VINI. For example, end users should be able to
connect to nearby VINI nodes and have their packets reach services
running on VINI, as well as external services (e.g., Web sites) on
the existing Internet. This requires VINI to provide the illusion of
an access network between the end host and the VINI node, and en-
sure that all packets to and from the end host (or to/from a particular
application on the end host) reach the virtual node in the appropriate
virtual topology. The virtual nodes can then forward these packets
across the virtual topology using the forwarding tables constructed
by the experimental routing software.
Problem: Ensuring return traffic from external services flows back
through VINI. To support realistic experiments, VINI should be
able to direct traffic to and from external hosts that offer communi-
cation services, even if these hosts do not participate in VINI. For
example, a VINI experiment should be able to act as a stub network
that connects to the Internet to reach a wide range of conventional
services (e.g., Web sites). Directing traffic from VINI to the ex-
ternal Internet is not especially difficult. However, ensuring that
the return traffic is directed to a VINI node, and forwarded through
VINI and onward to the end host, is more challenging.
Solving these two problems would enable a wide range of exper-
iments with either synthetic or real users running real applications
that direct traffic over experimental network protocols and services
running on VINI. Ultimately, we envision that some VINI experi-
ments could provide long-running services for end users and appli-
cations that need better performance, security, and reliability than
they have today.
3.4 Support for Simultaneous Experiments
VINI should support multiple simultaneous experiments to amor-
tize the cost of deploying and running the physical infrastructure.
In addition, running several experiments at the same time allows
researchers to provide long-running services that attract real users,
while still permitting other researchers to experiment with new pro-
tocols and services. Supporting multiple virtual topologies at the
same time introduces two main technical challenges in the design
of VINI.
Problem: Resource isolation between simultaneous experiments.
Each physical node should support multiple virtual nodes that are
each part of its own virtual topology. To provide virtual nodes with
their own dedicated resources, each physical node should allocate
and schedule resources (e.g., CPU, bandwidth, memory, and stor-
age) so that the run-time behavior of one experiment does not ad-
versely affect the performance of other experiments running on the
same node. Furthermore, the resource guarantees must be strict, in
the sense that they should afford an experiment no more—and no
less—resources than allocated, to ensure repeatability of the exper-
iments. Each virtual node also needs its own name spaces (e.g., file
names) and IP addresses and port numbers for communicating with
the outside world.
Problem: Distinct external routing adjacencies per virtual node.
Multiple virtual nodes may need to exchange routing information,
such as BGP announcements, with the same operational router in
the external Internet. This is crucial for allowing each virtual topol-
ogy to announce its own address space to the external Internet and
control where its traffic enters and leaves the network. However, ex-
ternal networks are not likely to establish separate routing-protocol
adjacencies with each virtual node, for two reasons. First, oper-
ational networks might reasonably worry about the stability of a
routing-protocol session running on prototype software as part of
a research experiment, especially when session failures and im-
plementation errors might compromise routing stability in the real
Internet. Second, maintaining multiple routing-protocol sessions
(each with a different virtual node) would impose a memory, band-
width, and CPU overhead on the operational router. VINI must
address these issues to strike the right trade-off between providing
flexibility (for experimenters) and robustness (for the external net-
works).
In the next section, we describe how we address these challenges
in our prototype of VINI running on the PlanetLab nodes in the
Abilene backbone.
4. A VINI Implementation on PlanetLab
As a first step toward realizing VINI, we have built an initial pro-
totype on the PlanetLab nodes in the Abilene backbone. Although
we do not (yet) have dedicated bandwidth between the nodes or up-
stream connectivity to commercial ISPs, this environment enables
us to address many of the challenges of supporting virtual networks
on a fixed physical infrastructure. For extensibility and ease of pro-
totyping, we place many key functions in user space through careful
configuration of the routing and forwarding software. In this sec-
tion, we describe PL-VINI, our extensions to PlanetLab to support
experimentation with network protocols and services, and “Internet
In a Slice” (IIAS), a network architecture that PL-VINI enables.
Table 1 summarizes how the PL-VINI prototype addresses the
problems outlined in Section 3. The table emphasizes that we must
solve several problems in user space software (e.g., providing each
experiment with point-to-point connectivity and unique network in-
terfaces) that would ideally be addressed in the kernel or dedicated
hardware. This division is a direct consequence of our decision to
implement our initial VINI prototype on PlanetLab; since Planet-
Lab must continue to support a large user base, we cannot make
extensive changes to the kernel. We expect more functionality to
be provided by the infrastructure itself as we gain insight from our
initial experiences.
4.1 PL-VINI: PlanetLab Extensions for VINI
Our prototype implementation of VINI augments PlanetLab with
features that improve its support for networking experiments. This
goal appears to depart somewhat from PlanetLab’s original mission,
which was to enable wide deployment of overlays—distributed sys-
tems that, like networks, may route packets, but that communicate
using sockets (e.g., UDP tunnels). PL-VINI does, however, preserve
PlanetLab’s vision by enabling interesting and meaningful network
protocols and services to be evaluated on an overlay; we describe
one such network design in Section 4.2.
4.1.1 PlanetLab: Slices and Resource Isolation
PlanetLab was a natural choice for a proof-of-concept VINI pro-
totype and deployment, both due to its large physical infrastruc-
ture and the virtualization it already provides. Virtualization—the
ability to partition a real node and its resources into an arbitrary
number of virtual nodes and resource pools—is a defining require-
ment of VINI. PlanetLab isolates experiments in virtual servers
(VServers) [21]. Each VServer is a lightweight “slice” of the node
with its own namespace. Because of the isolation provided by Plan-
etLab, multiple PL-VINI experiments can run on the same Planet-
Lab nodes simultaneously in different slices. VINI also leverages
PlanetLab’s slice management infrastructure.
VServers enable tight control over resources, such as CPU and
network bandwidth, on a per-slice (rather than a per-process or a
per-user) basis. The PlanetLab CPU scheduler grants each slice a
“fair share” of the node’s available CPU, and supports temporary
share increases (e.g., via Sirius [22]). Similarly, the Linux hierar-
chical token bucket (HTB) scheduler [23] provides fair share ac-
cess to, and minimum rate guarantees for, outgoing network band-
width. Network isolation on PlanetLab is provided by a module
called VNET [24] that tracks and multiplexes incoming and outgo-
ing traffic. VNET provides each slice with the illusion of root-level
access to the underlying network device. Each slice has access only
to its own traffic and may reserve specific ports.
4.1.2 Improved CPU Isolation
PlanetLab provides a fair share of the CPU resources to each
slice, but fluctuations in the CPU demands of other slices can make
running repeatable networking experiments challenging. If a node
supports a large number of slices, a routing process running in one
slice may not have enough processing resources to keep up with
sending heartbeat messages and responding to events, and a for-
warding process may not be able to maintain a desired throughput.
Many slices simultaneously contending for the CPU can also lead
to jitter in scheduling a forwarding process, which manifests itself
in an overlay network as added latency.
PL-VINI leverages two recently exposed CPU scheduling knobs
on PlanetLab: CPU reservations and Linux real-time priorities [25].
A CPU reservation of 25% provides the slice with a minimum of
25% of the CPU during the times that it is active, though it may
get more than this if no “fair share” slices are running. Boosting a
process to real-time priority on Linux cuts the time between when
a process wakes up (e.g., receives a packet) and it runs. A real-time
process that becomes runnable immediately jumps to the head of
the run-queue and preempts any non-real-time process. Note that
even real-time processes are still subject to PlanetLab’s CPU reser-
vations and shares, so a real-time process that runs amok cannot
lock the machine. These two PlanetLab capabilities provide greater
isolation for a VINI experiment running in a slice. In Section 6.2 we
describe several additional extensions we are exploring to provide
even better isolation between PL-VINI slices.
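PL-VINI relies on PlanetLab-specific scheduler knobs for reservations and priority boosts; as a minimal sketch, the following shows the generic Linux mechanism that underlies the real-time boost described above (the priority value and the idea of applying it to a slice's Click process are illustrative assumptions, not PL-VINI's actual tooling):

```python
import os

def boost_to_realtime(pid: int, priority: int = 50) -> None:
    """Move `pid` into the SCHED_FIFO real-time class at the given priority.

    A SCHED_FIFO process that becomes runnable preempts all normal
    (SCHED_OTHER) processes, which is what shortens the gap between a
    packet arriving and the forwarder actually running. Requires root
    (CAP_SYS_NICE).
    """
    os.sched_setscheduler(pid, os.SCHED_FIFO, os.sched_param(priority))

if __name__ == "__main__":
    # Example: boost the calling process (pid 0); on PlanetLab the boost
    # would instead be applied to the slice's Click process.
    boost_to_realtime(0)
    print("scheduler policy:", os.sched_getscheduler(0))
```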
4.1.3 Virtual Network Devices
A networking experiment running in a slice in user space needs
the illusion that each virtual node has access to one or more network
devices. Our prototype leverages User-Mode Linux (UML) [26],
a full-featured Linux kernel that runs as a user-space process, for
this purpose. For each user-space tunnel in our overlay topology,
PL-VINI creates a pair of interfaces on a common subnet in the
UML instances at its endpoints. Routing software running inside
UML is in this way made aware of the structure of an overlay net-
work. PL-VINI then maps packets sent on these network interfaces
to the appropriate tunnel at a layer beneath UML. We note that Vi-
olin [6] also uses an overlay network to connect UML instances.
However, the goal of Violin is to hide topology from Grid applica-
tions, whereas PL-VINI uses network interfaces in UML to expose
a tunnel topology to the routing software that runs above it.
Our prototype also uses a modified version of Linux’s TUN/TAP
driver to allow applications running in the networking experiment’s
slice to send and receive packets on the overlay. A process run-
ning in user space can read from /dev/net/tunX to receive pack-
ets routed by the kernel to the TUN/TAP device; similarly, pack-
ets written to /dev/net/tunX are injected back into the kernel’s
network stack and processed as if they arrived from a network
device. Our modifications to the driver allow it to preserve the
isolation between different slices on PlanetLab: every slice sees
a single TUN/TAP interface with the same IP address, but our
changes allow multiple processes (in different slices) to read from
/dev/net/tunX simultaneously, and each will only see packets
sent by its own slice.
For PL-VINI, we create a virtual Ethernet device called tap0 on
every PlanetLab node. We give each tap0 device a unique IP ad-
dress chosen from the 10.0.0.0/8 private address space. This means
that each PlanetLab node’s kernel will route all packets matching
10.0.0.0/8 to tap0 and onto that slice’s own overlay network.
4.2 IIAS: “Internet In a Slice” Architecture
The Internet In a Slice (IIAS) is the example network architecture
that we run on PL-VINI. Researchers can use IIAS to conduct
controlled experiments that evaluate the existing IP routing proto-
cols and forwarding mechanisms under realistic conditions. Alter-
natively, researchers can view IIAS as a reference implementation
that they can modify to evaluate extensions to today’s protocols and
mechanisms. An IIAS consists of five components [27]:
1. a forwarding engine for the packets carried by the overlay (an
overlay router);
2. a smart method of configuring the engine’s forwarding tables
(a control plane);
3. a mechanism for clients to opt-in to the overlay and divert
their packets to it, so that the overlay can carry real traffic (an
overlay ingress);
4. a means of exchanging packets with servers that know noth-
ing about the overlay, since most of the world exists outside
of it (an overlay egress); and
5. a collection of distributed machines on which to deploy the
overlay, so that it can be properly evaluated and can attract
real users.
[Figure 1: An IIAS router on PL-VINI. XORP runs inside UML with virtual interfaces eth0–eth3, connected through uml_switch to a Click process that holds the FIB, the encapsulation table, and the UDP tunnels; Click also attaches to the local tap0 device.]

Our IIAS implementation synthesizes many components created
by the networking research and open source communities. IIAS
employs the Click modular software router [10] as the forward-
ing engine, the XORP routing protocol suite [9] as the control
plane, OpenVPN [11] as the ingress mechanism, and performs NAT
(within Click) at the egress. Since we run IIAS on PL-VINI, IIAS
can also use PL-VINI's tap0 device as an ingress/egress mecha-
nism for applications running on a PL-VINI node.
Figure 1 shows the IIAS router supported by PL-VINI. Routing
protocols implemented by XORP, running unmodified in a UML
kernel process, construct a view of the overlay network topology
exposed by the virtual Ethernet interfaces. Each XORP instance
then configures a forwarding table (FIB) implemented in a Click
process running outside of UML. This means that data packets for-
warded by the overlay do not enter UML, which leads to better per-
formance since forwarding data packets in the UML kernel incurs
nearly 15% additional overhead [6]. Next we discuss significant
features of each component in the IIAS software.
4.2.1 Click: Links and Packet Forwarding
IIAS uses the Click modular software router [10] as its virtual
data plane. Our Click configuration consists of five components
that create the illusion of point-to-point links to other virtual nodes
and enable the virtual nodes to forward data packets:
UDP tunnels: UDP tunnels (i.e., sockets) are the links in the
IIAS overlay network. Each Click instance is configured with
tunnels to each of its neighbors in the overlay.
Local interface: Click reads and writes Ethernet packets to
PL-VINI's local tap0 interface. Packets sent by local appli-
cations to a 10.0.0.0/8 destination are forwarded by the kernel
to tap0 and are received by Click. Likewise, Click writes
packets destined for tap0's IP address to the interface, in-
jecting the packets into the kernel which delivers them to the
proper application.
Forwarding table: Click's forwarding table maps IP pre-
fixes (both within and outside of IIAS's private address
space) to “next hops” within IIAS. The forwarding table is
initially empty and is populated by XORP. Since XORP sees
a network of virtual Ethernet interfaces, the “next hops” in-
serted by XORP are the IP addresses of the virtual interfaces
on neighboring nodes.
Encapsulation table: The preconfigured encapsulation table
matches the “next hop” selected by the forwarding table to
a UDP tunnel by mapping it to the public IP address of a
PlanetLab node.
UML Switch: Click exchanges Ethernet packets with the
local UML instance via a virtual switch (uml_switch) dis-
tributed with UML. We wrote a Click element so that Click
could connect to this virtual switch.
Two points about the IIAS data plane are worthy of note. First,
the forwarding table in IIAS controls both how data and control
traffic is forwarded between IIAS nodes, and how traffic is for-
warded to external destinations (i.e., on the “real” Internet). Sec-
ond, though IIAS currently performs IPv4 forwarding, it can also
support new forwarding paradigms beyond IP. Our design has no
fundamental dependence on IP since Click exchanges Ethernet
frames with UML (via the virtual switch) and the local tap0 in-
terface. One could implement a new addressing scheme in IIAS,
for instance based on DHTs, simply by writing new forwarding and
encapsulation table elements.
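As a simplified model of the two-stage lookup just described (not the actual Click configuration), the sketch below maps a destination to a next-hop virtual interface address via longest-prefix match in the FIB, then maps that virtual address to a public tunnel endpoint via the encapsulation table. The addresses and UDP port are illustrative.

```python
import ipaddress
import socket

# Forwarding table: destination prefix -> next-hop virtual interface address.
# In the real system this table is populated by XORP.
FIB = {
    ipaddress.ip_network("10.1.2.0/24"): "10.1.2.3",
    ipaddress.ip_network("0.0.0.0/0"):   "10.1.1.3",   # default: via egress node
}
# Encapsulation table: virtual address -> (public IP of hosting node, UDP port).
ENCAP = {
    "10.1.2.3": ("198.32.154.250", 33434),
    "10.1.1.3": ("198.32.154.226", 33434),
}

def forward(packet: bytes, dst_ip: str, sock: socket.socket) -> None:
    """Send `packet` over the UDP tunnel toward the longest-prefix match."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in FIB if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)   # longest-prefix match
    next_hop_virtual = FIB[best]
    tunnel_endpoint = ENCAP[next_hop_virtual]
    sock.sendto(packet, tunnel_endpoint)

if __name__ == "__main__":
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    forward(b"\x45" + b"\x00" * 19, "10.1.2.2", s)       # toy 20-byte "packet"
```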
4.2.2 XORP: Routing
IIAS uses the XORP open-source routing protocol suite [9] as its
control plane. XORP implements a number of routing protocols,
including BGP, OSPF, RIP, PIM-SM, IGMP, and MLD. XORP ma-
nipulates routes in the data plane through a Forwarding Engine Ab-
straction (FEA); supported forwarding engines include the Linux
kernel routing table and the Click modular software router (which
is why we chose XORP for IIAS).
The main complication of running XORP on PlanetLab is the
lack of physical interfaces to correspond to each virtual link in our
configuration. XORP generally assumes that each link to a neigh-
boring router is associated with a physical interface; OSPF also as-
signs costs to network interfaces. In our Click data plane, inter-
faces conceptually map to sockets and links to tunnels. Therefore,
to present XORP with a view of multiple physical interfaces, we
run it in UML and map packets from each UML interface to the
appropriate UDP tunnel in Click.
An important feature of IIAS is that it decouples the control and
data planes by placing the routing protocol in a different virtual
world than the forwarding engine. In fact, decoupling the control
and data planes in this way means that XORP could run in a differ-
ent slice than Click, or even on a different node.
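To illustrate this decoupling, the following hypothetical sketch has a control-plane process push route updates to a forwarding process over a socket; because the two halves only share a network endpoint, they could run in different slices or on different nodes. This is not XORP's actual FEA/Click interface: the message format and port number are invented for illustration.

```python
import json
import socket

CONTROL_PORT = 9999   # hypothetical update channel

def push_route(prefix: str, next_hop: str, host: str = "127.0.0.1") -> None:
    """Control-plane side: announce one FIB entry to the forwarder."""
    msg = json.dumps({"op": "add", "prefix": prefix, "next_hop": next_hop})
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg.encode(), (host, CONTROL_PORT))

def serve_fib_updates(fib: dict) -> None:
    """Data-plane side: apply updates to the in-memory forwarding table."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("0.0.0.0", CONTROL_PORT))
        while True:
            data, _ = s.recvfrom(4096)
            update = json.loads(data)
            if update["op"] == "add":
                fib[update["prefix"]] = update["next_hop"]
```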
4.2.3 OpenVPN and NAT: External Connectivity
IIAS is intended to enable realistic experiments by carrying real
traffic generated by outside hosts, as well as applications running on
the local node. IIAS uses OpenVPN [11] as an ingress mechanism;
IIAS runs an OpenVPN server on a set of designated ingress nodes,
and hosts “opt-in” to a particular instance of IIAS by connecting an
OpenVPN client that diverts their traffic to the server. OpenVPN is
a robust, open-source VPN access technology that runs on a wide
range of operating systems and supports a large user community.
Note that OpenVPN creates a TUN/TAP device on the client to in-
tercept outgoing packets from the operating system, just as we do
in PL-VINI and IIAS.
[Figure 2: The life of a packet in IIAS (shown shaded) running on PL-VINI (dotted box). A Firefox client at 128.112.93.81 sends a packet (src 10.1.87.2, dst 64.236.16.20) through OpenVPN to the IIAS node at 198.32.154.170; Click forwards it over UDP tunnels via 198.32.154.250 to the egress node 198.32.154.226, which NATs it out to CNN.]
IIAS’s Click forwarder implements NAPT (Network Address
and Port Translation) to allow hosts participating in IIAS to ex-
change packets with external hosts that have not “opted-in” (like a
Web server). IIAS forwards packets destined for an external host
to an egress point, where they exit IIAS via NAPT. This involves
rewriting the source IP address of the packet to the egress node's
public IP address, and rewriting the source port to an available local
port. After passing through Click’s NAPT element, a packet is sent
out and forwarded to the destination by the “real” Internet. Note
that, since the packets reaching the external host bear the source
address of the IIAS egress node, return traffic is sent back to that
node, where it is intercepted by IIAS and forwarded back to the
client.
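The egress logic can be summarized with a simplified sketch (not the actual Click NAPT element): outgoing packets have their private source address and port rewritten to the egress node's public address and a fresh local port, and a reverse map restores the mapping for return traffic. The public address and port range are examples.

```python
import itertools

PUBLIC_IP = "198.32.154.226"        # egress node's public address (example)
_ports = itertools.count(50000)     # pool of available local ports (example range)
_reverse = {}                       # public port -> (private ip, private port)

def rewrite_outgoing(src_ip: str, src_port: int, dst: tuple) -> tuple:
    """Rewrite an outgoing packet's source; remember how to undo it."""
    public_port = next(_ports)
    _reverse[public_port] = (src_ip, src_port)
    return (PUBLIC_IP, public_port, dst)

def rewrite_return(dst_port: int) -> tuple:
    """Return traffic arrives addressed to (PUBLIC_IP, dst_port);
    restore the original private destination so the packet can re-enter IIAS."""
    return _reverse[dst_port]

if __name__ == "__main__":
    new_src = rewrite_outgoing("10.1.87.2", 34567, ("64.236.16.20", 80))
    print("outgoing rewritten to:", new_src)
    print("return maps back to:", rewrite_return(new_src[1]))
```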
PL-VINI's tap0 interface provides another ingress/egress mech-
anism for other applications running in the same slice as IIAS. For
example, in the experiments described in Section 5, we send iperf
packets through the overlay using tap0.
4.2.4 IIAS Summary: Life of a Packet
Figure 2 ties together the discussion of the various pieces of IIAS
by illustrating the life of a packet as it journeys through the IIAS
overlay. In Figure 2, the Firefox web browser on the client machine
at left is sending a packet to www.cnn.com at right through IIAS
(shown shaded). The steps along the packet’s journey are:
1. Firefox sends a packet to CNN. The routing table of the client
directs the packet to the local tap0 device that was created by
OpenVPN. This device bounces the packet up to the Open-
VPN client on the same machine. The packet has a source
of 10.1.87.2 (the local tap0 address) and a destination of
64.236.16.20 (the IP address of CNN’s web server).
2. The OpenVPN client tunnels the packet over UDP to an
OpenVPN server running on a nearby IIAS node. The packet
is encapsulated in IP, UDP, and OpenVPN encryption head-
ers. The OpenVPN server removes the headers and forwards
the original packet to Click over a local Unix domain socket.
3. Click looks up 64.236.16.20 in its forwarding table and maps
it to the IP address of a UML interface on a neighboring node.
Click consults the encapsulation table to map the UML ad-
dress to 198.32.154.250 (the real IP address of the next hop),
and sends the packet over a UDP tunnel to the latter address.
The same process happens again on the next node.
4. The Click process running on 198.32.154.226 receives the
original packet from a UDP tunnel, consults the forwarding
table, and sees that it is the egress node for 64.236.16.20.
Click sends the packet through its NAPT element, which
rewrites the source IP address to the local eth0 address, and
rewrites the source port to an available local port (port rewrit-
ing is not shown in Figure 2). Click then directs the packet to
www.cnn.com via the public Internet.

[Figure 3: DETER topology for microbenchmarks. Src (10.1.1.2) connects to Fwdr (10.1.1.3, 10.1.2.3), which connects to Sink (10.1.2.2), over 1 Gb/s links.]
Then, the packet traverses the rest of the path through the Internet
to the CNN Web server. The response packets from CNN have a
destination IP address of 198.32.154.226, ensuring they return to
the client through the VINI node.
5. Preliminary Experiments
In this section, we describe two experiments that we have run in
IIAS on PL-VINI. These experiments are intended not to demon-
strate PL-VINI as a “final product”, but rather as a proof of concept
that highlights the efficiency, correctness, and utility of the VINI
design. The microbenchmark experiments (Section 5.1) demon-
strate that PL-VINI provides a level of support for networking ex-
periments comparable to running on dedicated hardware, allowing
the experiment’s throughput and traffic flow characteristics to mir-
ror that of the underlying network. Next, intra-domain routing ex-
periments (Section 5.2) on the Abilene topology demonstrate that
meaningful results for such experiments can be obtained using PL-
VINI on PlanetLab.
5.1 Microbenchmarks
The purpose of the microbenchmarks is to demonstrate that PL-
VINI can support an interesting networking experiment on Planet-
Lab. To this end, we first establish that the IIAS overlay behaves
like a real network when run on dedicated hardware in an isolated
environment, and then show that PL-VINI can provide IIAS with a
similar environment on PlanetLab.
In order to provide a realistic environment for network experi-
ments, PL-VINI must enable IIAS to deliver along two dimensions:
Capacity: To attract real users and real traffic, IIAS must be
able to forward packets at a relatively high rate. If IIAS’s
performance is bad, nobody will use it.
Behavior: To boost our confidence that observed anomalies
are meaningful network events and not undesirable artifacts
of the PL-VINI environment, IIAS should exhibit roughly the
same behavioral characteristics as the underlying network.
We run two sets of experiments to measure the capacity and be-
havior of IIAS. The first set of experiments runs on dedicated ma-
chines on DETER [16], which is based on Emulab [15]; we quantify
the efficiency of the IIAS overlay by evaluating the performance of
DETER’s emulated network topology versus IIAS running over that
same topology. The second set of experiments repeats the DETER
experiments on PlanetLab; here we quantify the effects of moving
IIAS from dedicated hardware (DETER/Emulab) to a shared plat-
form (PlanetLab), and then show how PL-VINI’s support for CPU
reservations and real-time priority reduce CPU contention.
[Figure 4: Overlay topology on DETER. iperf and Click run on Src (192.168.1.1) and Sink (192.168.1.2), with Click alone on Fwdr; the Click instances are connected by UDP tunnels over the 1 Gb/s links, using overlay addresses 10.1.1.2, 10.1.1.3, 10.1.2.3, and 10.1.2.2.]

            mean (Mb/s)   stddev   mean CPU%
Network     940           0        48
IIAS        195           0.843    99
Table 2: TCP throughput test on DETER testbed

            min     avg     max     mdev    % loss
Network     0.193   0.414   0.593   0.089   0
IIAS        0.269   0.547   0.783   0.080   0
Table 3: ping results on DETER; units are ms

The microbenchmark experiments are run using iperf version
1.7.0 [28]. We measure capacity using iperf's TCP throughput test
to send 20 simultaneous streams from a client to a server through
the underlying network and PL-VINI. We measure behavior with
iperf's constant-bit-rate UDP test, observing the jitter and loss
rate of packet streams (with 1430-byte UDP payloads) of varying
rates. Each test is run 10 times and we report the mean and standard
deviation. When measuring the capacity of PL-VINI, we also report
the mean CPU percentage consumed by the Click process (using the
TIME field as reported by ps).
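The exact measurement scripts are not part of the paper; as a rough sketch of this kind of harness, the snippet below runs the iperf TCP test with 20 parallel streams ten times and reports the mean and standard deviation of the aggregate throughput. The server address and the parsing of iperf's output are assumptions about a typical iperf 1.7-era text report.

```python
import statistics
import subprocess

SERVER = "10.1.2.2"   # hypothetical iperf server address inside the overlay

def run_tcp_test() -> float:
    """Run one 30-second, 20-stream iperf TCP test and return Mb/s."""
    out = subprocess.run(
        ["iperf", "-c", SERVER, "-P", "20", "-t", "30", "-f", "m"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Take the last bandwidth figure printed (the [SUM] line in multi-stream
    # runs); this parsing is a simplification.
    mbps = [float(tok) for line in out.splitlines()
            for tok, nxt in zip(line.split(), line.split()[1:] + [""])
            if nxt.startswith("Mbits/sec")]
    return mbps[-1]

if __name__ == "__main__":
    samples = [run_tcp_test() for _ in range(10)]
    print(f"mean {statistics.mean(samples):.1f} Mb/s, "
          f"stddev {statistics.stdev(samples):.2f}")
```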
5.1.1 Microbenchmark #1: Overlay Efficiency
First we compare the capacity and behavior of IIAS’s user-space
Click forwarder versus in-kernel forwarding. The experiments are
run on the DETER testbed, which allows a researcher to specify an
arbitrary network topology for an experiment, including emulated
link characteristics such as delay and loss rate, using a ns script.
The machines used in the experiment are pc2800 2.8 GHz Xeons
with 2 GB memory and five 10/100/1000 Ethernet interfaces, and
are running Linux 2.6.12.
Our experiments run on a simple topology shown in Figure 3,
consisting of three machines connected by Gigabit Ethernet links
that do not have any emulated delay or loss. In this topology, the
machine Fwdr is configured as an IP router; a packet sent from Src
to Sink, or vice versa, is forwarded in Fwdr's kernel. We compare
the performance of the network with that of IIAS running on the
same three nodes. We configure a Linux TUN/TAP device on each
node to divert packets sent by iperf to the local Click process.
Click then tunnels the packets over the topology as shown in Fig-
ure 4. The key difference between the two scenarios is that IIAS
makes the forwarding decisions in user space rather than in the
Linux kernel.
Table 2 shows the results of the TCP throughput test for the IIAS
overlay versus the underlying network. IIAS is not as efficient as
the network alone: per unit of CPU, it achieves only about 10% of
the network's throughput (195 Mb/s at 99% CPU versus 940 Mb/s
at 48% CPU). The throughput achieved by the
Linux kernel, 940Mb/s, was roughly the maximum supported by
the configuration, and even at this maximum rate the CPU of Fwdr
was 52% idle. In comparison, Click’s forwarding rate is CPU-
bound. Running strace on the Click process indicates (not sur-
prisingly) that the issue is system-call overhead: for each packet
forwarded, Click calls poll, recvfrom, and sendto once, and
gettimeofday three times, with an estimated cost of 5µs per call.
For sendto and recvfrom, this cost appears to be independent of
packet size. Reducing this overhead is future work. However, stepping
back, we observe that even 200 Mb/s is a significant amount of
throughput for a networking experiment, as it far outstrips the
available bandwidth between edge hosts in the Internet today.

[Figure 5: PlanetLab topology for microbenchmarks. Chicago, planetlab1.chin (src); New York, planetlab1.nycm (fwdr); Washington DC, planetlab1.wash (sink); measured latencies of 20.2 ms between Chicago and New York and 4.5 ms between New York and Washington DC.]

                    Mb/s    stddev   CPU%
Network             90.8    0.53     N/A
IIAS on PlanetLab   22.5    4.01     13
IIAS on PL-VINI     86.2    0.64     40
Table 4: TCP throughput test on PlanetLab

                    min     avg     max     mdev    loss
Network             24.4    24.5    28.2    0.2     0%
IIAS on PlanetLab   24.7    27.7    80.9    4.8     0%
IIAS on PL-VINI     24.7    25.1    28.6    0.38    0%
Table 5: ping results on PlanetLab; units are ms

                    mean    stddev
Network             0.27    0.16
IIAS on PlanetLab   2.4     3.7
IIAS on PL-VINI     1.3     0.9
Table 6: Summary of jitter results on PlanetLab; units are ms
Next we compare the fine-grained behavior of the network and
IIAS. Table 3 shows the results of measuring latency on the overlay
and network using ping -f -c 10000. We see that IIAS adds
about 130µs latency on average, but doesn’t change the standard
deviation of ping times. Likewise, running UDP CBR streams at
rates from 1Mb/s to 100Mb/s over the network and IIAS did not
reveal significant jitter in either case. In all UDP CBR tests, iperf
observed jitter of less than 0.1ms and no packet losses.
5.1.2 Microbenchmark #2: Overlay on PlanetLab
The next set of microbenchmarks contrasts the behavior of IIAS
running on dedicated hardware (DETER) to a shared platform
(PlanetLab) and PL-VINI. Our main concern is that the activities of
other users on a shared system like PlanetLab can negatively affect
the performance of IIAS. To test this, we repeat the experiments
of Section 5.1.1 on three PlanetLab nodes co-located with Abilene
PoPs. Figure 5 shows the topology of the PlanetLab nodes and the
underlying Abilene network, as revealed by running traceroute
between the three nodes. The Chicago and Washington, D.C. Plan-
etLab nodes are 1.4 GHz P-III, and the New York node is a 1.267
GHz P-III; all nodes have 1 GB of memory. Again, we compare the
capacity and behavior of IIAS with that of the underlying network.
Note that the network traffic between Chicago and Washington tra-
verses the three routers only, but IIAS traffic traverses four router
hops since it is forwarded by the Click process on the New York
node and so visits the local router twice. Because the links in the
Abilene backbone are lightly loaded, we do not expect to see sig-
nificant interference from cross traffic.
[Figure 6: Packet losses in IIAS on PlanetLab. Percent packet loss versus UDP traffic rate (Mb/s) for the network and for IIAS: (a) with default share; (b) with PL-VINI.]

PlanetLab makes running meaningful experiments challenging
because it is shared among many users, whose actions may change
the experimental results. The Emulab microbenchmarks indicate
that CPU contention in particular is likely to be a problem for PL-
VINI on PlanetLab; however, PL-VINI uses CPU reservations and
real-time priorities to provide consistent CPU scheduling behavior.
Therefore, we run our experiments from Section 5.1.1 using Plan-
etLab’s default fair share (“IIAS on PlanetLab”), as well as a 25%
CPU reservation plus a priority boost for the IIAS Click process
(“IIAS on PL-VINI” in the tables and graphs). The CPU reserva-
tion improves the overall capacity of IIAS by giving it more CPU,
while the boost to real-time priority reduces the scheduling latency
of the Click process and so improves end-to-end overlay latency.
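PL-VINI obtains its CPU reservations from the PlanetLab node scheduler, but the real-time priority boost corresponds to a standard Linux facility. The sketch below is a minimal illustration, not PL-VINI's actual code, of how a forwarding process such as Click might be moved into the SCHED_FIFO real-time class; the priority value is arbitrary.

```python
import os

def boost_to_realtime(pid: int, priority: int = 50) -> None:
    """Place an existing process (e.g., a Click forwarder) into the
    SCHED_FIFO real-time class so the kernel runs it ahead of ordinary
    time-shared tasks, reducing its scheduling latency.

    Requires root (or CAP_SYS_NICE); SCHED_FIFO priorities range 1-99.
    """
    os.sched_setscheduler(pid, os.SCHED_FIFO, os.sched_param(priority))

if __name__ == "__main__":
    import sys
    boost_to_realtime(int(sys.argv[1]))   # usage: boost.py <forwarder-pid>
```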
Table 4 shows the results of the bandwidth test with both sets of
CPU scheduling parameters. We note that, with PL-VINI, IIAS ap-
proaches the underlying network in both observed throughput and
variability of the result. Running IIAS on PL-VINI provides a 4X
increase in throughput and reduces variability by over 80%.
Focusing on fine-grained behavior of IIAS on PlanetLab, Table 5
presents results using ping. IIAS clearly introduces significant vari-
ability in the latency measurements when run with the default share:
the standard deviation of IIAS ping times is over 20X that of the
network. PL-VINI again improves IIAS’s overall behavior, reduc-
ing maximum latency by two-thirds and standard deviation by over
90%. In this case IIAS introduces a small amount of additional la-
tency, and the variability in ping times is roughly double that of the
underlying network.
Table 6 shows the effects of PL-VINI on jitter in the IIAS overlay.
The experiment sends CBR streams between 1Mb/s and 50Mb/s on
the network and overlay; jitter did not appear to be correlated with
stream rate, so we report the jitter results across all streams.
Here we see that running IIAS on PL-VINI halves the mean jitter
and reduces the variation in test results by 75%.
Figure 7: Abilene topology (Seattle, Sunnyvale, Los Angeles, Denver, Kansas City, Houston, Chicago, Indianapolis, Atlanta, New York, and Washington D.C.), marking the default route, the failed link between Denver and Kansas City, and the route taken after the link failure.
Figure 8: Observing OSPF route convergence (using ping); the plot shows ping RTT (ms) over the 50-second experiment.
Figure 6 shows packet loss in the same set of experiments. In-
terestingly, with the default share on PlanetLab, IIAS loses packets
dramatically as the traffic rate increases, as shown in Figure 6(a).
Our hypothesis is that this is due to scheduling latency of the Click
process: packets are arriving at a constant rate on the UDP tunnel,
and Click needs to read them at a faster rate than they are arriving
or else the UDP socket buffer will overflow and the kernel will drop
packets. However, if Click’s scheduling latency is high, it may not
get to run before packets are dropped. This hypothesis is confirmed
by running IIAS on PL-VINI: Figure 6(b) shows packet loss with
PL-VINI comparable to that measured in Abilene itself.
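This failure mode is easy to reproduce outside PL-VINI. The self-contained sketch below (ours, not part of IIAS) sends a burst of UDP datagrams over the loopback interface while the reader sleeps, standing in for a forwarder whose scheduling latency keeps it off the CPU; on Linux, whatever does not fit in the kernel's receive buffer is silently dropped.

```python
import socket, time

# Receiver with a deliberately small kernel buffer; sender blasts a burst
# while the reader is "descheduled" (sleeping), so the buffer overflows.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
rx.bind(("127.0.0.1", 0))
rx.settimeout(0.2)
addr = rx.getsockname()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
N, payload = 2000, b"x" * 1000
for _ in range(N):                 # burst arrives while the reader is asleep
    tx.sendto(payload, addr)

time.sleep(0.5)                    # simulated scheduling latency of the reader

received = 0
try:
    while True:
        rx.recv(2048)
        received += 1
except socket.timeout:
    pass

print(f"sent {N}, received {received}, dropped {N - received}")
```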
We conclude from these microbenchmarks that PL-VINI and
IIAS together provide a close approximation of the underlying net-
work’s behavior. Clearly, running traffic through an overlay does
introduce some overhead and additional variability. In the next ex-
periment we try to demonstrate that the value of being able to run
IIAS using PL-VINI outweighs this additional overhead.
5.2 Intra-domain Routing Changes
To validate that together IIAS and PL-VINI provide a reasonable
environment for network experiments, we use them to conduct an
intra-domain routing experiment on the PlanetLab nodes co-located
with the eleven routers in the Abilene backbone, as shown in Fig-
ure 7. To conduct a realistic experiment, we configure IIAS with
the same topology and OSPF link weights as the underlying Abi-
lene network, as extracted from the configuration state of the eleven
Abilene routers. That is, each virtual link maps directly to a sin-
gle physical link between two Abilene routers. Analyzing rout-
ing traces collected directly from the Abilene routers enables us to
verify that the underlying network did not experience any routing
changes during our experiment.
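The configuration extraction itself reuses rcc's parsing front end [12]; purely to illustrate the kind of information being pulled out, the hedged sketch below scrapes per-interface OSPF costs from an IOS-style configuration (the "ip ospf cost" syntax is standard IOS, but the file name and interface layout here are hypothetical and do not come from the Abilene configurations).

```python
import re

def ospf_costs(config_text: str) -> dict:
    """Map interface name -> OSPF cost from an IOS-style configuration.

    Looks for blocks of the form:
        interface TenGigE0/0/0
         ip ospf cost 277
    """
    costs, current = {}, None
    for line in config_text.splitlines():
        m = re.match(r"interface (\S+)", line)
        if m:
            current = m.group(1)
        m = re.match(r"\s+ip ospf cost (\d+)", line)
        if m and current:
            costs[current] = int(m.group(1))
    return costs

# Hypothetical usage: build the virtual-link weight map for one router.
# weights = ospf_costs(open("router.conf").read())
```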
Figure 9: TCP throughput during OSPF routing convergence: (a) total bytes transferred (megabytes received over the 50-second experiment); (b) TCP slow-start restart after the new route is found (megabytes in the stream for packets received between roughly 17.5 and 20 seconds).
Our experiment injects a failure, and subsequent recovery, of the
link between Denver and Kansas City, and measures the effects on
end-to-end traffic flows. For this experiment, we “fail” the link
by dropping packets within Click on the virtual link (UDP tunnel)
connecting two Abilene nodes. We use ping, iperf, and tcpdump to
measure the effects on data traffic.
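To make the failure mechanism concrete, the toy sketch below (our illustration, not the Click element PL-VINI actually uses) relays UDP datagrams between two tunnel endpoints and silently drops them while a "failed" flag is set; the addresses and port are hypothetical.

```python
import socket
import threading

class ToyTunnel:
    """Relay UDP datagrams arriving on listen_addr to peer_addr; dropping
    them instead while .failed is set emulates failing the virtual link."""

    def __init__(self, listen_addr, peer_addr):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(listen_addr)
        self.peer_addr = peer_addr
        self.failed = False

    def run(self):
        while True:
            pkt, _ = self.sock.recvfrom(65535)
            if self.failed:
                continue                      # "fail" the link: drop silently
            self.sock.sendto(pkt, self.peer_addr)

if __name__ == "__main__":
    # Hypothetical local endpoints; in PL-VINI these would be the UDP tunnel
    # endpoints of the virtual link between Denver and Kansas City.
    tun = ToyTunnel(("0.0.0.0", 33434), ("10.0.0.2", 33434))
    threading.Thread(target=tun.run, daemon=True).start()
    input("press Enter to fail the link...")
    tun.failed = True
    input("press Enter to restore it...")
    tun.failed = False
    input("press Enter to exit...")
```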
Figure 8 shows the effect on ping times between D.C. and Seattle
of failing the link between Kansas City and Denver 10 seconds in to
the experiment, and restoring the link at time 34 seconds. Initially,
IIAS routes packets from D.C. through New York, Chicago, Indi-
anapolis, Kansas City, and Denver to Seattle, with a mean round-
trip time (RTT) of 76ms. At time 17, 7 seconds after the link fails,
OSPF briefly finds a path with 110ms RTT before settling on a new
route through Atlanta, Houston, Los Angeles, and Sunnyvale with a
mean RTT of 93ms³. A few seconds after the link comes back up
at time 34, we see that OSPF briefly finds a path with 87ms before
falling back to the original path.
³For this experiment, the interval between OSPF hello packets is set at 5
seconds, and the “router dead” interval is 10 seconds.
Figure 9 shows the performance of TCP during the same experi-
ment by using iperf to send a bulk TCP transfer from Washington,
D.C. to Seattle. The TCP receiver window size is set to iperf’s de-
fault of 16 KB, so TCP’s throughput is limited to roughly 3 Mb/s.
The figure plots the arrival time of data packets at the receiver, as
reported by tcpdump. Figure 9(a) shows that packets stop getting
through when the link fails at time 10, and resume at time 18 when
OSPF finds a new route. Figure 9(b) shows what happens at
time 18 in more detail; the y-axis shows the position in the byte
stream of each arriving TCP packet. The figure shows TCP slow-
start restart in action, then a retransmitted packet, and slow start
again. Figure 9(a) also shows some disruption in the TCP through-
put when OSPF falls back to the original path around time 38.
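The roughly 3 Mb/s ceiling noted above follows from the window-limited bound, throughput ≤ window / RTT. A quick check (our calculation; we assume the common Linux behavior of doubling the buffer size requested through setsockopt, so a 16 KB request yields an effective window closer to 32 KB):

```python
def window_limited_mbps(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on TCP throughput when limited by the receive window."""
    return window_bytes * 8 / rtt_s / 1e6

rtt = 0.076                                  # 76 ms RTT on the original path
print(window_limited_mbps(16 * 1024, rtt))   # ~1.7 Mb/s for a literal 16 KB window
print(window_limited_mbps(32 * 1024, rtt))   # ~3.4 Mb/s if the kernel doubles it
```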
These experiments do not illustrate any new discoveries about
OSPF or its interaction with TCP. Rather, we argue that they
demonstrate one could make such discoveries using PL-VINI and
IIAS, since PL-VINI enables IIAS to behave like a real network
on PlanetLab. Experiments such as this can help researchers study
routing pathologies that are difficult to observe on a real network,
where a researcher has no control over network conditions.
6. Ongoing Work
In this section, we discuss our ongoing work on VINI. Specifi-
cally, we discuss some of the design goals from Section 3 that we
have yet to address and describe possible solutions to these prob-
lems. First, we discuss two ways to improve the realism of VINI’s
experiments: by exposing the underlying topology changes and by
enabling experiments to exchange routing-protocol messages with
neighboring domains. We then describe our ongoing efforts to pro-
vide researchers with better experimental control by allowing seam-
less migration from simulators (e.g., ns-2) and emulation environ-
ments (e.g., Emulab) to VINI. Finally, we propose techniques to
provide better isolation between experiments.
6.1 Improving Realism
Exposing network failures and topology changes: The fail-
ure or recovery of a physical component should affect each of the
associated virtual components, as discussed in Section 3.1. Our
PL-VINI prototype does not achieve this goal because the under-
lying network automatically reroutes the traffic between two IIAS
nodes when the topology changes. Although masking failures is
desirable to most applications, researchers using VINI may want
their protocols and services to adapt to these events themselves, in
different ways; at a minimum, the researchers would want to know
that these events happened, since they may affect the results of the
experiments. As we continue working with NLR and Abilene, we
are exploring ways to expose the topology changes to VINI in real
time, and extending our software to perform “upcalls” to notify the
affected slices.
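As a hedged sketch of what such upcalls might look like (a design illustration, not the VINI implementation), a notifier could map each physical link to the slices whose virtual links traverse it and invoke a registered callback when the substrate reports a change; the link and slice names below are made up.

```python
from collections import defaultdict
from typing import Callable

class TopologyNotifier:
    """Fan out physical-link events to the slices whose virtual links use them."""

    def __init__(self):
        # physical link id -> list of (slice_name, upcall) pairs
        self._subscribers = defaultdict(list)

    def subscribe(self, physical_link: str, slice_name: str,
                  upcall: Callable[[str, str], None]) -> None:
        self._subscribers[physical_link].append((slice_name, upcall))

    def link_event(self, physical_link: str, event: str) -> None:
        """event is e.g. 'down' or 'up', as reported by the substrate."""
        for slice_name, upcall in self._subscribers[physical_link]:
            upcall(physical_link, event)

# Hypothetical usage: a slice asks to be told when a backbone link fails.
notifier = TopologyNotifier()
notifier.subscribe("dnvr-kscy", "iias-slice",
                   lambda link, ev: print(f"[iias-slice] {link} is {ev}"))
notifier.link_event("dnvr-kscy", "down")
```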
Participating in routing with neighboring networks: As dis-
cussed in Section 3.4, multiple VINI experiments may want to ex-
change reachability information with neighboring networks in the
real Internet. Having each virtual node maintain separate BGP ses-
sions introduces problems with scaling (because the number of ses-
sions grows with the number of experiments), manage-
ment (because both sides of the BGP session must be configured),
and stability (unstable, experimental software could introduce in-
stability into neighboring networks and the rest of the Internet).
To avoid these potential issues, we are designing and implement-
ing a multiplexer that manages BGP sessions with neighboring net-
works and forwards (and filters) routing protocol messages between
the external speakers and the BGP speakers on the virtual nodes.
Each experiment might have its own portion of a larger address
block that has already been allocated to VINI. The multiplexer en-
sures that each virtual node announces only its own address space
and may also impose limits on the rate of BGP update messages
that are propagated from each experiment. The current BGP multi-
plexer is implemented as multiple instances
of XORP, each running in UML and communicating with a single
external speaker. Each instance of XORP maintains BGP sessions
with the routing software running on the virtual nodes, to allow ex-
periments to exchange BGP messages with neighboring domains.
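The filtering rule at the heart of the multiplexer can be stated compactly: an experiment's announcement is forwarded to the external speaker only if the prefix falls inside the sub-block allocated to that experiment. The sketch below illustrates the check; the address blocks and slice names are hypothetical.

```python
import ipaddress

# Hypothetical allocation: each experiment gets a sub-block of a larger
# (made-up) VINI address block.
ALLOCATIONS = {
    "iias-slice":  ipaddress.ip_network("192.0.2.0/25"),
    "other-slice": ipaddress.ip_network("192.0.2.128/25"),
}

def allowed_announcement(slice_name: str, prefix: str) -> bool:
    """Permit an eBGP announcement only if the prefix falls inside the
    address sub-block allocated to the announcing experiment."""
    block = ALLOCATIONS.get(slice_name)
    if block is None:
        return False
    return ipaddress.ip_network(prefix).subnet_of(block)

assert allowed_announcement("iias-slice", "192.0.2.0/26")
assert not allowed_announcement("iias-slice", "192.0.2.128/26")   # not its block
assert not allowed_announcement("iias-slice", "203.0.113.0/24")   # outside VINI
```

A production multiplexer would also rate-limit updates per experiment, as described above, before passing them to the external speaker.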
6.2 Improving Control
Better isolation: VINI should be able to support multiple simul-
taneous experiments with strict resource guarantees for each slice,
as discussed earlier in Section 3.4. Adding support for CPU reser-
vations and real-time priority helps isolate a PL-VINI experiment
from other slices, but PL-VINI arguably needs better isolation. The
first step is to implement a non-work-conserving scheduler that en-
sures that each experiment always receives the same CPU allocation
(i.e., neither less nor more), which is necessary for repeatable ex-
periments. To allow researchers to vary link capacities, we also plan
to add support for setting link bandwidths, either via configuration
of traffic shapers in Click, or in the kernel itself.
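The shaping step itself is conceptually simple. The toy token bucket below sketches the idea (rates and burst sizes are illustrative); an actual deployment would rely on Click's shaping elements or the kernel's queuing disciplines rather than a Python loop.

```python
import time

class TokenBucket:
    """Minimal token-bucket shaper: a packet conforms only if enough
    byte credit has accumulated at the configured rate."""

    def __init__(self, rate_bps: float, burst_bytes: int):
        self.rate = rate_bps / 8.0          # token refill rate in bytes/s
        self.burst = burst_bytes            # bucket depth
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def consume(self, pkt_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return True                     # conforms: forward the packet
        return False                        # exceeds the link rate: drop or queue

# Illustrative 10 Mb/s virtual link with a 32 KB burst allowance.
shaper = TokenBucket(rate_bps=10e6, burst_bytes=32 * 1024)
print(shaper.consume(1500))   # True until the burst allowance is exhausted
```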
Experiment specification: Beyond the existing support for con-
structing arbitrary topologies and failing links, VINI should also
provide the ability to specify experiments. In an ns simulation [13],
an experimenter can generate traffic and routing streams, specify
times when certain links should fail, and define the traces that
should be collected. VINI should provide similar facilities for cre-
ating an experiment. We envision that VINI experiments would be
specified using the same type of syntax that is used to construct ns
or Emulab [15] experiments, so that researchers can move an ex-
periment from Emulab to VINI as seamlessly as possible, as part of
a natural progression. We are currently working on such a speci-
fication for IIAS, which already allows an experimenter to specify
the underlying topology, the intradomain routing adjacencies and
internal BGP sessions, and the times these links and sessions fail.
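Purely as a hedged illustration of what such a specification might contain (the field names and syntax below are our own invention, not an existing VINI or IIAS format), an experiment could be described declaratively and handed to a driver that generates the corresponding XORP and Click configurations and schedules the failure events:

```python
# Hypothetical declarative experiment description; none of these field names
# correspond to an actual VINI/IIAS specification language.
EXPERIMENT = {
    "topology": {
        "nodes": ["dnvr", "kscy", "chin", "wash"],
        "links": [                      # (endpoint, endpoint, IGP weight)
            ("dnvr", "kscy", 10),
            ("kscy", "chin", 20),
            ("chin", "wash", 30),
        ],
    },
    "routing": {"igp": "ospf", "ibgp_full_mesh": True},
    "events": [                         # (time in seconds, action, target)
        (10, "fail_link", ("dnvr", "kscy")),
        (34, "restore_link", ("dnvr", "kscy")),
    ],
    "traces": ["ping", "tcpdump"],
}

def schedule(events):
    """Driver stub: in a real system each action would reconfigure Click/XORP."""
    for t, action, target in sorted(events):
        print(f"t={t:>3}s  {action} {target}")

schedule(EXPERIMENT["events"])
```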
We envision that aspects of a VINI experiment, such as topolo-
gies, routing configurations, and failures, could be driven by “real
world” routing configurations and measurements. PL-VINI’s cur-
rent machinery for mirroring the Abilene topology automatically
generates the necessary XORP and Click configurations (and deter-
mines the appropriate co-located nodes at Abilene PoPs) for a VINI
experiment from the actual Abilene routing configuration, exploit-
ing the configuration-parsing functionality from previous work on
router configuration checking [12]. Eventually, we intend to aug-
ment VINI to incorporate more of the routing configuration into
XORP and Click and also support playback of routing traces.
7. Conclusion
This paper has described the design of VINI, a virtual network
infrastructure for supporting experimentation with network proto-
cols and architectures in a realistic network environment. VINI
complements the current set of tools for network simulation and
emulation by providing a realistic network environment whereby
real routing software can be evaluated under realistic network con-
ditions and traffic loads with closed-loop experimentation. We first
outlined the case for VINI, providing both design principles and
an implementation-agnostic design. Based on this high-level VINI
design, we have presented one instantiation of VINI on the Plan-
etLab testbed, PL-VINI . Our preliminary experiments in Section 5
demonstrate that PL-VINI is both efficient and a reasonable reflec-
tion of network conditions.
Once VINI is capable of allowing users to run multiple virtual
networks on a single physical infrastructure, it may also ultimately
serve as a substrate for new network protocols and services (making
it useful not only for research, but also for operations). Because
VINI also provides the ability to virtualize any component of the
network, it may lower the barrier to innovation for network-layer
services and facilitate new usage modes for existing protocols. We
now briefly speculate on some of these possible usage modes.
First, VINI allows a network operator to simultaneously run
different routing protocols (and even different forwarding mecha-
nisms) for different network services. Previous work has observed
that operators occasionally route external destinations with an inter-
nal routing protocol (e.g., OSPF, IS-IS) that scales poorly but con-
verges quickly for applications that require fast convergence (e.g.,
voice over IP) [12]. With VINI, a network operator could run multi-
ple routing protocols in parallel on the same physical infrastructure,
using a different routing protocol for each class of applications.
Second, VINI could be used to help a network operator with
common management tasks. For example, operators routinely per-
form planned maintenance operations that may involve tweaking
the configurations across multiple network elements (e.g., changing
IGP link costs to redirect traffic for a planned maintenance event).
Similarly, they may occasionally wish to incrementally deploy new
versions of routing software, or test bleeding-edge code. A VINI-
enabled network could allow a network operator to run multiple
routing protocols (or routing protocol versions) on the same phys-
ical network, controlling the forwarding tables in the network ele-
ments in one virtual network at any given time, while providing the
capability for atomic switchover between virtual networks.
VINI’s future appears bright, as a platform both for experimen-
tation and for more flexible network protocols and services. This
paper has demonstrated VINI’s feasibility, as well as its potential
for enabling a new class of controlled, realistic routing experiments.
The design requirements we have specified, and the lessons we have
learned from our initial deployment, should prove useful as we con-
tinue to develop VINI and deploy it in various forms.
Acknowledgments
We thank Changhoon Kim, Ellen Zegura, and the anonymous
reviewers for their comments and suggestions. This work was
supported by HSARPA (grant 1756303), the NSF (grants CNS-
0519885 and CNS-0335214), and DARPA (contract N66001-05-
8902). We would also like to thank the many people at NLR and
Abilene who are providing VINI with physical infrastructure, in-
cluding rack space in their PoPs, bandwidth between sites, and up-
stream connectivity to the Internet.
REFERENCES
[1] D. G. Andersen, H. Balakrishnan, M. F. Kaashoek, and R. Morris, “Resilient Overlay Networks,” in Proc. Symposium on Operating Systems Principles, pp. 131–145, October 2001.
[2] N. Feamster, D. Andersen, H. Balakrishnan, and M. F. Kaashoek, “Measuring the effects of Internet path faults on reactive routing,” in Proc. ACM SIGMETRICS, June 2003.
[3] L. Peterson, T. Anderson, D. Culler, and T. Roscoe, “A blueprint for introducing disruptive technology into the Internet,” in Proc. SIGCOMM Workshop on Hot Topics in Networking, October 2002.
[4] A. Bavier, M. Bowman, D. Culler, B. Chun, S. Karlin, S. Muir, L. Peterson, T. Roscoe, T. Spalink, and M. Wawrzoniak, “Operating System Support for Planetary-Scale Network Services,” in Proc. Networked Systems Design and Implementation, March 2004.
[5] J. Touch and S. Hotz, “The X-Bone,” in Proc. Global Internet Mini-Conference, pp. 75–83, November 1998.
[6] X. Jiang and D. Xu, “Violin: Virtual internetworking on overlay infrastructure,” in Proc. International Symposium on Parallel and Distributed Processing and Applications, pp. 937–946, 2004.
[7] The GENI Initiative. http://www.nsf.gov/cise/geni/.
[8] GENI: Global Environment for Network Innovations. http://www.geni.net/.
[9] M. Handley, E. Kohler, A. Ghosh, O. Hodson, and P. Radoslavov, “Designing extensible IP router software,” in Proc. Networked Systems Design and Implementation, May 2005.
[10] E. Kohler, R. Morris, B. Chen, J. Jannotti, and M. F. Kaashoek, “The Click modular router,” ACM Transactions on Computer Systems, vol. 18, pp. 263–297, August 2000.
[11] “OpenVPN: An open source SSL VPN solution.” http://openvpn.net/.
[12] N. Feamster and H. Balakrishnan, “Detecting BGP configuration faults with static analysis,” in Proc. Networked Systems Design and Implementation, pp. 49–56, May 2005.
[13] “ns-2 Network Simulator.” http://www.isi.edu/nsnam/ns/.
[14] “SSFNet.” http://www.ssfnet.org/.
[15] B. White, J. Lepreau, L. Stoller, R. Ricci, S. Guruprasad, M. Newbold, M. Hibler, C. Barb, and A. Joglekar, “An integrated experimental environment for distributed systems and networks,” in Proc. Symposium on Operating Systems Design and Implementation, pp. 255–270, December 2002.
[16] “DETER: A laboratory for security research.” http://www.isi.edu/deter/.
[17] A. Vahdat, K. Yocum, K. Walsh, P. Mahadevan, D. Kostic, J. Chase, and D. Becker, “Scalability and accuracy in a large-scale network emulator,” in Proc. Symposium on Operating Systems Design and Implementation, December 2002.
[18] “WAIL: Wisconsin Advanced Internet Laboratory.” http://wail.cs.wisc.edu/.
[19] “Open Network Laboratory (ONL).” http://onl.arl.wustl.edu/.
[20] M. Hibler, R. Ricci, L. Stoller, J. Duerig, S. Guruprasad, T. Stack, K. Webb, and J. Lepreau, “Feedback-directed Virtualization Techniques for Scalable Network Experimentation,” Tech. Rep. FTN-2004-02, University of Utah, May 2002. http://www.cs.utah.edu/flux/papers/virt-ftn2004-02.pdf.
[21] Linux VServers Project. http://linux-vserver.org/.
[22] D. Lowenthal, “PlanetLab Sirius Calendar Service.” https://snowball.cs.uga.edu/dkl/pslogin.php.
[23] Linux Advanced Routing and Traffic Control. http://lartc.org/.
[24] M. Huang, “VNET: PlanetLab Virtualized Network Access,” Tech. Rep. PDN-05-029, PlanetLab Consortium, June 2005.
[25] L. Peterson, A. Bavier, M. E. Fiuczynski, and S. Muir, “Experiences Building PlanetLab,” Tech. Rep. TR-755-06, Princeton University, June 2006.
[26] “User-Mode Linux.” http://user-mode-linux.sourceforge.net/.
[27] A. Bavier, M. Huang, and L. Peterson, “An overlay data plane for PlanetLab,” in Proc. Advanced Industrial Conference on Telecommunications, July 2005.
[28] “Iperf 1.7.0: The TCP/UDP bandwidth measurement tool.” http://dast.nlanr.net/Projects/Iperf/.