Performance Analysis and Comparison … See last page for copyright! IEEE AINA 2013, Barcelona, Spain
Performance Analysis and Comparison of Different DNS64 Implementations for
Linux, OpenBSD and FreeBSD
Gábor Lencse
Department of Telecommunications
Széchenyi István University
Győr, Hungary
lencse@sze.hu
Sándor Répás
HunNet-Média Ltd.
Budapest, Hungary
RSandor@AHOL.co.hu
Abstract: The transition mechanisms for the first phase of IPv6 deployment are surveyed and the most important DNS64 solutions are selected. The test environment and the testing method are described. As for the selected DNS64 implementations, the performance of both BIND9 and TOTD running under Linux, OpenBSD and FreeBSD is measured and compared. The stability of all the tested DNS64 solutions was analyzed under serious overload conditions to test whether they may be used in production environments with strong response time requirements.
Keywords: IPv6 deployment, IPv6 transition solutions, performance analysis, DNS64, BIND9, TOTD, Linux, OpenBSD, FreeBSD
I. INTRODUCTION
The performance and stability of the different DNS64 [1] implementations will be an important topic for network operators in the following years because, on the one hand, the global IPv4 address pool is being depleted¹ and, on the other hand, the vast majority of the Internet still uses IPv4 only. Thus, from the many issues of the co-existence of IPv4 and IPv6, the communication of an IPv6 only client with an IPv4 only server is the first practical task to solve in the upcoming phase of the IPv6 deployment, because internet service providers (ISPs) can still supply the relatively small number of new servers with IPv4 addresses from their own pool, but the huge number of new clients can get IPv6 addresses only. The authors believe that DNS64 and NAT64 [2] are the best available techniques that make it possible for an IPv6 only client to communicate with an IPv4 only server.
Different free DNS64 implementations were considered and two of them (BIND9 and TOTD) were selected for testing. The aim of our research was to compare the performance of the selected implementations running on different free operating systems (Linux, OpenBSD and FreeBSD) and to analyze their behavior under heavy load conditions. Our results are expected to give valuable information to many network administrators when selecting the appropriate IPv6 transition solution for their networks.
¹IANA delegated the last five “/8” IPv4 address blocks to the five Regional Internet Registries in 2011 [3]. Of these, APNIC already depleted its IPv4 address pool in 2011, and RIPE NCC did so during the writing of this paper, on September 14, 2012 [4]. This means that RIPE NCC now also applies a stricter allocation policy for its very last /8 block.
The performance analysis and comparison of some selected NAT64 implementations under Linux and OpenBSD was also part of our research; however, the amount of results would exceed the space limitations of this paper, thus they are planned to be published in another paper [20].
The remainder of this paper is organized as follows. First, some possible techniques for the communication of an IPv6 only client with an IPv4 only server are mentioned, the operation of the DNS64+NAT64 solution is introduced, and a short survey of the most current publications is given. Second, the selection of the DNS64 implementations is discussed. Third, our test environment is described. Fourth, the performance measurement method for the DNS64 implementations is detailed. Fifth, the DNS64 results are presented and discussed. Finally, our conclusions are given.
II. IPV6 TRANSITION MECHANISMS FOR THE FIRST
PHASE OF IPV6 DEPLOYMENT
A. The Most Important Solutions
The authors conceive that the deployment of IPv6 will take place through a long co-existence of the two versions of the Internet Protocol, and that in the first phase of the IPv6 transition, the main issue will be the communication of an IPv6 only client with an IPv4 only server. Several mechanisms can be used for this task, of which the most notable ones are:
1. NAT-PT/NAPT-PT [5] started its life as a proposed standard in 2000, but due to several issues it was moved to historic status in 2007 [6].
2. The use of an Application Level Gateway [7] is an operable alternative; however, it is rather expensive, as ALGs have to be both developed and operated for all the different applications.
3. The most general and flexible solution is the use of a DNS64 [1] server and a NAT64 [2] gateway.
B. The Operation of DNS64 and NAT64
To enable an IPv6 only client to connect to an IPv4 only
server, one can use a DNS64 server and a NAT64 gateway.
The DNS64 server should be set as the DNS server of the
IPv6 only client. When the IPv6 only client tries to connect
to any server, it sends a recursive query to the DNS64 server
to find the IPv6 address of the given server. The DNS64
server uses the normal DNS system to find out the IP address
of the server.
- If the answer contains an IPv6 address, then the DNS64 server returns the IPv6 address as its answer to the recursive query.
- If the answer contains only an IPv4 address, then the DNS64 server constructs and returns a special IPv6 address called an IPv4-Embedded IPv6 Address [8], which contains the IPv4 address of the server in its last 32 bits. In the first 96 bits, it may contain the NAT64 Well-Known Prefix or a network-specific prefix from the network of the client.
The route towards the network with the given IPv6 prefix should be set in the IPv6 only client (and in all of the routers along the route from the client to the NAT64 gateway) so that the packets go through the NAT64 gateway.
The IPv6 only client uses the received IPv6 address to communicate with the desired (IPv4 only) server. The traffic of the IPv6 only client and the IPv4 only server travels through the NAT64 gateway, which makes their communication possible by constructing and forwarding the appropriate version of the IP packets.
For a more detailed but still easy-to-follow introduction, see [9]; for the most accurate and detailed information, see the related RFCs: [1] and [2].
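The address synthesis described above can be illustrated with a short shell sketch. This is a hypothetical helper, not part of any DNS64 implementation; it simply embeds an IPv4 address into the last 32 bits of the network-specific /96 prefix that is used later in this paper:

```shell
#!/bin/bash
# Sketch: embed an IPv4 address into the last 32 bits of a /96 IPv6
# prefix, as a DNS64 server does when synthesizing an AAAA record.
PREFIX="2001:738:2c01:8001:ffff:ffff"   # first 96 bits (network-specific)
ipv4="10.0.0.1"                         # address taken from the "A" record
IFS=. read -r a b c d <<< "$ipv4"       # split into the four octets
printf '%s:%02x%02x:%02x%02x\n' "$PREFIX" "$a" "$b" "$c" "$d"
# prints 2001:738:2c01:8001:ffff:ffff:0a00:0001
```

The last two 16-bit groups of the synthesized address are simply the four IPv4 octets written in hexadecimal, which is why 10.0.0.1 appears as 0a00:0001.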
C. A Short Survey of the Current Research Results
Several papers were published on the topic of the performance of DNS64 and NAT64 in 2012. The performance of the TAYGA NAT64 implementation (and implicitly of the TOTD DNS64 implementation) is compared to the performance of NAT44 in [10]. The performance of the Ecdysis NAT64 implementation (which has its own DNS64 implementation) is compared to the performance of the authors' own HTTP ALG in [11]. The performance of the Ecdysis NAT64 implementation (and implicitly the performance of its DNS64 implementation) is compared to the performance of both NAT-PT and an HTTP ALG in [12]. All of these papers deal with the performance of a given DNS64 implementation together with a given NAT64 implementation. On the one hand, this is natural, as both services are necessary for the operation; on the other hand, this is a kind of “tie-in sale” that may hide the real performance of a given DNS64 or NAT64 implementation by itself. Even though both services are necessary for the complete operation, in a large network they are usually provided by separate, independent devices: DNS64 is provided by a name server and NAT64 is performed by a router. Thus, the best implementation for the two services can be, and also should be, selected independently. The performance of the BIND DNS64 implementation and that of the TAYGA NAT64 implementation are analyzed separately, and their stability is also tested, in [13]. However, only one implementation was considered for each service, so even if they proved to be stable and fast enough, more research is needed to compare the performance (and also the stability) of multiple DNS64 and NAT64 implementations. Due to space limitations, this paper deals with DNS64 only; our research results concerning NAT64 implementations will be published in a separate paper [20].
III. THE SELECTION OF DNS64 IMPLEMENTATIONS
Only free software [14] (also called open source [15]) implementations were considered.
As BIND [16], the most widely used DNS implementation, contains native DNS64 support from version 9.8, it was an obvious choice.
As BIND is a large and complex piece of software containing all the different DNS functionalities (authoritative, recursive, DNSSEC support, etc.), our other choice was a lightweight one, namely TOTD [17].
Both BIND and TOTD were tested under all three operating systems, namely Linux, OpenBSD and FreeBSD.
IV. THE TEST ENVIRONMENT FOR DNS64 PERFORMANCE
MEASUREMENTS
The aim of our tests was to examine and compare the performance of the selected DNS64 implementations. We were also interested in their stability and behavior under heavy load conditions. (For testing the software, some hardware had to be used, but our aim was not the performance analysis of any hardware.)
A. The Structure of the Test Network
A test network was set up in the Infocommunications Laboratory of the Department of Telecommunications, Széchenyi István University. The topology of the network is shown in Fig. 1. The central element of the test network is the DNS64 server.
[Figure content: the DNS64 server (Intel PIII 800 MHz; addresses 192.168.100.106/24, 193.225.151.75/28 and 2001:738:2c01:8001::1/64) is connected through a 3Com Baseline Switch 2948-SFP Plus to the authoritative DNS server teacherb.tilb.sze.hu (a Dell Precision 490 at 192.168.100.105/24) and to the eight Dell Precision 490 client computers client1–client8 (2001:738:2c01:8001::111/64 to ::118/64) used for all the tests.]
Figure 1. Topology of the DNS64 test network.
For the measurements, we needed a namespace that:
- can be described systematically,
- resolves to IPv4 addresses only,
- can be resolved without delay.
The 10-{0..10}-{0..255}-{0..255}.zonat.tilb.sze.hu namespace was used for this purpose. This namespace was mapped to the 10.0.0.0-10.10.255.255 IPv4 address range by the name server at 192.168.100.105.
The DNS64 server mapped these IPv4 addresses to the IPv6 address range 2001:738:2c01:8001:ffff:ffff:0a00:0000-2001:738:2c01:8001:ffff:ffff:0a0a:ffff.
The DELL IPv6 only workstations at the bottom of the figure played the role of the clients in the DNS64 measurements.
B. The Configuration of the Computers
A test computer with a special configuration was put together for the purpose of the DNS64 server so that the clients would be able to produce a high enough load to overload it. The CPU and memory parameters were chosen to be as low as possible from our available hardware base in order to be able to create an overload situation with a finite number of clients; only the network cards were chosen to be fast enough. The configuration of the test computer was:
- Intel D815EE2U motherboard
- 800 MHz Intel Pentium III (Coppermine) processor
- 128 MB, 133 MHz SDRAM
- two 3Com 3c940 Gigabit Ethernet NICs
Note that the speed of the Gigabit Ethernet could not be fully utilized due to the limitations of the PCI bus of the motherboard, but it was still enough to overload the CPU.
For all the other purposes (the 8 client computers and the IPv4 DNS server), standard DELL Precision Workstation 490 computers were used with the following configuration:
- DELL 0GU083 motherboard with Intel 5000X chipset
- two Intel Xeon 5130 2 GHz dual-core processors
- 2x2 GB + 2x1 GB 533 MHz DDR2 SDRAM (accessed dual channel)
- Broadcom NetXtreme BCM5752 Gigabit Ethernet controller (PCI Express)
The Debian Squeeze 6.0.3 GNU/Linux operating system was installed on all the computers (including the Pentium III test computer when it was used under Linux). The versions of the OpenBSD and FreeBSD operating systems installed on the test computer were 5.1 and 9.0, respectively.
V. DNS64 PERFORMANCE MEASUREMENT METHOD
A. IPv4 DNS Server Settings
The DNS server was a standard DELL Linux workstation using the 192.168.100.105 IP address and the symbolic name teacherb.tilb.sze.hu. BIND was used for authoritative name server purposes in all the DNS64 experiments. The version of BIND was 9.7.3, as this is the one found in the Debian Squeeze distribution and there was no need for special functions (unlike in the case of the DNS64 server).
The 10.0.0.0/16-10.10.0.0/16 IP address range was registered in the zonat.tilb.sze.hu zone with the appropriate symbolic names. The zone file was generated by the following script:
#!/bin/bash
cat > db.zonat.tilb.sze.hu << EOF
\$ORIGIN zonat.tilb.sze.hu.
\$TTL 1
@ IN SOA teacherb.tilb.sze.hu. kt.tilb.sze.hu. (
    2012012201 ; Serial
    28800      ; Refresh
    7200       ; Retry
    604800     ; Expire
    2 )        ; Min TTL
@ 86400 IN NS teacherb.tilb.sze.hu.
EOF
for a in {0..10}
do
  for b in {0..255}
  do
    echo '$'GENERATE 0-255 10-$a-$b-$ IN A \
      10.$a.$b.$ >> db.zonat.tilb.sze.hu
  done
done
echo "" >> db.zonat.tilb.sze.hu
The first general line of the zone file (describing the
symbolic name resolution) was the following one:
$GENERATE 0-255 10-0-0-$ IN A 10.0.0.$
A line of this kind is equivalent to 256 traditional “IN A” lines; the $GENERATE directive was used for shorthand purposes.
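For illustration, the 256 plain records that the single directive stands for can be printed with a simple loop (a sketch for illustration only, not part of the measurement setup):

```shell
#!/bin/bash
# Print the 256 plain "IN A" records that one $GENERATE line,
# $GENERATE 0-255 10-0-0-$ IN A 10.0.0.$, expands to.
for n in {0..255}
do
  echo "10-0-0-$n IN A 10.0.0.$n"
done
# first line:  10-0-0-0 IN A 10.0.0.0
# last line:   10-0-0-255 IN A 10.0.0.255
```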
As can be seen from the script above, and as mentioned earlier, these symbolic names have only “A” records and no “AAAA” records, so the generation of the IPv6 addresses was the task of the DNS64 server.
B. The operation mode of the DNS servers
If a DNS (or DNS64) server receives a recursive query, it can act in two ways: it may resolve the query itself by performing a series of iterative queries, or it may ask another name server to resolve the query. A name server that resolves recursive queries itself is called a recursor, and a name server that asks another name server to resolve them is called a forwarder. While BIND can act as either, TOTD can act only as a forwarder. (See more details later.)
C. DNS64 Server Settings
Several combinations of the DNS64 implementations and operating systems were tested: both BIND 9.8 and TOTD were tested under Linux, OpenBSD and FreeBSD.
1) Preparation of the Linux test system
The network interfaces of the freshly installed Debian Squeeze Linux operating system on the Pentium III computer were set according to Fig. 1.
In order to facilitate the IPv6 SLAAC (Stateless Address Autoconfiguration) of the clients, radvd (the Router Advertisement Daemon) was installed on the test computer. The settings in the file /etc/radvd.conf were the following:
interface eth2
{
        AdvSendAdvert on;
        AdvManagedFlag off;
        prefix 2001:738:2c01:8001::/64
        { AdvOnLink off; };
        RDNSS 2001:738:2c01:8001::1 {};
};
Now the DNS64 server was functionally ready for operation; however, during our preliminary tests, the conntrack table of the netfilter of the test computer providing the DNS64 service became full and the name resolution stopped functioning. As DNS64 does not require netfilter, removing the netfilter module from the kernel of the computer can be a feasible solution. If one needs netfilter for any reason, then either the size of the conntrack table may be increased (it is then necessary to increase the value of the hashsize parameter proportionally, too) or the value of the timeout can be decreased. As the first option has a resource (memory) requirement, the second one was chosen: the timeout for UDP packets was decreased from 30 seconds to 1 second.
The exact name of the changed kernel parameter was:
/proc/sys/net/netfilter/nf_conntrack_udp_timeout
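Assuming a root shell, the same change can also be applied at runtime with sysctl; note that neither form is persistent across reboots unless the setting is also added to /etc/sysctl.conf:

```shell
# Reduce the UDP conntrack timeout from its 30 s default to 1 s
# (run as root; same parameter as the /proc path above).
sysctl -w net.netfilter.nf_conntrack_udp_timeout=1
# equivalently:
echo 1 > /proc/sys/net/netfilter/nf_conntrack_udp_timeout
```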
2) Preparation of the BSD test systems
Similarly to the Linux test system, the network interfaces of the BSD systems were set up as shown in Fig. 1. The content of the /etc/rtadvd.conf file was set as follows for both OpenBSD and FreeBSD:
default:\
:chlim#64:raflags#0:rltime#1800:rtime#0:retrans#0:\
:pinfoflags="la":vltime#2592000:pltime#604800:mtu#0:
sk1:\
:addr="2001:738:2c01:8001::":prefixlen#64:tc=default:
The limitation of the UDP timeout and the increase of the maximum number of connections were achieved by the following two lines in /etc/pf.conf:
set timeout interval 1
set limit states 40000
The following lines are optional, but they were used too:
pass in on sk0 inet proto udp from any port 53 to \
193.225.151.75 no state
pass out on sk0 inet proto udp from 193.225.151.75 to \
any port 53 no state
pass in on sk1 inet6 proto udp from any to \
2001:738:2c01:8001::1 port 53 no state
pass out on sk1 inet6 proto udp from \
2001:738:2c01:8001::1 port 53 to any no state
In this way, PF does not record the state of the DNS requests and answers. (These rules are not strictly necessary, as the possible number of states was already increased to 40000 above.)
3) The setup of the BIND DNS64 server
BIND 9.8 was compiled from source under Linux and OpenBSD. FreeBSD 9.0 already contained BIND version 9.8.1-P1.
The 2001:738:2c01:8001:ffff:ffff::/96 (network-specific) prefix was set for the DNS64 function of BIND using the dns64 option in the file /etc/bind/named.conf.options. With this, BIND was ready to operate as a recursor. To make the performance of BIND and TOTD comparable, BIND was also set up as a forwarder. This was done by the following additional settings in the named.conf file:
forwarders { 192.168.100.105; };
forward only;
4) The setup of the TOTD DNS64 server
As TOTD is just a DNS forwarder and not a DNS recursor, it was set to forward the queries to the BIND instance running on the teacherb computer. The content of the /etc/totd.conf file was set as follows:
forwarder 192.168.100.105 port 53
prefix 2001:738:2c01:8001:ffff::
retry 300
D. Client Settings
Debian Squeeze was installed on the DELL computers used as clients, too. On these computers, the DNS64 server was set as the name server in the following way:
echo "nameserver 2001:738:2c01:8001::1" > \
/etc/resolv.conf
E. DNS64 Performance Measurements
The CPU and memory consumption of the DNS64 server was measured as a function of the number of requests served. The measure of the load was set by starting test scripts on different numbers of client computers (1, 2, 4 and 8). In order to avoid the overlapping of the namespaces of the client requests (to eliminate the effect of DNS caching), the requests from client number i used target addresses from the 10.$i.0.0/16 network. In this way, every client could request 2^16 (65536) different address resolutions. For the appropriate measurement of the execution time, 256 experiments were done, and in every single experiment, 256 address resolutions were performed using the standard host Linux command. The execution time of the experiments was measured by the GNU time command. (Note that this command is different from the time command of the bash shell.)
The clients used the following script to execute the 256
experiments:
#!/bin/bash
# The last character of the hostname identifies the client (1..8).
i=`cat /etc/hostname | grep -o .$`
rm dns64-$i.txt
for b in {0..255}
do
  /usr/bin/time -f "%E" -o dns64-$i.txt \
    -a ./dns-st-c.sh $i $b
done
The synchronized start of the client scripts was done using the “Send Input to All Sessions” function of the terminal program of KDE (called Konsole).
The dns-st-c.sh script (taking two parameters) was responsible for executing a single experiment with the resolution of 256 symbolic names:
#!/bin/bash
for c in {0..252..4} # that is, 64 iterations
do
  host 10-$1-$2-$c.zonat.tilb.sze.hu &
  host 10-$1-$2-$((c+1)).zonat.tilb.sze.hu &
  host 10-$1-$2-$((c+2)).zonat.tilb.sze.hu &
  host 10-$1-$2-$((c+3)).zonat.tilb.sze.hu
done
In every iteration of the for loop, four host commands were started, of which the first three were started asynchronously (“in the background”); that is, the four commands ran in (quasi) parallel. The body of the loop was executed 64 times, so altogether 256 host commands were executed. (The client computers had two dual-core CPUs; that is why four commands were executed in parallel, to generate a higher load.)
In the series of measurements, the number of clients was increased from one to eight (the values used were 1, 2, 4 and 8) and the time of the DNS resolution was measured. The CPU and memory utilization were also measured on the test computer running the DNS64 service.
Under Linux, the following command line was used:
dstat -t -c -m -l -p --unix --output load.csv
Under the BSD operating systems, the command line
was:
vmstat -w 1 >load.txt
VI. DNS64 PERFORMANCE RESULTS
The results are presented in similar tables for all the tested DNS64 implementation and operating system combinations. A detailed explanation is given for the first table only; the others are to be interpreted in the same way.
A. DNS64 Performance Results of BIND
First, BIND implementing DNS64 was used as a recursor, as it is a natural solution for completing the whole task. However, as TOTD can act only as a forwarder, BIND was later also tested as a forwarder to make their performance comparable.
1) Linux, BIND is a recursor
The performance results of the DNS64 server realized by BIND used as a recursor running under Linux are summarized in Table I. The first row of the table shows the number of clients. (The load of the DNS64 server increased as a function of this parameter.) The second, third and fourth rows show the average, the standard deviation and the maximum value of the execution time of 256 host commands (this is called one experiment). The results show little deviation, and the maximum values are always close to the average.
Rows five and six show the average value and the standard deviation of the CPU utilization, respectively.
TABLE I. DNS64 PERFORMANCE: BIND, LINUX, RECURSOR
1 | Number of clients                     |     1 |     2 |      8
2 | Exec. time of 256 host commands (s)   | 1.242 | 1.862 |  7.541
3 |   standard deviation (s)              | 0.018 | 0.050 |  0.282
4 |   maximum value (s)                   | 1.550 | 2.260 | 12.690
5 | CPU utilization (%)                   | 67.66 | 96.63 | 100.00
6 |   standard deviation (%)              |   1.2 |   2.3 |    0.0
7 | DNS64 memory consumption (MB)         |    36 |    49 |     50
8 | Number of requests served (request/s) |   206 |   275 |    272
Row seven shows the estimated memory consumption of DNS64. (This parameter can be measured only with high uncertainty, as its value is not very high and processes other than DNS64 may also influence the size of the free/used memory of the Linux box.)
The number of DNS64 requests per second served by the test computer was calculated using the number of clients (in row 1) and the average execution time values (in row 2), and it is displayed in the last row of the table.
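The calculation behind the last row can be reproduced with a one-liner; awk is used here only for the arithmetic, with the values taken from Table I (requests/s = clients x 256 host commands / average execution time, rounded):

```shell
#!/bin/bash
# Requests per second served = clients * 256 / average execution time.
# Example: 1 client with a 1.242 s average (first column of Table I).
awk 'BEGIN { printf "%.0f\n", (1 * 256) / 1.242 }'
# prints 206
# Example: 2 clients with a 1.862 s average (second column).
awk 'BEGIN { printf "%.0f\n", (2 * 256) / 1.862 }'
# prints 275
```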
On the basis of the results above, we can state the following:
- The increase of the load does not cause serious performance degradation, and the system does not at all tend to collapse due to overload. Even when the CPU utilization is about 100%, the response time increases approximately linearly with the load (that is, with the number of clients).
- We cannot give an exact estimate of the memory consumption of DNS64, but it is visibly moderate even for extremely high loads.
- It can be seen from the last row of the table that the maximum number of requests served was achieved using two clients. A further increase in the number of clients increased only the response time; the number of requests per second could not increase. The reason for this was that the test program did not send a new request until the most recently started of the four host commands (running in parallel) received an answer.
The results presented above are very important, because they show that the behavior of the DNS64 system realized by BIND as a recursor running under Linux complies with the so-called graceful degradation [18] principle: if there are not enough resources for serving the requests, then the response time of the system increases only linearly with the load.
Another very important observation is that even for 8 clients, the standard deviation of the execution time (0.28 s) is less than 4% of the average (7.54 s), and the maximum value of the execution time (12.69 s) is less than double the average. This means that the system remains stable even in a very serious overload situation.
These two observations make BIND running under Linux a good candidate for a DNS64 server solution in a production network with strong response time requirements.
2) OpenBSD, BIND is a recursor
The performance results of the DNS64 server realized by BIND used as a recursor running under OpenBSD are summarized in Table II. Even though the average values for the execution time or for the number of requests served in a second are not far from those of the Linux system, there is a serious difference: for eight clients, the standard deviation (7.3 s) of the execution time of 256 host commands is about as high as its average value (7.6 s), and the maximum value of the execution time (52.7 s) is about 7 times higher than the average. Let us examine the distribution of the execution time more closely. Fig. 2 shows a histogram of the execution time of 256 host commands (that is, one experiment) for two
TABLE II. DNS64 PERFORMANCE: BIND, OPENBSD, RECURSOR
1 | Number of clients                     |     1 |     2 |      8
2 | Exec. time of 256 host commands (s)   | 1.259 | 2.013 |  7.626
3 |   standard deviation (s)              | 0.009 | 0.051 |  7.305
4 |   maximum value (s)                   | 1.300 | 2.180 | 52.690
5 | CPU utilization (%)                   | 72.94 | 96.90 |  98.12
6 |   standard deviation (%)              |   1.8 |   2.2 |    5.6
7 | DNS64 memory consumption (MB)         |    35 |    45 |     45
8 | Number of requests served (request/s) |   203 |   254 |    269
clients (each client executed 256 experiments, thus 512 execution time values were produced). The execution time values are located in a relatively narrow range. Fig. 3 shows the histogram of the execution time values for 8 clients (there are 8 x 256 = 2048 values). The results are scattered over a wide range.
Because of the huge deviation of the results, we do not recommend the use of BIND under OpenBSD for DNS64 services in a production environment.
3) FreeBSD, BIND is a recursor
The performance results of the DNS64 server realized by BIND used as a recursor running under FreeBSD are summarized in Table III. Of the three analyzed platforms, BIND produced the poorest performance results under FreeBSD, considering the average execution time or the number of requests served in a second. This is very likely caused by the program being executed in a jail environment [19] under FreeBSD for security reasons. However, BIND under FreeBSD showed even greater stability than under Linux. As for the execution time of one experiment in the case of eight clients, the value of the standard deviation (0.09 s) is less than 1% of the average (11.92 s), and the maximum value (12.14 s) is very close to the average. The system also complied with the graceful degradation principle. Thus, if the FreeBSD platform is preferred for security or other reasons and one can accept the performance sacrifice, BIND under FreeBSD can be a choice for DNS64 service in a production system.
[Figure: histogram of the execution time of one experiment (frequency vs. bin range in seconds, bins from about 1.83 s to 2.18 s), titled “Execution time of one experiment, BIND, OpenBSD, 2 clients”.]
Figure 2. Distribution of the execution time of one experiment for BIND, OpenBSD, 2 clients.
[Figure: histogram of the execution time of one experiment (frequency vs. bin range in seconds, bins from about 1.4 s to 52.7 s), titled “Execution time of one experiment, BIND, OpenBSD, 8 clients”.]
Figure 3. Distribution of the execution time of one experiment for BIND, OpenBSD, 8 clients.
TABLE III. DNS64 PERFORMANCE: BIND, FREEBSD, RECURSOR
1 | Number of clients                     |     1 |     2 |      8
2 | Exec. time of 256 host commands (s)   | 1.815 | 2.979 | 11.923
3 |   standard deviation (s)              | 0.018 | 0.034 |  0.090
4 |   maximum value (s)                   | 2.000 | 3.070 | 12.140
5 | CPU utilization (%)                   | 78.58 | 97.51 | 100.00
6 |   standard deviation (%)              |   1.8 |   1.6 |    0.0
7 | DNS64 memory consumption (MB)         |    30 |    36 |     53
8 | Number of requests served (request/s) |   141 |   172 |    172
The performance of BIND as a recursor under the three platforms is visualized for comparison in Fig. 4. The huge maximum values of the execution time under OpenBSD totally discredit the platform for BIND under heavy load.
4) Linux, BIND is a forwarder
The performance results of the DNS64 server realized by BIND used as a forwarder running under Linux are summarized in Table IV. They appear here mainly for a fair performance comparison with TOTD. There is not much difference, but a slight (about 20%) increase of the peak performance can be observed when comparing the values to Table I. This is due to handing over the task of recursion to the BIND running on teacherb.tilb.sze.hu.
Figure 4. The Execution time of one experiment (256 host commands)
using BIND as a recursor under Linux, OpenBSD and FreeBSD
TABLE IV. DNS64 PERFORMANCE: BIND, LINUX, FORWARDER
1 | Number of clients                     |     1 |     2 |      8
2 | Exec. time of 256 host commands (s)   | 1.127 | 1.542 |  6.318
3 |   standard deviation (s)              | 0.049 | 0.037 |  0.104
4 |   maximum value (s)                   | 1.650 | 1.680 |  6.470
5 | CPU utilization (%)                   | 61.77 | 95.56 | 100.00
6 |   standard deviation (%)              |   4.1 |   2.3 |    0.0
7 | DNS64 memory consumption (MB)         |    40 |    58 |     57
8 | Number of requests served (request/s) |   227 |   332 |    324
TABLE V. DNS64 PERFORMANCE: BIND, OPENBSD, FORWARDER
1 | Number of clients                     |     1 |     2 |      8
2 | Exec. time of 256 host commands (s)   | 1.211 | 1.749 |  7.291
3 |   standard deviation (s)              | 0.008 | 0.051 |  7.800
4 |   maximum value (s)                   | 1.230 | 1.910 | 37.890
5 | CPU utilization (%)                   | 67.46 | 97.21 |  98.48
6 |   standard deviation (%)              |   1.9 |   2.4 |    6.0
7 | DNS64 memory consumption (MB)         |    37 |    52 |     47
8 | Number of requests served (request/s) |   211 |   293 |    281
5) OpenBSD, BIND is a forwarder
The performance results of the DNS64 server realized by BIND used as a forwarder running under OpenBSD are summarized in Table V. Similarly to the Linux system, there is not much difference, but a slight increase of the performance can be observed when comparing the values to Table II. Moreover, the deviation increased further; for 8 clients it is even larger than the average.
6) FreeBSD, BIND is a forwarder
The performance results of the DNS64 server realized by BIND used as a forwarder running under FreeBSD are summarized in Table VI. The stability of BIND under FreeBSD remained, but the peak performance increased only by about 7% compared to the values in Table III.
B. DNS64 Performance Results of TOTD
As mentioned earlier, TOTD can act only as a forwarder.
1) Linux, TOTD is a forwarder
The performance results of the DNS64 server realized by TOTD used as a forwarder running under Linux are summarized in Table VII. TOTD under Linux excels with both its very low memory consumption and its high average performance. The low memory consumption is very likely caused by the lack of caching. As our experiments were designed to eliminate the effect of caching by using different IP addresses in each query, the lack of caching caused no performance penalty. However, in a real-life system, the average performance of TOTD may be worse than that of BIND, which uses caching. On the other hand, the very low memory consumption of TOTD can be an advantage in a small embedded system.
TOTD under Linux can perform well if the load can be limited. For one client, the maximum value of the execution time was acceptable (less than twice the average) and
TABLE VI. DNS64 PERFORMANCE: BIND, FREEBSD, FORWARDER

  Number of clients                          1       2       8
  Exec. time of 256 host commands (s)    1.711   2.831  11.126
    standard deviation                   0.014   0.037   0.099
    maximum value                        1.790   2.950  11.390
  CPU utilization (%)                    77.15   97.32  100.00
    standard deviation                     1.8     1.5     0.0
  DNS64 memory consumption (MB)             31      37      37
  Number of requests served (request/s)    150     181     184
TABLE VII. DNS64 PERFORMANCE: TOTD, LINUX, FORWARDER

  Number of clients                          1       2       8
  Exec. time of 256 host commands (s)    0.791   1.310   4.348
    standard deviation                   0.038   3.863   5.301
    maximum value                        1.370  64.950  68.540
  CPU utilization (%)                    38.00   58.05   84.79
    standard deviation                     2.4    27.1    29.2
  DNS64 memory consumption (MB)            1.0     1.1     0.8
  Number of requests served (request/s)    324     391     471
TOTD served 324 requests per second, while BIND running
under Linux as a forwarder could serve only 227 (see Table
IV).
However, the results of TOTD for 2, 4 and 8 clients are
unacceptable: the maximum values of the execution time of
one experiment are always higher than a minute. We inves-
tigated this phenomenon and found that under high load,
TOTD occasionally stopped responding for about a minute
and then continued operating. TOTD produced similar be-
havior under all three operating systems. We also verified
that the authoritative DNS server at teacherb remained
responsive. Thus, without load limitation, TOTD is not safe
to use in a production system. We believe that, due to its
excellent average performance, TOTD would deserve a
thorough code review and bug fixing.
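The acceptability criterion used above (the maximum execution time stays below twice the average) can be stated as a short check. The helper below is our own illustration, fed with the Table VII statistics:

```python
# Our own sketch of the stability criterion used in the text: a series
# of experiments is considered acceptable if its maximum execution time
# is less than twice the average execution time.

def is_acceptable(mean_s: float, max_s: float) -> bool:
    """True if the worst-case execution time stays below twice the mean."""
    return max_s < 2 * mean_s

# Table VII (TOTD, Linux): (mean, maximum) execution times in seconds
print(is_acceptable(0.791, 1.370))   # 1 client: acceptable
print(is_acceptable(1.310, 64.950))  # 2 clients: roughly one-minute outages
```

Applied to Table VII, the one-client series passes, while the two-client series fails by a wide margin, which is exactly the unresponsiveness phenomenon described above.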
2) OpenBSD, TOTD is a forwarder
The performance results of the DNS64 server realized by
TOTD used as a forwarder running under OpenBSD are
summarized in Table VIII. Comparing TOTD running under
OpenBSD to TOTD running under Linux, the average per-
formance results are similar, but its stability is somewhat
better under OpenBSD: it produced acceptable maximum
execution time values for two clients. It can also be observed
that the maximum values of the execution time of one
experiment stay below a minute under OpenBSD (though
they are still too high, as is the standard deviation). For these
reasons, if one chooses to use TOTD with load limitation, it
is worth running it under OpenBSD rather than under Linux.
3) FreeBSD, TOTD is a forwarder
The performance results of the DNS64 server realized by
TOTD used as a forwarder running under FreeBSD are
summarized in Table IX.
TABLE VIII. DNS64 PERFORMANCE: TOTD, OPENBSD, FORWARDER

  Number of clients                          1       2       8
  Exec. time of 256 host commands (s)    0.808   1.118   4.445
    standard deviation                   0.010   0.141   5.468
    maximum value                        0.830   3.060  41.080
  CPU utilization (%)                    48.99   79.98   78.34
    standard deviation                     2.2     6.6    11.4
  DNS64 memory consumption (MB)            0.2     0.1     0.4
  Number of requests served (request/s)    317     458     461
TABLE IX. DNS64 PERFORMANCE: TOTD, FREEBSD, FORWARDER

  Number of clients                          1       2       8
  Exec. time of 256 host commands (s)    0.923   1.127   4.750
    standard deviation                   0.676   0.061   6.181
    maximum value                        5.800   1.300  69.700
  CPU utilization (%)                    45.01   78.66   83.23
    standard deviation                     6.4     5.2    34.2
  DNS64 memory consumption (MB)            3.0     0.7     0.9
  Number of requests served (request/s)    277     454     431
It can be observed that the average performance results of
TOTD are quite similar under all three platforms. For
two clients, TOTD under FreeBSD even outperformed
TOTD under Linux by serving 454 requests per second vs.
391. However, the validity of these numbers is rather low,
because the averages are strongly influenced by the unre-
sponsiveness phenomenon of TOTD. The numbers of re-
quests served for one client represent the performance of
TOTD under the two platforms much more faithfully: it can
serve 324 requests per second under Linux and only 277
requests per second under FreeBSD.
The authors have another recommendation regarding TOTD.
As this tiny DNS64 solution used very little memory under
all three operating systems, outperformed BIND, and worked
well as long as the load was limited, TOTD is a good
candidate for a client-side DNS64 solution. The load gen-
erated by a single computer is surely limited, and TOTD's
low memory and CPU requirements make it the best choice
for a client-side DNS64 service.
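For instance, a client-side TOTD instance needs only a few lines of configuration. The fragment below is a hypothetical sketch (the forwarder address and NAT64 prefix are placeholders; the exact directive names should be checked against the TOTD documentation [17]):

```
# totd.conf -- hypothetical minimal client-side DNS64 configuration.
# Queries are forwarded to an ordinary recursive resolver; AAAA records
# are synthesized for IPv4-only names using the given NAT64 prefix.
forwarder 192.0.2.53 port 53
prefix 2001:db8:ffff::
port 53
```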
VII. CONCLUSIONS
BIND9 running under Linux showed the best overall per-
formance characteristics among the tested DNS64 solutions.
It was stable under serious overload conditions and its
memory requirement was found to be moderate. BIND9
running under Linux is our number one recommendation for
a DNS64 solution in a production system.
BIND9 running under FreeBSD was very stable but
showed lower performance than under Linux, due to run-
ning in a jail. It can also be a choice if the FreeBSD platform
is preferred for security reasons.
The memory usage of TOTD was very low under all three
operating systems; thus it can be a good DNS64 solution
for small embedded systems, but it needs an external tool
for load limitation, otherwise it may lose its stability.
Due to its excellent average performance, TOTD would
deserve a thorough code review and bug fixing. It is also a
good candidate for a client-side DNS64 solution.
This is the author's version of the paper. For personal use only, not for
redistribution. The definitive version can be found in the Proceedings of the
IEEE 27th International Conference on Advanced Information Networking
and Applications (AINA 2013), (Barcelona, Spain, March 25-28, 2013),
pp. 877-884. © IEEE Computer Society
ACKNOWLEDGMENTS
TÁMOP-4.2.2.C-11/1/KONV-2012-0012: "Smarter
Transport" - IT for co-operative transport system. The
Project is supported by the Hungarian Government and co-
financed by the European Social Fund.
The publication of this paper was supported by the
TÁMOP-4.2.2/B-10/1-2010-0010 project and by the
Széchenyi István University (15-3202-08).
REFERENCES
[1] M. Bagnulo, A. Sullivan, P. Matthews and I. van Beijnum, "DNS64: DNS
extensions for network address translation from IPv6 clients to IPv4
servers", IETF, April 2011. ISSN: 2070-1721 (RFC 6147)
[2] M. Bagnulo, P. Matthews and I. van Beijnum, "Stateful NAT64: Network
address and protocol translation from IPv6 clients to IPv4 servers",
IETF, April 2011. ISSN: 2070-1721 (RFC 6146)
[3] The Number Resource Organization, "Free pool of IPv4 address
space depleted", http://www.nro.net/news/ipv4-free-pool-depleted
[4] RIPE NCC, "RIPE NCC begins to allocate IPv4 address space from
the last /8", http://www.ripe.net/internet-coordination/news/ripe-ncc-
begins-to-allocate-ipv4-address-space-from-the-last-8
[5] G. Tsirtsis and P. Srisuresh, "Network Address Translation - Protocol
Translation (NAT-PT)", IETF, February 2000. (RFC 2766)
[6] C. Aoun and E. Davies, "Reasons to move the Network Address
Translator - Protocol Translator (NAT-PT) to historic status", IETF,
July 2007. (RFC 4966)
[7] P. Srisuresh and M. Holdrege, "IP Network Address Translator
(NAT) terminology and considerations", IETF, August 1999. (RFC
2663)
[8] C. Bao, C. Huitema, M. Bagnulo, M. Boucadair and X. Li, "IPv6
addressing of IPv4/IPv6 translators", IETF, October 2010. ISSN:
2070-1721 (RFC 6052)
[9] M. Bagnulo, A. Garcia-Martinez and I. van Beijnum, "The
NAT64/DNS64 tool suite for IPv6 transition", IEEE Communications
Magazine, vol. 50, no. 7, July 2012, pp. 177-183.
[10] K. J. O. Llanto and W. E. S. Yu, "Performance of NAT64 versus
NAT44 in the context of IPv6 migration", Proceedings of the Interna-
tional MultiConference of Engineers and Computer Scientists 2012,
Vol. I (IMECS 2012, March 14-16, 2012), Hong Kong, pp. 638-645
[11] C. P. Monte et al., "Implementation and evaluation of protocols trans-
lating methods for IPv4 to IPv6 transition", Journal of Computer
Science & Technology, vol. 12, no. 2, pp. 64-70
[12] S. Yu and B. E. Carpenter, "Measuring IPv4-IPv6 translation tech-
niques", Technical Report 2012-001, Department of Computer
Science, The University of Auckland, January 2012
[13] G. Lencse and G. Takács, "Performance analysis of DNS64 and
NAT64 solutions", Infocommunications Journal, vol. 4, no. 2.
[14] Free Software Foundation, "The free software definition",
http://www.gnu.org/philosophy/free-sw.en.html
[15] Open Source Initiative, "The open source definition",
http://opensource.org/docs/osd
[16] Internet Systems Consortium, "Berkeley Internet Name Daemon
(BIND)", https://www.isc.org/software/bind
[17] Feike W. Dillema, "Trick Or Treat Daemon (TOTD)",
http://www.dillema.net/software/totd.html
[18] NTIA ITS, "Definition of 'graceful degradation'",
http://www.its.bldrdoc.gov/fs-1037/dir-017/_2479.htm
[19] FreeBSD Handbook, Chapter 16, "Jails",
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails.html
[20] G. Lencse and S. Répás, "Performance analysis and comparison of
the TAYGA and of the PF NAT64 implementations", unpublished