Figure 7 - uploaded by Antonio Pescapè
Traffic generation in (a) single flow mode and (b) multiple flow mode.


Source publication
Article
Full-text available
In the networking field, traffic generator platforms are of paramount importance. This paper describes a distributed software platform for synthetic traffic generation over IPv4/v6 networks, called D-ITG (Distributed Internet Traffic Generator). We focus our attention on the original architectural choices and evaluate the perf...

Contexts in source publication

Context 1
... single flow mode: if a single flow must be generated, ITGSend itself manages the configuration of the experiment through the TSP protocol and transmits the packets of that flow (Figure 7(a)); • multiple flows mode: if multiple simultaneous flows must be generated, ITGSend operates as a multi-threaded application. Figure 7(b) shows the case in which all the flows share the same destination host; • daemon mode: ITGSend can be launched and stay idle, waiting for instructions. ...
Context 2
... single flow mode: if a single flow must be generated, ITGSend itself manages the configuration of the experiment through the TSP protocol and transmits the packets of that flow (Figure 7(a)); • multiple flows mode: if multiple simultaneous flows must be generated, ITGSend operates as a multi-threaded application. Figure 7(b) shows the case in which all the flows share the same destination host; • daemon mode: ITGSend can be launched and stay idle, waiting for instructions. Thus, the generation process can be remotely controlled (Section 3.5). ...
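The three ITGSend operating modes map directly onto how the tool is launched. As a rough illustration only, the Python sketch below drives ITGSend in single-flow, multi-flow (script-driven) and daemon mode; the option names (-a, -T, -C, -c, -t, -Q) and the script-file syntax reflect typical D-ITG usage but are assumptions that may vary between versions, and a matching ITGRecv instance is assumed to be running on the destination host.

```python
# Hedged sketch: launching ITGSend in its three modes via subprocess.
# Assumes ITGSend/ITGRecv are installed and on PATH; option names and the
# multi-flow script syntax follow common D-ITG usage and may vary by version.
import subprocess

RECEIVER = "10.0.0.2"  # hypothetical destination host running ITGRecv

# (a) single flow mode: ITGSend itself configures the experiment (TSP)
# and transmits the packets of the flow.
subprocess.run([
    "ITGSend", "-a", RECEIVER, "-T", "UDP",
    "-C", "1000",      # constant rate: 1000 packets/s (IDT process)
    "-c", "512",       # constant payload: 512 bytes (PS process)
    "-t", "10000",     # duration in milliseconds
], check=True)

# (b) multiple flows mode: one line per flow in a script file; ITGSend
# spawns a thread per flow (here all flows share the same destination).
script = "\n".join(
    f"-a {RECEIVER} -T UDP -C 500 -c 256 -t 10000" for _ in range(3)
)
with open("flows.itg", "w") as f:
    f.write(script + "\n")
subprocess.run(["ITGSend", "flows.itg"], check=True)

# (c) daemon mode: ITGSend stays idle and waits for remote instructions.
subprocess.Popen(["ITGSend", "-Q"])
```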

Citations

... Xprobe [10] uses a fuzzy matrix to identify the host operating system. Botta et al. [11] used the D-ITG [12] tool to generate multiple traffic patterns through different combinations of PS and IDT. Although these active probing methods can obtain information about network assets to some extent, sending many packets to the target host can affect the device's functionality and may even cause it to crash [13]. PLCScan is a software tool developed jointly by Positive Research and Scadastrangelove [14]. ...
Article
Full-text available
Industrial control device asset identification is essential to the active defense and situational awareness system for industrial control network security. However, industrial control device asset information is challenging to obtain, and efficient asset detection models and identification methods are urgently needed. Existing active detection techniques send many packets to the system, affecting device operation, while passive identification can only analyze publicly available industrial control data. To address this problem, we propose an asset identification method comprising networked industrial control device asset detection, fingerprint feature extraction and classification. The proposed method uses TCP SYN semi-networked probing in the asset detection phase to reduce the number of packets sent and to remove honeypot device data. The fingerprint feature extraction phase considers the periodicity and long-term stability characteristics of industrial control devices and proposes a set of asset fingerprint feature combinations. The classification phase uses an improved decision tree algorithm based on feature weight correction and an AdaBoost ensemble learning algorithm to strengthen the classification model. The experimental results show that the detection technique proposed by our method has the advantages of high efficiency, low frequency and noise immunity.
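The classification stage described above combines a decision tree (with feature-weight correction) and AdaBoost. Purely as a generic point of reference, and not the authors' implementation, the sketch below shows what a plain boosted-decision-tree pipeline looks like in scikit-learn; the features, labels and hyperparameters are invented placeholders.

```python
# Generic sketch of a boosted decision-tree classifier for device fingerprints.
# Not the cited method: features, labels and hyperparameters are placeholders.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 6))             # 6 hypothetical fingerprint features
y = rng.integers(0, 3, size=500)     # 3 hypothetical device classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# AdaBoost over shallow decision trees used as weak learners.
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                         n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```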
... To generate packet flows, the authors used Iperf [11], a well-accepted benchmarking tool, and reported results for UDP throughput, jitter, and packet loss. In their work, Debbarma and Das [8] carried out an empirical assessment of IPv4 and IPv6 on Windows XP/Vista/7/8, using D-ITG [2], in a testbed consisting of two computers connected through a crossover cable, and reported throughput, delay, and jitter. Hadiya, Save, and Geetu [20] evaluated the performance of TCP, UDP, and game traffic (Quake3) when passing through 6to4 [7] and manual tunnels, for Windows Server 2008/2012. ...
... Hadiya, Save, and Geetu [20] evaluated the performance of TCP, UDP, and game traffic (Quake3) when passing through 6to4 [7] and manual tunnels, for Windows Server 2008/2012. They used D-ITG [2] to generate the TCP and UDP flows. Supriyanto, Sofhan, Fahrizal, and Osman [55] assessed the performance of IPv6 when using jumbo frames generated by D-ITG [2], in a controlled environment consisting of two computers running Windows Server 2012 and connected in an Ethernet LAN through a crossover cable. ...
... They used D-ITG [2] to generate the TCP and UDP flows. Supriyanto, Sofhan, Fahrizal, and Osman [55] assessed the performance of IPv6 when using jumbo frames generated by D-ITG [2], in a controlled environment consisting of two computers running Windows Server 2012 and connected in an Ethernet LAN through a crossover cable. They studied the effect of the MTU (Maximum Transmission Unit) on the throughput, delay, and jitter by changing its value (1500, 3000, 6000, 12000, 24000, 48000, and 65000 bytes). ...
Conference Paper
Since the Internet of Things (IoT) is gaining acceptance worldwide, manufacturers have proposed cheap modules and development boards for its implementation. Even if those devices have been in the markets for some years, just a few studies assess their network performance. In this work, we selected the ESP8266, a well-known IoT module, and evaluated its TCP performance in normal operation, and when facing a Denial of Service (DoS) attack. The performance evaluation was done for both IPv4 and IPv6 using two different platforms for development.
... We used D-ITG (Distributed Internet Traffic Generator) [10], a platform capable of accurately generating traffic flows specified through two random processes: packet Inter Departure Time (IDT) - the time between the transmission of two consecutive packets - and Packet Size (PS) - the amount of data being transferred by the packets. Both processes are modeled as i.i.d. ...
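Because D-ITG characterizes each flow by just these two i.i.d. processes, a synthetic flow can be sketched in a few lines. The snippet below is a minimal generic illustration (not D-ITG code), assuming exponentially distributed IDTs (i.e. Poisson packet arrivals) and uniformly distributed packet sizes; the parameter values are arbitrary examples.

```python
# Minimal sketch of the IDT/PS flow model: each packet is described by an
# inter-departure time and a size, both drawn i.i.d. from chosen distributions.
# Illustrative only; distributions and parameters are assumptions.
import random

random.seed(42)  # reproducible run

MEAN_IDT_S = 0.001          # mean inter-departure time: 1 ms -> ~1000 pkt/s
MIN_PS, MAX_PS = 64, 1500   # packet size range in bytes

def generate_flow(n_packets):
    """Yield (send_time, packet_size) pairs for one synthetic flow."""
    t = 0.0
    for _ in range(n_packets):
        t += random.expovariate(1.0 / MEAN_IDT_S)   # IDT ~ Exp(rate = 1/mean)
        size = random.randint(MIN_PS, MAX_PS)       # PS  ~ Uniform
        yield t, size

if __name__ == "__main__":
    pkts = list(generate_flow(10_000))
    duration = pkts[-1][0]
    total_bytes = sum(s for _, s in pkts)
    print(f"{len(pkts)} packets, {duration:.2f} s, "
          f"{8 * total_bytes / duration / 1e6:.1f} Mbit/s offered load")
```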
Conference Paper
Full-text available
PlanetLab is a global scale platform for the experimentation of new networking applications in a real environment. It consists of several nodes, offered by academic institutions or companies spread all over the world, that can be shared by the networking community for its tests. The main drawback of PlanetLab is its limited heterogeneity in terms of the access technologies it offers. In this paper we discuss the efforts we made to alleviate this problem. We first developed a tool that allowed us to integrate a WiFi testbed controllable by OMF (Orbit Management Framework) [16] into PlanetLab by means of a multi-homed PlanetLab node. OMF is a set of tools that makes it easy to automatically execute experiments and collect measurements on a WiFi testbed. More generally, the tool we developed solves the issues that arise with multi-homed PlanetLab nodes (i.e., PlanetLab nodes having more than one network interface). To fully exploit the potential of such PlanetLab nodes, users need to add routing rules (e.g., rules to reach a destination through the WiFi interface instead of the Ethernet interface). Such an operation cannot be performed in a PlanetLab environment, as the rules a user adds would also affect other users' traffic. This raises the need for user-specific routing tables, i.e., routing tables whose rules apply only to traffic belonging to that user. In this way a user can route his traffic through the WiFi interface, making it traverse the OMF-controllable WiFi testbed, while other users' traffic continues to be routed through the default primary interface. We also had to support the integration of the OMF facilities (e.g., the OMF controller) into the user environment, called a slice, in order to allow the customization of the testbed (e.g., loading a specific disk image on each node) and the automatic execution of experiments. The software we developed to achieve this integration is in the process of being merged into the PlanetLab code base, so that anyone can integrate their wireless infrastructure into PlanetLab.
... The measurement station is equipped with a tool, D-ITG, which is used for the generation of the testing traffic. Details on D-ITG can be found in [11] and in [12]. ...
Article
Full-text available
Research and development efforts in wireless networking and systems are progressing at an incredible rate. Among them, the measurement and analysis of the performance achieved at the network layer and perceived by end users is an important task. In particular, recent advances concerning IEEE 802.11b-based networks seem to be focused on the measurement of key parameters at different protocol levels in a cross-layered fashion, because of their inherent vulnerability to in-channel interference. By adopting a cross-layer approach on a real network set-up operating in a suitable experimental testbed, packet loss against signal-to-interference ratio in IEEE 802.11b-based networks is assessed. The results of several measurements are reported, aimed at establishing the sensitivity of IEEE 802.11b carrier sensing mechanisms to continuous interfering signals and at evaluating the effects of triggered interference on packet transmission.
... Harpoon [9] uses a traffic trace for self-training, and can subsequently generate synthetic traffic with certain statistical properties based on the original trace. D-ITG [10], [11] generates flows using a simple predefined (but configurable) independent sampling model for packet sizes and inter-packet intervals. While both Harpoon and D-ITG provide excellent Internet traffic generation platforms, our results indicate that the properties modeled in these systems are not adequate for reproducing realistic performance in the wireless setting. ...
Conference Paper
Full-text available
Performance predictions from wireless networking laboratory experiments rarely seem to match what is seen once technologies are deployed. We believe that one of the major factors hampering researchers' ability to make more reliable forecasts is the inability to generate realistic workloads. To redress this problem, we take a fundamentally new approach to measuring the realism of wireless traffic models. In this approach, the realism of a model is defined directly in terms of how accurately it reproduces the performance characteristics of actual network usage. This cuts through the Gordian knot of deciding which statistical features of traffic traces are significant. We demonstrate that common experimental traffic models, such as uniform constant bit-rate traffic (CBR), drastically misrepresent performance metrics at all levels of the protocol stack. We also define and explore the space of synthetic traffic models, thereby advancing the understanding of how different modeling techniques affect the accuracy of performance predictions. Our research takes initial steps that will ultimately lead to comprehensive, multi-level models of realistic wireless workloads.
... For this case there is a large number of free applications, such as D-ITG, KUTE, pktgen, etc. [5]. One can also resort to hardware solutions based on FPGA designs, to specific traffic capture and injection cards such as the DAG cards offered by the spin-off Endace [4], or to systems equipped with a network processor. ...
Article
With the increase of data rates in networks, the computational capacity of network resources and principal servers may become inadequate. In order to test the performance of networks, network resources and servers, traffic generators are needed. These generate traffic flows with different characteristics. It is also necessary to have network monitoring systems to inspect and process traffic, and doing this efficiently on high-speed segments is not easy. This paper proposes the design of an architecture to inject synthetic traffic and to improve the performance of network traffic analysis. This architecture tries to improve on other solutions' performance using a general-purpose architecture under Linux, on a PC with a common network interface. The basis of this improvement is to include the application in the kernel of the operating system.

I. INTRODUCTION

Nowadays it is common to speak of speeds of 1 Gbps in local networks. The limiting factor in the evolution of networks is not the communication lines but the equipment that interconnects one medium with another. The computational capacity of interconnection equipment and servers is inadequate, and these devices cannot process as much information as they receive from the network. To make progress towards a solution to these problems, suitable tools are needed. On the one hand, traffic injection systems are required to test products under stress conditions. On the other hand, traffic monitoring systems are essential to study the distribution followed by the traffic, both on the corresponding network segment and inside the equipment.

This work proposes the design of a traffic injection system and of a monitoring system based on a general-purpose architecture. There are solutions on the market that could be integrated into this architecture, but, in general, their performance presents scalability problems when injecting and monitoring traffic on high-capacity segments. This proposal seeks to improve that performance, essentially by moving the logic into the kernel of the operating system, closer to the hardware. The aim is for the system to be as generic as possible and to allow performance tests of interconnection systems and of the data network under given traffic conditions. For injection, the main parameters to take into account are the injection rate and the traffic model; the injection system is intended to reach rates that saturate Gigabit Ethernet links. As for monitoring, although it has multiple applications, it is initially intended here to validate the injection system.

The paper is organized into sections. First, the available technological solutions are presented in two parts: the possible injection systems and the monitoring systems. The next section deals with the traffic distributions of interest for synthetic traffic generation in this case. Then the design proposal is presented, explaining the solution in general terms. The following two sections present the reference architectures for monitoring and for injection. Finally, one section proposes the application cases and the experimentation: first, the reference application cases are described, and second, the test scenario is presented.
Article
Software defined network architecture offers scalability and resilience as significant advantages to data center networks, increasing the fault tolerance of traditional data center network architectures. Massive amounts of mobile network data as well as e-commerce application data requests are the key sources for data centers, which continually demand attention. Researchers have yet to design a suitable prototype with functional intelligence to support traffic optimization techniques in SDDC. In this research work, we propose an intelligent traffic management prototype for software defined data centers by means of a reinforcement learning approach, through the integration of functionalities such as controller positioning, traffic load balancing, routing and energy efficiency. These are the key areas where traffic optimization becomes essential to improve network performance. The proposed prototype provides a complete framework for enterprises to deploy applications in an efficient manner. We model the prototype to handle dynamic network data applications such as information retrieval, communication and banking applications. In this article we focus on how communication happens among the data center nodes, as an inter-data center communication process, upon receiving requests from the applications considered. To further enhance the novelty and efficiency of our research work, we adopt multiple reinforcement learning agents to handle the load balancing and routing functionalities. Moreover, to assess and ensure optimized network performance, we evaluate the energy consumption of the network achieved through our proposed prototype.
Article
Nowadays, it is desirable to generate traffic that reflects a realistic network traffic environment for the performance evaluation of network equipment. Existing traffic generation solutions mainly include special test equipment, software traffic generators and field programmable gate array (FPGA)-based traffic generators. However, special test equipment is generally too expensive, software traffic generators cannot achieve high data rates, and FPGA-based traffic generators mostly lack flexibility. This paper presents a novel traffic generation solution based on an aggregated process-based model to overcome the weaknesses of the above methods. The traffic generator can generate real-time Poisson, two-state Markov-modulated Poisson process (MMPP-2) and self-similar traffic in hardware. The main structure of the traffic generator is presented and the statistical properties of the generated traffic are evaluated. Experimental results indicate that the proposed solution achieves better statistical properties of the generated traffic than existing FPGA-based traffic generators, while the achievable data rates can reach Gbps line rate.
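Of the three traffic models mentioned (Poisson, MMPP-2 and self-similar), the two-state MMPP is the least obvious to reproduce. Purely as a software reference, and not the FPGA design described above, the sketch below simulates MMPP-2 packet arrivals: a two-state Markov chain modulates the instantaneous Poisson rate; all rate values are made-up examples.

```python
# Software sketch of a two-state Markov-modulated Poisson process (MMPP-2):
# the chain alternates between states with rates r12/r21, and while in state i
# arrivals occur as a Poisson process of rate lam[i]. Example parameters only.
import random

random.seed(1)

lam = (2000.0, 200.0)    # packet arrival rate per state (packets/s)
r12, r21 = 5.0, 5.0      # state transition rates (1/s)

def mmpp2_arrivals(horizon_s):
    """Return packet arrival times in [0, horizon_s] for an MMPP-2 source."""
    t, state, arrivals = 0.0, 0, []
    while t < horizon_s:
        # Sojourn time in the current state of the modulating chain.
        rate_out = r12 if state == 0 else r21
        sojourn = random.expovariate(rate_out)
        end = min(t + sojourn, horizon_s)
        # Poisson arrivals while the state (and hence lam[state]) is fixed.
        while True:
            t += random.expovariate(lam[state])
            if t >= end:
                break
            arrivals.append(t)
        t = end
        state = 1 - state
    return arrivals

pkts = mmpp2_arrivals(10.0)
print(f"{len(pkts)} arrivals in 10 s -> mean rate {len(pkts) / 10:.0f} pkt/s")
```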
Conference Paper
Simulation modeling of computer networks is an effective technique for evaluating the performance of networks and of data transfer with various Internet protocols. In this paper we focus on the impact of the transport layer protocol used, carried over the new Internet protocol (IPv6), on network performance, especially router load and efficiency. Higher load values or worse efficiency can be a problem for global IPv6 deployment. We propose a traffic measurement methodology based on different stochastic systems. We are able to generate different TCP, UDP and SCTP traffic according to a uniform probability distribution for packet inter departure time and packet size. We use a generator that can reproduce an experiment by using the same seed for the random values, and we focus on the correlation of packet size and inter departure time with the load on the router and the communicating nodes. All experiments were carried out both in a pure IPv4 environment and in a native IPv6 environment. We expected that the use of the new protocol would lead to higher efficiency of data transfer. Using forward stepwise regression analysis, we identify which variables influence the packet transfer time.
Article
Testing today's high-speed network equipment requires the generation of network traffic that is similar to real Internet traffic at Gbps line rates. There are many software-based traffic generators that can generate packets according to different stochastic distributions, but they are not suitable for high-speed hardware test platforms. This paper describes FPGEN (Fast Packet GENerator), a programmable random traffic generator which is entirely implemented on an FPGA (Field Programmable Gate Array). FPGEN can generate variable packet sizes and traffic with Poisson and Markov-modulated on-off statistics at OC-48 rate per interface. The work presented in this paper includes the theoretical design of FPGEN, the hardware design of the FPGA-based traffic generator board (printed circuit board design and construction) and the implementation of FPGEN on the FPGA. Our experimental study demonstrates that FPGEN can achieve both the desired rate and the desired statistical properties for the generated traffic.