Figure 17 - uploaded by Sally Floyd
A simulation network with five FTP connections.


Source publication
Article
Full-text available
In this paper we define the notion of traffic phase in a packet-switched network and describe how phase differences between competing traffic streams can be the dominant factor in relative throughput. Drop Tail gateways in a TCP/IP network with strongly periodic traffic can result in systematic discrimination against some connections. We demonstrate t...

Similar publications

Article
Full-text available
In this work, we consider the task of classifying the binary positive-unlabeled (PU) data. The existing discriminative learning based PU models attempt to seek an optimal re-weighting strategy for U data, so that a decent decision boundary can be found. In contrast, we provide a totally new paradigm to attack the binary PU task, from perspective of...
Conference Paper
Full-text available
In this paper, we discuss the necessity of new observation and control structures for organic computing systems starting from the basic contradiction between bottom-up behaviour and top-down design. An Observer/Controller architecture serves the purpose to keep emergent behaviour within predefined limits. As an illustration, a framework for reconfi...
Article
Full-text available
Deployment of wireless links (terrestrial and satellite) along with wired links has made extension of the Internet even in remote places feasible. TCP/IP protocol suite is an integral part of the Internet. Congestion control of TCP plays a vital role in the performance of the Internet. TCP's unconditional flow control in case of a packet loss has a...
Article
Full-text available
This article describes the development of a prototype vehicular traffic controller whose communication system is based on the TCP/IP protocol, to remotely monitor and control the operation of the traffic lights at a road intersection. The results show the communication times between the central station and the ...

Citations

... Additive increase multiplicative decrease mechanism Many standard TCP variants use a congestion control mechanism called Additive Increase Multiplicative Decrease (AIMD). The AIMD [6] mechanism is a feedback-based congestion window method used in the TCP congestion avoidance phase; it increases the window linearly and decreases it multiplicatively when congestion is detected [7]. The AIMD algorithm assumes that packet drops occur when CWND reaches the value w (in packets), as shown in Fig. 1 [8]. ...
Article
Full-text available
In recent days, the need to provide reliable data transmission over Internet traffic or cellular mobile systems has become very important. The Transmission Control Protocol (TCP) is the prevailing protocol providing reliable data transfer for all end-to-end data stream services on the Internet and many newer networks. TCP congestion control has become the key factor governing the behavior and performance of these networks. The TCP sender regulates the size of the congestion window (CWND) using the congestion control mechanism, dynamically adjusting the window size based on packet acknowledgments (ACKs) or on indications of packet loss. TCP congestion control includes two main phases, slow start and congestion avoidance; although the two phases operate separately, their combination controls CWND and the injection of packets into the network pipe. Congestion avoidance and slow start are independent mechanisms with different objectives, but when congestion occurs they are executed together. This article provides an efficient and reliable congestion avoidance mechanism to enhance TCP performance in large-bandwidth, low-latency networks. The proposed mechanism also includes a facility to send multiple flows over the same connection, with a novel technique to estimate the number of available flows dynamically; all experiments validating the proposed techniques were performed in the NS-2 network simulator.
... A larger quantity of data might be lost before the source detects the router's packet drops. Due to packet drops, TCP also suffers from the problem of global synchronization among sources [2]. ...
Article
Full-text available
Congestion is one of the most important issues in communication networks and has attracted much research attention. To ensure a stable TCP network, we can use active queue management (AQM) for early congestion detection and router queue length regulation. In this study, we propose using the Grey Wolf Optimizer (GWO) algorithm to design a fuzzy proportional integral (fuzzy-PI) controller as a novel AQM for congestion control in Internet routers, achieving a low steady-state error and fast response. The suggested fuzzy-logic-based network traffic control strategy permits us to deploy linguistic knowledge to depict the dynamics of the probability marking functions, and ensures a more accurate use of multiple inputs to depict the network's state. The possibility of incorporating human knowledge into such a control strategy using the fuzzy logic control methodology was demonstrated. The proposed controller was compared to a proportional integral (PI) controller through several MATLAB simulation scenarios. The results indicated the stability of the proposed controller and its ability to attain a faster response in a dynamic network with varying network load and target queue length.
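The fuzzy-PI design above builds on the classical PI AQM loop. As a point of reference, a plain discrete PI marking-probability update can be sketched as follows; the gains and the target queue length here are illustrative placeholders, not the paper's GWO-tuned values:

```python
class PIAQM:
    """Plain PI marking-probability controller for AQM.

    A baseline sketch only, not the paper's GWO-tuned fuzzy-PI controller.
    Gains a, b and target queue q_ref are illustrative values.
    """

    def __init__(self, a=1.822e-5, b=1.816e-5, q_ref=200):
        self.a, self.b, self.q_ref = a, b, q_ref
        self.p = 0.0            # current marking probability
        self.q_prev = q_ref     # previous queue-length sample

    def update(self, q):
        # PI difference equation: p(k) = p(k-1) + a*(q - q_ref) - b*(q_prev - q_ref)
        self.p += self.a * (q - self.q_ref) - self.b * (self.q_prev - self.q_ref)
        self.q_prev = q
        self.p = min(1.0, max(0.0, self.p))  # clamp to a valid probability
        return self.p
```

A persistent queue excess above `q_ref` keeps ratcheting the marking probability up, which is the behavior the fuzzy controller refines with linguistic rules.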
... A PI controller was proposed in [6] based on the linearized model and control theory; the controller displayed better properties than the RED controller. In [7], a linear quadratic (LQ)-servo controller was proposed, with the controller parameters chosen by trial and error. In [8], an adaptive neural-network-based AQM was developed, and the proposed algorithm showed good performance. ...
Article
Full-text available
As an effective mechanism acting on the intermediate nodes to support end-to-end congestion control, Active Queue Management (AQM) takes a trade-off between link utilization and the delay experienced by data packets. In this paper, a linear quadratic optimal controller was designed, based on linear control theory, for a TCP/AQM router. The design depends on choosing the weighting matrices Q and R of the linear quadratic performance index; they are tuned using Particle Swarm Optimization (PSO), which minimizes a performance criterion until the desired step response is acquired. The controller simulation results show the efficiency of the proposed controller.
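Once Q and R are fixed, the LQ design reduces to solving a discrete Riccati equation. For a scalar plant x(k+1) = a*x(k) + b*u(k) the iteration and the resulting state-feedback gain can be sketched as below; the paper tunes matrix-valued Q and R with PSO, so the scalar case is purely illustrative:

```python
def lqr_scalar(a, b, q, r, iters=200):
    """Scalar discrete-time LQR gain via fixed-point Riccati iteration.

    Illustrative sketch: solves p = q + a^2*p - (a*b*p)^2 / (r + b^2*p)
    and returns the gain k so that the control law is u = -k*x.
    """
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)
```

For a = b = q = r = 1 the Riccati fixed point is the golden ratio, giving a gain of about 0.618 and a stable closed loop a - b*k ≈ 0.382.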
... In the additive increase phase, for every ACK received, cwnd is increased by 1/cwnd segments. This is roughly equivalent to growing cwnd by one segment per RTT (Floyd & Jacobson 1991). In the multiplicative decrease phase, when congestion is detected, the transmitter decreases the transmission rate by a multiplicative factor; for example, the congestion window is cut in half after a loss. ...
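The AIMD rule described in this excerpt fits in a few lines; a simplified sketch in units of segments:

```python
def aimd_update(cwnd, loss=False):
    """AIMD sketch: add 1/cwnd segments per ACK (about one segment per RTT);
    halve the window on a loss signal, never dropping below one segment."""
    if loss:
        return max(1.0, cwnd / 2.0)   # multiplicative decrease
    return cwnd + 1.0 / cwnd          # additive increase, per ACK
```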
... With 1500 byte packets the collapse is much faster. Second, datacenter networks are very regular, so phase effects [14] can occur, leading to unfair throughput. The dashed curves in Figure 2 show the mean goodput of the worst performing 10% of the flows. ...
Conference Paper
Full-text available
Modern datacenter networks provide very high capacity via redundant Clos topologies and low switch latency, but transport protocols rarely deliver matching performance. We present NDP, a novel data-center transport architecture that achieves near-optimal completion times for short transfers and high flow throughput in a wide range of scenarios, including incast. NDP switch buffers are very shallow and when they fill the switches trim packets to headers and priority forward the headers. This gives receivers a full view of instantaneous demand from all senders, and is the basis for our novel, high-performance, multipath-aware transport protocol that can deal gracefully with massive incast events and prioritize traffic from different senders on RTT timescales. We implemented NDP in Linux hosts with DPDK, in a software switch, in a NetFPGA-based hardware switch, and in P4. We evaluate NDP's performance in our implementations and in large-scale simulations, simultaneously demonstrating support for very low-latency and high throughput.
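The trimming behavior described above can be sketched as a two-queue switch port; the field names and queue structure here are illustrative, not the actual NDP implementation:

```python
from collections import deque


class TrimmingSwitchPort:
    """Sketch of NDP-style packet trimming (illustrative, not the NDP code).

    When the shallow data buffer is full, the packet is trimmed to its header,
    and headers are forwarded with priority, so the receiver still learns of
    every packet that was sent.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = deque()      # low-priority full packets
        self.headers = deque()   # high-priority trimmed headers

    def enqueue(self, pkt):
        if len(self.data) < self.capacity:
            self.data.append(pkt)
        else:
            self.headers.append({"hdr": pkt["hdr"]})  # trim payload, keep header

    def dequeue(self):
        # Headers are priority-forwarded ahead of queued data packets.
        return self.headers.popleft() if self.headers else self.data.popleft()
```

With a one-packet buffer, a second arrival is trimmed, and its header overtakes the buffered data packet on the wire, which is what gives the receiver an instantaneous view of demand.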
... We conducted the same experiment with 18 and 27 foreground TCP flows to check the effect of multiple TCP flows over the same bottleneck. In these experiments we started 2 and 3 TCP flows for the same destination, using a random start time within the first second of the simulation in order to minimize phase effects [11]. In both experiments, we obtained 100% accurate shared bottleneck recognition (i.e., accuracy index (AI) = 1). ...
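Randomizing start times, as done in this experiment, is a common way to desynchronize strictly periodic sources and suppress phase effects; a minimal sketch (parameter names and the fixed seed are illustrative):

```python
import random


def start_times(n_flows, jitter=1.0, seed=7):
    """Draw each flow's start time uniformly within the first `jitter`
    seconds of the simulation, to break traffic phase effects."""
    rng = random.Random(seed)   # fixed seed keeps the experiment reproducible
    return [rng.uniform(0.0, jitter) for _ in range(n_flows)]
```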
Article
Full-text available
We present a new mechanism for detecting shared bottlenecks between end-to-end paths in a network. Our mechanism, which only needs one-way delays from endpoints as an input, is based on the well-known linear algebraic approach: singular value decomposition (SVD). Clusters of flows which share a bottleneck are extracted from SVD results by applying an outlier detection method. Simulations with varying topologies and different network conditions show the high accuracy of our technique.
... In the slow-start phase of the window expansion algorithm, the source node transmits two segments for each ACK received. In the congestion avoidance phase, the source node normally transmits one segment for each ACK received (Floyd and Jacobson, 1991). The basic principles of both the slow-start and congestion avoidance algorithms, and their correlation with each other, make a big contribution to this thesis, as they support the development of a new congestion avoidance mechanism aiming to enhance the performance of TCP over large-bandwidth, low-latency wired and wireless links. ...
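The two per-ACK growth rules contrasted above can be written as a single update function (a simplified sketch in units of segments):

```python
def on_ack(cwnd, ssthresh):
    """Per-ACK congestion window growth (after Floyd & Jacobson 1991, simplified).

    Below ssthresh: slow start, +1 segment per ACK, so the window doubles
    each RTT. At or above ssthresh: congestion avoidance, +1/cwnd per ACK,
    roughly +1 segment per RTT.
    """
    if cwnd < ssthresh:
        return cwnd + 1.0
    return cwnd + 1.0 / cwnd
```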
Article
Full-text available
TCP or Transmission Control Protocol represents one of the prevailing "languages" of the Internet Protocol Suite, complementing the Internet Protocol (IP), and therefore the entire suite is commonly referred to as TCP/IP. TCP provides reliable data transfer for all end-to-end data stream services on the Internet. This protocol is utilized by major Internet applications such as e-mail, file transfer, remote administration and the world-wide web. Other applications which do not require a reliable data stream service may use the User Datagram Protocol (UDP), which provides a datagram service that emphasizes reduced latency over reliability. The task of determining the available bandwidth of a TCP packet flow is, in fact, very tedious and complicated. The complexity arises from the effects of both network dynamics and TCP congestion control. Congestion control is an established mechanism used to detect the optimum bandwidth at which packets are to be sent by the TCP sender. Understanding TCP behaviour and the approaches used to enhance its performance, in fact, still remains a major challenge. In conjunction with this, a considerable amount of research has been conducted with a view to developing a good mechanism to raise the efficiency of TCP performance. The article analyses and investigates the congestion control technique applied by TCP, and indicates the main parameters and requirements needed to design and develop a new congestion control mechanism.
... For instance, a slightly modified latency can lead to a radically different bandwidth sharing among two competing flows, which cannot be captured by a (continuous) flow-level model. It turns out that we have "rediscovered" a phenomenon called the phase effect [15]. ...
Article
Full-text available
Researchers in the area of distributed computing conduct many of their experiments in simulation. While packet-level simulation is often used to study network protocols, it can be too costly to simulate network communications for large-scale systems and applications. The alternative is to simulate the network based on less costly flow-level models. Surprisingly, in the literature, validation of these flow-level models is at best a mere verification for a few simple cases. Consequently, although distributed computing simulators are widely used, their ability to produce scientifically meaningful results is in doubt. In this work we focus on the validation of state-of-the-art flow-level network models of TCP communication, via comparison to packet-level simulation. While it is straightforward to show cases in which previously proposed models lead to good results, instead we systematically seek cases that lead to invalid results. Careful analysis of these cases reveals fundamental flaws and also suggests improvements. One contribution of this work is that these improvements lead to a new model that, while far from being perfect, improves upon all previously proposed models. A more important contribution, perhaps, is provided by the pitfalls and unexpected behaviors encountered in this work, leading to a number of enlightening lessons. In particular, this work shows that model validation cannot be achieved solely by exhibiting (possibly many) "good cases." Confidence in the quality of a model can only be strengthened through an invalidation approach that attempts to prove the model wrong.
... Next, many low-level hardware mechanisms and protocols rely on the improbability of certain phenomena happening due to imperfections in the hardware and environmental effects of the real world. An example of this is the phase effect introduced by Floyd et al. in [20]. Therefore, perfect models might even yield erroneous predictions. ...
Thesis
Distributed systems are in the mainstream of information technology. It has become standard to rely on multiple distributed units to improve the performance of the application, help tolerate component failures, or handle problems too large to fit in a single processing unit. The design of algorithms adapted to the distributed context is particularly difficult due to the asynchrony and the nondeterminism that characterize distributed systems. Simulation offers the ability to study the performance of distributed applications without the complexity and cost of the real execution platforms. On the other hand, model checking allows one to assess the correctness of such systems in a fully automatic manner. In this thesis, we explore the idea of integrating a model checker with a simulator for distributed systems in a single framework, to gain both performance and correctness assessment capabilities. To deal with the state explosion problem, we present a dynamic partial order reduction algorithm that performs the exploration based on a reduced set of networking primitives, allowing us to verify programs written for any of the communication APIs offered by the simulator. This is only possible after the development of a full formal specification of the semantics of these networking primitives, which allows us to reason about the independence of the communication actions as required by the DPOR algorithm. We show through experimental results that our approach is capable of dealing with non-trivial unmodified C programs written for the SimGrid simulator. Moreover, we propose a solution to the problem of scalability for CPU-bound simulations, envisioning the simulation of Peer-to-Peer applications with millions of participating nodes. Contrary to classical parallelization approaches, we propose parallelizing some internal steps of the simulation, while keeping the whole process sequential.
We present a complexity analysis of the simulation algorithm, and we compare it to the classical sequential algorithm to obtain a criterion that describes in what situations a speedup can be expected. An important result is the observation of the relation between the precision of the models used to simulate the hardware resources and the potential degree of parallelization attainable with this approach. We present several case studies that benefit from the parallel simulation, and we show the results of a simulation at unprecedented scale of the Chord Peer-to-Peer protocol with two million nodes, executed on a single machine.
... Under elevated traffic loads, clusters of packets that cannot be buffered are discarded. These clusters of dropped packets cause retransmission of data and synchronization of flows [1,2]. Moreover, for non-dropped packets, full queues translate into increased delays, and synchronization increases the variability of the queue. ...
... This paper proposes an AQM scheme that incorporates a technique to optimally space congestion marks to the likelihood detector without requiring the storage of per-flow state information. Spreading congestion marks far apart provides a mechanism to avoid clusters of congestion marks, therefore reducing the probability of synchronization during transient and steady states [2]. Moreover, congestion mark spacing improves efficiency by avoiding excessive marking. ...
... As Floyd and Jacobson described in [2], it is desirable for packet drops to occur homogeneously and as far apart as possible. The original motivation for this is to counteract global synchronization and allow the feedback system to react before dropping more packets. ...
Article
Active Queue Management (AQM) aims at minimizing queuing delay while maximizing the bottleneck link throughput. This paper describes two statistical principles that can be exploited to develop improved AQM mechanisms. The first principle indicates that the statistical characteristics of packet markings provide a performance bound of AQM in relation to the queue’s variance, which translates to a limitation of the traditional probabilistic marking. Based on the error diffusion algorithm, a simple marking strategy is proposed to reduce the queue’s variance by one order of magnitude from that attained with probabilistic drops. The second principle focuses on the relationship between the queue occupancy and the likelihood of congestion of the link. This principle reveals that the likelihood of congestion grows exponentially with queue occupancy, suggesting that drop rates ought to increase accordingly. These fundamental principles are used jointly in the so-called Diffusion Early Marking (DEM) algorithm, an AQM scheme introduced in this work that leads to faster reaction, higher bottleneck link utilization, lower drop rates and lower router buffer occupancy than other AQM algorithms.
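The contrast between probabilistic marking and deterministically spaced marks can be illustrated with a one-dimensional error-diffusion sketch; this is inspired by the DEM idea but is not the paper's algorithm:

```python
def error_diffusion_marks(probs):
    """Error-diffusion marking sketch: carry the accumulated marking
    probability forward so that marks at rate p land evenly spaced,
    instead of the clustered marks Bernoulli coin flips can produce."""
    err, marks = 0.0, []
    for p in probs:
        err += p                # diffuse the residual probability forward
        if err >= 1.0:
            marks.append(1)     # emit a mark, consume one unit of probability
            err -= 1.0
        else:
            marks.append(0)
    return marks
```

At a constant marking probability of 0.25, this emits exactly one mark every four packets, whereas independent coin flips would occasionally mark back-to-back packets, which is the clustering the error-diffusion strategy avoids.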