Article

The Eifel Algorithm: Making TCP Robust Against Spurious Retransmissions


Abstract

We propose an enhancement to TCP's error recovery scheme, which we call the Eifel algorithm. It eliminates the retransmission ambiguity, thereby solving the problems caused by spurious timeouts and spurious fast retransmits. It can be incrementally deployed as it is backwards compatible and does not change TCP's congestion control semantics. In environments where spurious retransmissions occur frequently, the algorithm can improve the end-to-end throughput by several tens of percent. An exact quantification is, however, highly dependent on the path characteristics over time. The Eifel algorithm finally makes TCP truly wireless-capable without the need for proxies between the end points. Another key novelty is that the Eifel algorithm provides for the implementation of a more optimistic retransmission timer because it reduces the penalty of a spurious timeout to a single (in the common case) spurious retransmission.
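The core idea of eliminating the retransmission ambiguity can be illustrated with a small sketch: if the sender remembers when it first sent a segment and when it retransmitted it, the timestamp echoed in the first ACK after the retransmission reveals which transmission is being acknowledged. This is an illustrative toy (the function name and interface are my own, not the paper's):

```python
def is_spurious_retransmission(ts_first_send, ts_retransmit, ts_echoed):
    """Return True if the ACK acknowledges the ORIGINAL transmission,
    i.e. the retransmission was unnecessary (spurious).

    An echoed timestamp older than the retransmission time means the
    receiver had already received the original segment."""
    return ts_echoed < ts_retransmit

# Original sent at t=100, RTO fired and the segment was retransmitted at
# t=300, yet the ACK echoes timestamp 100: the original arrived after all.
print(is_spurious_retransmission(100, 300, 100))  # spurious timeout
print(is_spurious_retransmission(100, 300, 300))  # genuine loss
```

On detecting a spurious timeout this way, a sender can restore its congestion state and avoid the unnecessary go-back-N retransmissions that would otherwise follow.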
... Proper estimation of TCP retransmission timers was intended to strike a balance between timely detection of packet losses or delays and avoidance of unnecessary retransmissions, and was strongly correlated with the RTT, i.e., the time interval between the transmission of a packet and the reception of its acknowledgement. The initial TCP algorithm [138] included a smoothed RTT estimate (SRTT) computed by applying a low-pass filter to the RTT measurements, and calculated the RTO as a multiple of the SRTT: RTO = β*SRTT, with a proposed constant value of β = 2. Since then, the research community has attempted to improve the efficiency of the RTO calculation in various ways, e.g., by overcoming the ambiguities of RTT measurements for retransmitted packets [139] [140], by incorporating the measured variance of the RTT into the RTO calculation to provide a more balanced estimator [141], by addressing the problem of spurious retransmissions [140], or by eliminating "RTO outliers" [142]. ...
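The classic estimator cited in that passage fits in a few lines (a hypothetical helper written for illustration, not code from any of the cited works):

```python
ALPHA = 0.9  # smoothing gain; the original specification suggests 0.8-0.9
BETA = 2.0   # the constant cited above: RTO = BETA * SRTT

def update_rto(srtt, rtt_sample):
    """One update of the classic smoothed-RTT estimator: a low-pass
    filter over RTT samples, with the RTO derived as BETA * SRTT."""
    srtt = ALPHA * srtt + (1 - ALPHA) * rtt_sample
    return srtt, BETA * srtt
```

With a steady RTT of 1 s, `update_rto(1.0, 1.0)` leaves SRTT at 1.0 and yields an RTO of 2.0; a sudden sample of 2 s nudges SRTT only to 1.1, which is exactly the sluggishness the later variance-based estimators [141] were designed to fix.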
... CoCoA-E [4] is an improvement on CoCoA+ that estimates the RTO using the Eifel retransmission timer [21] [22], which was originally proposed for estimating TCP timeouts. Analysing the Eifel retransmission timer under large sending rates, the authors argue that the standard values of α (1/4), β (1/8), and K (4) (RFC 6298 [23]) are not optimal. ...
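For reference, the RFC 6298 update rule those constants belong to looks roughly like this (using the RFC's own naming, where alpha weights SRTT and beta weights RTTVAR; the function name is my own):

```python
def rfc6298_update(srtt, rttvar, r, alpha=1/8, beta=1/4, k=4, min_rto=1.0):
    """One RFC 6298 RTO update for a new RTT sample r, in seconds.
    Pass srtt=None and rttvar=None for the first measurement."""
    if srtt is None:
        # First sample: SRTT = R, RTTVAR = R/2
        srtt, rttvar = r, r / 2
    else:
        rttvar = (1 - beta) * rttvar + beta * abs(srtt - r)
        srtt = (1 - alpha) * srtt + alpha * r
    # RTO = SRTT + K*RTTVAR, clamped to a minimum (1 second in the RFC)
    rto = max(min_rto, srtt + k * rttvar)
    return srtt, rttvar, rto
```

For a first sample of 200 ms this gives SRTT = 0.2 s, RTTVAR = 0.1 s, and an RTO clamped up to the 1-second minimum, which hints at why tuning these constants matters for constrained networks.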
Article
Full-text available
The Internet of Things (IoT) comprises things interconnected through the internet with unique identities. Congestion management is one of the most challenging tasks in networks. The Constrained Application Protocol (CoAP) is a low-footprint protocol defined by the IETF for IoT networks, in which CoAP nodes have limited network and battery resources. The CoAP standard specifies an exponential backoff congestion control mechanism, but this backoff may not be adequate for every IoT application, since each application has different characteristics. Further, events such as unnecessary retransmissions and packet collisions caused by lossy links and packet transmission errors may lead to network congestion. Various congestion handling algorithms for CoAP have been defined to improve the performance of IoT applications. Our paper presents a comprehensive survey of the evolution of congestion control mechanisms used in IoT networks. We classify the protocols into RTO-based, queue-monitoring, and rate-based approaches, review congestion avoidance protocols for CoAP networks, and discuss directions for future work.
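The default CoAP backoff mentioned above can be sketched as follows, a minimal illustration using the default constants from the CoAP specification (RFC 7252); the generator name and `rng` hook are my own:

```python
import random

ACK_TIMEOUT = 2.0        # seconds (RFC 7252 default)
ACK_RANDOM_FACTOR = 1.5  # initial timeout is uniform in [2.0, 3.0]
MAX_RETRANSMIT = 4       # up to four retransmissions per message

def coap_timeouts(rng=random.random):
    """Yield the sequence of timeouts for one CoAP confirmable message
    under the default binary exponential backoff."""
    timeout = ACK_TIMEOUT * (1 + rng() * (ACK_RANDOM_FACTOR - 1))
    for _ in range(MAX_RETRANSMIT + 1):
        yield timeout
        timeout *= 2  # double the timeout after every retransmission
```

With no randomization the sequence is 2, 4, 8, 16, 32 seconds; it is this fixed doubling, blind to measured RTT and loss conditions, that the RTO-based schemes surveyed here try to improve on.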
... Over the years, a range of heuristics have been proposed. For TCP, this includes the fast retransmit heuristic [20], selective acknowledgements [21], the Eifel algorithm [22], recent acknowledgments [23], tail loss recovery techniques [24], [25], and others. Other reliable transport protocols have also benefited from this effort. ...
Article
Full-text available
Packet losses are common events in today's networks. They usually result in longer delivery times for application data, since retransmissions are the de facto technique to recover from such losses. Retransmission is a good strategy for many applications, but it may lead to poor performance with latency-sensitive applications compared to network coding. Although different types of network coding techniques have been proposed to reduce the impact of losses by transmitting redundant information, they are not widely used. Some niche applications include their own variant of Forward Erasure Correction (FEC) techniques, but there is no generic protocol that enables many applications to use them easily. We close this gap by designing, implementing and evaluating a new Flexible Erasure Correction (FlEC) framework inside the newly standardized QUIC protocol. With FlEC, an application can easily select the reliability mechanism that meets its requirements, from pure retransmissions to various forms of FEC. We consider three different use cases: (i) bulk data transfer, (ii) file transfers with restricted buffers and (iii) delay-constrained messages. We demonstrate that modern transport protocols such as QUIC may benefit from application knowledge by leveraging this knowledge in FlEC to provide better loss recovery and stream scheduling. Our evaluation over a wide range of scenarios shows that the FlEC framework outperforms the standard QUIC reliability mechanisms from a latency viewpoint.
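To make the FEC idea concrete, here is the simplest possible erasure code as a toy illustration: a single XOR repair symbol that can recover any one lost packet without a retransmission round trip. This is not FlEC's actual coding scheme, just the underlying principle:

```python
def xor_repair(packets):
    """Build one repair packet as the byte-wise XOR of the source packets
    (equal-length packets assumed for simplicity)."""
    repair = bytearray(max(len(p) for p in packets))
    for p in packets:
        for i, b in enumerate(p):
            repair[i] ^= b
    return bytes(repair)

def recover_missing(received, repair):
    """Recover a single lost packet by XOR-ing the surviving packets
    with the repair packet."""
    return xor_repair(received + [repair])
```

If packets `b"ab"`, `b"cd"`, `b"ef"` are protected by one repair symbol and `b"cd"` is lost, the receiver rebuilds it locally from the other two plus the repair packet, saving one RTT at the cost of redundant bandwidth.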
... The quality of service of a network that uses IP protocols during a handover can be evaluated with three parameters: latency, the time during which no packets are transmitted because of the cell change; packet loss, the number of sent packets that are not received, which is generally proportional to the latency; and signalling-header cost, where the signalling headers are the packets, or parts of them, used to characterize the traffic [35]. To improve the performance and QoS of IP protocols during handover, researchers first proposed the PROBE and BUFFER+FREEZE methods [42], TCP Westwood [44], TCP-Freeze [45], and the Eifel timer [46], and later Snoop [47], the ACK and Window regulator [48] [49], fast adaptive congestion control [50], the use of explicit and implicit notifications [51], and HPIN [35]. These proposals generally rely on support from the upper layers to alert the link and network layers (layers 2 and 3, respectively) about handover processes [41] [32], with the upper layers used to generate some form of signalling and information gathering that optimizes the process. ...
Article
The article describes 4G networks and presents a general map of their technological development. It additionally covers two topics that are key to the development of this type of network: vertical handover and convergence towards IP. The review presents the state of the art and background of these topics and, on this basis, builds a conceptual map that considers the variety of technologies converging in the 4G scheme.
Article
Full-text available
TCP is a sliding window protocol that handles both timeouts and retransmissions. TCP performs unsatisfactorily when packet reordering and random losses are falsely interpreted as congestive losses; this causes TCP to trigger fast retransmission and fast recovery spuriously, leading to under-utilization of available network resources. Moreover, the transmitted data is not secured: the original information is exposed to many possible attacks, data can be modified in transit, and there is no data confidentiality during transmission. In this paper, we propose a novel TCP variant, known as TCP for non-congestive loss (TCP-NCL), to adapt TCP to wireless networks by using more reliable signals of packet loss and network overload for activating packet retransmission and congestion response, separately. The proposed variants are limited to sender-side TCP. To achieve secure data transmission over TCP, we implement TCP retransmission together with security and authentication.
Article
Full-text available
The advent of sixth-generation (6G) networks brings unmatched speed, reliability, and capacity for massive connections, making it a cornerstone for revolutionary applications. One such application is in vehicular networks, which have their unique demands and complexities. Specifically, they face the complex issue of packet reordering due to the high-speed movement of vehicles and frequent switching of network connections. This paper examines the impact and causes of packet reordering, its threats to network efficiency, and potential countermeasures, particularly in the context of 6G-enabled vehicular networks. We introduce end-to-end methods and metrics to address packet reordering in 6G, discussing the development trends and application prospects. Our findings highlight the emergence of sophisticated strategies, such as prediction and avoidance, to manage packet reordering. They also reveal potential applications to boost network reliability, emulate traffic distributions, and enhance data security. Furthermore, we anticipate a growing integration of machine learning and data-driven optimization in tackling packet reordering. The insights provided aim to influence the future design and optimization of 6G networks, particularly concerning packet management and performance. This paper aims to assist researchers and practitioners in effectively leveraging packet reordering to promote efficient and secure operations of future 6G networks.
Article
Full-text available
TCP is a sliding window protocol that handles both timeouts and retransmissions. TCP performs unsatisfactorily when packet reordering and random losses are falsely interpreted as congestive losses. This causes TCP to trigger fast retransmission and fast recovery spuriously, leading to under-utilization of available network resources. In this project, we propose a novel TCP variant, known as TCP for non-congestive loss (TCP-NCL), to adapt TCP to wireless networks by using more reliable signals of packet loss and network overload for activating packet retransmission and congestion response, separately. TCP-NCL can thus serve as a unified solution for effective congestion control, sequencing control, and loss recovery. The proposed variants are limited to sender-side TCP only, thereby facilitating possible future wide deployment.
Conference Paper
Full-text available
It is well-known that TCP performance may degrade over paths that include wireless links, where packet losses are often not related to congestion. We examine this problem in the context of the GSM digital cellular network, where the wireless link is protected by a reliable link layer protocol. We propose the use of multi-layer tracing as a powerful methodology to analyze the complex protocol interactions between the layers. Our measurements show that TCP throughput over GSM is mostly ideal and that spurious timeouts are extremely rare. The multi-layer tracing tool we developed allowed us to identify the primary causes of degraded performance: (1) inefficient interactions with TCP/IP header compression, and (2) excessive queuing caused by overbuffered links. We conclude that link layer solutions alone can solve the problem of 'TCP over wireless links'. We further argue that it is imperative to deploy active queue management and explicit congestion notification mechanisms in wide-area wireless networks, which we expect will be the bottleneck in a future Internet.
Conference Paper
The more information about current network conditions is available to a transport protocol, the more efficiently it can use the network to transfer its data. In networks such as the Internet, the transport protocol must often form its own estimates of network properties based on measurements performed by the connection endpoints. We consider two basic transport estimation problems: determining the setting of the retransmission timer (RTO) for a reliable protocol, and estimating the bandwidth available to a connection as it begins. We look at both of these problems in the context of TCP, using a large TCP measurement set [Pax97b] for trace-driven simulations. For RTO estimation, we evaluate a number of different algorithms, finding that the performance of the estimators is dominated by their minimum values and, to a lesser extent, by the timer granularity, while being virtually unaffected by how often round-trip time measurements are made or by the settings of the parameters in the exponentially-weighted moving average estimators commonly used. For bandwidth estimation, we explore techniques previously sketched in the literature [Hoe96, AD98] and find that in practice they perform less well than anticipated. We then develop a receiver-side algorithm that performs significantly better.