Figure 2 - uploaded by Go Hasegawa
TCP proxy mechanism 

Source publication
Conference Paper
Full-text available
Interest in the TCP overlay network, which controls the data transmission quality at the transport layer, has grown as the user demand for sophisticated and diversified services in the Internet has increased. In the TCP overlay network, TCP proxy is a fundamental mechanism that transparently splits a TCP connection between sender and receiver hosts...

Context in source publication

Context 1
... packets from the sender host to the receiver host via the split TCP connections. In the present paper, we investigate the performance of the TCP proxy mechanism through experiments using an actual public network. The TCP proxy mechanism is shown to enhance data transfer throughput without any change to the TCP/IP protocol stack of the endhost. In addition, we evaluate the performance of gentle High-Speed TCP, as proposed in a previous study, on the TCP connection between the TCP proxy nodes.

The remarkable degree to which the Internet has grown is due in part to access/backbone network technologies such as xDSL and optical fiber. In addition, user demand for diversified services has increased due to the rapid growth of the Internet population. Some of these applications require high-quality transport services in terms of, for example, end-to-end throughput, packet loss ratio, and delay. However, data transmission quality across the current Internet cannot be assured, essentially because of the best-effort basis of the Internet. IntServ [1] and DiffServ [2] are possible solutions to this problem that add control mechanisms at the network layer. For example, the DiffServ architecture is based on a simple model in which traffic entering a network is classified, and possibly conditioned, at the boundaries of the network and is then assigned to different behavior aggregates. However, to achieve sufficient benefit from introducing IntServ/DiffServ into the network, additional mechanisms would have to be deployed to all routers through which traffic flows traverse. Therefore, due to factors such as scalability and cost, we believe that these schemes have almost no chance of being deployed on large-scale networks. Proxy cache servers in Content Delivery Networks (CDNs) [3] and media streaming in P2P (Peer-to-Peer) networks are typical examples of the overlay networking approach. 
One disadvantage of such methods is the need for complicated control mechanisms specific to each application. In addition, parameter setting is very sensitive to various network factors. We are now investigating the TCP overlay network architecture [4], which controls data transmission quality at the transport layer, meaning that the IP layer continues to provide only minimal fundamental functions, such as routing and packet forwarding. One of the important mechanisms of TCP overlay networks is to divide the end-to-end TCP connection into multiple split TCP connections and relay data packets from the sender host to the receiver host via the split TCP connections (Figure 1). In the present paper, we refer to this splitting mechanism as TCP Proxy. TCP Proxy is expected to enhance the end-to-end data transfer throughput, mainly because the feedback loop of each TCP connection becomes short, meaning that the round trip time and packet loss ratio of each split TCP connection are reduced. In some previous studies [4, 5], we confirmed the effect of the TCP proxy mechanism through simulation and mathematical analysis. We found that, with a TCP proxy mechanism, the end-to-end throughput of data transmission is increased and the file transfer delay is shortened. In the present paper, we investigate the performance of the TCP proxy mechanism by conducting experiments using the public network. Furthermore, we also evaluate the performance of TCP variants for high-speed networks [6, 7] when used on a TCP connection between TCP proxy nodes. The remainder of the present paper is organized as follows. In Section 2, we explain the TCP proxy mechanism and HighSpeed TCP. In Section 3, we explain the environment and settings used in the experimental evaluation and present experimental results for the characteristics of the actual public network used in the present study. We evaluate the effect of TCP proxy and High-Speed TCP in Section 4. 
Section 5 summarizes the conclusions of the present study and discusses areas for future consideration. TCP proxy is a fundamental mechanism that splits a TCP connection between the sender and receiver hosts into multiple TCP connections at some nodes in the network. TCP proxy nodes relay data packets from the sender host to the receiver host via the split TCP connections. TCP proxy also uses local ACK packets: a TCP proxy node sends back a pseudo ACK packet to the upstream sender/proxy when it receives a data packet, without waiting to receive an ACK packet from the downstream receiver/proxy. TCP proxy is thus expected to improve the data transfer throughput of connections by shortening the RTT. Furthermore, a TCP proxy has send/receive socket buffers for storing data packets, just like a regular TCP host. When a data packet is lost between the TCP proxy and the receiver host, the dropped packet can be retransmitted from the TCP proxy instead of the sender host. In this way as well, TCP proxy is expected to improve data transfer performance compared to regular TCP connections. Figure 2 depicts the mechanisms used in processing and forwarding TCP packets via split TCP connections, where there are two proxy nodes between the sender and receiver hosts and three split TCP connections are used. One advantage of the TCP proxy mechanism is its transparent behavior: TCP connections traversing TCP proxy nodes are split automatically, and there is no need to modify the protocol stack of the sender/receiver hosts. In high-speed networks having bandwidths greater than 100 Mbps, obtaining sufficient throughput with TCP based on TCP Reno is difficult, as pointed out in [6]. Therefore, a number of TCP modifications related to High-Speed TCP that are capable of achieving high throughput by modifying the congestion control algorithm have been proposed in [6-9]. 
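The split-and-relay behaviour described above can be sketched in a few lines of socket code. This is an illustrative sketch, not the authors' implementation: the port number, next-hop address, and function names are assumptions. Note that the local-ACK effect falls out of ordinary socket semantics, since the proxy's kernel acknowledges upstream data as soon as it lands in the proxy's receive buffer, without waiting for the downstream host.

```python
import socket
import threading

LISTEN_PORT = 9000                       # hypothetical proxy port
NEXT_HOP = ("198.51.100.10", 9000)       # hypothetical next proxy / receiver

def relay(src, dst):
    """Copy bytes from one split TCP connection to the other.

    The kernel ACKs data on `src` as soon as it enters this proxy's
    receive buffer (the "local ACK" behaviour); retransmissions toward
    `dst` are handled by this node's own TCP stack, not the sender's.
    """
    while True:
        data = src.recv(65536)
        if not data:                     # upstream closed its write side
            break
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)         # propagate end-of-stream downstream

def serve_once():
    """Accept one upstream connection and splice it to the next hop."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", LISTEN_PORT))
    srv.listen(1)
    upstream, _ = srv.accept()                       # split connection in
    downstream = socket.create_connection(NEXT_HOP)  # split connection out
    # Relay both directions so ACK-carrying return traffic also flows.
    threading.Thread(target=relay, args=(downstream, upstream)).start()
    relay(upstream, downstream)
```

A chain of such nodes yields the three split connections of Figure 2; the endhosts need no modification because they simply see an ordinary TCP peer.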
In the present paper, we evaluate the performance of High-Speed TCP (HSTCP) [6] and its improved variant, gentle High-Speed TCP (gHSTCP) [7], on the TCP connection between TCP proxy nodes. The expected benefit of using HSTCP/gHSTCP between TCP proxy nodes is that the advantages of HSTCP/gHSTCP can be obtained while retaining the TCP/IP stack of the sender/receiver endhosts. To overcome the problems inherent in TCP, HSTCP was proposed in [6]. The HSTCP algorithm employs the principle of Additive Increase Multiplicative Decrease (AIMD), as in standard TCP, but HSTCP is more aggressive in its increases and more conservative in its decreases. HSTCP achieves this by altering the AIMD congestion window adjustment, making its parameters a function of the congestion window size rather than constants, as is the case in standard TCP. That is, the increase parameter becomes larger, and the decrease parameter becomes smaller, as the congestion window size increases (Figure 3). In this way, HSTCP can sustain a large congestion window and fully utilize a high-speed long-delay network. HSTCP is described in detail in [6]. However, a number of problems regarding HSTCP have been reported in [7]. For example, the relative fairness between standard TCP and HSTCP worsens as the link bandwidth increases. When HSTCP and TCP Reno compete for bandwidth on a bottleneck link, we do not expect them to achieve the same throughput. However, the high throughput of HSTCP should not be obtained by excessively sacrificing TCP Reno throughput, i.e., HSTCP should not pillage too many resources at the expense of TCP Reno. Based on HSTCP, gHSTCP, as proposed in [7], can achieve better fairness with competing traditional TCP flows, while retaining the high-throughput advantage provided by HSTCP. The original HSTCP increases the congestion window size based solely on the current congestion window size. 
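Because the HSTCP increase and decrease parameters are functions of the window size, they can be computed directly. The sketch below follows the response-function formulas published for HSTCP in RFC 3649 (the standard parameterization of [6]); the constant and function names are ours, and the exact parameter table used in the paper's experiments may differ.

```python
import math

# RFC 3649 default parameters for the HSTCP response function.
LOW_WINDOW = 38        # at or below this window, behave like standard TCP
HIGH_WINDOW = 83000    # target window at the high end of the range
HIGH_DECREASE = 0.1    # multiplicative-decrease factor at HIGH_WINDOW

def hstcp_b(w):
    """Decrease factor b(w): 0.5 at w = 38 (Reno), shrinking toward 0.1
    as w grows, interpolated on a log scale of the window size."""
    if w <= LOW_WINDOW:
        return 0.5
    frac = ((math.log(w) - math.log(LOW_WINDOW))
            / (math.log(HIGH_WINDOW) - math.log(LOW_WINDOW)))
    return (HIGH_DECREASE - 0.5) * frac + 0.5

def hstcp_a(w):
    """Increase parameter a(w): 1 segment per RTT at w <= 38 (Reno),
    growing with the window so large windows ramp up much faster."""
    if w <= LOW_WINDOW:
        return 1.0
    b = hstcp_b(w)
    p = 0.078 / w ** 1.2          # HSTCP response function, RFC 3649 Sec. 5
    return (w ** 2) * p * 2.0 * b / (2.0 - b)
```

Per ACK, the window then grows by a(w)/w segments and, on loss, shrinks to (1 - b(w)) * w, which reproduces the curves of Figure 3: aggressive increase and gentle decrease at large windows, plain Reno behaviour at small ones.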
This may lead to bursty packet losses because the window size continues to increase rapidly even when packets begin to be queued at the router buffer. In addition, differences in speed gains among the different TCP variants result in unfairness. To alleviate this problem, gHSTCP changes the increase behavior of HSTCP so as to account for full or partial utilization of bottleneck links. Specifically, gHSTCP operates the congestion avoidance phase in two modes and switches between these modes based on the trend of the changing RTT. When an increasing trend in the observed RTT values occurs, gHSTCP adopts the congestion control algorithm of TCP Reno (Figure 4). This is expected to reduce the rate of packet loss in the router buffers and to improve fairness with TCP Reno. gHSTCP is described in detail in [7]. We prepared a public Internet environment between Tokyo and Osaka, as depicted in Figure 5. There are two TCP proxies, one in the Tokyo network and one in the Osaka network, and the TCP connection between the sender and receiver hosts is split into three TCP connections when the TCP proxy mechanism is activated. We compare the data transfer throughput using a single TCP connection between the sender and receiver hosts (Case 1) with that using the split TCP connections created by TCP proxy 1 and TCP proxy 2 (Case 2). In Case 1, data is transferred between Osaka B and Tokyo B. In Case 2, the TCP connection between Osaka A and Tokyo A is split into three connections (Osaka A - TCP proxy 1, TCP proxy 1 - TCP proxy 2, TCP proxy 2 - Tokyo A), and data is relayed via the split TCP connections with the TCP proxy mechanism. Furthermore, we used a network emulator in the Tokyo network to emulate a long-delay network between the sender and receiver hosts. We tested the Osaka-Tokyo case with no delay emulated at the network emulator, and the Okinawa-Tokyo case with a 25-msec delay. 
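The two-mode switch can be illustrated with a simple RTT-trend test. This is a simplified stand-in, not the algorithm of [7]: the least-squares slope test, the function names, and the per-RTT granularity are all assumptions made for the sketch.

```python
def rtt_trend_increasing(rtts):
    """Return True if a least-squares fit over recent RTT samples has a
    positive slope, i.e., queueing delay appears to be building up at
    the bottleneck. A simplified proxy for gHSTCP's trend detection."""
    n = len(rtts)
    xbar = (n - 1) / 2.0
    ybar = sum(rtts) / n
    num = sum((i - xbar) * (r - ybar) for i, r in enumerate(rtts))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den > 0.0

def ghstcp_update(cwnd, rtts, hstcp_increase):
    """One congestion-avoidance step per RTT: Reno mode (+1 segment)
    while the RTT trend is upward, HSTCP mode otherwise.
    `hstcp_increase` is the window-dependent HSTCP parameter a(w)."""
    if rtt_trend_increasing(rtts):
        return cwnd + 1.0            # conservative Reno-like growth
    return cwnd + hstcp_increase     # aggressive HSTCP growth
```

Falling back to the Reno increment exactly when RTTs trend upward is what limits queue build-up before loss and leaves headroom for competing Reno flows, which is the fairness effect described above.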
In the experiment, we inject TCP traffic into the network using the measurement tool iperf [10] and measure the average throughput at the receiver host. Note that we define the throughput as the amount of data arriving at the receiver host per unit time. In addition, we measure the round-trip times (RTTs) and congestion window size (cwnd) using the log of /proc/net/tcp in Linux ...

Similar publications

Article
Full-text available
Fieldbus systems have been successfully introduced in industrial automation. Nowadays, a large community is investigating the usage of Ethernet-based local communication systems in this domain, ensuring the real-time behaviour of these systems. Profinet IO provides the service definition and protocol specification for real-time communication b...
Article
Full-text available
MPI middleware glues together the components necessary for execution. Almost all implementations have a communication component also called a message progression layer that progresses outstanding messages and maintains their state. The goal of this work is to thin or eliminate this communication component by pushing the functionality down onto the...
Conference Paper
Full-text available
A considerable body of evidence indicates that the use of reliable link layer protocols over error prone wireless links dramatically improves the performance of Internet protocols and applications. While traditional link layer protocols set their timeout values assuming that they fully control the underlying link, some wireless networks allow multi...
Chapter
Full-text available
Transmission Control Protocol (TCP) with a loss-based congestion control is still dominantly used for reliable end-to-end data transfer over diverse types of network although it is ineffective when traversing lossy networks. We previously proposed an IP tunneling system across lossy networks using the TCP with Network Coding (TCP/NC tunnel) and sho...
Conference Paper
Full-text available
In supermedia enhanced Internet based teleoperation systems, the data flowing between the operator and the robot include robotic control commands, video, audio, haptic feedback, and other media types. The differences between an Internet based teleoperation system and other Internet applications are that (1) there are many media types involved in tele...

Citations

... To improve TCP performance in high-speed networks and to manage the trade-off between efficiency and friendliness, loss-based protocols that additionally use RTT metrics have been proposed, e.g., gentle High-Speed TCP [12], Compound TCP [13], TCP-LP [14], and TCP Africa [15]. They can adaptively switch their congestion control phase according to the congestion level estimated from RTT measurements. ...
Article
Wired and wireless networks are two challenging environments for TCP congestion control. Most congestion control algorithms have been proposed to improve the performance of TCP in these two environments. Although such improved algorithms can increase network utilization, achieving good performance over disparate networks that contain both wired and wireless segments remains difficult. In this study, we present an Enhanced Slow-Start algorithm, which concentrates on avoiding heavy packet loss and improving network utilization by managing the Congestion Window (CWND) increment/decrement, and which performs very well while controlling packet loss alongside the standard TCP Reno algorithm. A series of experimental results demonstrates the performance of Enhanced Slow-Start compared with other state-of-the-art algorithms. The proposed algorithm is shown to perform better than standard Slow-Start, Agile-SD, Reno, Vegas, and hybrid congestion control algorithms. The parameters used for testing are CWND size, packet delivery ratio, RTT value, and packet drop.
... One reason for the poor downlink TCP performance is that the hostile nature of the wireless channel and the mobile nature of wireless users interact adversely with standard TCP congestion control mechanisms. In order to improve TCP throughput performance in networks with heterogeneous transmission links and dynamically changing available bandwidth, a series of methods using a TCP proxy has been proposed [2]-[5]. ...
Article
Full-text available
WLANs (Wireless Local Area Networks) are widely used for Internet access. But WLANs in locations such as airports and large conventions usually suffer poor performance in terms of downlink TCP (Transmission Control Protocol) throughput. To alleviate these problems, we propose an AFCP (Adaptive Flow Control Proxy) approach, which acts like a TCP proxy and monitors the traffic flowing through the AP. Using NS-3 simulations, we demonstrate that AFCP can improve the downlink TCP performance, provide good uplink/downlink fairness, and promote the total throughput of the WLAN.
... Some network equipment vendors have announced a TCP throughput improvement mechanism or a new performance-enhanced TCP implemented in their products. The authors also confirmed, through experiments performed on the Internet, a throughput improvement of three to ten times over ordinary TCP, which only obtains a maximum of 10% of the available bandwidth [42]. ...
Article
Full-text available
Overlay networks are expected to be a promising technology for the realization of QoS (Quality of Service) control. Overlay networks have recently attracted considerable attention due to the following advantages: a new service can be developed in a short duration and it can be started with a low cost. The definition and necessity of the overlay network is described, and the classification of various current and future overlay networks, particularly according to the QoS feature, is attempted. In order to realize QoS control, it is considered that routing overlay and session overlay are promising solutions. In particular, session and overlay networks are explained in detail since new TCP protocols for QoS instead of current TCP protocols that control congestion in the Internet can be used within overlay networks. However, many open issues such as scalability still need further research and development although overlay networks have many attractive features and possess the potential to become a platform for the deployment of new services.
Article
The performance of thin-client systems based on TCP depends on network quality, so it becomes worse in a WAN environment; however, the effects of TCP mechanisms have not been clarified. In this paper, we first describe the download traffic of thin-client systems as a two-state model with interactive data flows in response to keystrokes and bulk data flows related to screen updates. Since users are more sensitive to the keystroke response time, our next objective is to minimise the latency of interactive data flows, especially when the network is congested. Through detailed simulation experiments, we reveal that the main delays are queuing delay in the bottleneck router and buffering delay in the server. We then enhance two TCP mechanisms: retransmission timeout calculation and SACK control, which negate the negative impacts of existing options and increase the interval between occurrences of large delays by about four times.
Article
Return network traffic (from servers to clients) in thin-client systems is modeled as a mixture of interactive data flows corresponding to keystrokes and bulk data flows related to screen updates. Users are very sensitive to delay and jitter of the former flows. Thus our goal is to minimize the latency of interactive data transfer without increasing latency of bulk data transfer. Through simulation experiments, we determine that the main factors causing end-to-end delay in the interactive data transfer are queuing delay in the router and buffering delay in the server. When we apply two techniques: priority queuing of interactive data flows at the router and using TCP SACK option, the average end-to-end delay can be reduced. However, several servers could take more than a second to send large bulk data flows; this delays the transmission of following interactive data flows. We then develop TCP optimization mechanisms: modifying recalculation of the retransmission timeout value and temporarily turning off the TCP SACK control, and demonstrate that they can overcome the negative effects of the existing techniques.
Article
Full-text available
This thesis is part of a multidisciplinary line of research exploring the links between control theory and computer networks. The idea is to apply tools from control engineering to stabilize traffic in communication networks. First, we studied the stability analysis of systems with time-varying delays through two time-domain approaches. On the one hand, we considered the Lyapunov-Krasovskii method, for which we constructed functionals matched to new models of the system (delay partitioning, time derivative). On the other hand, stability was also addressed with an input-output approach, borrowing tools from robust analysis. The time-delay system is then rewritten as the interconnection of a linear map with a matrix of operators defining the original system. After revisiting the quadratic separation principle, we develop auxiliary operators in order to better characterize the delayed dynamics and propose less conservative criteria. Second, the developed methodology is applied to the congestion control problem of a router carrying TCP traffic. This end-to-end protocol is sensitive to packet loss and adjusts its sending rate accordingly, following the AIMD algorithm. The aim is then to control the loss rate through an Active Queue Management mechanism located at the router in order to regulate the traffic. The theoretical results are evaluated using the NS-2 network simulator.