An intelligent long-lived TCP based on real-time
traffic regulation
Mohammad Al Shinwan¹ · Laith Abualigah¹ · Nguyen Dinh Le² · Chulsoo Kim² · Ahmad M. Khasawneh¹
Received: 4 October 2019 / Revised: 28 February 2020 / Accepted: 13 March 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract
The transmission control protocol (TCP) is one of the chief protocols of the Internet protocol suite. Its traffic can be divided into two categories of data flow: (1) a large fraction of TCP connections carry only a small portion of the traffic; these are called short-lived TCP flows; (2) the small fraction of remaining connections carries a large portion of the traffic; these are called long-lived TCP flows. The main problem here is the transmission time: long-lived flows are usually harmed by other traffic, such as User Datagram Protocol (UDP) or short-lived TCP flows, which causes unfairness in the network. In this paper, a novel framework is proposed to improve network throughput and to reduce the impact of long-lived TCP on other data flows. In this framework, each TCP connection passing through an edge network device is observed in order to determine the long-lived TCP flows. The detected long-lived TCP flows are then regulated based on predicted real-time traffic levels. Moreover, to highlight the benefits of the proposed framework, an analytical model is proposed to compare it with conventional TCP in terms of network performance. Experiments are conducted using the ns-2 simulator in order to verify the results of the analytical model. The results show that the analytical outcomes are promising and match the outcomes of the ns-2 experiments well. In the case of a high error rate, the proposed framework achieves higher reliability and lower resource consumption.
Keywords Long-lived TCP · Isolation · Segment-by-segment · Network performance · Novel framework
1 Introduction
The transmission control protocol (TCP) conveys the vast majority of Internet traffic, accounting for more than 90% of bytes. Its reliable data transmission keeps TCP the dominant transport protocol on the Internet [14, 27]. However, with the increase in applications running on top of TCP and the demand for high-speed networks, TCP reveals shortcomings such as low network throughput, due to its slow ramp-up mechanism, and unfairness in sharing network resources with other data flows [11, 25]. For these reasons, researchers are encouraged to investigate TCP in more detail in order to provide better solutions to these existing problems [3, 4, 16, 20, 25].
Multimedia Tools and Applications, https://doi.org/10.1007/s11042-020-08856-z
* Correspondence: Mohammad Al Shinwan, mohmdsh@aau.edu.jo. Extended author information is available on the last page of the article.
In order to understand the root causes of the unfairness problem, the complexity of TCP traffic needs to be classified. In this study, TCP traffic is divided into short-lived and long-lived TCP based on data size and transmission lifetime [23, 32]. In another study, by investigating the lifetime and transfer size of each TCP flow, L. Guo and I. Matta discovered that TCP flows follow a heavy-tailed distribution in which only a small percentage (e.g., less than 20%) of flows are long-lived TCP (e.g., more than 20 packets), but they carry more than 80% of the total traffic in bytes [6, 17]. The vulnerability of long-lived TCP flows illustrates the unfairness problem in sharing network resources [5]. Because of their significant amount of data, long-lived TCP flows need a long transmission time to finish their transactions; hence, they are usually affected by other flows when sharing network resources. In particular, UDP flows usually occupy more than their fair share of the bandwidth, and short-lived TCP affects long-lived TCP by reducing its network throughput by up to 10%, as shown in [9, 15, 34].
The basis of TCP congestion control lies in the additive increase, multiplicative decrease (AIMD) mechanism [10]. The congestion window is halved for every window containing a packet loss and is increased by roughly one segment per round-trip time (RTT). With this mechanism, standard TCP is successful at low speeds; however, it is not suitable for high-speed communication because of the slow growth of the congestion window size after congestion occurs. In other words, standard TCP is not efficient for high-speed data communication, which is currently in high demand for all TCP applications [31, 35]. For these reasons, it is crucial to develop a new transport mechanism that improves network performance and reduces the interference between long-lived TCP flows and other network flows [30].
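As an illustration, the AIMD rule described above can be sketched in a few lines. This is a toy model of the window dynamics only, not an actual TCP implementation:

```python
def aimd_update(cwnd: float, loss: bool, mss: float = 1.0) -> float:
    """One round-trip update of an AIMD congestion window (illustrative).

    On a loss event the window is halved (multiplicative decrease);
    otherwise it grows by roughly one segment per RTT (additive increase).
    """
    if loss:
        return max(cwnd / 2.0, mss)  # multiplicative decrease, floor at one segment
    return cwnd + mss                # additive increase: ~1 segment per RTT

# Example: a loss every 10th RTT produces the familiar sawtooth.
cwnd = 10.0
trace = []
for rtt in range(30):
    cwnd = aimd_update(cwnd, loss=(rtt % 10 == 9))
    trace.append(cwnd)
```

The sawtooth in `trace` makes the slow ramp-up visible: after each halving, the window needs many RTTs to climb back, which is exactly the high-speed inefficiency discussed above.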
In this paper, a new transport mechanism with a novel framework is proposed to solve the stated unfairness problems and to improve the network throughput of long-lived TCP by modifying the edge network equipment in the following process. First, edge routers are responsible for classifying flows and marking packets according to whether they belong to long-lived or other flows. Second, once a flow is classified as a long-lived TCP flow, edge routers are able to control it through a set of traffic regulation mechanisms. A mechanism for predicting the traffic level is defined first. Based on the predicted traffic level, we can decide to regulate the long-lived TCP flow either by reducing its data rate or by splitting its connection to increase the data speed. In order to highlight the advantages of the proposed transport mechanism, we develop two probability models for estimating the packet loss rate and the number of traversed hops, which denote reliability and resource consumption, respectively, and then compare them with conventional TCP. Furthermore, we perform an extensive set of ns-2 simulation experiments in a variety of scenarios in order to verify the analytical models. The analytical results and the supporting experimental outcomes show the following:
- When the error rate is high, ALL-TCP achieves higher reliability than the conventional system.
- In the proposed model, lost packets are retransmitted from the edge router instead of from the source node; hence, the number of traversed hops is reduced. In other words, the proposed model is better than the conventional one in terms of resource consumption.
- The experiments show that our proposed model obtains higher throughput than the conventional one. The reason is that the RTT of each segment is shorter than the end-to-end RTT, and the transmission process in each segment is independent of the others.
The rest of the paper is organized as follows. Section 2 outlines related works and positions our analysis with respect to prior studies. In Section 3, we present the proposed framework from the viewpoint of architecture and operating mechanism. In Section 4, we develop the analytical models to highlight the advantages of our new transport mechanism in terms of reliability and resource consumption, and further conduct an extensive set of ns-2 experiments to validate the accuracy of the analytical model. Finally, Section 5 concludes the paper with final remarks.
2 Related work
TCP has been recognized as the most important transport-layer protocol for the Internet. It is distinguished by its reliable transmission, flow control, and congestion control [8, 26]. The literature on the TCP protocol is considerable; however, a comprehensive survey is out of the scope of this paper. Hence, this section reviews the most closely related papers on long-lived TCP with respect to network performance and unfairness issues. There is a real unfairness issue amongst long-lived TCP, UDP, and short-lived TCP in sharing network resources. As mentioned in the introduction, UDP flows usually use more than their fair share of the bandwidth. However, there is no existing solution for limiting the impact of UDP on other network flows, because UDP is connectionless and does not provide mechanisms to control the end-to-end data flow [24]. Hence, most research focuses on TCP traffic, a connection-oriented protocol. The complexity of TCP unfairness is reflected not only in the competition for network resources between UDP and TCP traffic but also between short-lived and long-lived TCP; short-lived TCP flows may reduce the throughput of long-lived flows by up to 10% [17]. In order to resolve the unfairness problems, several approaches have been proposed. We can classify previous works into two groups: first, solutions given for general network traffic (i.e., TCP and UDP); second, specific works that explore the unfairness of TCP traffic in detail with regard to long-lived and short-lived TCP flows.
The general unfairness problems were recognized early and mentioned in [13, 28]. These papers focused on improving network fairness either by modifying the TCP protocol or by employing non-tail-drop buffer management at routers. In particular, in [2], Morris proposes solutions to regulate data traffic by providing per-flow information at all routers. In [28], Seddigh et al. propose a new packet-dropping mechanism. These solutions show significant results; however, they require modifications to all routers, which is unfeasible in deployments. The studies in [1, 7, 19, 33] aim to provide traffic metering and dropping mechanisms at the edge routers only, with expected bandwidth or fairness guarantees. These works are considered an improvement over differentiated services [22], which only provides several class priorities. Their dropping mechanisms are based on user profiles, which supply varying drop levels. Their outcomes show a number of benefits for the network service provider. However, such schemes are challenging to deploy across the overall network, because the idea of the open Internet is nowadays considered a significant stream in the network community, and much research aims to provide free Internet service with high performance. In this paper, we aim to improve the network performance of long-lived TCP and to reduce unfairness in the network without targeting any specific users or individual services. Most routers nowadays are green routers, which can modify their power plan to balance their service rate according to network traffic, switching, and transmission capacity.
There are also several studies focused on analyzing the fairness between long-lived and short-lived TCP. The competition between long-lived and short-lived flows was mentioned early in [36]. A solution to this problem is proposed by I. Matta and L. Guo in [18], which provides a mechanism for isolating short and long TCP flows and thereby enhances the response time and fairness of short flows. Significant results are given; however, the bandwidth (load) control needs to be performed at each core router. X. Wu and I. Nikolaidis proposed algorithms to classify flows based on lifetime (short- vs. long-lived flows) and RTT attributes, as well as their combinations, in order to provide better TCP classification schemes. The double objective served by these classification schemes is to satisfy the need for reduced response time, which is the primary concern of short-lived flows, while at the same time ensuring fairness among long-lived flows [4]. The works in [12, 17] try to improve network fairness and the network performance of short-lived TCP flows by suggesting either using a considerably larger initial window value or sharing network measurement information from previous records. These papers require modifying the TCP protocol at the end-host terminals and may lead to congestion collapse.
Taking a different approach, we provide a new framework to improve the network performance of long-lived TCP flows and to reduce their impact on other network traffic flows. The following proposed mechanisms highlight our advantages in comparison with previous studies. First, in order to avoid the unfairness problem, long-lived TCP is controlled separately by classifying TCP traffic based on the amount of data passing through the edge routers. Second, FARIMA, a well-known prediction technique for Internet traffic, is used to determine the traffic level, which is a critical input for regulating network traffic. Third, based on the predicted traffic level, the edge device decides whether to activate the splitting mechanism to speed up data transmission or to activate the slow-down mechanism to reduce the transmission speed of controlled long-lived TCP flows when our edge network equipment is nearly overflowing. We believe that these mechanisms will improve the performance of long-lived TCP and significantly reduce its impact on other data flows.
3 Methodology
In this section, the proposed methodology is presented to show the general procedure for solving the stated problem. Figure 1 presents an overview of our proposed framework. Several modifications are prerequisites in the network edge routers, because in the current operation of most router operating systems, all IP packets received from the Ethernet driver are intercepted and processed by the routing mechanism to select the next forwarding hop. Hence, in the first step, a hooking function is provided in order to intercept all IP packets and forward them to our provided ALL-TCP layer. "Hooking" refers to a variety of techniques employed to intercept calls to pre-existing functions and wrap around them in order to change the function's behaviour at runtime. Second, in the provided layer, a traffic-level prediction function is proposed for estimating the traffic state at the next time interval, which is an essential input for the traffic regulation mechanisms. Next, the packet classification mechanism is provided for selecting long-lived TCP flows from the network traffic. In this stage, we can separate TCP traffic from other traffic based on the protocol field in the IP header and then determine the long-lived flows by counting the number of packets of each flow passing the network edge router (i.e., more than 30 packets). Finally, the traffic regulation mechanism is designed to regulate the long-lived TCP flows based on the traffic level predicted in the second step [33]. Algorithm 1 shows the algorithm of the proposed method. The complexity of the proposed algorithm approximates O(n·t), where n denotes the number of packets and t denotes the complexity of the hooking function. The following describes the steps of the proposed method:
- Hooking function: intercepts all IP packets and forwards them to the suggested ALL-TCP layer.
- Traffic-level prediction function: estimates the traffic state at the next time interval, which is an essential input for the traffic regulation mechanisms.
- Packet classification mechanism: selects long-lived TCP flows from the network traffic. In this stage, we can separate TCP traffic from other traffic based on the protocol field in the IP header and then determine the long-lived flows by counting the number of packets of each flow passing the network edge router (i.e., more than 30 packets).
- Traffic regulation mechanism: regulates the long-lived TCP flows based on the traffic level predicted by the traffic-level prediction function [33].
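As a sketch of the classification step above, per-flow packet counting at an edge router might look as follows. The class name, the 5-tuple keying, and the exact threshold handling are illustrative assumptions, not the authors' implementation; only the 30-packet threshold and the IP protocol-field test come from the text:

```python
from collections import defaultdict

LONG_LIVED_THRESHOLD = 30  # packets, the threshold suggested in the text
TCP_PROTO = 6              # value of the IP-header protocol field for TCP

class FlowClassifier:
    """Illustrative per-flow packet counter for an edge router.

    Flows are keyed by the usual 5-tuple; once more than
    LONG_LIVED_THRESHOLD packets of a TCP flow have passed,
    the flow is marked long-lived.
    """
    def __init__(self):
        self.counts = defaultdict(int)
        self.long_lived = set()

    def observe(self, src, dst, sport, dport, proto):
        """Count one packet; return True if its flow is now long-lived."""
        if proto != TCP_PROTO:
            return False  # UDP and other protocols are never long-lived TCP
        key = (src, dst, sport, dport, proto)
        self.counts[key] += 1
        if self.counts[key] > LONG_LIVED_THRESHOLD:
            self.long_lived.add(key)
        return key in self.long_lived

# Example: the 31st packet of a TCP flow triggers long-lived handling.
clf = FlowClassifier()
flags = [clf.observe("10.0.0.1", "10.0.0.2", 5000, 80, TCP_PROTO)
         for _ in range(31)]
```

Once `observe` returns True, the packet would be handed to the traffic regulation mechanism instead of the ordinary forwarding path.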
In order to implement these mechanisms, two solutions are used, depending on the operating system. If the internetwork operating system (IOS) is deployed in the edge router, the device is upgraded by adding the ALL-TCP layer to its upper protocol stack. A hooking function also needs to be integrated into the IOS kernel in order to intercept all IP packets and forward them to the ALL-TCP layer for further processing. After processing in the provided layer, packets are redirected to the routing and forwarding module for transmission to the next hop. In the second solution, the edge router implements a Unix-like operating system, so modified IP and TCP layers are needed to conform to our framework. The following sub-sections discuss the two main mechanisms, traffic-level prediction and traffic regulation, in more detail.
3.1 Traffic level prediction
It is important to choose the right prediction model amongst the available prediction techniques (i.e., artificial neural networks, the autoregressive integrated moving average (ARIMA), fractional ARIMA (FARIMA), and wavelet-based predictors), because each technique requires matched input parameters based on the characteristics of the analyzed data. In the field of Internet traffic, Feng and Shu [36] compared these prediction techniques using the mean square error and the normalized mean square error as performance metrics [29]. Their results show that FARIMA obtains better accuracy than the others. Also, it may be noted that Internet traffic is mainly based on the normal distribution, which can be presented by Y(t) = X(t) + μ, where μ is the mean rate and X(t) is a stochastic process with a continuous spectrum and zero mean. The FARIMA process is also of this form. For these two reasons, FARIMA is chosen for predicting the network traffic level in our edge device.
FARIMA is a prediction model fitted to time-series data, either to better understand the data or to predict future points in the series. The goal of FARIMA is to estimate the network traffic Y(t + τ) from the measured traffic history {Y(r), r ∈ (−∞, t]}, given τ as the next control time interval. τ is a significantly important factor because it affects the trade-off between the processing time and the accuracy of the prediction decision. To reduce the overhead on the network node processor and the update messages in the routing protocol, Y(t + τ) must be predicted with an acceptable value of τ.
Fig. 1 The overview of the proposed software architecture
Assume that a confident prediction requires that the normalized τ-step prediction error, $\mathrm{error}(\tau) = \frac{\hat{Y}(t+\tau) - Y(t+\tau)}{\hat{Y}(t+\tau)}$, should not exceed a percentage ε (e.g., 20%) with a probability $P_\varepsilon$. The maximum prediction interval (MPI) is defined as follows:

$$\mathrm{MPI} = \max\{\tau \mid P_{\mathrm{error}}(\tau, \varepsilon) \le P_\varepsilon\} \qquad (1)$$

where $P_{\mathrm{error}}(\tau, \varepsilon) = \Pr(\mathrm{error}(\tau) > \varepsilon)$. $P_{\mathrm{error}}(\tau, \varepsilon)$ is equal to $P(Z > 0)$, where Z is a random variable of the Gaussian distribution with probability density function $N(\varepsilon\mu, \sigma^2_{\varepsilon,\tau})$.
Based on the Gaussian assumptions identified in this paper, the traffic can be presented by Y(t) = X(t) + μ, where μ is the mean rate and X(t) is a stochastic process with a continuous spectrum and zero mean. By applying the Wold decomposition theorem, the Gaussian process Y(t) can be represented by a one-sided moving average as follows:

$$Y(t) = \sum_{u=0}^{+\infty} h_u\, n(t-u) + \mu \qquad (2)$$

where n(t) and $h_u$ denote Gaussian white noise and the (possibly infinite) vector of moving-average weights (coefficients or parameters), respectively. The optimal τ-step predictor of the Gaussian process $X(t+\tau)$ is $\hat{X}(t+\tau) = E[X(t+\tau) \mid X(s), s \le t]$. Applying Kolmogorov's approach, $\hat{X}(t+\tau)$ can be expressed as follows:

$$\hat{X}(t+\tau) = \sum_{u=0}^{+\infty} h_{u+\tau}\, n(t-u). \qquad (3)$$
Comparing Eqs. (2) and (3), the unpredictable part of $X(t+\tau)$ is $\sum_{u=0}^{\tau-1} h_u\, n(t+\tau-u)$. So the τ-step predictor variance $\hat{\sigma}^2_\tau$ can be expressed as in Eq. (4):

$$\hat{\sigma}^2_\tau = \sum_{u=0}^{\tau-1} h_u^2 = \sigma^2 - \sum_{u=\tau}^{+\infty} h_u^2 \qquad (4)$$

where $\sigma^2$ is the variance of X(t). $\sigma^2_{\varepsilon,\tau}$ can be calculated by using Eq. (5):

$$\sigma^2_{\varepsilon,\tau} = \sum_{u=0}^{\tau-1} h_u^2 + \varepsilon^2 \sum_{u=\tau}^{+\infty} h_u^2 = (1-\varepsilon^2)\,\hat{\sigma}^2_\tau + \varepsilon^2 \sigma^2. \qquad (5)$$
From Eqs. (3) and (4), $P_{\mathrm{error}}(\tau,\varepsilon) \le P_\varepsilon$ in Eq. (1) is equivalent to $\sigma^2_{\varepsilon,\tau} \le \varepsilon^2\mu^2 / \Phi^2(1-P_\varepsilon)$, where Φ(x) is the inverse CDF of N(0, 1). From Eqs. (1) and (5), MPI can be given as:

$$\mathrm{MPI} = \max\left\{\tau \;\middle|\; \frac{\hat{\sigma}^2_\tau}{\sigma^2} \le \left(\frac{1}{C^2\,\Phi^2(1-P_\varepsilon)} - 1\right) \frac{\varepsilon^2}{1-\varepsilon^2} \right\} \qquad (6)$$

where C = σ/μ is the variation coefficient of Y(t).
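As a numerical sketch of Eqs. (4) and (6), the following computes the MPI from a truncated vector of moving-average weights. The geometric weight vector and all parameter values are hypothetical, chosen only for illustration; a real deployment would obtain the $h_u$ from a fitted FARIMA model:

```python
from statistics import NormalDist

def mpi(h, C, eps=0.2, p_eps=0.05):
    """Maximum prediction interval per Eq. (6) (illustrative).

    h     : truncated moving-average weights h_u of the Wold representation
    C     : variation coefficient sigma/mu of the traffic Y(t)
    eps   : tolerated relative prediction error (e.g. 20%)
    p_eps : tolerated probability of exceeding eps
    """
    sigma2 = sum(hu * hu for hu in h)        # variance of X(t), Eq. (4)
    phi = NormalDist().inv_cdf(1.0 - p_eps)  # inverse CDF of N(0, 1)
    bound = (1.0 / (C * C * phi * phi) - 1.0) * eps * eps / (1.0 - eps * eps)
    best = 0
    partial = 0.0
    for tau in range(1, len(h) + 1):
        partial += h[tau - 1] ** 2           # predictor variance, Eq. (4)
        if partial / sigma2 <= bound:
            best = tau                       # largest tau still satisfying Eq. (6)
    return best

# Example with a geometrically decaying (hypothetical) weight vector:
h = [0.8 ** u for u in range(50)]
tau_max = mpi(h, C=0.15)
```

Note how a larger C (burstier traffic) shrinks the bound and therefore the usable prediction horizon, which matches the trade-off discussed above.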
3.2 Traffic regulation
Based on the traffic level Y(t + τ), which is predicted at the next control time interval τ, the ALL-TCP agent decides either to accept or to refuse new long-lived TCP flows [21].
In the first case, when the predicted traffic level is lower than the minimum threshold α, new long-lived TCP flows are accepted into the proposed traffic regulation module, where each long-lived TCP flow is split into two sub-connections that work independently. From the functionality point of view, each edge network router operates as a destination host by sending an acknowledgment message for every packet received from the preceding node. It also contains functions that operate similarly to those of a conventional source host, following the existing algorithms for sending data packets in response to incoming ACKs, multiple duplicate acknowledgments, or retransmission timeouts.
In the second case, if the predicted traffic level exceeds the maximum threshold β, ALL-TCP reduces the data rate of incoming traffic to avoid a congestion state instead of accepting new connections. The random early detection (RED) protocol [11] is used to reduce the data rate of incoming traffic, with a few modifications: the algorithm for calculating the average queue length is replaced by FARIMA. The behavior of RED toward UDP and short-lived TCP flows remains the same as in the original version. However, the control of long-lived TCP flows is changed: the long-lived TCP flow with the highest amount of buffered data among the existing long-lived TCP flows in the edge router is selected for data-rate reduction. This behavior guarantees fairness between long-lived TCP flows in resource sharing. To select the flow with the highest amount of stored data in the buffer, we need to measure the incoming and outgoing packet rates of each long-lived TCP flow at the next time interval, t + τ, separately. If the predicted incoming and outgoing packets of the preceding and the next segment at the next time interval are $Y_i(t+\tau)$ and $Y_o(t+\tau)$, and the current stored data in the buffer is δ, then the expected buffered data of the long-lived TCP flow can be measured as follows:
$$B(t+\tau) = Y_i(t+\tau) + \delta - Y_o(t+\tau) \qquad (7)$$
The predicted incoming and outgoing packets of the preceding and next segments are determined similarly, based on the congestion window size at the next time interval τ. For example, when the current congestion window size of the preceding segment is $cwnd_i$ and its incoming packets arrive at the edge router before the predicted time interval, then $Y_i(t+\tau) = cwnd_{i+1}$; otherwise $Y_i(t+\tau) = 0$. If $W_{max}$ denotes the maximum window size and an ACK message is sent to the sender after receiving η data frames, then the value of the next congestion window size in the congestion-free case can be obtained as follows:

$$cwnd_{i+1} = \begin{cases} W_{max}, & \text{if } cwnd_{i+1} \ge W_{max} \\ cwnd_i + cwnd_i/\eta, & \text{otherwise} \end{cases} \qquad (8)$$
In order to reduce the jitter problem at the end-host terminal and to keep the memory allocated for each flow in the edge router at a minimal value, the data transmission rate between the sub-segments of each long-lived TCP flow should be regulated based on the following ratio:

$$\gamma = \frac{\delta + cwnd_{i,t}}{cwnd_{o,t}} \qquad (9)$$

where $cwnd_{i,t}$ and $cwnd_{o,t}$ denote the congestion window sizes of the incoming and outgoing sub-segments at time t.
This ratio should be kept at a value higher than or at least equal to 1 and should not exceed a defined maximum threshold (i.e., γ ≤ 2). When γ is out of the allowed range, the mechanism needs to adjust the bias by delaying the sending of packets or acknowledgment messages of the sub-segment that has the higher throughput. This mechanism is similar to the TCP delayed acknowledgment proposed in [13].
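The regulation quantities of Eqs. (7), (8), and (9) can be sketched as follows. The flow table and all its numbers are hypothetical stand-ins chosen only for illustration; the selection rule simply picks the flow with the largest expected buffered data, as described above:

```python
def next_cwnd(cwnd, eta, w_max):
    """Next congestion window in the congestion-free case, Eq. (8)."""
    nxt = cwnd + cwnd / eta
    return w_max if nxt >= w_max else nxt

def expected_buffer(y_in, delta, y_out):
    """Expected buffered data of a long-lived flow at t+tau, Eq. (7)."""
    return y_in + delta - y_out

def pick_flow_to_slow_down(flows):
    """Select the long-lived flow with the most expected buffered data.

    flows: dict mapping flow id -> (Y_i(t+tau), delta, Y_o(t+tau)).
    """
    return max(flows, key=lambda f: expected_buffer(*flows[f]))

# Hypothetical per-flow values (Y_i, delta, Y_o), purely for illustration:
flows = {
    "flowA": (20, 15, 12),   # B = 23
    "flowB": (30, 40, 25),   # B = 45 -> throttled first
    "flowC": (10, 5, 8),     # B = 7
}
victim = pick_flow_to_slow_down(flows)

# Inter-segment ratio, Eq. (9), using flowB's numbers as stand-ins for
# delta, cwnd_in, and cwnd_out; a value above 2 would trigger delaying
# the faster sub-segment, as described in the text.
gamma = (40 + 30) / 25
```

Selecting the most heavily buffered flow first is what preserves fairness among the long-lived flows themselves, since no single flow can monopolize the edge router's buffer.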
4 Analysis and simulation
In the proposed traffic regulation module, each long-lived TCP flow runs on segment-by-segment transportation (SBST). Hence, in Section 4.1, we explore the advantages of SBST by comparing it with conventional end-to-end transportation (ETET) regarding reliability and resource consumption. In Section 4.2, simulation experiments using the ns-2 tool are conducted to verify the analytical model and to further investigate the network throughput of long-lived TCP.
It is hard to tune SBST and ETET against each other, because SBST involves transforming the data into segments at the source node and reconstructing them at the receiver, whereas ETET involves different features, such as flow-rate control and error detection, which makes such tuning hard to implement.
4.1 Fundamental of SBST and ETET
In this section, the network reliability and resource consumption of each transportation mechanism are measured by modeling the behavior of a packet sent from the sender to the receiver. Reliability is represented by the overall packet loss probability, while resource consumption is measured by the number of routers a packet needs to traverse in order to reach the receiver. The following notation is used in the proposed model:
- $p_i$: packet loss probability at router i.
- $p_{si}$: probability of a packet passing router i successfully.
- $p_e$: end-to-end packet loss probability.
- $p_s$: probability of successfully sending a packet from the source to the destination.
- n: total number of routers in the network.
- $n_1$: number of routers in the first segment.
- $n_2$: number of routers in the second segment.
- R: maximum number of retransmissions.
4.1.1 Packet loss probability in ETET
The probability of sending a packet successfully from the sender to the receiver is defined as:

$$p_s = \prod_{i=1}^{n} (1 - p_i). \qquad (10)$$

Hence, the probability of sending a packet unsuccessfully after R retransmission attempts is denoted as follows:

$$p_e = [1 - p_s]^R = \left[1 - \prod_{i=1}^{n} (1 - p_i)\right]^R. \qquad (11)$$
Assume that a packet is dropped at a random position k while being transmitted to the receiver; the probability of this event can be described as:

$$p_{ek} = p_k \prod_{i=1}^{k-1} (1 - p_i). \qquad (12)$$

In one transmission attempt, the expected number of traversed routers in the event that a packet does not reach the receiver is defined as follows:

$$h_f = \sum_{k=1}^{n} \frac{k\, p_{ek}}{1 - \prod_{i=1}^{n}(1-p_i)} = \sum_{k=1}^{n} \frac{k\, p_k \prod_{i=1}^{k-1}(1-p_i)}{1 - \prod_{i=1}^{n}(1-p_i)}. \qquad (13)$$
Hence, the expected total number of traversed routers in the event that a packet cannot reach the receiver after a maximum number of retransmissions R is given as follows:

$$h_R = R\, h_f = R \sum_{k=1}^{n} \frac{k\, p_k \prod_{i=1}^{k-1}(1-p_i)}{1 - \prod_{i=1}^{n}(1-p_i)}. \qquad (14)$$
The expected number of traversed routers in the event that a packet reaches the receiver after r retransmissions (r < R) is obtained as follows:

$$h_r = (r-1)\, h_f + n. \qquad (15)$$

The probability of the event that a packet does not reach the receiver after a maximum number of retransmissions R is given by:

$$p_R = \left[1 - \prod_{i=1}^{n}(1-p_i)\right]^R. \qquad (16)$$
The probability of the event that a packet reaches the receiver after r retransmissions (r < R), i.e., that the first r − 1 attempts fail and the r-th succeeds, is given as follows:

$$p_r = \left[1 - \prod_{i=1}^{n}(1-p_i)\right]^{r-1} \prod_{i=1}^{n}(1-p_i). \qquad (17)$$
From Eqs. (14), (15), (16), and (17), the total expected number of traversed routers when a packet is transmitted over an end-to-end connection is defined as follows:

$$h_t = \sum_{r=1}^{R-1} (p_r h_r) + p_R h_R. \qquad (18)$$
4.1.2 Packet loss probability in SBST
The packet loss probability in SBST is measured by aggregating the packet loss probabilities of the first and second segments. The probability of the event that a packet is transmitted successfully through the first segment in one attempt is given as follows:

$$p_{si_1} = \prod_{i=1}^{n_1} (1 - p_i). \qquad (19)$$

The probability that a packet cannot pass the first segment after a maximum number of retransmissions R is then:

$$p_{e1} = [1 - p_{si_1}]^R = \left[1 - \prod_{i=1}^{n_1}(1-p_i)\right]^R. \qquad (20)$$

Hence, the probabilities that a packet is transmitted successfully through the first and the second segment, respectively, are defined as:

$$p_{s1} = 1 - p_{e1} = 1 - \left[1 - \prod_{i=1}^{n_1}(1-p_i)\right]^R \qquad (21)$$

$$p_{s2} = 1 - p_{e2} = 1 - \left[1 - \prod_{i=1}^{n_2}(1-p_i)\right]^R \qquad (22)$$
From Eqs. (21) and (22), the probability that a packet is transmitted successfully from the sender to the receiver is:

$$p_s = p_{s1}\, p_{s2}. \qquad (23)$$

Therefore, the probability that a packet cannot reach the receiver after a maximum number of retransmissions R is obtained as follows:

$$p_e = 1 - p_s = 1 - \left\{1 - \left[1 - \prod_{i=1}^{n_1}(1-p_i)\right]^R\right\} \left\{1 - \left[1 - \prod_{i=1}^{n-n_1}(1-p_i)\right]^R\right\} \qquad (24)$$

where the second segment contains $n_2 = n - n_1$ routers.
4.1.3 Resource consumption in SBST
Similar to Eq. (18), the expected total number of traversed routers that a packet needs to pass through in the first segment ($h_{t1}$) can be calculated as:

$$h_{t1} = \sum_{r=1}^{R-1} (p_{r1} h_{r1}) + p_{R1} h_{R1}, \qquad (25)$$

where $p_{r1}$ and $h_{r1}$ represent the probability and the number of traversed routers of the event that a packet is transmitted successfully through the first segment, while $p_{R1}$ and $h_{R1}$ denote the probability and the number of traversed routers of the event that a packet cannot pass the first segment after a maximum number of retransmissions R. The probability of the event that a packet is transmitted successfully through the second segment after k retransmissions can be defined as follows:

$$p_{r2} = \left\{1 - \left[1 - \prod_{i=1}^{n_1}(1-p_i)\right]^R\right\} \left[1 - \prod_{i=1}^{n_2}(1-p_i)\right]^{k-1} \prod_{i=1}^{n_2}(1-p_i). \qquad (26)$$
The probability of the event that a packet passes the first segment but cannot reach the destination after a maximum number of retransmissions R in the second segment is given as follows:

$$p_{R2} = p_{s1} \left[1 - \prod_{i=1}^{n_2}(1-p_i)\right]^R. \qquad (27)$$
Similar to the analyses in Eqs. (14) and (15), the numbers of traversed routers of the events that a packet passes through the second segment successfully or fails after a maximum number of retransmissions R can be represented as $h_{r2}$ and $h_{R2}$, respectively. From Eqs. (26) and (27), the overall number of traversed routers of the event that a packet is transmitted through the second segment can be obtained as follows:

$$h_{t2} = \sum_{r_2=1}^{R-1} (p_{r2} h_{r2}) + p_{R2} h_{R2}. \qquad (28)$$
From Eqs. (25) and (28), the overall number of traversed routers ($h_t$) through which a packet passes in SBST is:

$$h_t = h_{t1} + h_{t2} = \sum_{r=1}^{R-1} (p_{r1} h_{r1}) + p_{R1} h_{R1} + \sum_{r_2=1}^{R-1} (p_{r2} h_{r2}) + p_{R2} h_{R2}. \qquad (29)$$
4.1.4 Analytical results
Multimedia Tools and Applications

The comparison results are obtained from the equations established above, using the MATLAB software package. The given parameters are set to n = 8, p_i = 0.05, and R = 10 according to [3]. The packet loss probabilities of SBST and ETET are obtained using Eqs. (11) and (24), respectively. The results in Fig. 2 show that SBST is more reliable than the conventional ETET. The number of traversed routers is determined using Eqs. (18) and (29). The results in Fig. 3 show that, in SBST, a packet needs to pass through fewer routers to reach the receiver than in ETET. In other words, SBST consumes less network resource than the conventional ETET.
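The Fig. 2 reliability comparison can be re-derived numerically. The sketch below is our own (the paper used MATLAB); it assumes uniform p_i = 0.05, a 4 + 4 split of the n = 8 links, and the standard end-to-end form [1 − (1 − p)^n]^R for Eq. (11), which is not shown in this excerpt:

```python
# Re-derive the Fig. 2 comparison under our assumptions:
#   ETET:  p_etet = [1 - (1 - p)^n]^R          (assumed form of Eq. (11))
#   SBST:  p_sbst per Eq. (24), n1 = n2 = 4
p, n, n1 = 0.05, 8, 4
n2 = n - n1

for R in range(1, 7):  # maximum number of retransmissions, as in Fig. 2
    p_etet = (1 - (1 - p) ** n) ** R
    p_sbst = 1 - (1 - (1 - (1 - p) ** n1) ** R) * \
                 (1 - (1 - (1 - p) ** n2) ** R)
    # SBST is never less reliable (equal at R = 1, strictly better after)
    assert p_sbst <= p_etet + 1e-12
    print(R, round(p_etet, 6), round(p_sbst, 6))
```

The two probabilities coincide at R = 1 (a single attempt gains nothing from splitting) and diverge quickly for larger R, matching the trend the text attributes to Fig. 2.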
4.2 Simulation experiments
We simulate ALL-TCP by modifying the implementation of the current Tahoe TCP in ns-2, which can be summarized as follows.
First, in order to provide mechanisms for sending packets, we redesign the packet-sending module by deriving from the TcpAgent class. Second, for acknowledging received packets, an acknowledgment module is provided by deriving from the TcpSink class. Each long-lived TCP flow is maintained and controlled separately, with its own sending and acknowledgment modules.

[Fig. 2 Network reliability comparison]

[Fig. 3 Resource consumption comparison]

Third, ns-2 currently provides a single point of packet entrance called entry (a Connector object). Each packet that enters the Node entry is forwarded directly to the address classifier module (i.e., the classifier instvar). If the Node is not the final destination, the address classifier forwards the packet to the link specified in the routing table. Therefore, we need to provide a hooking function.
This hooking function intercepts packets and forwards them to our ALL-TCP agent for further processing. Our framework relies only on changing the transport mechanism inside the network, without modifying the original TCP protocol implemented in the end-host terminals. Hence, in a real implementation, the edge router is responsible for adapting to different versions of TCP, such as Reno, New-Reno, or SACK.
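A conceptual sketch of this edge-router hook follows. It is our own illustration, not the ns-2 C++ hooking function described above; the byte-count threshold and all names are hypothetical:

```python
# Conceptual edge-router hook (ours, hypothetical names/threshold):
# flows whose cumulative byte count crosses a threshold are classified
# as long-lived and handed to the ALL-TCP agent; everything else takes
# the normal forwarding path, so short-lived and UDP traffic is untouched.
LONG_LIVED_BYTES = 1_000_000     # hypothetical detection threshold

flow_bytes = {}                  # per-flow byte counters

def on_packet(flow_id, size, forward, to_alltcp_agent):
    """Classify one packet and dispatch it to the proper path."""
    flow_bytes[flow_id] = flow_bytes.get(flow_id, 0) + size
    if flow_bytes[flow_id] >= LONG_LIVED_BYTES:
        to_alltcp_agent(flow_id, size)   # regulated long-lived flow
    else:
        forward(flow_id, size)           # short-lived / other traffic
```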
In the Tcl simulation code, a network of four routers connecting two host terminals is created. Each link is configured with 4 Mb of bandwidth capacity and a 50 ms link delay. We use link errors instead of creating congestion in the routers in order to produce packet losses. The link error probability corresponds to the packet loss probability, p_i, used in our analytical model. The ALL-TCP agent is attached to router number 2
when we simulate the proposed framework. The overview of our experiment configuration is shown in Fig. 4.

[Fig. 4 Experiment configuration]

[Fig. 5 Drop packets with link error 0.5]
The first experiment aims to confirm the analytical model in the section Packet Loss Probability in ETET, which showed that transmitting a packet via ALL-TCP is more reliable than via the conventional TCP. In this experiment, the numbers of dropped packets are measured by sending 2,000 packets through the network under different link error rates. The results in Figs. 5, 6, 7, and 8 show the numbers of dropped packets for link errors of 0.025, 0.05, 0.075, and 0.1, respectively. The results show that the packet drop probability of the proposed framework is always smaller than that of the conventional Tahoe TCP, which confirms our analytical model in the section Packet Loss Probability in ETET.
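The drop-counting experiment can be mimicked with a toy Monte Carlo model. The sketch below is our own simplification (not the paper's ns-2 script): every packet is retransmitted until delivered, and dropped attempts are counted, with end-to-end recovery repeating all four links while the split at router 2 lets each two-link segment retry independently:

```python
import random

# Toy Monte Carlo of the drop experiment (our own sketch, not ns-2):
# 2,000 packets over four links with per-link error 0.1.
random.seed(7)
P_ERR, N_PKT = 0.1, 2000

def attempt(n_links):
    """One transmission attempt over n_links lossy links."""
    return all(random.random() >= P_ERR for _ in range(n_links))

def drops_until_delivered(n_links):
    """Count dropped attempts before the packet finally gets through."""
    drops = 0
    while not attempt(n_links):
        drops += 1
    return drops

# End-to-end: every loss repeats the whole 4-link path.
e2e_drops = sum(drops_until_delivered(4) for _ in range(N_PKT))
# Split at router 2: each 2-link segment recovers on its own.
split_drops = sum(drops_until_delivered(2) + drops_until_delivered(2)
                  for _ in range(N_PKT))
print(e2e_drops, split_drops)
```

Analytically, with q = 1 − (1 − 0.1)^4 per end-to-end attempt and q2 = 1 − (1 − 0.1)^2 per segment attempt, the expected drops per delivered packet are q/(1 − q) ≈ 0.52 end-to-end versus 2·q2/(1 − q2) ≈ 0.47 with the split, consistent with the experiment's trend.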
In the ALL-TCP framework, each long-lived TCP connection is split into sub-connections that work simultaneously when the network traffic is lower than the minimum threshold value. Because the round-trip time of each sub-connection is shorter than the end-to-end RTT, ALL-TCP is expected to achieve higher network throughput than the conventional TCP. In the second experiment, we measure the network throughput at the TCP sink based on the number of received bytes over a fixed 0.8 s time scale. The results in Fig. 9 show that ALL-TCP achieves higher network throughput than the conventional Tahoe TCP and finishes the transmission of 2,000 packets in a shorter time. The results are even better when two ALL-TCP agents are deployed in the network (as shown in Fig. 10).

[Fig. 6 Drop packets with link error 0.75]

[Fig. 7 Drop packets with link error 0.1]
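The shorter-RTT argument can be made concrete with a back-of-the-envelope sketch (ours, not from the paper): under pure additive increase, cwnd grows by one MSS per RTT, so a sub-connection whose RTT is half the end-to-end RTT ramps up twice as fast. The numbers assume the experiment's four links with 50 ms delay each:

```python
# Why a shorter RTT raises throughput (our own illustration).
LINK_DELAY = 0.05                  # seconds per link, from the setup
E2E_RTT = 2 * 4 * LINK_DELAY       # 0.4 s round trip over four links
SUB_RTT = 2 * 2 * LINK_DELAY       # 0.2 s over each two-link segment

def cwnd_after(seconds, rtt, start=1):
    """cwnd in MSS after pure additive increase (+1 MSS per RTT)."""
    return start + int(seconds / rtt + 1e-9)

assert cwnd_after(2.0, SUB_RTT) == 11   # sub-connection: 10 RTTs elapsed
assert cwnd_after(2.0, E2E_RTT) == 6    # end-to-end: only 5 RTTs elapsed
```

This is the mechanism behind the faster congestion-window recovery measured in the experiment: the same wall-clock interval contains twice as many RTTs for each sub-connection.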
In order to highlight the adaptive ability of the proposed framework, we measure the recovery time of the congestion window size, from the moment of multiplicative decrease until it recovers to its highest value. The results in Fig. 11 show that ALL-TCP quickly retrieves the highest congestion window size each time it incurs congestion. This advantage helps ALL-TCP not only achieve higher network throughput but also adapt better to congestion. The figure also shows that ALL-TCP always obtains higher network throughput than Tahoe TCP when the network is configured with a link error of 0.1.

[Fig. 8 Throughput comparison in normal state with one ALL-TCP agent in deployment]

[Fig. 9 Throughput comparison in normal state with two ALL-TCP agents in deployment]
5 Conclusion
In this paper, a novel protocol design, ALL-TCP, is proposed to improve the network performance of long-lived TCP. ALL-TCP first separates long-lived TCP from other data flows. Second, a traffic regulation mechanism is proposed, in which a set of functions is provided to regulate and control long-lived TCP connections in an edge router. The results of the analytical models and an extensive set of ns-2 simulations showed the following: 1) ALL-TCP is more reliable and consumes less energy than the conventional TCP in the case of a high error rate in the network. 2) It always presents higher network throughput, in both the congested and the normal state, due to its shorter RTT. 3) Our modification affects the edge network equipment only, hence there is no effect on the end-host terminals; it could be deployed gradually in current TCP/IP networks with high scalability. 4) Network fairness is obtained because long-lived TCP is separated from other data flows. For future work, the proposed mechanism can be adapted to other TCP versions, such as BIC TCP or Compound TCP, in the hope of further improving the performance and fairness of long-lived TCP flows.

[Fig. 10 Lead time to recover congestion window size after congestion]

[Fig. 11 Throughput comparison with error link 0.1]
References

1. Abualigah LMQ (2019) Feature selection and enhanced krill herd algorithm for text document clustering. Springer, Berlin, pp 1–165
2. Braden R (1989) Requirements for Internet hosts: communication layers. RFC 1122, Oct. 1989
3. Carlucci G, De Cicco L, Holmer S, Mascolo S (2017) Congestion control for web real-time communication. IEEE/ACM Trans Networking 25(5):2629–2642
4. Carlucci G, De Cicco L, Mascolo S (2018) Controlling queuing delays for real-time communication: the interplay of E2E and AQM algorithms. ACM SIGCOMM Computer Communication Review 46(3):1–7
5. Carlucci G, De Cicco L, Mascolo S (2018) Controlling queuing delays for real-time communication: the interplay of E2E and AQM algorithms. ACM SIGCOMM Computer Communication Review 46(3):1–7
6. Cheng J, Grinnemo KJ (2017) Telco distributed DC with transport protocol enhancement for 5G mobile networks: a survey
7. Clark DD, Fang W (1998) Explicit allocation of best-effort packet delivery service. IEEE/ACM Trans Networking 6(4):362–373
8. Douga Y, Bourenane M, Mellouk A, Hadjadj-Aoul Y (2016) TCP based-user control for adaptive video streaming. Multimed Tools Appl 75(18):11347–11366
9. Ebrahimi-Taghizadeh S, Helmy A, Gupta S (2005) TCP vs. TCP: a systematic study of adverse impact of short-lived TCP flows on long-lived TCP flows. In: Proceedings of INFOCOM, vol 2, Miami, pp 926–937
10. Floyd S, Jacobson V (1993) Random early detection gateways for congestion avoidance. IEEE/ACM Trans Networking 1(4):397–413
11. Floyd S, Jacobson V (1993) Random early detection gateways for congestion avoidance. IEEE/ACM Trans Networking 1(4):397–413
12. Heidemann J (1997) Performance interactions between P-HTTP and TCP implementations. ACM Computer Communication Review 27(2):65–73
13. Hurley P, Le Boudec JY, Thiran P (1999) A note on the fairness of additive increase and multiplicative decrease. In: 16th International Teletraffic Congress (ITC-16), Edinburgh, Scotland
14. Kakhki AM, Jero S, Choffnes D, Nita-Rotaru C, Mislove A (2017) Taking a long look at QUIC: an approach for rigorous evaluation of rapidly evolving transport protocols. In: Proceedings of the 2017 Internet Measurement Conference, ACM, pp 290–303
15. Kennedy J, Armitage G, Thomas J (2017) Household bandwidth and the 'need for speed': evaluating the impact of active queue management for home internet traffic. Journal of Telecommunications and the Digital Economy 5(2):113
16. Kua J, Nguyen SH, Armitage G, Branch P (2017) Using active queue management to assist IoT application flows in home broadband networks. IEEE Internet Things J 4(5):1399–1407
17. Liu J, Chi Y, Liu Z, He S (2019) Ensemble multi-objective evolutionary algorithm for gene regulatory network reconstruction based on fuzzy cognitive maps. CAAI Transactions on Intelligence Technology 4(1):24–36
18. Matta I, Guo L (2000) Differentiated predictive fair service for TCP flows. In: Proc. ICNP 2000, Osaka, Japan, Nov. 2000
19. Mehdi H, Pooranian Z, Vinueza Naranjo PG (2019) Cloud traffic prediction based on fuzzy ARIMA model with low dependence on historical data. Transactions on Emerging Telecommunications Technologies: e3731
20. Monoco GL, Azeem F, Kalyanaraman S (2001) TCP-friendly marking for scalable best-effort services on the internet. ACM Computer Communication Review
21. Padmanabhan V, Katz R (1998) TCP fast start: a technique for speeding up web transfers. In: Proc. IEEE Globecom '98 Internet Mini-Conference, Nov. 1998
22. Pukkala T (2019) Optimized cellular automaton for stand delineation. J For Res 30(1):107–119
23. Real Ehrlich C, Blankenbach J (2019) Indoor localization for pedestrians with real-time capability using multi-sensor smartphones. Geo-spatial Information Science 22(2):73–88
24. Rejaie R, Handley M, Estrin D (1999) RAP: an end-to-end rate-based congestion control mechanism for realtime streams in the internet. In: Proc. IEEE INFOCOM '99, vol 3, pp 1337–1345
25. Roberts J, Skandalakis J, Foard R, Choi J (2016) A comparison of SDN based TCP congestion control with TCP Reno and CUBIC. Technical report
26. Saldana J (2016) On the effectiveness of an optimization method for the traffic of TCP-based multiplayer online games. Multimed Tools Appl 75(24):17333–17374
27. Seddigh N, Nandy B, Pieda P (1999) Study of TCP and UDP interaction for the AF PHB. Internet draft, draft-nsbnpp-diffserv-tcpudpaf-01
28. Seddigh N, Nandy B, Pieda P (1999) Study of TCP and UDP interaction for the AF PHB. Internet draft, draft-nsbnpp-diffserv-tcpudpaf-01
29. Shenker S (1990) A theoretical analysis of feedback flow control. In: Proc. ACM SIGCOMM '90
30. Soleimani MHM, Mansoorizadeh M, Nassiri M (2018) Real-time identification of three Tor pluggable transports using machine learning techniques. J Supercomput 74(10):4910–4927
31. Sunny A, Panchal S, Vidhani N, Krishnasamy S, Anand SVR, Hegde M, Kumar A (2017) A generic controller for managing TCP transfers in IEEE 802.11 infrastructure WLANs. J Netw Comput Appl 93:13–26
32. Tsai CS, Wang YC, Wang HK (2015) A transmission control protocol with high throughput of using low Earth-orbit satellite to collect data from the floats on sea surface. 26(2):78–96
33. Yilmaz S, Matta I (2001) On class-based isolation of UDP, short-lived and long-lived TCP flows. In: 9th Int. Symp. MASCOTS 2001, pp 415–422
34. Zaidi SMR, Hassink BJ, Grover L (2017) U.S. Patent No. 9,729,454. U.S. Patent and Trademark Office, Washington, DC
35. Zhang Y, Qiu L, Keshav S (2000) Speeding up short data transfers: theory, architecture support, and simulation results. In: Proc. NOSSDAV 2000, Chapel Hill, NC, Jun. 2000
36. Zhang Y, Hossain MS, Ghoneim A, Guizani M (2019) COCME: content-oriented caching on the mobile edge for wireless communications. IEEE Wirel Commun 26(3):26–31
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Affiliations

Mohammad Al Shinwan (1), Laith Abualigah (1), Nguyen Dinh Le (2), Chulsoo Kim (2), Ahmad M. Khasawneh (1)

Laith Abualigah: Aligah.2020@gmail.com
Nguyen Dinh Le: nguy.le@inje.ac.kr
Chulsoo Kim: charles@inje.ac.kr
Ahmad M. Khasawneh: a.khasawneh@aau.edu.jo

(1) Faculty of Computer Sciences and Informatics, Amman Arab University, Amman 11953, Jordan
(2) Department of Computer Engineering, Inje University Gimhae, Seoul, Republic of Korea
... Due to its reliability, TCP is widely utilized in numerous Internet applications such as email, remote access, and file transfer. Moreover, according to reports, TCP is used to carry up to 90% of all internet traffic [4], [5], making it a vital protocol that still needs work to be improved. ...
... The WMA given by equation (3) is a recursive function, and one can write it in terms of older weights, as provided by equation (4). Expanding equation (4) to its older value will continue until it reaches the base term Å0. ...
Article
Full-text available
With the development of wireless technology, users not only have wireless access to the Internet, but this has also sparked the emergence of Wireless Ad-hoc Networks (WANETs); this promising networking paradigm has the potential to adopt the shape of new emergent networks such as the Internet of Things (IoT), Vehicular Ad-hoc Networks (VANET) and Wireless Sensor Networks (WSN). However, channel contention (CC) is one of the key reasons why the TCP performs poorly in WANETs. This paper presents a mechanism called Cross-layer Solution for Contention Control (CSCC) to enhance TCP performance in WANETs. Each node starts marking packets in the proposed mechanism when its CC level reaches a certain threshold. As a result, the source node adjusts the congestion window (cwnd) size to a good state to control the insertion ratio of packets into the network. To provide a fair share to each flow, the flow having a large cwnd is penalized more. Numerous simulations have been conducted across several topologies to clarify the performance of the suggested mechanism. The simulation findings show that, in the presence of the Ad-hoc On-demand Distance Vector (AODV) routing and Dynamic Source Routing (DSR) protocols, the proposed CSCC mechanism outperformed TCP NewReno in terms of throughput and fairness. In comparison to TCP NewReno, the suggested mechanism has fewer retransmitted packets.
... Recently, mobile data and traffic growth pushed mobile operators and service providers to re-engineer the core mobile network and deliver salable solutions through several solutions, such as flat mobile network architecture (Bruschi et al., 2019;Hu et al., 2020;Al Shinwan et al., 2021). It is anticipated that mobile data traffic will increase double per year in the upcoming years. ...
Article
Full-text available
The current mobile network core is built based on a centralized architecture, including the S-GW and P-GW entities to serve as mobility anchors. Nevertheless, this architecture causes non-optimal routing and latency for control messages. In contrast, the fifth generation (5G) network will redesign the network service architecture to improve changeover management and deliver clients a better Quality-of-Experience (QoE). To enhance the design of the existing network, a distributed 5G core architecture is introduced in this study. The control and data planes are distinct, and the core network also combines IP functionality anchored in a multi-session gateway design. We also suggest a control node that will fully implement the control plane and result in a flat network design. Its architecture, therefore, improves data delivery, mobility, and attachment speed. The performance of the proposed architecture is validated by improved NS3 simulation to run several simulations, including attachment and inter-and intra-handover. According to experimental data, the suggested network is superior in terms of initial attachment, network delay, and changeover management.
... In recent years, the increase in mobile traffic exerted pressure on mobile operators to re-engineer next-generation core networks by proposing flat-network architecture to provide a scalable solution for billions of devices [1][2][3]. Future data traffic is expected to double every year in the next five years. More than 100 billion connections of Internet of Things (IoT) devices are envisioned to be deployed by various operators [4][5][6]. ...
Article
Full-text available
Reaching a flat network is the main target of future evolved packet core for the 5G mobile networks. The current 4th generation core network is centralized architecture, including Serving Gateway and Packet-data-network Gateway; both act as mobility and IP anchors. However, this architecture suffers from non-optimal routing and intolerable latency due to many control messages. To overcome these challenges, we propose a partially distributed architecture for 5th generation networks, such that the control plane and data plane are fully decoupled. The proposed architecture is based on including a node Multi-session Gateway to merge the mobility and IP anchor gateway functionality. This work presented a control entity with the full implementation of the control plane to achieve an optimal flat network architecture. The impact of the proposed evolved packet Core structure in attachment, data delivery, and mobility procedures is validated through simulation. Several experiments were carried out by using NS-3 simulation to validate the results of the proposed architecture. The Numerical analysis is evaluated in terms of total transmission delay, inter and intra handover delay, queuing delay, and total attachment time. Simulation results show that the proposed architecture performance-enhanced end-to-end latency over the legacy architecture.
... The work in this paper observes and analyzes the behavior of an adaptive streaming player when it is used alongside other internet applications. The purpose of this paper is to evaluate the effect of TCP long-lived flows [6] on adaptive video streaming. The paper will investigate the impact that various applications using TCP long-live flows have on streaming. ...
Article
Full-text available
Blockchain technology is one of the crypto-currency technologies that has received a lot of attention. It has also found use in various applications, including the Internet of Things (IoT) and Cloud computing. Nonetheless, Blockchain has a significant scalability issue, restricting its ability to support services with various transactions. On the other hand, cloud computing is the on-demand availability of shared computer system resources, although issues now beset it in automation, processes, management, policies, and human aspects. Combining cloud computing and blockchain technology into a single system can improve network control, task scheduling, data integrity, resource management, pricing, fair payment, and resource allocation. In this article, we offered a comprehensive and up-to-date survey of cloud computing and Blockchain integration, a critical service for business applications due to the benefits of privacy, security, and service support. The lack of a comprehensive assessment examining the significance of BaaS platforms used in cloud computing prompted this review. We focus on the various BaaS tools that are currently in use. This report also examines the most common BaaS platforms incorporating Blockchain as a cloud service, such as Alibaba, Oracle, Azure, Amazon, and IBM. Furthermore, this research highlighted some major technological issues associated with merging Blockchain with cloud computing.
Book
In recent years, metaheuristics (MHs) have become essential tools for solving challenging optimization problems encountered in industry, engineering, biomedical, image processing, and the theoretical field. Several different metaheuristics exist, and new methods are under constant development. One of the most fundamental principles in our world is the search for an optimal state. Therefore, choose the right, and correct solution technique for an optimization problem can be crucially important in finding the right solutions for a given optimization problem (unconstrained and constrained optimization problems). There exist a diverse range of MHs for optimization. Optimization techniques have been used for many years in the formulation and solution of computational problems. This book brings together outstanding research and recent developments in metaheuristics (MHs), machine learning (ML), and their applications in the industrial world. Among the subjects to be considered are theoretical developments in MHs; performance comparisons of MHs; suitable methods combining different types of approaches such as constraint programming and mathematical programming techniques; parallel and distributed MHs for multi-objective optimization; adaptation of discrete MHs to continuous optimization; dynamic optimization; software implementations; and real-life applications. Besides, machine learning (ML) is a data analytics technique to use computational methods. Therefore, recently, MHs have been combined with several ML techniques to deal with different global and engineering optimization problems, also real-world applications. Finding an optimal solution or even sub-optimal solutions is not an easy task. Chapters published in this book describe original works in different topics in science and engineering, such as metaheuristics, machine learning, soft computing, neural networks, multi-criteria decision-making, energy efficiency, sustainable development, etc. 
Before digging deeper into thematter, we will attempt to classify these algorithms as an overviewand discuss some basic use cases. In this book, a classification ofmetaheuristic algorithms and a rough taxonomy of global optimization methods were presented. Generally, optimization algorithms can be divided into two basic classes: deterministic and probabilistic algorithms. We will briefly introduce optimization algorithms such as particle swarm optimization, harmony search, firefly algorithm, and cuckoo search. It also presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs.
Chapter
Full-text available
The only viral thing today is the Covid 19 virus, which has severely disrupted all the economic activity around globe because of which all the businesses are experiencing irrespective of its domain or country of origin. One such major paradigm shift is contactless business, which has increased digital transaction. This in turn has given hackers and fraudsters a lot of space to perform digital scams line phishing, spurious links, malware downloads etc. These frauds have become undesirable part of increased digital transactions, which needs immediate attention and eradication from the system with instant results. In this pandemic situation where, social distancing is key to restrict the spread of the virus, digital payments are the safest and most appropriate payment method, and it needs to be safe and secure for both the parties. Artificial intelligence can be a saviour in this situation, which can help combat the digital frauds. The present study will focus on the different kinds of frauds which customers and facing, and most possible ways Artificial intelligence can be incorporated to identify and eliminate such kind of frauds to make digital payments more secure. Findings of the study suggest that inclusion of AI did bring a change in the business environment. AI used for entertainment has become an essential part in business. Transfiguration from process to platform focused business. The primary requirement of AI is to study the customer experience and how to give a better response for improving the satisfaction. But recently AIs are used not only for customer support, but it’s been observed that businesses have taken it as marketing strategy to increase demand and sales.KeywordsCOVID 19PandemicDigital fraudsArtificial intelligenceDigital transactions
Chapter
Clustering large data is a recent and popular challenge that is used in various applications, including social networking, bioinformatics, and many others. In order to manage the rapidly growing data sizes, traditional clustering algorithms must be improved. In this research, a hybrid Harris Hawks Optimizer (HHHO) with K-mean clustering and MapReduce framework is proposed to solve the various data clustering problem. The proposed scheme uses the K-means' ability to solve the various clustering problems. More specifically, the K-means are utilized as initial solutions to the traditional Harris Hawks Optimizer (HHO). In general, HHO tries to refine the candidate solutions to find the best one. MapReduce is a distributed processing computing paradigm that produces datasets using a parallel program on a cluster. In particular, it is adopted in the developed HHHO for parallelization since it offers fault tolerance, load balancing, and data locality. The performance of the presented methodology has been evaluated by means of numerical comparisons which proved the efficiency of the proposed HHHO, where it produces better results than other existing computation methods. Moreover, it has a very good ability in improving and finding optimal and converging sets of data. In addition, the accuracy and error rate of the obtained results are assessed. The proposed method is implemented and evaluated using PYTHON simulation settings.KeywordsBig data analysisHybrid Harris hawks optimizerMapReduce framework
Article
Full-text available
Multi Input Multi Output (MIMO) and phased array systems are considered a key technologies to realize the 5G communication systems. Therefore, the purpose of this research is the suggestion of a novel mm-wave Ultrawide Band (UWB) antenna design with compact and straightforward layout suitable for both MIMO and phased array systems. Hence, the designed antenna array has been studied separately as a MIMO antenna and as a phased array antenna to carefully assess the performance of each system. The single antenna design is an elliptical patch antenna where the design novelty lies in the combination of a modified inset-feed and defected ground structure to provide a large bandwidth without any compromise in the radiation performance, nor in antenna size and design simplicity. The Design process are performed using CST MWS software, where the Rogers RT/Duroid 5880 substrate is chosen to construct the antenna. A broadband characteristic of 8.7 GHz from 26 to 34.7 GHz with two resonant frequencies at 28 GHz and 33 GHz is obtained. A good radiation properties are achieved, where the gain is greater than 4.5 dB while the radiation efficiency exceeds 97% over the operating band. The MIMO and phased array antennas are made up of 12-elements of the single UWB-antenna arranged linearly along the width-edge of the smartphone mainboard. The MIMO antenna proves a high diversity performance in terms of Diversity Gain (DG), Envelope Correlation Coefficient (ECC), Total Active Reflection Coefficient (TARC), Channel Capacity Loss (CCL) and Mean Effective Gain (MEG), owing to the low mutual coupling less than − 20 dB, which is obtained using a separating slits between the elements. In addition, the suggested phased array provides a highly stable gain up to 15 dB over the entire bandwidth at broadside direction, besides the wide scanning range of ± 60° at 28 GHz and ± 40° at 33 GHz. 
Hence, the attained results assure that the suggested antenna could be appropriate for incorporation in 5G smartphones and other wireless devices and can be effectively used for both phased array and MIMO applications.
Article
Full-text available
Traffic prediction with high accuracy has become a vital and challenging issue for resource management in cloud computing. It should be noted that one of the prominent factors in resource management is accurate traffic prediction based on a few data points and within a short time period. The auto-regressive integrated moving average (ARIMA) model is a suitable model to predict traffic in short time periods. However, it requires a massive amount of historical data to achieve accurate results. On the other hand, the fuzzy regression model is adequate for prediction using less historical data. Aforementioned by these considerations, in this paper, a combination of ARIMA and fuzzy regression called fuzzy autoregressive integrated moving average (FARIMA) is used to forecast traffic in cloud computing. Besides, we adopt the FARIMA model by using the sliding window, called SOFA, concept to determine models with higher prediction accuracy. Accuracy comparison of these models based on the root means square error and coefficient of determination demonstrates that SOFA is about 5.4 and 0.009, respectively which is the superior model for traffic prediction.
Article
Full-text available
The localization of persons or objects usually refers to a position determined in a spatial reference system. Outdoors, this is usually accomplished with Global Navigation Satellite Systems (GNSS). However, the automatic positioning of people in GNSS-free environments, especially inside buildings (indoors), poses a huge challenge. Indoors, satellite signals are attenuated, shielded, or reflected by building components (e.g. walls or ceilings). For selected applications, automatic indoor positioning is possible based on different technologies (e.g. WiFi, RFID, or UWB). However, a standard solution is still not available. Many indoor positioning systems are only suitable for specific applications or are deployed under certain conditions, e.g. additional infrastructure or sensor technologies. Smartphones, as popular and cost-effective multi-sensor systems, are a promising indoor localization platform for the mass market and are increasingly coming into focus. Today's devices are equipped with a variety of sensors that can be used for indoor positioning. In this contribution, an approach to smartphone-based pedestrian indoor localization is presented. The novelty of this approach is a holistic, real-time pedestrian localization inside buildings based on multi-sensor smartphones and easy-to-install local positioning systems. For this purpose, the barometric altitude is estimated in order to derive the floor on which the user is located. The 2D position is subsequently determined using the principle of pedestrian dead reckoning, based on the user's movements extracted from the smartphone sensors. In order to minimize the strong error accumulation caused by various sensor errors, additional information is integrated into the position estimation: the building model is used to identify permissible (e.g. rooms, passageways) and impermissible (e.g. walls) building areas for the pedestrian. Several technologies contributing to higher precision and robustness are also included. For the fusion of different linear and non-linear data, an advanced algorithm based on the Sequential Monte Carlo method is presented.
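The core of pedestrian dead reckoning and barometric floor estimation can be sketched as follows (a minimal illustration; the 3 m floor height, the sea-level reference pressure, and the simplified barometric formula are assumptions, and the Sequential Monte Carlo fusion used in the paper is omitted entirely):

```python
import math

def dead_reckoning(start, steps):
    """Integrate (step_length_m, heading_rad) pairs from a start position.
    Errors accumulate with every step, which is why such systems fuse
    additional constraints (e.g. a building model) into the estimate."""
    x, y = start
    path = [(x, y)]
    for length, heading in steps:
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        path.append((x, y))
    return path

def floor_from_pressure(p_hpa, p_ref_hpa=1013.25, floor_height_m=3.0):
    """Map barometric pressure to a floor index via a simplified
    international barometric formula (assumed constants)."""
    altitude = 44330.0 * (1.0 - (p_hpa / p_ref_hpa) ** 0.1903)
    return round(altitude / floor_height_m)
```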
Article
Full-text available
In this paper, we aim to contribute to the policy debate on bandwidth needs by considering more closely what happens in household networks. We draw upon both social and technical studies modelling household applications and their uses to show how queue management protocols impact bandwidth needs. We stress the impact of internet traffic streams interfering with each other, and describe three different categories of internet traffic. We demonstrate how the use of active queue management can reduce bandwidth demands. In doing so we consider how, and to what degree, household internet connections are a constraint on internet use. We show that speed demand predictions are skewed by a perceived need to protect the Quality of Service experienced by latency-sensitive services when using current gateway technologies.
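One widely known active queue management scheme is Random Early Detection (RED), which drops packets probabilistically before the queue overflows so that latency-sensitive flows are not starved behind long buffers. A minimal sketch of its drop-probability curve (the threshold values are illustrative, and the study does not prescribe this particular scheme):

```python
def red_drop_probability(avg_q, min_th=5.0, max_th=15.0, max_p=0.1):
    """RED drop probability as a function of the averaged queue length:
    zero below min_th, rising linearly to max_p at max_th, and a forced
    drop (probability 1.0) above max_th."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

In a real gateway, `avg_q` would be an exponentially weighted moving average of the instantaneous queue length rather than a raw sample.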
Article
Full-text available
Many methods aim to use data, especially gene expression data from high-throughput genomic methods, to identify complicated regulatory relationships between genes. The authors employ a simple but powerful tool, called fuzzy cognitive maps (FCMs), to accurately reconstruct gene regulatory networks (GRNs). Many automated methods have been proposed for training FCMs from data. These methods focus on simulating the observed time-sequence data, but neglect the optimisation of the network structure. In fact, the FCM learning problem is multi-objective, since it also involves network structure information; thus, the authors propose a new algorithm combining an ensemble strategy with a multi-objective evolutionary algorithm (MOEA), called EMOEA(FCM)-GRN, to reconstruct GRNs based on FCMs. In EMOEA(FCM)-GRN, the MOEA first learns a series of networks with different structures by analysing historical data simultaneously, which is helpful in finding the target network with distinct optimal local information. Then, the networks that achieve a small simulation error on the training set are selected from the Pareto front, and an efficient ensemble strategy is provided to combine these selected networks into the final network. Experiments on the DREAM4 challenge and synthetic FCMs illustrate that EMOEA(FCM)-GRN is efficient and able to reconstruct GRNs accurately.
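A fuzzy cognitive map evolves by repeatedly applying a sigmoid-squashed weighted sum of concept activations; one synchronous update step might look like this (a minimal sketch — some FCM variants also add the node's own previous activation to the sum):

```python
import math

def fcm_step(state, weights, lam=1.0):
    """One synchronous FCM update: A_i(t+1) = sigmoid(sum_j w_ji * A_j(t)),
    where weights[j][i] is the influence of concept j on concept i and
    lam controls the steepness of the sigmoid."""
    n = len(state)
    nxt = []
    for i in range(n):
        s = sum(weights[j][i] * state[j] for j in range(n))
        nxt.append(1.0 / (1.0 + math.exp(-lam * s)))
    return nxt
```

Learning an FCM from data then amounts to searching for a weight matrix whose simulated trajectories match the observed time sequences, which is the single-objective part that the described MOEA extends with structure objectives.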
Book
Full-text available
This book puts forward a new method for solving the text document (TD) clustering problem, which is established in two main stages: (i) A new feature selection method based on a particle swarm optimization algorithm with a novel weighting scheme is proposed, together with a detailed dimension reduction technique, in order to obtain a new subset of more informative features with a low-dimensional space. This new subset is subsequently used to improve the performance of the text clustering (TC) algorithm and reduce its computation time. The k-means clustering algorithm is used to evaluate the effectiveness of the obtained subsets. (ii) Four krill herd algorithms (KHAs), namely, the (a) basic KHA, (b) modified KHA, (c) hybrid KHA, and (d) multi-objective hybrid KHA, are proposed to solve the TC problem; each algorithm represents an incremental improvement on its predecessor. For the evaluation process, seven benchmark text datasets with different characterizations and complexities are used. Text document clustering is a new trend in text mining in which the TDs are separated into several coherent clusters, where all documents in the same cluster are similar. The findings presented here confirm that the proposed methods and algorithms delivered the best results in comparison with other, similar methods found in the literature.
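A minimal k-means pass, of the kind used here to evaluate feature subsets, can be sketched in a few lines (the 2-D toy points and the fixed initial centroids are illustrative assumptions; real text clustering would operate on high-dimensional term vectors):

```python
def kmeans(points, init, iters=20):
    """Plain k-means on 2-D points: repeatedly assign every point to its
    nearest centroid, then move each centroid to the mean of its cluster."""
    centroids = list(init)
    k = len(centroids)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2 +
                                  (p[1] - centroids[c][1]) ** 2)
            clusters[j].append(p)
        for c in range(k):
            if clusters[c]:  # keep an empty cluster's centroid in place
                centroids[c] = (sum(p[0] for p in clusters[c]) / len(clusters[c]),
                                sum(p[1] for p in clusters[c]) / len(clusters[c]))
    return centroids, clusters
```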
Article
Full-text available
Forest inventories based on remote sensing often interpret stand characteristics for small raster cells instead of traditional stand compartments. This is the case, for instance, in the Lidar-based and multi-source forest inventories of Finland, where the interpretation units are 16 m × 16 m grid cells. Using these cells as simulation units in forest planning would lead to very large planning problems. This difficulty could be alleviated by aggregating the grid cells into larger homogeneous segments before planning calculations. This study developed a cellular automaton (CA) for aggregating grid cells into larger calculation units, which in this study were called stands. The criteria used in stand delineation were the shape and size of the stands, and the homogeneity of stand attributes within the stand. The stand attributes were: main site type (upland or peatland forest), site fertility, mean tree diameter, mean tree height and stand basal area. In the CA, each cell was joined to one of its adjacent stands for several iterations, until the cells formed a compact layout of homogeneous stands. The CA had several parameters. Due to the high number of possible parameter combinations, particle swarm optimization was used to find the optimal set of parameter values. Parameter optimization aimed at minimizing within-stand variation and maximizing between-stand variation in stand attributes. When the CA was optimized without any restrictions on its parameters, the resulting stand delineation consisted of small and irregular stands. A clean layout of larger and compact stands was obtained when the CA parameters were optimized with constrained parameter values and the layout was penalized as a function of the number of small stands (< 0.1 ha). However, there was within-stand variation in fertility class due to small-scale variation in the data. The stands delineated by the CA explained 66–87% of the variation in stand basal area, mean tree height and mean diameter, and 41–92% of the variation in the fertility class of the site. It was concluded that the CA developed in this study is a flexible new tool, which could be immediately used in forest planning.
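The join-the-most-similar-adjacent-stand rule behind such a cellular automaton can be illustrated on a 1-D strip of cells (a toy sketch with a single attribute; the real CA works on 2-D neighbourhoods, several attributes, and shape/size criteria):

```python
def aggregate_cells(values, labels, iters=5):
    """Toy CA on a 1-D strip of grid cells: in each iteration every cell
    joins the adjacent stand whose mean attribute value is closest to its
    own value. `labels[i]` is the stand id of cell i."""
    labels = list(labels)
    for _ in range(iters):
        # stand means under the current labelling
        groups = {}
        for v, l in zip(values, labels):
            groups.setdefault(l, []).append(v)
        means = {l: sum(vs) / len(vs) for l, vs in groups.items()}
        new = []
        for i, v in enumerate(values):
            candidates = {labels[i]}           # own stand
            if i > 0:
                candidates.add(labels[i - 1])  # left neighbour's stand
            if i < len(values) - 1:
                candidates.add(labels[i + 1])  # right neighbour's stand
            new.append(min(candidates, key=lambda l: abs(v - means[l])))
        labels = new
    return labels
```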
Article
Full-text available
Tor is a widespread network for anonymity over the Internet. Network owners try to identify and block Tor flows. On the other side, Tor developers enhance flow anonymity with various plugins. Tor and its plugins can be detected by deep packet inspection (DPI) methods. However, DPI-based solutions are computation intensive, need considerable human effort, and are usually hard to maintain and update. These issues limit the application of DPI methods in practical scenarios. As an alternative, we propose to use machine learning-based techniques that automatically learn from examples and adapt to new data whenever required. We report an empirical study on the detection of three widely used Tor pluggable transports, namely Obfs3, Obfs4, and ScrambleSuit, using four learning algorithms. We investigate the performance of AdaBoost and Random Forests as two ensemble methods. In addition, we study the effectiveness of SVM and C4.5 as well-known parametric and nonparametric classifiers. These algorithms use general statistics of the first few packets of the inspected flows. Experimental results conducted on real traffic show that all the adopted algorithms can perfectly detect the desired traffic by inspecting only the first 10–50 packets. The trained classifiers can readily be employed in modern network switches and intelligent traffic monitoring systems.
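Classification from the statistics of the first few packets can be illustrated with a toy nearest-centroid rule (the feature set and centroid values below are hypothetical; the study itself trains SVM, C4.5, AdaBoost, and Random Forest classifiers):

```python
def flow_features(packet_sizes, n=10):
    """Mean and standard deviation of the first n packet sizes of a flow."""
    w = packet_sizes[:n]
    mean = sum(w) / len(w)
    var = sum((s - mean) ** 2 for s in w) / len(w)
    return (mean, var ** 0.5)

def nearest_centroid(feature, centroids):
    """Assign a flow to the class whose feature centroid is closest
    (squared Euclidean distance)."""
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(feature, centroids[c])))
```

Any of the classifiers named in the abstract would consume the same kind of per-flow feature vector, just with a learned decision boundary instead of fixed centroids.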
Article
With the rapid development of mobile networks and mobile communications, various content can be conveniently accessed. Recently, some novel caching-based approaches have been proposed to enhance the QoE in wireless communications, but it is still challenging to develop advanced content delivery in wireless communications due to limitations of conventional D2D communications and caching strategies. To address these challenges, this article proposes content-oriented caching on the mobile edge for wireless communications. Specifically, it designs novel mobile edge caching based on popular content recommendations. Sufficient experiments demonstrate that the proposed approaches can effectively decrease the traffic load by using an acceptable volume of storage resources on mobile edges.
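A popularity-driven edge cache of the kind described can be sketched as keeping the most-requested items and evicting the least popular one when space runs out (a toy model; the article's recommendation-based strategy is more elaborate):

```python
class PopularityCache:
    """Toy mobile-edge cache: admit an item only once it is more popular
    than the least popular cached item, evicting that item."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}   # request count per item
        self.store = set() # items currently cached at the edge

    def request(self, item):
        """Record a request; return True on a cache hit, False on a miss."""
        self.counts[item] = self.counts.get(item, 0) + 1
        hit = item in self.store
        if not hit:
            if len(self.store) < self.capacity:
                self.store.add(item)
            else:
                coldest = min(self.store, key=lambda i: self.counts[i])
                if self.counts[coldest] < self.counts[item]:
                    self.store.remove(coldest)
                    self.store.add(item)
        return hit
```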
Article
Google's Quick UDP Internet Connections (QUIC) protocol, which implements TCP-like properties at the application layer atop a UDP transport, is now used by the vast majority of Chrome clients accessing Google properties, but has no formal state machine specification, limited analysis, and ad-hoc evaluations based on snapshots of the protocol implementation in a small number of environments. Further frustrating attempts to evaluate QUIC is the fact that the protocol is under rapid development, with extensive rewriting of the protocol occurring over the scale of months, making individual studies of the protocol obsolete before publication. Given this unique scenario, there is a need for alternative techniques for understanding and evaluating QUIC when compared with previous transport-layer protocols. First, we develop an approach that allows us to conduct analysis across multiple versions of QUIC to understand how code changes impact protocol effectiveness. Next, we instrument the source code to infer QUIC's state machine from execution traces. With this model, we run QUIC in a large number of environments, including desktop and mobile, wired and wireless, and use the state machine to understand differences in transport- and application-layer performance across multiple versions of QUIC and in different environments. QUIC generally outperforms TCP, but we also identified performance issues related to window sizes, re-ordered packets, and multiplexing large numbers of small objects; further, we identify that QUIC's performance diminishes on mobile devices and over cellular networks.
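Inferring a state machine from execution traces reduces, in its simplest form, to collecting every observed (state, next-state) pair into a transition relation (the state names below are hypothetical placeholders, not QUIC's actual states):

```python
def infer_state_machine(traces):
    """Build a transition relation from execution traces: each trace is a
    sequence of observed states, and every consecutive pair becomes an
    edge in the inferred machine."""
    transitions = {}
    for trace in traces:
        for cur, nxt in zip(trace, trace[1:]):
            transitions.setdefault(cur, set()).add(nxt)
    return transitions
```

Running the instrumented protocol in many environments then enriches this relation with transitions that a single environment would never exercise.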