Article

Traffic Models in Broadband Networks

Abstract

Traffic models are at the heart of any performance evaluation of telecommunications networks. An accurate estimation of network performance is critical for the success of broadband networks. Such networks need to guarantee an acceptable quality of service (QoS) level to the users. Therefore, traffic models need to be accurate and able to capture the statistical characteristics of the actual traffic. We survey and examine traffic models that are currently used in the literature. Traditional short-range and non-traditional long-range dependent traffic models are presented. The number of parameters needed, parameter estimation, analytical tractability, and the ability of traffic models to capture the marginal distribution and auto-correlation structure of the actual traffic are discussed.


... State-of-the-art traffic predictors are based on different machine learning algorithms including neural networks [1], wavelet transform [18], kernel-based methods [2], time-series analysis [3], and LASSO [20]. ARIMA is a class of statistical models for analyzing and forecasting time-series data that has been used for SRD traffic modelling and prediction [21]. FARIMA is a generalization of the ARIMA model in which non-integer values of the differencing parameter are allowed, so that it can capture LRD traffic [22]. ...
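As a rough illustration of the ARIMA-style predictors mentioned in this excerpt, the sketch below (Python, assuming statsmodels is installed; the traffic series y and the order (2, 1, 1) are placeholder choices, not those of the cited works) fits a low-order ARIMA model and produces a short-horizon forecast.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical traffic trace; in practice this would be a measured byte-count series.
rng = np.random.default_rng(0)
y = rng.gamma(shape=2.0, scale=10.0, size=500)

model = ARIMA(y, order=(2, 1, 1))      # (p, d, q): d=1 differences the series once
fit = model.fit()
print(fit.forecast(steps=12))          # predict the next 12 samples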
... In DP clustering, a covariance matrix of size N × N is required for the calculation of the occupation number in Equation (19). Each element of the covariance matrix is the result of Equation (21). The time complexity for the calculation of such a matrix in DP clustering is on the order of O(N²). ...
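The quadratic cost noted here follows simply from filling every entry of an N x N covariance matrix. A minimal sketch, assuming a squared-exponential kernel purely for illustration (the cited Equation (21) may define a different element):

import numpy as np

def covariance_matrix(x, length_scale=1.0, variance=1.0):
    # The explicit double loop makes the O(N^2) element count visible.
    n = len(x)
    k = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            k[i, j] = variance * np.exp(-0.5 * ((x[i] - x[j]) / length_scale) ** 2)
    return k

K = covariance_matrix(np.linspace(0.0, 10.0, 200))   # 200 x 200 -> 40,000 kernel evaluations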
... The main difference between these datasets is the time-scale of the traffic samples, as illustrated in Table 2. Therefore, the prediction task for these datasets is to estimate target values (i.e., y_i) according to the feature vector (i.e., x_i). We compared our model with different algorithms, including traditional time-series algorithms (i.e., ARIMA [21], FARIMA [22]), supervised regression methods (i.e., standard GPR [13], SVR [6], LASSO [20]), ensemble learning methods (i.e., GTB [31], RF [32], ERT [33]), and a deep learning time-series predictor (i.e., LSTM [23]). These models have been discussed in Section II. ...
Article
Full-text available
Network traffic prediction is substantial for network optimization and resource management. However, designing an efficient predictive model that considers different traffic characteristics, including periodicity, nonlinearity, and nonstationarity, is challenging. Recently, ensemble learning has been attracting much attention from researchers in the machine learning community. Although ensemble learning has shown exceptional performance in modelling intricate problems, it may not be able to handle the varying patterns and chaotic behaviour that are typical properties of traffic data (and many other time-series problems). For this reason, ensemble methods show limited prediction accuracy in network traffic modelling. We address this issue by proposing an ensemble of learners for time-series prediction that considers the accuracy of individual learners as well as the diversity among their outcomes. Each learner contributes to the optimization process by finding the optimal accuracy-diversity balance in a segment of the feature space. This divide-and-conquer approach avoids complicated objective functions with many local optima while fitting the ensemble model on large datasets. Experimental results on real traffic traces show that our proposed method outperforms other state-of-the-art predictors by 12% on average in prediction accuracy across different datasets. Index Terms: Network traffic prediction, machine learning, ensemble learning, Gaussian process regression (GPR), Dirichlet process (DP) clustering.
... For example, when a request from a player of an online game occurs, the corresponding traffic is generated by the activated application; the traffic is then transmitted to specific server nodes deployed by the game provider, where the transmission is operated according to the game application process and the network routing strategy. Numerous existing efforts have been devoted to describing application operation, most of them focusing on characterizing the traffic element in applications according to different user request features [15-21]. In the early literature, Poisson, Markov, or autoregressive processes are used to describe the traffic generation model from different user request behaviors and are applied to characterize some simple network applications [15,16]. ...
... Numerous existing efforts have been devoted to describing application operation, most of them focusing on characterizing the traffic element in applications according to different user request features [15-21]. In the early literature, Poisson, Markov, or autoregressive processes are used to describe the traffic generation model from different user request behaviors and are applied to characterize some simple network applications [15,16]. In these works, network traffic is analyzed as having a short-range dependence (SRD) law, which means that the node traffic has a typical burst length that tends to be smoothed by averaging over a wide time scale. ...
... Thus the flux-fluctuation law between the traffic mean and variance is portrayed as Eq. (15), with the constant traffic rate, by ...
Article
A network application serves as the response process to a user request. The network application, taking traffic as its operation carrier, is closely related to process features. Existing investigations mainly focus on traffic generation by capturing request features, which is insufficient to characterize application operation with specific processes. Thus, this paper presents a network application model with an operational process feature, where the operation process is introduced into network applications in addition to characterizing the traffic element from the request feature. The process feature is represented by random, customized, and routine processes, while the request feature is described by a heavy-tailed ON/OFF source. Our analysis and simulation show that the traffic of our model admits the ubiquitous statistical laws, namely self-similarity and the mean-variance relationship, which further validates our model. Moreover, compared with a traffic generation model that does not consider complex process features, where the traffic distribution is found to be positively correlated with node betweenness centrality (BC), the traffic of our model is both positively related to node BC and much higher on nodes in the specific processes. The proposed model is thus beneficial for traffic control and network enhancement with complex process features.
... Store-and-forward routing approaches require the connectivity and load of each link in the future to construct a contact graph or time-expanded graph (TEG). For a network with a static topology, we can predict traffic matrices using statistical methods (e.g., ARIMA [9], wavelet transform [10]) or machine learning (e.g., recurrent neural networks [11], long short-term memory networks [12]) and calculate link loads. However, each user's access beam and satellite are time-varying due to the mobility of satellites. ...
... The traffic matrix T_t represents the traffic demand of delay-sensitive flows between each pair of satellites. The predicted value T̂_t of T_t can be calculated from a group of historical records T_{t-1}, T_{t-2}, ..., T_0, using traffic matrix prediction approaches [9]-[12]. Then, the predicted link load can be obtained by ...
Preprint
Satellite networks provide communication services to global users with an uneven geographical distribution. In densely populated regions, inter-satellite links (ISLs) often experience congestion, blocking traffic from other links and leading to low link utilization and throughput. In such cases, delay-tolerant traffic can be withheld by moving satellites and carried to navigate congested areas, thereby mitigating link congestion in densely populated regions. Through rational store-and-forward decision-making, link utilization and throughput can be improved. Building on this foundation, this letter centers its focus on learning-based decision-making for satellite traffic. First, a link load prediction method based on topology isomorphism is proposed. Then, a Markov decision process (MDP) is formulated to model store-and-forward decision-making. To generate store-and-forward policies, we propose reinforcement learning algorithms based on value iteration and Q-Learning. Simulation results demonstrate that the proposed method improves throughput and link utilization while consuming less than 20% of the time required by constraint-based routing.
... In addition to deep learning techniques, many researchers have considered introducing regression models for traffic prediction. Autoregressive models, such as ARMA, have been used because of their simplicity [15]. To adapt to long-term traffic transitions, regression models based on support vector machine regressors, gradient boosting, and Gaussian processes have also been considered. ...
... We mainly evaluate the advantageous effects in terms of the reduction of learning time by extracting important variables from the datasets and the improvement of prediction accuracy by the relearning approach. Therefore, we only use RNN, elastic net, random forest, and Gaussian process as the components of the ensemble learning framework although our framework can easily support other learning models, including the methods discussed in [15]- [18]. ...
Article
Full-text available
Network function virtualization (NFV) enables network operators to flexibly provide diverse virtualized functions for services such as the Internet of Things (IoT) and mobile applications. To meet multiple quality of service (QoS) requirements under time-varying network environments, infrastructure providers must dynamically adjust the amount of computational resources, such as CPU, assigned to virtual network functions (VNFs). To provide agile resource control and adaptiveness, predicting the virtual server load via machine learning technologies is an effective approach to the proactive control of network systems. In this paper, we propose an adjustment mechanism for regressors based on forgetting and dynamic ensemble, executed in a shorter time than that of our previous work. The framework includes a method for reducing training data based on sparse model regression. By making a short list of training data derived from the sparse regression model, the relearning time can be reduced to about 57% without degrading provisioning accuracy.
... First described by [9], it is still used for modeling voice traffic in the public telephone network. In these networks, the ON and OFF periods are modeled by exponentially decaying distributions [10]. Figure 1 shows one of the most popular models for voice traffic generation, the ON-OFF model [10]. ...
... In these networks, the ON and OFF periods are modeled by exponentially decaying distributions [10]. Figure 1 shows one of the most popular models for voice traffic generation, the ON-OFF model [10]. In this model, packets are generated only during the periods when the system state is ON. ...
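A minimal sketch of the ON-OFF voice source described in these excerpts, assuming exponentially distributed ON and OFF periods and a constant packet rate while ON (the rates below are placeholder values, not those of the cited works):

import numpy as np

def on_off_arrivals(duration_s, mean_on=1.0, mean_off=1.35, pkt_rate=50.0, seed=0):
    # Packet arrival times of an ON-OFF source; packets are emitted only in ON periods.
    rng = np.random.default_rng(seed)
    t, arrivals = 0.0, []
    while t < duration_s:
        on_len = rng.exponential(mean_on)                  # talkspurt (ON) period
        arrivals.extend(t + k / pkt_rate for k in range(int(on_len * pkt_rate)))
        t += on_len + rng.exponential(mean_off)            # silence (OFF) period
    return [a for a in arrivals if a < duration_s]

print(len(on_off_arrivals(60.0)), "packets generated in one minute")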
... The problem of saturation inevitably has a remarkable impact on traffic, services, and applications in both M2M and H2H [3]. Cellular systems (smart sensors, mobile telephones, base stations, satellite systems, etc.) have recently spread and are pushing the existing technologies to their limits [4] in terms of the complexity of their processing algorithms. Mobile operators spend $20 billion per year to overcome network failures and service degradations, according to Heavy Reading [5]. ...
... Fourth "Emergency" storm: when Group (1), Group (2), Group (3), and Group (4) send their payloads all together. In the results shown in Fig. 9, we realize that: ...
Conference Paper
Full-text available
Due to its unique pattern and different goals, Machine-to-Machine (M2M) traffic necessitates new traffic models. The real challenge is striking a balance between model accuracy and dealing with a massive number of M2M devices that must all work in unison. On the one hand, due to their reliability, "Source traffic models" have a competitive advantage over "Aggregated traffic models". On the other hand, their complexity is expected to make managing the exponential growth of M2M devices difficult. In this paper, we propose a Markov Modulated Poisson Processes (MMPP) framework for studying the effects of M2M heterogeneous traffic as well as Human-to-Human (H2H) traffic. To characterize the H2H/M2M coexistence, Markov chains were used as a stochastic process tool. When using the traditional evolved Node B (eNodeB), our simulation results show that the network's service completion rate will suffer significantly. In the worst-case scenario, when an accumulative storm of M2M requests tries to access the network at the same time, the degradation reaches 8%. However, by leasing 72 resources reserved for M2M traffic and using our "Coexistence of Heterogeneous traffic Analyzer and Network Architecture for Long term evolution" (CHANAL) solution, we can achieve a completion rate of 96%.

1 Introduction

Machine-to-Machine (M2M) communications and Human-to-Human (H2H) communications are expected to play a major role in any future wireless network. Although M2M and H2H communications have complementary goals in different fields (e.g., civil transportation, electrical power networks, medical treatment, industrial automation, etc.), M2M communications act as a proxy for replacing or limiting numerous human interventions through Long Term Evolution-Advanced (LTE-A) intelligent systems [1]. Taking into account the fact that M2M features should meet the requirements of a rejuvenating technology, the differences between H2H and M2M traffic features can hinder LTE-A's unprecedented development. The coexistence of H2H and M2M traffic involves many challenges that could arise in a common network, reducing its effectiveness as a result of the incompatibility of H2H and M2M patterns. Contrary to H2H traffic, M2M traffic is highly homogeneous because it uses small chunks of data along with small transfer rates, usually with predictable times and durations of communication [2]. However, with M2M synchronization behavior and a variety of applications with different payloads, times, and data rates, accumulative traffic from different sources is expected to be received, which forms heterogeneous traffic that very rapidly saturates the network bandwidth. The problem of saturation inevitably has a remarkable impact on traffic, services, and applications in both M2M and H2H [3]. Cellular systems (smart sensors, mobile telephones, base stations, satellite systems, etc.) have recently spread and are pushing the existing technologies to their limits [4] in terms of the complexity of their processing algorithms. Mobile operators spend $20 billion per year to overcome network failures and service degradations, according to Heavy Reading [5]. As a result, one of the main challenges for mobile operators, researchers, and the 3rd Generation Partnership Project (3GPP) community is an efficient radio communication strategy [6]. In this context, the main performance of homogeneous M2M traffic and H2H traffic is characterized mathematically in our previous work [7].
We used a mathematical model called "Coexistence Analyzer and Network Architecture for Long term evolution" (CANAL) to mathematically characterize the key performance of homogeneous M2M traffic as well as H2H traffic.

2 Traffic modelling

Traffic modelling can be described by stochastic processes that match the behavior of the physical quantities of measured data traffic [8]. Traffic models are classified as source traffic models (e.g., voice, video, and data) and aggregated traffic models (e.g., high-speed links, backbone networks, and the Internet). Source traffic simulators (e.g., the SimuLTE simulator [9], the OPtimized Network Engineering Tool (OPNET) [10], the Objective Modular NeTwork (OMNeT) [11], etc.) generate packets that reflect real traffic behavior in terms of sizes and intervals. In [12], the OPNET modeler is used to analyze a number of typical source traffic models, including the two-state MMPP, ON/OFF, and Interrupted Poisson Process (IPP) models. Our previous work in [13] focused on M2M traffic load in disastrous situations. The ability of an evolved Node B (eNodeB) to deal with a fixed number of H2H traffic flows and an increasing number of M2M requests attempting to access an LTE-A network simultaneously is examined in all scenarios using a source traffic simulator such as SimuLTE. When we consider that, according to [14], more than 52,000 devices per cell are expected to try to send their payloads at the same time during a disaster, we realize that source traffic models become extremely heavy to execute in such cases, which necessitates the use of aggregated traffic modelling. The goal of aggregated traffic models (e.g., the Simulink simulator [15]) is to find a good approximation of the arrival process of multiple devices while maintaining a good balance between accuracy and simulation efficiency [16]. For example, in [7], we studied the mutual impact of H2H and M2M traffic in dense areas and emergency situations. We also ran several simulations based on the architecture proposed in [15], assuming a single LTE-A network with average arrival rates (λ1; λ2) and service rates (µ1; µ2) for H2H and M2M traffic. According to the simulation results, a prioritized LTE-A system could handle more requests in less time for both M2M and H2H traffic.
... It attracts the interest of researchers in various fields, ranging from computer scientists and statisticians to physicists; see, e.g., Frost and Melamed [2], Fontugne et al. [3], Beran [4,5], Markelov et al. [6], Nguyen et al. [7], just to mention a few. An early traffic model is fractional Gaussian noise (fGn); see Leland et al. [8], Beran et al. [9], Willinger and Paxson [10], Michiel and Laevens [11], Adas [12], Lee and Fapojuwo [13], Li et al. [14]. Denote by C_fGn(τ) the autocorrelation function (ACF) of fGn. ...
... Thus, (4.8) may yet suggest a rule that exhibits the advantage of gfGn in comparison with fGn in traffic modeling. In fact, d(r, B_gfGn) is less than d(r, B_fGn) by around two or three orders of magnitude; see Tables D.4. The conventional model of fGn is widely used in various fields, including computer communication, see e.g., [8-14,16,17,46,47], earth sciences [31], and so on. The novelty of the present gfGn model is in introducing a parameter a so that the lag is fractional. ...
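For reference, the ACF of fGn referred to in these excerpts has the standard closed form C_fGn(τ) = (σ²/2)(|τ+1|^(2H) - 2|τ|^(2H) + |τ-1|^(2H)); the sketch below evaluates it for an example Hurst parameter (H = 0.8 is an arbitrary illustrative value).

import numpy as np

def fgn_acf(tau, hurst=0.8, sigma2=1.0):
    # Autocorrelation of fractional Gaussian noise at integer lags tau.
    tau = np.abs(np.asarray(tau, dtype=float))
    return 0.5 * sigma2 * ((tau + 1) ** (2 * hurst)
                           - 2 * tau ** (2 * hurst)
                           + np.abs(tau - 1) ** (2 * hurst))

print(fgn_acf(np.arange(11)))   # slow, hyperbolic-like decay for H > 0.5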
Article
The highlights of this paper are in two aspects. First, we introduce a type of novel fractional noise termed generalized fractional Gaussian noise (gfGn). Its autocorrelation function, power spectral density function, and fractal dimension are given. Second, a case study using gfGn for modeling real traffic traces shows that the gfGn model is more accurate than the conventional fractional Gaussian noise (fGn) model in traffic modeling.
... To sum up, the features extracted from one network packet form a 21-dimensional numerical vector, as represented in Table II. ...
... In this case, one can assume that temporal packets of device network traffic follow a statistical Markov process, where observations are emitted from a finite set of "hidden states" (unobserved states), according to a given distribution (as shown in [18]). Hence, HMMs can detect repeatable patterns of network traffic traces and model device behavior. ...
... In an M/M/1 queuing model, the first part represents the input process, the second the service distribution, and the third the number of servers. The queue can be configured to work according to a First Input First Output (FIFO), Last Input First Output (LIFO), or priority discipline, with a flexible buffer size. ...
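The classic steady-state quantities implied by this M/M/1 notation follow from the utilization ρ = λ/μ; a small sketch with purely illustrative arrival and service rates:

def mm1_metrics(lam, mu):
    # Standard M/M/1 results; only valid for a stable queue (lam < mu).
    rho = lam / mu                 # server utilization
    return {
        "utilization": rho,
        "mean_in_system": rho / (1.0 - rho),
        "mean_time_in_system": 1.0 / (mu - lam),
        "mean_wait_in_queue": rho / (mu - lam),
    }

print(mm1_metrics(lam=8.0, mu=10.0))   # e.g., 8 arrivals/s served at 10/s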
... Our simulation results show that the network will face a huge degradation in the service completion rate when using the classical eNodeB. "Traffic modeling" can be represented by stochastic processes that match the behavior of the physical quantities of measured data traffic [91]. Traffic models are classified as source traffic models (e.g., video, data, and voice) and aggregated traffic models (e.g., backbone networks, the Internet, and high-speed links). ...
Thesis
Full-text available
This Ph.D. work aims to study the Machine-to-Machine (M2M) congestion overload problem and the mutual impact between M2M and Human-to-Human (H2H) traffic in IoT (Internet of Things) environments, specifically during disaster events. M2M devices, with their expected exponential growth in the near future, will be one of the significant factors influencing all mobile networks. Inevitably, the expected huge number of M2M devices causes saturation problems and leads to remarkable impacts on both M2M and H2H traffic, services, and applications. To study the M2M and H2H mutual influences, we create a new platform model based on a Continuous-Time Markov Chain (CTMC) to simulate, analyze, and measure radio access strategies, due to the limitations of existing Long Term Evolution-Advanced (LTE-A) simulators (i.e., SimuLTE) in terms of massive numbers of M2M devices, parameter flexibility, and statistical tools. Additionally, during disaster events, a fast depletion of the limited bandwidth assigned to M2M devices in Long Term Evolution for Machines (LTE-M) and Narrow Band for IoT (NB-IoT) networks is expected due to the high arrival rate of M2M network access requests. To address this problem, we propose a new approach named Adaptive eNodeB (A-eNB) for both LTE-M and NB-IoT networks. The A-eNB can solve the overload problem gradually, while keeping the H2H traffic Quality of Service (QoS) from being badly affected. The network adaptation is provided through a dynamic LTE-M resource reservation aiming to increase the number of M2M connections accessing the LTE-M/NB-IoT network and to decrease the impact on H2H traffic.
... According to this model, traffic can be classified into two states: ON, when data is actively transmitted, and OFF, when data transmission is idle. The authors of [36] further explain that data is transmitted in fixed intervals, and this is illustrated in Fig. 2. In the figure, individual squares of different colors represent Flow 1, Flow 2, and Flow 3, denoted as F1, F2, and F3, respectively. These squares signify the packets being transmitted during periods of activity (ON), while the absence of squares represents the periods of inactivity (OFF) when no data is being sent. ...
Article
Full-text available
OpenFlow-compliant commodity switches face challenges in efficiently managing flow rules due to the limited capacity of expensive high-speed memories used to store them. The accumulation of inactive flows can disrupt ongoing communication, necessitating an optimized approach to flow rule timeouts. This paper proposes Delayed Dynamic Timeout (DDT), a Reinforcement Learning-based approach to dynamically adjust flow rule timeouts and enhance the utilization of a switch’s flow table(s) for improved efficiency. Despite the dynamic nature of network traffic, our DDT algorithm leverages advancements in Reinforcement Learning algorithms to adapt and achieve flow-specific optimization objectives. The evaluation results demonstrate that DDT outperforms static timeout values in terms of both flow rule match rate and flow rule activity. By continuously adapting to changing network conditions, DDT showcases the potential of Reinforcement Learning algorithms to effectively optimize flow rule management. This research contributes to the advancement of flow rule optimization techniques and highlights the feasibility of applying Reinforcement Learning in the context of SDN.
... According to this model, traffic can be classified into two states: ON, when data is actively transmitted, and OFF, when data transmission is idle. The authors of [33] further explain that data is transmitted in fixed intervals, and this is illustrated in Figure 2. In the figure, individual squares of different colors represent Flow 1, Flow 2, and Flow 3, denoted as F1, F2, and F3, respectively. These squares signify the packets being transmitted during periods of activity (ON), while the absence of squares represents the periods of inactivity (OFF) when no data is being sent. ...
Preprint
Full-text available
OpenFlow-compliant commodity switches face challenges in efficiently managing flow rules due to the limited capacity of expensive high-speed memories used to store them. The accumulation of inactive flows can disrupt ongoing communication, necessitating an optimized approach to flow rule timeouts. This paper proposes Delayed Dynamic Timeout (DDT), a Reinforcement Learning-based approach to dynamically adjust flow rule timeouts and enhance the utilization of a switch's flow table(s) for improved efficiency. Despite the dynamic nature of network traffic, our DDT algorithm leverages advancements in Reinforcement Learning algorithms to adapt and achieve flow-specific optimization objectives. The evaluation results demonstrate that DDT outperforms static timeout values in terms of both flow rule match rate and flow rule activity. By continuously adapting to changing network conditions, DDT showcases the potential of Reinforcement Learning algorithms to effectively optimize flow rule management. This research contributes to the advancement of flow rule optimization techniques and highlights the feasibility of applying Reinforcement Learning in the context of SDN.
... Fundamental network algorithms and key performance metrics in telecommunication networks and services, such as routing, delay, age of information, or buffer sizing, rely on accurate statistical traffic models capable of replicating the temporal and spatial correlation observable in many diverse packet streams [1,2]. Further, current networks have been architected to support data services for traffic sources that are poorly understood or still insufficiently observed, such as mMTC (massive machine-type communication) or URLLC (ultra reliable low-latency communications) in 5G/6G, for which simple, reproducible, and good traffic models are yet to be developed. ...
Article
Full-text available
In the last years of the past century, complex correlation structures were empirically observed, both in aggregated and individual traffic traces, including long-range dependence, large-timescale self-similarity and multi-fractality. The use of stochastic processes consistent with these properties has opened new research fields in network performance analysis and in simulation studies, where the efficient synthetic generation of samples is one of the main topics. Nowadays, networks have to support data services for traffic sources that are poorly understood or still insufficiently observed, for which simple, reproducible, and good traffic models are yet to be identified, and it is reasonable to expect that previous generators could be useful. For this reason, as a continuation of our previous work, in this paper, we describe efficient and online generators of the correlation structures of the generalized fractional noise process (gfGn) and the generalized Cauchy (gC) process, proposed recently. Moreover, we explain how we can use the Whittle estimator in order to choose the parameters of each process that give rise to a better adjustment of the empirical traces.
... Specifically, a Poisson process characterizes the inter-arrival duration as independently and exponentially distributed with a fixed rate parameter λ, i.e., P(T > t) = e^(-λt). The Pareto distribution [21] has been applied to model self-similarity in packet traffic of the wide-area network [6,18]. It models the inter-arrival time by a power-law probability distribution that follows the probability density function f(x) = α x_m^α / x^(α+1), where α is the shape parameter and x_m is the minimum possible value of x (normally, x_m = 1) [21]. ...
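A short sketch contrasting the two inter-arrival models in this excerpt, drawing exponential (Poisson-process) and Pareto inter-arrival times; the parameter values (λ = 2, α = 1.5, x_m = 1) are illustrative only.

import numpy as np

rng = np.random.default_rng(42)
n = 10_000

lam = 2.0
exp_iat = rng.exponential(scale=1.0 / lam, size=n)      # P(T > t) = exp(-lam * t)

alpha, x_m = 1.5, 1.0
pareto_iat = x_m * (1.0 + rng.pareto(alpha, size=n))    # classical Pareto from numpy's Lomax form

print("exponential mean/max:", exp_iat.mean(), exp_iat.max())
print("pareto      mean/max:", pareto_iat.mean(), pareto_iat.max())   # heavy tail -> much larger max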
Preprint
Full-text available
In this paper, we first carry out to our knowledge the first in-depth characterization of control-plane traffic, using a real-world control-plane trace for 37,325 UEs sampled at a real-world LTE Mobile Core Network (MCN). Our analysis shows that control events exhibit significant diversity in device types and time-of-day among UEs. Second, we study whether traditional probability distributions that have been widely adopted for modeling Internet traffic can model the control-plane traffic originated from individual UEs. Our analysis shows that the inter-arrival time of the control events as well as the sojourn time in the UE states of EMM and ECM for the cellular network cannot be modeled as Poisson processes or other traditional probability distributions. We further show that the reasons that these models fail to capture the control-plane traffic are due to its higher burstiness and longer tails in the cumulative distribution than the traditional models. Third, we propose a two-level hierarchical state-machine-based traffic model for UE clusters derived from our adaptive clustering scheme based on the Semi-Markov Model to capture key characteristics of mobile network control-plane traffic -- in particular, the dependence among events generated by each UE, and the diversity in device types and time-of-day among UEs. Finally, we show how our model can be easily adjusted from LTE to 5G to support modeling 5G control-plane traffic, when the sizable control-plane trace for 5G UEs becomes available to train the adjusted model. The developed control-plane traffic generator for LTE/5G networks is open-sourced to the research community to support high-performance MCN architecture design R&D.
... The main approaches to time sequence prediction are State Space Models (SSMs) and sequential models that frequently use deep learning (DL) [29]. The most representative SSMs are Auto-Regressive Integrated Moving Average (ARIMA) models and variants of these, which have been widely adopted for mobile traffic forecasting [30][31][32]. Their major drawback is that they require manual parameter selection on a sequence-by-sequence basis. In addition, they perform poorly when inputs exhibit high variability. ...
Article
The Covid-19 pandemic has forced the workforce to switch to working from home, which has put significant burdens on the management of broadband networks and called for intelligent service-by-service resource optimization at the network edge. In this context, network traffic prediction is crucial for operators to provide reliable connectivity across large geographic regions. Although recent advances in neural network design have demonstrated potential to effectively tackle forecasting, in this work we reveal based on real-world measurements that network traffic across different regions differs widely. As a result, models trained on historical traffic data observed in one region can hardly serve in making accurate predictions in other areas. Training bespoke models for different regions is tempting, but that approach bears significant measurement overhead, is computationally expensive, and does not scale. Therefore, in this paper we propose TransMUSE (Transferable Traffic Prediction in MUlti-Service Edge Networks), a novel deep learning framework that clusters similar services, groups edge-nodes into cohorts by traffic feature similarity, and employs a Transformer-based Multi-service Traffic Prediction Network (TMTPN), which can be directly transferred within a cohort without any customization. We demonstrate that TransMUSE exhibits imperceptible performance degradation in terms of mean absolute error (MAE) when forecasting traffic, compared with settings where a model is trained for each individual edge node. Moreover, our proposed TMTPN architecture outperforms the state-of-the-art, achieving up to 43.21% lower MAE in the multi-service traffic prediction task. To the best of our knowledge, this is the first work that jointly employs model transfer and multi-service traffic prediction to reduce measurement overhead, while providing fine-grained accurate demand forecasts for edge services provisioning.
... The main approaches to time sequence prediction are State Space Models (SSMs) and sequential models that frequently use deep learning (DL) [28]. The most representative SSMs are Auto-Regressive Integrated Moving Average (ARIMA) models and variants of these, which have been widely adopted for mobile traffic forecasting [29,30,31]. Their major drawback is that they require manual parameter selection on a sequence-by-sequence basis. ...
Preprint
Full-text available
The Covid-19 pandemic has forced the workforce to switch to working from home, which has put significant burdens on the management of broadband networks and called for intelligent service-by-service resource optimization at the network edge. In this context, network traffic prediction is crucial for operators to provide reliable connectivity across large geographic regions. Although recent advances in neural network design have demonstrated potential to effectively tackle forecasting, in this work we reveal based on real-world measurements that network traffic across different regions differs widely. As a result, models trained on historical traffic data observed in one region can hardly serve in making accurate predictions in other areas. Training bespoke models for different regions is tempting, but that approach bears significant measurement overhead, is computationally expensive, and does not scale. Therefore, in this paper we propose TransMUSE, a novel deep learning framework that clusters similar services, groups edge-nodes into cohorts by traffic feature similarity, and employs a Transformer-based Multi-service Traffic Prediction Network (TMTPN), which can be directly transferred within a cohort without any customization. We demonstrate that TransMUSE exhibits imperceptible performance degradation in terms of mean absolute error (MAE) when forecasting traffic, compared with settings where a model is trained for each individual edge node. Moreover, our proposed TMTPN architecture outperforms the state-of-the-art, achieving up to 43.21% lower MAE in the multi-service traffic prediction task. To the best of our knowledge, this is the first work that jointly employs model transfer and multi-service traffic prediction to reduce measurement overhead, while providing fine-grained accurate demand forecasts for edge services provisioning.
... In theoretical models or simulations, Ornstein-Uhlenbeck noise may be added to a native subdiffusive CTRW process to account for that localization noise. The associated expectation of the time-averaged MSD becomes [247] ... The usage of FBM is not restricted to hydrology; it is also used for the description of data traffic in local area networks [248-250] and for particle motion in viscoelastic or crowded environments [133,251]. However, the suitability of FBM for economic modeling is still discussed. ...
Thesis
High-precision tracking of nanoparticles via fluorescence-optical methods is a current research area with many applications. While optical methods are usually characterized by causing minimal damage to the sample, their ability to resolve the sample spatially is limited. Therefore, many modern measurement methods make use of tricks to circumvent the diffraction limit and to obtain information at the nanometer scale. In this dissertation, a fluorescence optical method for tracking single nanoparticles, called single-particle orbit tracking (SPOT), is further developed, investigated, and applied. One focus of those new developments was to update the existing experimental setup so that three-dimensional localization can take place instead of only two-dimensional tracking. Another important point of improvement was the extension of the control logic to include additional parameters and signals. The required technical modifications demanded a renewed mathematical modeling of the method, as well as analysis of the measurement errors and setup performance. While the temporal resolution of the experimental setup could be improved, an axial localization of the particles was only achievable at the expense of the accuracy in the lateral direction. Reference samples were used to experimentally validate the upgraded technique and to point out existing issues. A major problem in the application of SPOT lies in measurement artifacts, which can, for example, mask existing anomalies in the diffusion behavior of the tracked nanoparticles and would lead to misinterpretations in unknown systems. Parameter studies with variation of easily accessible quantities, such as the solvent viscosity, the particle size, or the considered time scales, offer possible remedies and concurrently show the importance of reference systems. A new field of research is the investigation of the diffusion behavior of nanoparticles in complex filter materials. In this work, SPOT was used to study nanoparticles in a nanoporous triblock terpolymer-based membrane. Using conventional methods, the non-destructive characterization of such a system at room temperature and in a liquid-filled state is a great challenge. With SPOT, however, the size distribution of the voids could be determined non-invasively. For this purpose, nanometer-sized polymer particles were tracked during their thermal movement through the pore structure of the filter material. At the same time, indications of a suitable statistical model for the description of the particle motion were collected. Theoretical parameters for normal and anomalous diffusion in harmonic potentials were explicitly compared with experimentally determined values. It was shown that the particle motion can be described mainly by confined Brownian motion, but there exists a weak influence of anomalous diffusion components, which can be best described by the so-called fractional Langevin equation.
... However, finding the next transmission for a non-periodic STA is challenging. To find the next transmission for non-periodic (i.e., event-based) traffic, a Poisson distribution is commonly used, where the probability mass function f(d) can be calculated as [3]: ...
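The exact form of f(d) used in the cited work is elided above; purely as an illustration of the Poisson distribution it refers to, the probability mass of a Poisson count with mean mu can be evaluated as follows (mu = 3 is an arbitrary example).

import math

def poisson_pmf(k, mu):
    # P(K = k) for a Poisson random variable with mean mu.
    return math.exp(-mu) * mu ** k / math.factorial(k)

print([round(poisson_pmf(k, mu=3.0), 4) for k in range(8)])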
Preprint
Full-text available
The recent IEEE 802.11ah amendment has proven to be suitable for supporting large-scale device deployments in the Internet of Things (IoT). It is essential to provide a minimum level of Quality of Service (QoS) for critical applications such as industrial automation and healthcare. In this paper, we propose a QoS-aware Medium Access Control (MAC) layer solution to enhance network reliability and reduce critical traffic latency through an adaptive station grouping and a priority traffic scheduling scheme. The proposed grouping scheme calculates the current traffic load and distributes it among different RAW groups, considering the different requirements of the stations. The RAW scheduling scheme further provides priority slot access using a novel backoff scheme. A Markov-chain model is developed to study the throughput and latency behaviour of the traffic generated by the critical application. The proposed protocol shows a significant delay improvement for priority traffic. The overall throughput performance improves by up to 12.7% over the existing RAW grouping scheme.
... One of the most popular models for voice traffic generation is the ON-OFF model [12]. In this model, packets are generated only during the periods when the system state is ON. ...
... It describes the area the MT currently lies in, i.e., RAT1-only coverage, or the double coverage area, where the MT may use both RAT1 and RAT2. Figure 3.6 describes this mobility model: a two-state Markov process representing the movement of the particular MT in the two distinct coverage areas [86]. The transition probability matrix of this model is shown in (3-1). ...
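A minimal sketch of such a two-state Markov mobility model; the transition probability matrix below is an arbitrary illustrative choice, not the one given in (3-1) of the thesis.

import numpy as np

# state 0 = RAT1-only coverage, state 1 = double coverage (RAT1 + RAT2)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def simulate_mobility(steps, start=0, seed=0):
    rng = np.random.default_rng(seed)
    state, path = start, []
    for _ in range(steps):
        state = rng.choice(2, p=P[state])   # move according to the current row of P
        path.append(state)
    return path

path = simulate_mobility(1000)
print("fraction of time in double coverage:", sum(path) / len(path))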
Thesis
Full-text available
Heterogeneous networks combine multiple radio access technologies (RATs) in the access part of the network. An important challenge introduced by this combination is the selection of the most appropriate RAT to serve every session. There are several approaches to tackle this challenge, like the use of mathematical functions, of fuzzy logic and/or neural networks, and of policy-based schemes. This thesis proposes a novel mechanism to take such decisions. It is named UDS, in order to point out the three fundamental design choices incorporated: the mechanism is user (U) centric, it is distributed (D) amongst the user equipment (UE) and the core network, and it treats each session separately (S). The first design choice aims to involve the user in the final decision. The second one tries to have some pre-processing at the UE level, in order to minimise the messaging exchange over the radio interface. The third one involves the more flexible utilisation of the network resources. The main motivation to design the UDS mechanism was to propose a network selection solution that incorporates the user's preferences into the final decision, without violating the network operator policies. This alone is a challenging task, since the aforementioned targets are often contradictory. At the same time, the mechanism has to be feasible to implement, easily extendable, and free of scalability problems. The outcome is a family of three UDS mechanisms, each forming an evolved version of the previous one. The first one, simple UDS (s-UDS), has an algorithmic part running at the UE and a second one at the core network. The first part processes all combinations of sessions and RATs involved and outputs a prioritised list of the RATs that may serve each session (ongoing or new), according to the user's preferences. This list is then sent to the core network, triggering the corresponding algorithm running there. This algorithm takes the final decision, combining the prioritised list of the user preferences with the network operator policies. The feasibility of the mechanism is demonstrated via SDL specification and simulation, along with a test-bed implementation. In order to evaluate the performance of the proposed solution, a simulation model was used and the mechanism was compared against a pure load balancing scheme. The simulation results show that the proposed mechanism maximises the users' satisfaction and at the same time minimises the message exchange over the radio interface. The price to pay for these performance boosts is the considerable rise of the sessions' blocking probabilities, when compared to the load balancing scheme. The second mechanism, hybrid UDS (h-UDS), tries to combine the strong points of the s-UDS and the load balancing mechanisms. It uses a threshold, so that when the total available resources drop below it, hybrid load balancing is used. The latter functionality picks the least loaded RAT among the ones proposed in the user priority list. Thus, the final decision never violates the user preferences, while alleviating the session blocking probabilities and keeping the strong points of s-UDS. The drawback of this mechanism is that the fixed threshold value is dependent on the network parameters, it is not the same in every case, and it is not known beforehand. This means that even when an optimum value of this threshold is selected, it cannot be used in a different case scenario.
In order to alleviate the latter disadvantage, an evolved version is proposed, the adaptive UDS (a-UDS). It comes to eliminate the threshold value problem of the previous case. Instead of a fixed value, this time, the threshold is adaptive to the network conditions and it is not required to be known beforehand. The a-UDS also keeps all design choices of the aforementioned mechanism, keeping all their strong points. It maximises the users’ satisfaction, it minimises the messaging exchange at the radio interface, and it may keep the blocking probability very close to the load balancing mechanism. This is the final outcome of this thesis: a mechanism for the access network selection with all the aforementioned benefits and the provision for the network operators to fine-tune its functionality according to their policies. At the same time the final decision never violates the users’ preferences. The simulation results presented in this thesis lead to the same conclusion.
... All traffic graphs from 3 to 6 show one day of traffic, i.e., its data volume (throughput), where the timeline resolution is 5 min. Table 2 shows the aggregated results of the different user groups on the well-known traffic-describing values [30], including the PMR [31], the Squared Coefficient of Variation (SCV) [32], the skewness [33] of the probability distribution function, and the average data throughput at the user level. The table shows that the PMR values of the IoT-based user groups indicate significantly higher loads than those of the other groups, where the users are humans. ...
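The descriptors listed in this excerpt can be computed directly from a throughput time series; a small sketch using scipy's skewness estimator on a hypothetical trace with 5-minute bins (288 samples per day):

import numpy as np
from scipy.stats import skew

def traffic_descriptors(throughput):
    # Peak-to-mean ratio, squared coefficient of variation, skewness, and mean throughput.
    x = np.asarray(throughput, dtype=float)
    mean = x.mean()
    return {
        "PMR": x.max() / mean,
        "SCV": x.var() / mean ** 2,
        "skewness": skew(x),
        "mean_throughput": mean,
    }

demo = np.random.default_rng(1).gamma(shape=1.2, scale=5.0, size=288)
print(traffic_descriptors(demo))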
Article
Full-text available
The massive demand for broadband mobile network services is already quite successfully covered by 3G and 4G cellular mobile systems. The challenges for 5G are more diverse: answering the demands of Ultra-Reliable Low Latency Communications (URLLC) and massive Machine Type Communications (mMTC) users, besides elevating mobile broadband to the next level. While generic, high-level targets for KPIs (Key Performance Indicators) are widely communicated, it is not yet well understood how the various demands can affect the traffic mixture. Both the radio and core domains of the cellular network have to cope with traffic peaks and have to obey various QoS (Quality of Service) guarantees. In order to cover these gaps, traffic-related characteristics (data volume, signaling message types, and traffic peaks) should be determined, and this knowledge should be used during network planning, optimization, and service shaping. This paper aims to provide insights into user behavioral patterns for three key application areas: enhanced Mobile Broadband (eMBB), URLLC, and mMTC. Since traffic volume- and burst-related user behavior is not expected to change suddenly, current targeted data collection on legacy mobile network links provides a good basic insight into future 5G usage, at least in terms of traffic patterns. We have collected live pre-5G mobile network data and then analyzed it throughout this paper in order to reveal traffic patterns, and their distinguishing features, for the three key 5G application areas.
... An in-depth analysis of this work shows that the impact of LRD is treated in terms of the performance of the single-server model in relation to the input buffer size, using a fluid model [59], [63], [64] that exhibits a hyperbolic decay up to a certain cutoff, beyond which it falls to zero. Based on the results obtained from numerous simulation experiments, using both the video traces and the Ethernet traces for different values of the Hurst parameter, different cutoffs and buffer sizes, and a wide range of marginal distributions, the authors discover the existence of a critical cutoff, which they call the "correlation horizon", such that the loss rate is not affected if the cutoff is increased beyond it. ...
Preprint
Traffic streams, sources as well as aggregated traffic flows, often exhibit long-range-dependent (LRD) properties. This paper presents the theoretical foundations to justify that the behavior of traffic in a high-speed computer network can be modeled from a self-similar perspective by limiting its scope of analysis at the network layer, given that the most relevant properties of self-similar processes are consistent for use in the formulation of traffic models when performing this specific task.
... The property of burstiness is characterized by the existence of intervals where the traffic rate is rather high and intervals with low or zero intensity. Such behavior of telecommunication flows led to the creation of On-Off models [1,8,13]. On-Off models are doubly stochastic point processes having two states: "on" and "off". ...
Chapter
In this paper, we propose the multi-level Markov modulated Poisson process (MMPP) with an arbitrary distribution of the packet length as a model of fractal traffic. For the total amount of information received in the multi-level MMPP, we investigate the probability distribution and present an algorithm for calculating the first and second moments. Using the asymptotic analysis method, we build a Gaussian approximation of the aforementioned distribution. We show that the convergence time of the probability distribution to the Gaussian distribution forms the period where the Hurst parameter is stable and reflects the self-similarity of the multi-level MMPP.
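As a simplified illustration of the MMPP idea (two levels only, with arbitrary example rates rather than the multi-level construction of the chapter), the sketch below modulates a Poisson arrival rate with a two-state continuous-time Markov chain.

import numpy as np

def mmpp_two_state(t_end, rates=(5.0, 50.0), switch=(0.5, 1.0), seed=0):
    # Arrival times of a 2-state MMPP: Poisson rate rates[s] while in state s,
    # with exponentially distributed sojourn times (parameters switch[s]).
    rng = np.random.default_rng(seed)
    t, state, arrivals = 0.0, 0, []
    while t < t_end:
        sojourn = rng.exponential(1.0 / switch[state])
        n = rng.poisson(rates[state] * sojourn)                     # arrivals within this sojourn
        arrivals.extend(np.sort(t + rng.uniform(0.0, sojourn, n)))  # uniform placement inside it
        t += sojourn
        state = 1 - state
    return [a for a in arrivals if a < t_end]

print(len(mmpp_two_state(100.0)), "arrivals in 100 time units")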
... An in-depth analysis of this work shows that the authors treat the impact of long-range dependence on the performance of the single-server model in relation to the input buffer size, using a fluid model [71]-[73] that exhibits a hyperbolic decay up to a certain cutoff, beyond which it falls to zero. Based on the results obtained from numerous simulation experiments using both the VBR video traces and the Ethernet traces for different values of the Hurst parameter, different cutoffs and buffer sizes, and a wide variety of marginal distributions, the authors discover the existence of a critical cutoff, which they call the "correlation horizon (CH)", such that the loss rate is not affected if the cutoff is increased beyond it. ...
Preprint
Traffic flows, from individual sources as well as aggregated, frequently exhibit long-range dependence (LRD) properties. In this work, the behavior of traffic in a high-speed computer network is modeled from a self-similar perspective, and its validity is analyzed and discussed by restricting its scope of applicability to the network layer. It is shown that the most relevant properties of self-similar processes are consistent for use in traffic modeling when this distinction is made. It is shown that, from this perspective, traffic models that consider long-range dependence are well-defined descriptors of bursty traffic conditions and facilitate their formulation. Finally, it is shown that the restricted models are capable of efficiently representing real traffic processes and have greater plausibility of physical interpretation than those based on traditional queuing systems.
... Traffic models are at the heart of any performance evaluation of telecommunications networks [7]. These models need to be accurate and able to capture the statistical characteristics of the actual traffic. Traffic models can be short-range or long-range dependent. ...
Preprint
The problem of dimensioning in networks is one of the most persistent problems in the field. In spite of sophisticated networks such as 4G and 5G coming onto the market, it remains a long-standing problem of concern. In this paper we highlight the problems that arise in the physical links of a network and also discuss the various methods by which this problem can be solved. The problem is divided into two sub-problems. The first sub-problem deals with the design of the topological structure, and the second is to find the network physical links that optimize the revenue given the end-to-end traffic and the Grade of Service constraints of each service class. The paper also throws some light on how to dimension with respect to the relative Grade of Service constraints. An attempt is made to calculate the dimensioning of the physical link capacities after acquiring the smooth blocking functions. The problem is reduced by knowing the optimal physical link capacities of the network. A performance model is specified to assess the accuracy of the analytical model with respect to simulation results. Two categories of calls are considered, a narrow-band call and a wide-band call. A narrow-band call may be a voice application and a wide-band call may be a video application. The system operates in a loss mode, meaning that if an incoming call finds the network resources (capacity) busy, it is lost. Performance measures of the network's overall blocking probability and the blocking probabilities of the narrow-band and wide-band call categories are determined. The results from the measurements and the exact model are compared. The gradients of the objective function and the blocking function with respect to the capacities are determined, and the optimal physical link capacities are obtained. The queue length distribution is also studied.
... Efficient network traffic control mechanisms have been fundamental to the overall performance of a network, whether fixed networks [192], [193], wireless networks [194], [195], or virtual networks [196], [197]. Network traffic engineering provides mechanisms to reduce network congestion and improve utilization by balancing the load among multiple paths [198], [199]. ...
Article
Full-text available
The growing network density and unprecedented increase in network traffic, caused by the massively expanding number of connected devices and online services, require intelligent network operations. Machine Learning (ML) has been applied in this regard in different types of networks and networking technologies to meet the requirements of future communicating devices and services. In this article, we provide a detailed account of current research on the application of ML in communication networks and shed light on future research challenges. Research on the application of ML in communication networks is described in: i) the three layers, i.e., physical, access, and network layers; and ii) novel computing and networking concepts such as Multi-access Edge Computing (MEC), Software Defined Networking (SDN), Network Functions Virtualization (NFV), and a brief overview of ML-based network security. Important future research challenges are identified and presented to help stir further research in key areas in this direction.
... If one or more parameters are not met, the logical and physical structure of the network is changed and the simulation process is resumed [30]. Three categories of models are used to optimize traffic: a) models that take into account the statistical properties of traffic [31,21,22], b) models that use hybrid prediction algorithms [32], and c) models that optimize an objective function [2,20,3] by solving linear programming problems. Depending on the objectives pursued, one or more types of software applications can be used (e.g. ...
Article
Full-text available
This paper presents a survey of the models and software applications used in telecommunication networks that can be used to carry out network planning and traffic engineering processes in deployable networks. Deployable networks are intended to provide communication services for organizations with responsibilities for emergency response or for government structures.
... All traffic graphs from 2 to 5 show one day of traffic, where the timeline resolution is 5 minutes. Table II shows the aggregated results of the different user groups on the well-known traffic-describing values [22], such as the Peak-To-Mean Ratio (PMR), the Squared Coefficient of Variation (SCV), the skewness, and the average user throughput at the user level. One of the main differences can be seen in the PMR values, where the IoT-based user groups show significantly higher loads than the other groups, where the users are probably humans. ...
... It is worth noting that choosing an appropriate traffic model prevents under-estimation or over-estimation of network performance. Pareto-based traffic models are excellent candidates for networks with unexpected demand for packet transfers, since the model takes into consideration the long-term correlation in packet arrival times [38,39]. This traffic model is exploited to generate data packets in the stations. ...
Article
Full-text available
This paper presents a new adaptive prioritization and fail-over mechanism for ring network adapters (RNA). Owing to the use of a shared medium in the structure of ring networks, management of network resources is very important and cannot be completely fulfilled by fixed priority strategies. An adaptive prioritization mechanism for efficient utilization of network resources is proposed in this paper. Hop count and the distance between source and destination are the key parameters included in the priority assignment procedure. Unlike conventional mechanisms, the arriving traffic is not put blindly into the queues, thanks to the stations' awareness of each other's status. The newly introduced fail-over mechanism is based on the Power over Ethernet concept. Each station monitors the heartbeat of its neighbors and tries to keep them alive by providing their minimum required electrical power whenever a fault occurs. By doing so, neither a single failure nor any successive double-failure scenario can disrupt data path continuity. Several simulations are carried out to assess the behavior and performance of the proposed methods in OPNET Modeler. Moreover, a high-speed USB test board is designed using a Xilinx Spartan-6 LX9 FPGA to experimentally verify the performance of the proposed mechanisms.
... F(x) = (1 − (b/x)^a) U(x − b)   (3), where U is a unit step function. The Pareto distribution produces independent and identically distributed (i.i.d) IAT [9]. It is also known as double exponential [10]. ...
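Given the excerpt's use of Pareto-distributed inter-arrival times (IAT), the following is a minimal sketch of generating i.i.d. Pareto IATs by inverse-transform sampling; the shape parameter alpha and the minimum IAT beta are illustrative values, not taken from the cited paper.

```python
import numpy as np

def pareto_iat(n, alpha=1.5, beta=0.001, seed=0):
    """Draw n i.i.d. Pareto inter-arrival times (in seconds).

    Inverse-transform sampling of the Pareto CDF
    F(x) = 1 - (beta / x)**alpha for x >= beta.
    alpha and beta are illustrative values, not from the cited paper.
    With 1 < alpha < 2 the mean is finite but the variance is infinite,
    the heavy-tailed regime typically used for bursty packet traffic.
    """
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)                   # U ~ Uniform[0, 1)
    return beta / (1.0 - u) ** (1.0 / alpha)  # x = F^{-1}(U)

iats = pareto_iat(10_000)
arrivals = np.cumsum(iats)                    # packet arrival instants
print(f"mean IAT = {iats.mean():.4f} s, max IAT = {iats.max():.3f} s")
```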
Conference Paper
Full-text available
... Nowadays, the development of cybercrime measurement technology has made user behavior analysis an important field. Therefore, intelligent technology represented by various neural network methods will play an important role in crime behavior analysis and research [1]. ...
Article
Full-text available
The purpose of this paper is to provide a quantitative analysis method for cybercrime research, and also to provide a new mathematical research method for network behavior analysis. Firstly, a novel factor space research method is established by using the “medium scale”. On this basis, the concept of factor discovery of cybercrime behavior is proposed, and the corresponding cybercrime behavior analysis model is established, namely the criminal factor neural network. Secondly, the learning mechanism of the network behavior neural network is discussed by using the factor discovery principle. At the same time, a network behavior learning algorithm based on diamond thinking is obtained. Finally, the factor discovery approach and the factor neural network learning system are applied to cybercrime model analysis and prevention strategy, providing decision support and problem solutions for public security departments.
... "Traffic modeling" can be represented by stochastic processes that match the behavior of physical quantities of measured data traffic (Adas, 1997). Traffic models are classified as Source traffic models (e.g., video, data and voice) and Aggregated traffic models (e.g., backbone networks, internet and high-speed links). ...
Chapter
Full-text available
This chapter envisions the challenges that will face mobile operators, such as sending vehicle-to-vehicle (V2V) payloads in the form of synchronized storms, the fast saturation of the limited bandwidth of long-term evolution for machines (LTE-M) and narrowband internet of things (NB-IoT) with the rising number of machine-to-machine (M2M) and V2V devices, and the V2V congestion overload problem in IoT environments, specifically during disaster events. It extends a new solution proposed by the authors, named Adaptive eNodeB (A-eNB), for both LTE-M and NB-IoT networks to deal with excessive V2V traffic. The A-eNB can gradually solve the V2V overload problem while keeping the human-to-human (H2H) traffic quality of service (QoS) from being badly affected. It corroborates a new framework model proposed by the authors, called coexistence analyzer and network architecture for long-term evolution (CANAL), to study the impact on V2V, M2M, and H2H traffic and their mutual influences, based on a continuous-time Markov chain (CTMC), in order to simulate, analyze, and measure radio access strategies.
... where W_i is white noise. Nonlinear AR models (e.g., parameterized by neural networks) have gained traction and are the baseline used in this work [3,88,95,96]. The main problem with AR models is fidelity: like Markov models, they only use a limited history to predict the next sample in a time series, leading to over-simplified temporal correlations. ...
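Since the excerpt treats AR models as the prediction baseline, here is a minimal sketch of a linear AR(p) one-step predictor fitted by least squares; the lag order, the synthetic trace, and the function names are illustrative assumptions, not the cited work's implementation.

```python
import numpy as np

def fit_ar(x, p=3):
    """Least-squares fit of x_t = a_1 x_{t-1} + ... + a_p x_{t-p} + w_t,
    where w_t is white noise.  Returns the coefficient vector a."""
    X = np.column_stack([x[p - k : len(x) - k] for k in range(1, p + 1)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def predict_next(x, a):
    """One-step-ahead forecast from the last p samples."""
    p = len(a)
    return float(np.dot(a, x[-1 : -p - 1 : -1]))

# Illustrative synthetic trace (not real traffic data).
rng = np.random.default_rng(1)
trace = np.cumsum(rng.normal(size=500)) + 10 * np.sin(np.arange(500) / 20)
coeffs = fit_ar(trace, p=3)
print("AR coefficients:", coeffs, "forecast:", predict_next(trace, coeffs))
```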
Preprint
Full-text available
Limited data access is a substantial barrier to data-driven networking research and development. Although many organizations are motivated to share data, privacy concerns often prevent the sharing of proprietary data, including between teams in the same organization and with outside stakeholders (e.g., researchers, vendors). Many researchers have therefore proposed synthetic data models, most of which have not gained traction because of their narrow scope. In this work, we present DoppelGANger, a synthetic data generation framework based on generative adversarial networks (GANs). DoppelGANger is designed to work on time series datasets with both continuous features (e.g. traffic measurements) and discrete ones (e.g., protocol name). Modeling time series and mixed-type data is known to be difficult; DoppelGANger circumvents these problems through a new conditional architecture that isolates the generation of metadata from time series, but uses metadata to strongly influence time series generation. We demonstrate the efficacy of DoppelGANger on three real-world datasets. We show that DoppelGANger achieves up to 43% better fidelity than baseline models, and captures structural properties of data that baseline methods are unable to learn. Additionally, it gives data holders an easy mechanism for protecting attributes of their data without substantial loss of data utility.
... The PU behavior is modeled using a two-state Markov chain to capture the temporal dependency between two consecutive states [29,30]. The presence and absence of the PU signal in a channel are represented by the busy and idle states. [Excerpt of the CH node's channel-access pseudocode: 1: sort packets in the CH node's buffer according to their priorities (two parts); 2: for each packet in the CH node's buffer do; 3: sense the CC channel; 4: if S_rc < S_Tc // the received signal is less than a threshold value: the medium is free; 5: ...] ...
Article
Full-text available
Cognitive radio sensor networks offer a promising means of meeting the rapidly expanding demand for wireless sensor network applications in new monitoring and object-tracking fields. Several challenges, particularly in terms of quality of service provisioning, arise because of the inherited capability limitations of end-sensor nodes. In this article, an efficient resource allocation scheme, the improved Pliable Cognitive Medium Access protocol, is proposed to tackle multiple levels of heterogeneity in cognitive radio sensor networks. The first level is the network's application heterogeneity, and the second level is the heterogeneity of the radio environment. The proposed scheme addresses scheduling and radio channel allocation issues. Allocation decision-making is centralized, whereas spectrum sensing is distributed, thereby increasing efficiency and limiting interference. Despite the limited capabilities of sensor networks, the effectiveness of the proposed scheme also includes increasing the opportunity to utilize a wider range of the radio spectrum. The improved Pliable Cognitive Medium Access protocol is quite appropriate for critical communications that are gaining attention in the next generation (5G) of wireless networks. Simulation results and the comparison of the proposed protocol with other protocols indicate the robust performance of the proposed scheme. The results reveal significant effectiveness, with only a slight trade-off in terms of complexity.
... "Traffic modeling" can be represented by stochastic processes that match the behavior of physical quantities of measured data traffic (Adas, 1997). Traffic models are classified as Source traffic models (e.g., video, data and voice) and Aggregated traffic models (e.g., backbone networks, internet and highspeed links). ...
Chapter
This chapter envisions the challenges that will face the mobile operators such as sending vehicle-to-vehicle (V2V) payloads in form of synchronized storms, the fast saturation of the limited bandwidth of long-term evolution for machines (LTE-M) and narrow band-internet of things (NB-IoT) with the rise number of machine-to-machine (M2M) devices and V2V devices, V2V congestion overload problem in IoT environments specifically during disaster events. It extends a new solution proposed by the authors named Adaptive eNodeB (A-eNB) for both LTE-M and NB-IoT networks to deal with V2V excessive traffic. The A-eNB can solve gradually V2V overload problem, while keeping the human-to-human (H2H) traffic quality of service (QoS) not to be affected badly. It corroborates a new framework model proposed by the authors called coexistence analyzer and network architecture for long-term evolution (CANAL) to study the impact on V2V, M2M, and H2H and mutual influences, based on continuous-time Markov chain (CTMC) to simulate, analyze, and measure radio access strategies.
... This includes the beta and Pareto distributions. The former has been used in modeling the distribution of packet arrivals in machine-type communication (MTC) devices [11] while the latter was suggested to model traffic in broadband networks [12]. It is important to note that Theorem 1 still inherits the constraints on the parameters as detailed before. ...
Article
Full-text available
We present two novel methods for the generation of Fox's H-function distributed random variables (RVs), which have been recently used to model fading in various wireless communication scenarios. The first proposed method is based on the use of standard normal RVs while the second is based on the use of Gamma RVs. Using Monte Carlo simulations, the proposed methods are tested and are shown to provide indistinguishable results from the corresponding analytical H-function probability density functions (PDFs) for a diverse set of parameters. Moreover, as an application, simulated ergodic capacity and average bit error rate results obtained using the proposed methods are compared to analytical expressions previously obtained in the literature and perfect agreement is observed. Our results also provide valuable insights into what constraints on the H-function parameters are needed to guarantee valid H-function PDFs.
Article
Full-text available
Forecasting of traffic in cellular networks is a significant service for the strategic and efficient management of available resources. The consumption of valuable resources such as link bandwidth and energy is increasing exponentially with the increase in cellular data usage. In this paper, we implement a neural network design for the identification of recurrent patterns in different metrics, which can then be applied to traffic forecasting in cellular networks. As this neural network design is based on memory and a custom architecture, it can handle the prediction task precisely and quickly in real-time applications such as cellular network traffic forecasting. This work involves a Long Short-Term Memory (LSTM) recurrent neural network design for traffic forecasting in cellular networks. It enhances the performance of the cellular network, thereby providing a solution for service providers, as the available resources are utilized effectively. The same dataset is used for multiple predictions to analyze the performance of the design, which is found to be more robust than existing algorithms.
Thesis
Full-text available
My dissertation focuses on data and signaling traffic measurement methods and metrics and on providing methodological elements for managing future networks and services. In the first Thesis group, I created a general life cycle model for IoT devices and demonstrated its application in a telecommunications example. I have developed a model to be used for security investigation purposes related to IoT devices. One of my interesting findings was that users who were previously banned from the network pose a threat to the operators’ services when they re-appear. I defined the predecessors of 5G use cases in pre-5G networks and measured their footprint requirements in existing networks. I have shown a correlation between data and signaling traffic, based on the transmission characteristics of actively operating Industry 4.0 devices, and discussed why traffic transients need to be avoided. In the second Thesis group, I identified the challenges arising from the interconnection of Industry 4.0 and mobile networks. As part of this work, I examined existing standards in detail and prepared a network architecture to meet the relevant 3GPP recommendations. I presented the needs, requirements, and motivations of the field related to private industrial mobile networks. I determined industrial data traffic’s main characteristics and the main parameters that a 5G industrial network must meet. I designed and created a 5G NSA mobile network consisting of 3GPP standardized building blocks and performed complex measurements to determine the KPI values of the network. I created a mathematical model and algorithm, and examined the joint needs of specific clients and client groups and the services provided by the network.
Chapter
With the rapid advancement of the information age, an increasing number of people have started using Wi-Fi tethering, which can turn their mobile phones into mobile access points (APs) to meet their networking needs anywhere. However, the existing energy-saving mechanisms in IEEE 802.11 mainly target stations (STAs) and rarely consider APs. To address the high energy consumption of mobile APs, we propose an energy-saving protocol for mobile APs, called GreenAP. On the one hand, the protocol is compatible with the original IEEE 802.11 standard. On the other hand, the adaptive strategy in the protocol ensures that the energy consumption of the AP is reduced without affecting the user experience. The energy-saving AP protocol is implemented in NS-2. The experimental results show that GreenAP-3 can keep the AP asleep for up to 74.7% of the time when the traffic intensity is low, and the energy consumption can be reduced by 64.6% with only a small packet delay. In the case of high traffic intensity, the protocol ensures low packet delay by adaptively adjusting the AP's sleep time, which guarantees that the user experience is not affected under any circumstances.
Article
The investigation aimed to study various network traffic types so as to derive a mathematical description not only for a specific type of traffic, but also for the aggregated network traffic. We characterized the main types of data transmitted during network operation and compared the results with the most common mathematical models, that is, the Poisson, Pareto, Weibull, exponential and lognormal distributions. We established that, regardless of traffic type, the volume distribution of transmitted data packets has a "long tail" and is well described by the lognormal distribution model. We evaluated the autocorrelation function, which showed that long-range dependence characterises virtually all the data, which indicates their self-similarity. We also confirmed this conclusion by calculating the Hurst exponent. At the same time, we determined that the degree of self-similarity depends not only on the type of data transmitted, but also on the data ratio in the aggregated network traffic. We selected the following models so as to compare the mathematical descriptions of traffic: classical and fractal Brownian motion, and the AR, MA, ARMA and ARIMA models. The results showed that the fractal Brownian motion model provides the most accurate mathematical description of network traffic.
Article
In this paper, we propose a novel deep learning framework, Graph Attention Spatial-Temporal Network (GASTN), for accurate mobile traffic forecasting, which can capture not only local geographical dependency but also distant inter-region relationship when considering spatial factor. Specifically, GASTN considers spatial correlation through our constructed spatial relation graph and utilizes structural recurrent neural networks to model the global near-far spatial relationships as well as the temporal dependencies. In the framework of GASTN, two attention mechanisms are designed to integrate different effects in a holistic way. Besides, in order to further enhance the prediction performance, we propose a collaborative global-local learning strategy for the training of GASTN, which takes full advantage of the knowledge from both the global model and local models for individual regions and enhance the effectiveness of our model. Extensive experiments on a large-scale real-world mobile traffic dataset demonstrate that our GASTN model dramatically outperforms the state-of-the-art methods and a further improvement in the prediction performance of GASTN can be obtained by leveraging the collaborative global-local learning strategy.
Book
An introduction to theories and applications in wireless broadband networks As wireless broadband networks evolve into future generation wireless networks, it's important for students, researchers, and professionals to have a solid understanding of their underlying theories and practical applications. Divided into two parts, the book presents: 1. Enabling Technologies for Wireless Broadband Networks—orthogonal frequency-division multiplexing and other block-based transmissions; multi-input/multi-output antenna systems; ultra-wideband; medium access control; mobility resource management; routing protocols for multi-hop wireless broadband networks; radio resource management for wireless broadband networks; and quality of service for multimedia services. 2. Systems for Wireless Broadband Networks—long-term evolution cellular networks; wireless broadband networking with WiMax; wireless local area networks; wireless personal area networks; and convergence of networks. Each chapter begins with an introduction and ends with a summary, appendix, and a list of resources for readers who would like to explore the subjects in greater depth. The book is an ideal resource for researchers in electrical engineering and computer science and an excellent textbook for electrical engineering and computer science courses at the advanced undergraduate and graduate levels.
Article
Full-text available
Teletraffic (traffic in short) modeling is crucial in the analysis and design of the infrastructure of cyber-physical network systems from a traffic engineering point of view. However, reports regarding traffic modeling at the large time scale of a day over a duration of years are rarely seen. This paper presents our finding on closed-form autocorrelation function modeling of traffic at the time scale of a day (daily traffic for short) over a duration of 12 years with different protocols. We show that the autocorrelation function of daily traffic takes the autocorrelation function form of the generalized Cauchy process, based on studying the autocorrelation function modeling of daily traffic with real traffic data. Thus, the long-range dependence and local self-similarity of daily traffic are in general uncorrelated according to the theory of the generalized Cauchy process. In addition, we exhibit that the concrete values of the long-range dependence measure and the local self-similarity measure of daily traffic may relate to protocol types.
Article
Large and unmanaged router buffers could lead to an increase in queuing delays in the Internet, which is a serious concern for network performance and quality of service. Our focus is to conduct a performance evaluation of Compound TCP (C-TCP), in a regime where the router buffer sizes are small (i.e., independent of the bandwidth-delay product), and the queue policy is Drop-Tail. In particular, we provide buffer sizing recommendations for high speed core routers fed by well multiplexed TCP controlled flows. For this, we consider two topologies: a single bottleneck and a multi-bottleneck topology, under different traffic scenarios. The first topology consists of a single bottleneck router, and the second consists of two distinct sets of TCP flows, regulated by two edge routers, feeding into a common core router. We focus on some key dynamical and statistical properties of the underlying system. From a dynamical perspective, we first develop fluid models. A local stability analysis for these models yields a key insight: buffer sizes need to be dimensioned carefully, and smaller buffers favour stability. We also highlight that larger Drop-Tail buffers, in addition to increasing latency, are prone to inducing limit cycles in the system dynamics. These limit cycles in turn induce synchronisation among the TCP flows, which then results in a loss of link utilisation. We then empirically analyse some statistical properties of the bottleneck queues. These statistical analyses serve to validate an important modelling assumption: that in the regime considered, each bottleneck queue may be reasonably well approximated as either an M/M/1/B or an M/D/1/B queue. We also highlight that smaller buffers, in addition to ensuring stability and low latency, would also yield reasonable system-wide performance, in terms of throughput and flow completion times.
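To make the buffer-approximation argument concrete, here is a minimal sketch of the standard closed-form blocking (loss) probability of an M/M/1/B queue, the kind of approximation the abstract validates for the bottleneck queues; the load and buffer values below are illustrative, not results from the cited study.

```python
def mm1b_blocking(rho: float, B: int) -> float:
    """Blocking probability of an M/M/1/B queue (at most B customers in
    the system) with offered load rho = arrival rate / service rate.

    P_block = (1 - rho) * rho**B / (1 - rho**(B + 1))   for rho != 1
            = 1 / (B + 1)                               for rho == 1
    """
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (B + 1)
    return (1.0 - rho) * rho**B / (1.0 - rho ** (B + 1))

# Illustrative: loss probability shrinks quickly with buffer size at rho = 0.8.
for B in (5, 10, 20, 50):
    print(B, mm1b_blocking(rho=0.8, B=B))
```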
Article
The contributions of this paper are twofold. The first is to introduce a novel random function, which we call the multi-fractional generalized Cauchy (mGC) process. The second is to discuss its application to network traffic for studying the multi-fractal behavior of traffic on a point-by-point basis. The introduced mGC process has a time-varying fractal dimension D(t) and a time-varying Hurst parameter H(t). The representations of the autocorrelation function (ACF) and the power spectrum density (PSD) of the mGC process are proposed. Besides, the asymptotic expressions of the ACF and PSD of the mGC process are presented. The computation formula of D(t) is given. The mGC model may be a new tool to describe the multi-fractal behavior of traffic. Precisely, it may be used to reveal the local irregularity or local self-similarity (LSS), which is a small-time-scale behavior of traffic, and the global long-term persistence or long-range dependence (LRD), which is a large-time-scale behavior of traffic, on a point-by-point basis. The case study with real traffic traces exhibits that the variance of D(t) is much greater than that of H(t). Thus, the present mGC model may provide a novel way to explain the fact that traffic has high local irregularity while its LRD is robust.
Conference Paper
Full-text available
Variable bit rate (VBR) compressed video is expected to become one of the major loading factors in high-speed packet networks such as ATM-based B-ISDN. However, recent measurements based on long empirical traces (complete movies) revealed that VBR video traffic possesses properties that we had previously seen only in simple fractal processes. We use importance sampling techniques to efficiently estimate low probabilities of packet losses that occur when a multiplexer is fed with synthetic traffic from our self-similar VBR video model.
Article
Full-text available
We study the performance of a statistical multiplexer whose inputs consist of a superposition of packetized voice sources and data. The performance analysis predicts voice packet delay distributions, which usually have a stringent requirement, as well as data packet delay distributions. The superposition is approximated by a correlated Markov modulated Poisson process (MMPP), which is chosen such that several of its statistical characteristics identically match those of the superposition. Matrix analytic methods are then used to evaluate system performance measures. In particular, we obtain moments of voice and data delay distributions and queue length distributions. We also obtain Laplace-Stieltjes transforms of the voice and data packet delay distributions, which are numerically inverted to evaluate tails of delay distributions. It is shown how the matrix analytic methodology can incorporate practical system considerations such as finite buffers and a class of overload control mechanisms discussed in the literature. Comparisons with simulation show the methods to be accurate. The numerical results for the tails of the voice packet delay distribution show the dramatic effect of traffic variability and correlations on performance.
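As an illustration of the MMPP construction discussed above, here is a minimal sketch of simulating arrivals from a two-state MMPP, i.e., a Poisson process whose rate is modulated by a two-state continuous-time Markov chain; the rates are illustrative placeholders, not the parameters matched to the voice/data superposition in the paper.

```python
import numpy as np

def mmpp2_arrivals(T, lam=(50.0, 500.0), switch=(0.1, 0.3), seed=0):
    """Simulate arrival times of a two-state MMPP over [0, T] seconds.

    lam[i]    : Poisson arrival rate while the modulating chain is in state i
    switch[i] : rate of leaving state i (exponential sojourn times)
    All parameter values are illustrative, not taken from the cited paper.
    """
    rng = np.random.default_rng(seed)
    t, state, arrivals = 0.0, 0, []
    while t < T:
        sojourn = rng.exponential(1.0 / switch[state])
        end = min(t + sojourn, T)
        # Given the count, Poisson arrivals are uniform over the sojourn.
        n = rng.poisson(lam[state] * (end - t))
        arrivals.extend(np.sort(rng.uniform(t, end, size=n)))
        t, state = end, 1 - state
    return np.array(arrivals)

pkts = mmpp2_arrivals(T=60.0)
print(f"{len(pkts)} arrivals in 60 s")
```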
Article
Full-text available
The authors extend earlier work (ibid., vol.36, p.834-44, Jul. 1988) in modeling video sources using interframe coding schemes and in carrying out buffer queueing analysis for the multiplexing of several such sources. The previous models and analysis were suitable for relatively uniform activity scenes. Here, models are considered for scenes with multiple activity levels which lead to sudden changes in the coder output bit rates. Such models apply to talker-listener alternating scenes, as well as to situations where there is a mix of dissimilar services, e.g., television and videotelephony. Correlated Markov models for the corresponding sources are given. A flow-equivalent queueing analysis is used to obtain common buffer queue distributions and probabilities of packet loss. The results demonstrate the efficiency of packet video on a single link, due to the smoothing effect of multiplexing several variable-bit-rate video sources.
Article
Full-text available
As new communications services evolve, professionals must create better models to predict system performance. The article provides an overview of computer simulation modelling for communication networks, as well as some important related modelling issues. It gives an overview of discrete event simulation and singles out two important modelling issues that are germane to extant and emerging networks: traffic modelling and rare event simulation. Monte Carlo computer simulation is used as a performance prediction tool and Markov models are considered.
Article
Full-text available
Models and results are presented that assess the performance of statistical multiplexing of independent video sources. Presented results indicate that the probability of buffering (or delaying) video data beyond an acceptable limit drops dramatically as the number of multiplexed sources increases beyond one. This demonstrates that statistical or asynchronous time-division multiplexing (TDM) can efficiently absorb temporal variations of the bit rate of individual sources without the significant variations in reception quality exhibited by multimode videocoders for synchronous TDM or circuit-switched transmission. Two source models are presented. The first model is an autoregressive continuous-state, discrete-time Markov process, which was used to generate source data in simulation experiments. The second model is a discrete-state, continuous-time Markov process that was used in deriving a fluid-flow queuing analysis. The presented study shows that both models generated consistent numerical results in terms of queuing performance
Article
Full-text available
TES (Transform-Expand-Sample) is a versatile class of stochastic sequences which can capture arbitrary marginals and a wide variety of sample path behavior and autocorrelation functions. In TES, the initial variate is uniform on [0,1) and the next variate is obtained recursively by taking the fractional part (i.e., modulo-1 reduction) of a linear autoregressive scheme. We show how this class gives rise to uniform Markovian sequences in a general and natural way, by observing that marginal uniformity is closed under modulo-1 addition of an independent variate with arbitrary distribution. We derive the transition function of TES sequences and the autocovariance function of transformed TES sequences using Fourier and Laplace Transform methods. The autocovariance formulas are amenable to fast and accurate calculation and provide the theoretical basis for a computer-based methodology of heuristic TES modeling of empirical data. A companion paper contains various examples which show the efficacy of the TES paradigm.
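The following is a minimal sketch of the basic TES recursion described above: modulo-1 addition of an i.i.d. innovation to a uniform variate, followed by a distortion of the uniform sequence to a desired marginal. The innovation distribution and the exponential target marginal are illustrative choices, not fitted to any data.

```python
import numpy as np

def tes_uniform(n, innovation, seed=0):
    """Basic TES recursion: U_0 ~ Uniform[0,1),
    U_k = frac(U_{k-1} + V_k), where V_k = innovation(rng) is an i.i.d.
    innovation with an arbitrary distribution.  Modulo-1 addition keeps
    each U_k exactly Uniform[0,1) while introducing autocorrelation."""
    rng = np.random.default_rng(seed)
    u = np.empty(n)
    u[0] = rng.uniform()
    for k in range(1, n):
        u[k] = (u[k - 1] + innovation(rng)) % 1.0
    return u

# Illustrative innovation: uniform on [-0.05, 0.10); not fitted to a trace.
u = tes_uniform(10_000, lambda rng: rng.uniform(-0.05, 0.10))
# Distort to an arbitrary marginal (here exponential) via the inverse CDF;
# the autocorrelation structure is inherited from u.
x = -np.log(1.0 - u)
print("lag-1 autocorrelation of U:", np.corrcoef(u[:-1], u[1:])[0, 1])
```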
Article
Full-text available
A number of recent empirical studies of traffic measurements from a variety of working packet networks have convincingly demonstrated that actual network traffic is self-similar or long-range dependent in nature (i.e., bursty over a wide range of time scales) -- in sharp contrast to commonly made traffic modeling assumptions. In this paper, we provide a plausible physical explanation for the occurrence of self-similarity in LAN traffic. Our explanation is based on new convergence results for processes that exhibit high variability (i.e., infinite variance) and is supported by detailed statistical analyses of real-time traffic measurements from Ethernet LANs at the level of individual sources. This paper is an extended version of [53] and differs from it in significant ways. In particular, we develop here the mathematical results concerning the superposition of strictly alternating ON/OFF sources. Our key mathematical result states that the superposition of many ON/OFF sources (also known as packet-trains) ...
Article
TES (Transform-Expand-Sample) is a versatile class of stochastic sequences which can capture arbitrary marginals and a wide variety of sample path behavior and autocorrelation functions. In TES, the initial variate is uniform on [0,1) and the next variate is obtained recursively by taking the fractional part (i.e., modulo-1 reduction) of a linear autoregressive scheme. The uniform TES variates can then be further transformed to have arbitrary marginals. A companion paper (Part I) presented the general theory of TES processes. This paper (Part II) contains various examples which demonstrate the efficacy of the TES paradigm by comparing numerical and simulation-based calculations for a variety of TES autocorrelation functions. The results have applications to the modeling of autocorrelated sequences, particularly in a Monte Carlo simulation context.
Article
The class of autoregressive integrated moving average (ARIMA) time series models may be generalized by permitting the degree of differencing d to take fractional values. Models including fractional differencing are capable of representing persistent series (d > 0) or short-memory series (d = 0). The class of fractionally differenced ARIMA processes provides a more flexible way than has hitherto been available of simultaneously modeling the long-term and short-term behavior of a time series. In this paper some fundamental properties of fractionally differenced ARIMA processes are presented. Methods of simulating these processes are described. Estimation of the parameters of fractionally differenced ARIMA models is discussed, and an approximate maximum likelihood method is proposed. The methodology is illustrated by fitting fractionally differenced models to time series of streamflows and annual temperatures.
Article
Fractional Gaussian noises are a family of random processes such that the interdependence between values of the process at instants of time very distant from each other is small but nonnegligible. It has been shown by mathematical analysis that such interdependence has precisely the intensity required for a good mathematical model of long run hydrological and geophysical records. This analysis will now be illustrated, extended, and made practically usable, with the help of computer simulations. In this Part, we shall stress the shape of the sample functions and the relations between past and future averages.
Article
A variety of geophysical records are examined to determine the dependence upon the lag s of a quantity called ‘rescaled range,’ denoted by R(t, s)/S(t, s). If there had been no appreciable dependence between two values of the record at very distant points in time, the ratio R/S would have been proportional to s^0.5. But, in fact, as first noted by Edwin Hurst, the R/S ratio of hydrological and other geophysical records is proportional to s^H with H ≠ 0.5. Hurst's original claims must be tightened and hedged, and his estimates of H must be discarded, but his general idea will be shown to be correct. We have shown elsewhere that this behavior of R/S means that the strength of long-range statistical dependence in geophysical records is considerable.
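A minimal sketch of the rescaled-range idea described above follows: compute R/S over blocks of increasing size s and estimate H from the slope of log R/S versus log s. The block sizes and the i.i.d. test input are illustrative.

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic of one block: range of the mean-adjusted cumulative
    sums divided by the block's standard deviation."""
    y = np.cumsum(x - x.mean())
    r = y.max() - y.min()
    s = x.std()
    return r / s if s > 0 else np.nan

def hurst_rs(x, lags=(16, 32, 64, 128, 256, 512)):
    """Estimate H from the slope of log E[R/S](s) against log s."""
    rs = []
    for s in lags:
        blocks = [x[i:i + s] for i in range(0, len(x) - s + 1, s)]
        rs.append(np.nanmean([rescaled_range(b) for b in blocks]))
    slope, _ = np.polyfit(np.log(lags), np.log(rs), 1)
    return slope

# For i.i.d. noise the estimate should be close to 0.5 (no long-range dependence).
print(hurst_rs(np.random.default_rng(0).normal(size=20_000)))
```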
Book
1. Integrated Broadband Services and Networks: An Introduction. I: SWITCHING THEORY. 2. Broadband Integrated Access and Multiplexing. 3. Point-to-Point Multi-Stage Circuit Switching. 4. Multi-Point and Generalized Circuit Switching. 5. From Multi-Rate Circuit Switching to Fast Packet Switching. 6. Applying Sorting for Self-Routing and Non-Blocking Switches. II: TRAFFIC THEORY. 7. Terminal and Aggregate Traffic. 8. Blocking for Single-Stage Resource Sharing. 9. Blocking for Multi-Stage Resource Sharing. 10. Queueing for Single-Stage Packet Networks. 11. Queueing for Multi-Stage Packet Networks.
Article
The effect of long-memory processes on the queue length statistics of a single-queue system is studied through a controlled fractionally differenced ARIMA(1, d, 0) input process. This process has two parameters, φ1 and d, representing an auto-regressive component and a long-range dependent component, respectively. Results show that the queue length statistics studied (mean, variance and the 0.999 quantile) are proportional to exp(c1·φ1)·exp(c2·d), where (c1, c2) are positive constants, and c2 ...
Article
We analyze 20 large sets of actual variable-bit-rate (VBR) video data, generated by a variety of different codecs and representing a wide range of different scenes. Performing extensive statistical and graphical tests, our main conclusion is that long-range dependence is an inherent feature of VBR video traffic, i.e., a feature that is independent of scene (e.g., video phone, video conference, motion picture video) and codec. In particular, we show that the long-range dependence property allows us to clearly distinguish between our measured data and traffic generated by VBR source models currently used in the literature. These findings give rise to novel and challenging problems in traffic engineering for high-speed networks and open up new areas of research in queueing and performance analysis involving long-range dependent traffic models. A small number of analytic queueing results already exist, and we discuss their implications for network design and network control strategies in the presence of long-range dependent traffic
Article
This paper introduces a class of methods called TES (Transform-Expand-Sample) for generating autocorrelated variates with uniform marginals and Markovian structure. TES methods are readily implemented on a computer and have generation complexity comparable to that of the i.i.d. uniform sequence which they transform to an autocorrelated uniform sequence. For any prescribed correlation coefficient ρ, there is a TES method generating a uniform sequence with the 1-lag autocorrelation ρ, and the resultant autocorrelation is monotonic quadratic in two structural TES parameters. A simulation study reveals that TES methods give rise to autocorrelation functions with monotone decreasing as well as oscillating magnitude, bounded by monotone envelopes. The structural parameters were found to control the “amplitude” and “frequency” of the resultant autocorrelation function. A third parameter can be used to transform a TES sequence into more continuous-looking versions and to control the skewness of sample path cycles. INFORMS Journal on Computing, ISSN 1091-9856, was published as ORSA Journal on Computing from 1989 to 1995 under ISSN 0899-1499.
Article
The family of autoregressive integrated moving-average processes, widely used in time series analysis, is generalized by permitting the degree of differencing to take fractional values. The fractional differencing operator is defined as an infinite binomial series expansion in powers of the backward-shift operator. Fractionally differenced processes exhibit long-term persistence and antipersistence; the dependence between observations a long time span apart decays much more slowly with time span than is the case with the more commonly studied time series models. Long-term persistent processes have applications in economics and hydrology; compared to existing models of long-term persistence, the family of models introduced here offers much greater flexibility in the simultaneous modelling of the short-term and long-term behaviour of a time series.
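As a concrete illustration of the fractional differencing idea above, here is a minimal sketch that expands the fractional integration operator (1 − B)^(−d) as a binomial series and uses the truncated weights to simulate a FARIMA(0, d, 0) series; the value of d and the truncation length are illustrative assumptions.

```python
import numpy as np

def farima_0d0(n, d=0.3, trunc=1000, seed=0):
    """Simulate a FARIMA(0, d, 0) series X_t = (1 - B)^(-d) w_t by
    truncating the binomial-series expansion of the fractional
    integration operator to `trunc` terms (an approximation).

    Moving-average weights: psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k.
    For 0 < d < 0.5 the series is stationary with long memory (LRD) and
    Hurst parameter H = d + 0.5.
    """
    psi = np.empty(trunc)
    psi[0] = 1.0
    for k in range(1, trunc):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    w = np.random.default_rng(seed).normal(size=n + trunc)   # white noise
    # X_t = sum_k psi_k * w_{t-k}  (discrete convolution)
    return np.convolve(w, psi, mode="full")[trunc : trunc + n]

x = farima_0d0(5000)
print(x[:5])
```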
Article
1. PRESENTATION. This paper describes the four concepts in the title, first separately, then through their interactions. The "H-spectrum hypothesis" and the "infinite variance hypothesis" are divergence hypotheses introduced to account for the main erratic aspects in the behavior of economic time series. Next to be studied are "long-run linearity" and "locally Gaussian processes," new models to be added to the toolbox of econometrics to clarify the relations between the two divergence hypotheses. As a foil to the properties of economic time series, let us note that sequences of independent and identically distributed Gaussian random variables possess the following properties: (a) different sample functions of such a process look, from a distance, remarkably alike; (b) when analyzed by spectral analysis, each sample function seems to have a "white spectrum," that is, an almost constant spectral density; (c) different samples of sufficient length, taken from the same independent Gaussian process, yield essentially identical estimates of the population mean and variance. Clearly, most economic time series do not fulfill the independent Gaussian ideal of simplicity, even after possible gross nonstationarity has been eliminated by differencing. (For example, when dealing with prices, we shall consider sequences of price increments rather than sequences of the prices themselves.) For analysis, discrepancies from independent Gaussian processes can be divided into two classes. The first includes "high frequency effects," to which the bulk of econometrics has so far been devoted. An example of such an effect is the existence of non-vanishing correlation between successive, or nearly successive, values of a time series. The second includes "low frequency effects," with which my own past and present work is concerned. To use a medical term, we shall say that, when various low frequency "symptoms" occur simultaneously, they add up to a "syndrome." I have tried to postpone to Sections 2 and 3 of this paper most of the technical arguments, and to devote Section 1 to a comparatively informal presentation of the main points. First, two "low frequency syndromes" of economics will be described. Next, they will be shown incompatible within the framework of the usual econometric models. Finally, a generalized model will be constructed, which allows for both these syndromes.
Conference Paper
TES (transform-expand-sample) is a versatile class of stationary stochastic processes which can model arbitrary marginals, a wide variety of autocorrelation functions, and a broad range of sample path behaviors. TES models include one set of parameters for exact fitting of the empirical distribution (histogram), and another for approximating the empirical autocorrelation function. The former is easy to determine algorithmically, but the latter involves a hard heuristic search on a large parametric function space. This paper describes an algorithmic procedure which largely automates TES modeling. The algorithm is cast in a nonlinear programming setting with the objective of minimizing a weighted square distance between the empirical autocorrelation function and its candidate TES-model counterpart. It combines a brute-force search with a steepest-descent nonlinear programming technique, and it performs well owing to the simplicity of the constraints and the nice local behavior of the objective function. Finally, we illustrate the efficacy of our approach via two examples from the domain of VBR (variable bit rate) compressed video
Conference Paper
Two new algorithms are presented for the generation of long-memory signals using lattice filter structures. Currently, the best known generation methods make use of the Levinson-Durbin recursion, which requires O(N^2) computations to compute the model. Two new synthesis methods for fractional difference signals are given which exploit the a priori knowledge of the partial correlation coefficients to reduce the computations by an order of magnitude, from O(N^2) to O(N). A synthesis technique is also given for fractional noise signals using lattice filters where the lattice coefficients are determined using the Schur algorithm, again with a computational savings over the Levinson-Durbin recursion. The lattice filter also has the distinct advantage of guaranteeing a stable system, which is an important issue for long-memory signals, which have a fractional-order pole on the unit circle.
Conference Paper
The authors analyze teleconference traffic, with moderate motion and scene changes, generated by different video codecs. These codecs differ in several aspects of coding, such as in the use of DCT and motion compensation. The results are that, even when traffic is generated using different coding schemes, the number of cells per frame can be described by a gamma (or equivalently negative binomial) distribution and a DAR(1) model determined by three traffic parameters (the mean, variance, and correlation) can be used to accurately model the source. The main contribution of the paper is in showing that the authors' previously published results on source modeling and marginal distributions, which were based on analysis of traffic generated by one type of coder, hold for coders which differ in various ways and particularly differ in the use of motion compensation
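A minimal sketch of a DAR(1) source with a gamma marginal, driven by the three traffic parameters named in the abstract (mean, variance, correlation), is shown below; the parameter values are illustrative, not those of any measured codec.

```python
import numpy as np

def dar1_gamma(n, mean, var, rho, seed=0):
    """DAR(1) source for cells per frame with a gamma marginal.

    X_k = V_k * X_{k-1} + (1 - V_k) * Z_k, with V_k ~ Bernoulli(rho)
    and Z_k i.i.d. gamma.  The marginal of X is that of Z and the
    lag-j autocorrelation is rho**j.  mean, var, rho are the three
    traffic parameters; the values used below are illustrative.
    """
    shape, scale = mean**2 / var, var / mean    # gamma moment matching
    rng = np.random.default_rng(seed)
    z = rng.gamma(shape, scale, size=n)
    v = rng.random(n) < rho
    x = np.empty(n)
    x[0] = z[0]
    for k in range(1, n):
        x[k] = x[k - 1] if v[k] else z[k]
    return x

frames = dar1_gamma(10_000, mean=130.0, var=400.0, rho=0.98)
print(frames.mean(), frames.var(), np.corrcoef(frames[:-1], frames[1:])[0, 1])
```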
Conference Paper
Some of the source modeling and performance issues related to providing video teleconference services over asynchronous transfer mode (ATM) networks were studied. Under certain circumstances, traffic periodicity (due to the constant video frame rate) can cause different sources with identical statistical characteristics to experience cell-loss rates which can differ by several orders of magnitude. Some of this source-periodicity effect can be mitigated by appropriate buffer scheduling. For the video teleconference sequence analyzed (without scene changes or scene cuts and with moderate motion), the number of cells per frame is not normally distributed. Instead, it follows a gamma (or negative binomial) distribution. For traffic studies, an autoregressive model of order 2 and a two-state Markov chain model either underestimate or overestimate the occurrence of frames with a large number of cells, and these frames are a primary factor in determining cell-loss rates. The order-2 autoregressive model, however, fits the data well in a statistical sense. A multistate Markov chain model which can be derived from three traffic parameters (mean, correlation, and variance) is sufficiently accurate for use in traffic studies
Article
A number of empirical studies of traffic measurements from a variety of working packet networks have demonstrated that actual network traffic is self-similar or long-range dependent in nature, in sharp contrast to commonly made traffic modeling assumptions. We provide a plausible physical explanation for the occurrence of self-similarity in local-area network (LAN) traffic. Our explanation is based on convergence results for processes that exhibit high variability and is supported by detailed statistical analyses of real-time traffic measurements from Ethernet LANs at the level of individual sources. This paper is an extended version of Willinger et al. (1995). We develop here the mathematical results concerning the superposition of strictly alternating ON/OFF sources. Our key mathematical result states that the superposition of many ON/OFF sources (also known as packet-trains) with strictly alternating ON- and OFF-periods and whose ON-periods or OFF-periods exhibit the Noah effect produces aggregate network traffic that exhibits the Joseph effect. There is, moreover, a simple relation between the parameters describing the intensities of the Noah effect (high variability) and the Joseph effect (self-similarity). An extensive statistical analysis of high time-resolution Ethernet LAN traffic traces confirms that the data at the level of individual sources or source-destination pairs are consistent with the Noah effect. We also discuss implications of this simple physical explanation for the presence of self-similar traffic patterns in modern high-speed network traffic.
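To illustrate the construction described above, here is a minimal sketch that superposes many strictly alternating ON/OFF sources whose period lengths are Pareto distributed (the Noah effect), so that the aggregate rate process is bursty over a wide range of time scales (the Joseph effect); the source count, tail index, and mean period are illustrative assumptions.

```python
import numpy as np

def onoff_aggregate(n_sources=200, n_slots=100_000, alpha=1.4,
                    mean_period=20.0, seed=0):
    """Aggregate rate of n_sources strictly alternating ON/OFF sources.

    ON and OFF period lengths (in time slots) are Pareto distributed with
    tail index alpha (1 < alpha < 2 gives infinite variance, the 'Noah
    effect').  Each source emits 1 unit per slot while ON.  All parameter
    values are illustrative.
    """
    rng = np.random.default_rng(seed)
    xm = mean_period * (alpha - 1) / alpha   # Pareto scale for the given mean
    total = np.zeros(n_slots)
    for _ in range(n_sources):
        t, on = 0, rng.random() < 0.5        # random initial ON/OFF state
        while t < n_slots:
            u = 1.0 - rng.random()           # U in (0, 1]
            length = int(np.ceil(xm / u ** (1.0 / alpha)))  # Pareto draw
            if on:
                total[t:t + length] += 1.0
            t += length
            on = not on
    return total

agg = onoff_aggregate()
print("mean rate:", agg.mean(), "std:", agg.std())
```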
Article
Models for predicting the performance of multiplexed variable bit rate video sources are important for engineering a network. However, models of a single source are also important for parameter negotiations and call admittance algorithms. In this paper we propose to model a single video source as a Markov renewal process whose states represent different bit rates. We also propose two novel goodness-of-fit metrics which are directly related to the specific performance aspects that we want to predict from the model. The first is a leaky bucket contour plot which can be used to quantify the burstiness of any traffic type. The second measure applies only to video traffic and measures how well the model can predict the compressed video quality
Article
Demonstrates that Ethernet LAN traffic is statistically self-similar, that none of the commonly used traffic models is able to capture this fractal-like behavior, that such behavior has serious implications for the design, control, and analysis of high-speed, cell-based networks, and that aggregating streams of such traffic typically intensifies the self-similarity (“burstiness”) instead of smoothing it. These conclusions are supported by a rigorous statistical analysis of hundreds of millions of high quality Ethernet traffic measurements collected between 1989 and 1992, coupled with a discussion of the underlying mathematical and statistical properties of self-similarity and their relationship with actual network behavior. The authors also present traffic models based on self-similar stochastic processes that provide simple, accurate, and realistic descriptions of traffic scenarios expected during B-ISDN deployment
Article
Results are presented of a simulation study of the potential multiplexing gains from using variable-bit-rate (VBR) encoding to multiplex video teleconferencing traffic over an asynchronous transfer mode (ATM) network. Simulated traffic from several video teleconferences is fed into a buffer and multiplexed onto a higher-speed output line. The cell loss resulting from buffer overflow is observed. The major results are as follows: (1) the spacing between cells has a large role in determining the aggregate cell loss rate seen by a collection of calls. The cell loss rate is reduced when either the frame start times are evenly spaced or cells are evenly distributed in the video frame. (2) After the cell loss rate becomes nonnegligible, it grows rapidly as a function of the number of access lines. (3) The cell losses are not distributed uniformly over time but are clustered. A consequence is that the average time between clusters of cell loss and the time to first loss is greater than if the losses were spaced uniformly. (4) A simple model is found to describe when a cluster of cell losses occurs, the duration of the cluster, and how many cells are lost in the cluster
Article
Source modeling and performance issues are studied using a long (30 min) sequence of real video teleconference data. It is found that traffic periodicity can cause different sources with identical statistical characteristics to experience differing cell-loss rates. For a single-stage multiplexer model, some of this source-periodicity effect can be mitigated by appropriate buffer scheduling and one effective scheduling policy is presented. For the sequence analyzed, the number of cells per frame follows a gamma (or negative binomial) distribution. The number of cells per frame is a stationary stochastic process. For traffic studies, neither an autoregressive model of order two nor a two-state Markov chain model is good because they do not model correctly the occurrence of frames with a large number of cells, which are a primary factor in determining cell-loss rates. The order two autoregressive model, however, fits the data well in a statistical sense. A multistate Markov chain model that can be derived from three traffic parameters is sufficiently accurate for use in traffic studies
Article
An abstract model for aggregated connectionless traffic, based on the fractional Brownian motion, is presented. Insight into the parameters is obtained by relating the model to an equivalent burst model. Results on a corresponding storage process are presented. The buffer occupancy distribution is approximated by a Weibull distribution. The model is compared with publicly available samples of real Ethernet traffic. The degree of the short-term predictability of the traffic model is studied through an exact formula for the conditional variance of a future value given the past. The applicability and interpretation of the self-similar model are discussed extensively, and the notion of ideal free traffic is introduced
Article
The authors propose a new method for the modeling and call admission control (CAC) of variable bit rate video sources, which have come to the fore as hot issues in ATM networks. First, the modeling of the video source is accomplished using a three-state Markov chain that includes the effect of scene changes, at which the bit rate of the video source abruptly increases. Also, by using two AR models, they mitigate the shortcomings that a single AR model has in modeling a video source. In addition, they present an analytical model of the video source so that a network manager can acquire information that is very important in managing the entire network. CAC is accomplished using the previously defined analytical model. A routing manager calculates the cell loss probability of the chosen VP to which a new call would be connected, and decides whether this new call is accepted or not. This calculation is accomplished through the GB/D/1-S queuing system. Using the BIA (bandwidth increasing algorithm), they check whether calls rejected by the routing manager could still be accepted if possible. Finally, the procedures for suitably allocating bandwidth to each VP on a link are presented in detail.
Article
A method of characterizing video codec sources in asynchronous transfer mode (ATM) networks as an autoregressive moving average (ARMA) process is described. Measurements of long-term mean and the autocorrelation function of cell interarrival times allow the parameter estimation of the ARMA model. The video source is then described by ARMA model. Furthermore, it is shown that the multiplexed stream of video cells is also an ARMA process. Such a cell stream is then applied to a model of a queuing system to obtain performance measures of the system. Perturbation analysis is then performed on the functional behavior of the queuing system by appropriate perturbation of the model parameters to determine cell waiting time sensitivity due to slight variations of the input process
Article
As ATM high-speed, cell-relay networks will most likely first make their impact as backbones interconnecting enterprise networks consisting of Ethernet and other LANs, their proper design and control is crucial. Recent studies of high-quality, high-resolution traffic measurements in Bellcore Ethernets have revealed that this aggregate Ethernet traffic is self-similar ("fractal") in nature, quite different in "burstiness" features from traffic considered and studied up to now. This paper presents an analytical study of an ATM buffer driven with self-similar traffic. The probability of buffer occupancy is obtained. It is shown that this probability decreases with the buffer size not exponentially, as in traditional Markovian traffic models, but algebraically. 1 Introduction. Recent studies of high-quality, high-resolution traffic measurements have revealed a new phenomenon with potentially important ramifications to the modeling, design and control of broadband networks. These include ...
When Traffic Measurements Defy Traditional Traffic Models (and Vice Versa): Traffic Modeling for High-Speed Networks
  • W Willinger
W. Willinger, "When Traffic Measurements Defy Traditional Traffic Models (and Vice Versa): Traffic Modeling for High-Speed Networks," Presentation at Georgia Tech, 1994.
Statistical Analysis and Simulation Study of Video Teleconference Traffic in ATM Networks
  • D Heyman
  • A Tabatabai
  • T Lakshman
[14] D. Heyman, A. Tabatabai, and T. Lakshman, "Statistical Analysis and Simulation Study of Video Teleconference Traffic in ATM Networks," IEEE Trans. Circuits and Sys. for Video Tech., vol. 2, Mar. 1992, pp. 49-59. [15] M. Hayes, Statistical Digital Signal Processing and Modeling, John Wiley, 1996. [16] R. Grunenfelder et al., "Characterization of Video Codecs as Autoregressive Moving Average Processes and Related Queuing System Performance," IEEE JSAC, vol. 9, Apr. 1991, pp. 284-93.
Performance Models of Statistical Multiplexing in Packet Video Communications
  • I Nikolaidis
  • I Akyildiz
  • J Hui
  • B Maglaris
I. Nikolaidis and I. Akyildiz, "Source Characterization and Statistical Multiplexing in ATM Networks," Tech. Rep. GIT-CC 92-24, Georgia Tech., 1992. [6] J. Hui, Switching and Traffic Theory for Integrated Broadband Networks, Kluwer Academic, 1990. [7] B. Maglaris et al., "Performance Models of Statistical Multiplexing in Packet Video Communications," IEEE Trans. Commun., vol. 36, July 1988. [8] P. Sen et al., "Models for Packet Switching of Variable-Bit-Rate Video Sources," IEEE JSAC, vol. 7, no. 5, 1989. [9] A. Papoulis, Probability, Random Variables, and Stochastic Processes, 3rd ed., McGraw Hill, 1991. [10] G. Box, G. Jenkins, and G. Reinsel, Time Series Analysis, 3rd ed., Prentice Hall, 1994. [11] C. Shim et al., "Modeling and Call Admission Control Algorithm of Variable Bit Rate Video in ATM Networks," IEEE JSAC, vol. 12, Feb. 1994. [12] D. Cohen and D. Heyman, "Performance Modeling of Video Teleconferencing in ATM Networks," IEEE Trans. Circuits and Sys. for Video Tech., vol. 3, Dec. 1993, pp. 408-20.
On the Self-similar Nature of Ethernet Traffic (Extended Version)
  • W Leland
W. Leland et al., "On the Self-similar Nature of Ethernet Traffic (Extended Version)," IEEE/ACM Trans. Networking, Feb. 1994, pp. 1-15.
On Long-Range Dependence in NSFNET Traffic
  • S Klivanski
  • A Mukherjee
  • C Song
S. Klivanski, A. Mukherjee, and C. Song, "On Long-Range Dependence in NSFNET Traffic," Tech. Rep. GIT-CC-94-61, Georgia Tech., 1994. [22] B. B. Mandelbrot and J. R. Wallis, "Some Long-Run Properties of Geophysical Records," Water Resources Res., vol. 5, 1969, pp. 321-40.
  • G Box
  • G Jenkins
  • G Reinsel
G. Box, G. Jenkins, and G. Reinsel, Time Series Analysis, 3rd ed., Prentice Hall, 1994.