Conference Paper

Machine Learning Enabling Traffic-Aware Dynamic Slicing for 5G Optical Transport Networks


Abstract

We demonstrate a machine-learning-based, traffic-aware approach for dynamic network slicing in optical networks. Experimental results indicate that the proposed framework achieves 96% traffic prediction accuracy, a 49% reduction in blocking, and a 29% reduction in delay compared with conventional solutions.


... Therefore, optimal resource allocation algorithms are vital to optimize link resource utilization, meet the various QoS requirements of services, and provide high admission ratios for higher-priority slices under congested scenarios [18], [19]. Currently, few works in the literature focus on link resource management (e.g., bandwidth management) in a multi-slice scenario with prioritized demands, despite its necessity for the realization of future network services characterized by massive bandwidth requirements [20], [21], [22]. Regarding bandwidth resource management under a multi-slice scenario, SKM exhibits competitiveness compared to Bandwidth Allocation Models (BAMs) and best-effort algorithms, especially during congested scenarios [23]. ...
... For the case of permanent demands (without lifetime), the total acceptance ratio (AR), the total blocking probability (Bp), the total utilization (U), the acceptance ratio per slice (ARc), the blocking probability per slice (Bpc), the utilization per slice (Uc), load balancing, and overloaded links can be evaluated as in Eqs. (18)-(25) below: Acceptance ratio, AR: ...
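The excerpt lists the metrics but the equations themselves were lost in extraction; the following LaTeX reconstructs the standard definitions, which may differ in notation from the paper's actual Eqs. (18)-(25):

```latex
% Standard definitions of the named metrics; a reconstruction, since the
% paper's exact Eqs. (18)-(25) are not available in this excerpt.
\begin{align}
\mathrm{AR}   &= \frac{N_{\mathrm{accepted}}}{N_{\mathrm{total}}}, &
\mathrm{Bp}   &= 1 - \mathrm{AR}, &
U             &= \frac{\sum_{\ell} b_{\ell}}{\sum_{\ell} C_{\ell}},\\
\mathrm{AR}_c &= \frac{N_{\mathrm{accepted},c}}{N_{\mathrm{total},c}}, &
\mathrm{Bp}_c &= 1 - \mathrm{AR}_c, &
U_c           &= \frac{\sum_{\ell} b_{\ell,c}}{\sum_{\ell} C_{\ell}},
\end{align}
```

where N counts demands, b_l is the bandwidth allocated on link l (restricted to slice c for b_{l,c}), and C_l is the link capacity.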
Article
Full-text available
Future networks starting from 5G will depend on network slicing to meet a wide range of network services (NSs) with various quality of service (QoS) requirements. With the powerful Network Function Virtualization (NFV) technology available, network slices can be rapidly deployed and centrally managed, giving rise to simplified management, high resource utilization, and cost-efficiency. This is achieved by realizing NSs on general-purpose hardware, hence replacing traditional middleboxes. However, fast deployment of end-to-end network slices still requires intelligent resource allocation algorithms to use the network resources efficiently and ensure QoS among different slice categories during congestion. This is especially important at the links of the network because of the scarcity of their resources. Consequently, this paper proposes a paradigm based on the NFV architecture that provides the massive computational capacity required by the NSs and supports a resource allocation strategy for multi-slice networks, optimizing resource utilization through a proposed and analyzed Squatting-Kicking model (SKM). SKM is a suitable algorithm for dynamically allocating network resources to slices of different priorities along paths and improving resource utilization under congested scenarios. Simulation results show that the proposed service deployment algorithm achieves 100% overall resource utilization and admission of higher-priority slices in some bandwidth-constrained scenarios, which other existing schemes cannot achieve due to priority constraints.
... However, we note that the feasibility of simple ANN solutions in the context of slice traffic forecasting was also investigated by some works. In particular, the work in [99] selects a three-layer ANN design to forecast the traffic load of each slice, with a 15-minute interval and a 24-hour time window, to proactively allocate resources in the optical transport network. The numerical results demonstrate that allocating an adequate amount of resources in advance yields lower delay and lower blocking probability. ...
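To make the cited design concrete, here is a minimal sketch of a three-layer (input/hidden/output) ANN forecaster over 15-minute samples with a 24-hour window, using scikit-learn on synthetic diurnal traffic; the layer size, training setup, and data are illustrative assumptions, not the cited work's configuration:

```python
# Three-layer ANN slice-traffic forecaster: a 24-hour window of 15-minute
# samples (96 values) predicts the next interval's load. Synthetic data and
# all hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

WINDOW = 96  # 24 h of 15-min samples

def make_windows(series, window=WINDOW):
    """Slice a 1-D load series into (window -> next value) training pairs."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

# Synthetic diurnal slice load: daily sine pattern plus noise.
rng = np.random.default_rng(0)
t = np.arange(96 * 30)  # 30 days of 15-min samples
load = 50 + 30 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 3, t.size)

X, y = make_windows(load)
split = 96 * 25  # train on 25 days, test on the rest
# One hidden layer gives the "three-layer" input/hidden/output design.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X[:split], y[:split])
print("test R^2:", model.score(X[split:], y[split:]))
```

In a deployment like the one described, the prediction for the next interval would drive the proactive resource allocation step.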
Article
Full-text available
5G and beyond networks are expected to support a wide range of services with highly diverse requirements. Yet, the traditional “one-size-fits-all” network architecture lacks the flexibility to accommodate these services. In this respect, network slicing has been introduced as a promising paradigm for 5G and beyond networks, supporting not only traditional mobile services but also vertical industry services with very heterogeneous requirements. Along with its benefits, the practical implementation of network slicing brings many challenges. Thanks to recent advances in machine learning (ML), some of these challenges have been addressed. In particular, the application of ML approaches is enabling the autonomous management of resources in the network slicing paradigm. Accordingly, this paper presents a comprehensive survey of contributions on ML in network slicing, identifying major categories and sub-categories in the literature. Lessons learned are also presented, and open research challenges are discussed together with potential solutions.
... However, with the increasing number of users in recent years, the fixed structure of the access network has been unable to meet the flexibility requirements of the new generation of access networks, and access resources have been seriously wasted. The convergence of network slicing technology and access network technology has become an inevitable trend [27-29]. Figure 1 shows the proposed slicing architecture of the Fi-Wi network. Infrastructure resources of the Fi-Wi access network are mapped to the SDN controller by NFV technology and flexibly allocated to different services by the SDN controller to form different slices. ...
Article
With the development of information technology, optical access networks gradually take on multi-domain, multi-layer, and multi-vendor characteristics. Network slicing technology provides a feasible scheme for the flexible management of heterogeneous resources, and the Fi-Wi access network has attracted much attention because of its unique advantages in integrating with network slicing. This paper proposes an elastic, adaptive network slicing scheme based on multi-priority cooperative prediction in the Fi-Wi access network. In this scheme, a priority model is used to sort the processing order of requests, reducing the delay of delay-sensitive requests, and a depth-first search algorithm selects links with more idle resources to ensure more uniform resource allocation in the access network. At the same time, an LSTM model is used to predict the traffic, and the resource boundary is changed in advance to offset the extra delay caused by moving the resource isolation boundary when the intra-slice reserved resources are insufficient. Simulation results show that the proposed scheme not only preferentially processes delay-sensitive requests but also selects links that reserve more resources for requests. In addition, compared with static slicing and dynamic slicing based on DES prediction, the proposed scheme has a lower bandwidth blocking probability.
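A hedged sketch of two of the scheme's ingredients, priority-ordered request processing and a depth-first search preferring links with more idle bandwidth, follows; the topology, priority classes, and static link state (no bandwidth bookkeeping) are illustrative assumptions:

```python
# Requests are served in priority order (delay-sensitive first) and routed by
# a DFS that explores neighbours with more idle bandwidth first.
import heapq

def dfs_most_idle_path(graph, src, dst, demand, path=None, visited=None):
    """Return a path whose every link has idle bandwidth >= demand,
    exploring neighbours in order of idle capacity (most idle first)."""
    path = (path or []) + [src]
    visited = (visited or set()) | {src}
    if src == dst:
        return path
    for nxt, idle in sorted(graph[src].items(), key=lambda kv: -kv[1]):
        if nxt not in visited and idle >= demand:
            found = dfs_most_idle_path(graph, nxt, dst, demand, path, visited)
            if found:
                return found
    return None

# Idle bandwidth per directed link (illustrative Fi-Wi-like topology).
graph = {
    "OLT": {"A": 40, "B": 90},
    "A": {"ONU": 60},
    "B": {"ONU": 80},
    "ONU": {},
}

# (priority, arrival order, name, demand): lower value = more delay-sensitive.
requests = [(2, 0, "video", 30), (0, 1, "telesurgery", 50), (1, 2, "gaming", 20)]
heapq.heapify(requests)
while requests:
    prio, _, name, demand = heapq.heappop(requests)
    print(name, "->", dfs_most_idle_path(graph, "OLT", "ONU", demand))
```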
... Geographically close users cannot use the same block of resources if interference is to be mitigated [11]. This research therefore develops a machine learning model that distributes radio resource blocks to each user. ...
Preprint
Full-text available
The demand for the fastest communication is a key concern for IoT technology, given recent advances in the Internet of Things (IoT). With the advent of 5G telecommunications networks, the gap in quality of service (QoS) satisfaction for IoT communication can be bridged. Hence, through integration with the 5G telecommunications network, a large number of devices will no longer be under limited resource assignment. In this article, we address the above allocation limitation with machine learning, using an Artificial Neural Network (ANN) over prominent IoT devices. The adoption of rules in the ANN implies allocating resources to the most important devices and reducing allocations on the basis of priority. A simulation was conducted to test the effectiveness of the fuzzy system with 5G resources allocated to the IoT model. The findings indicate that the ANN model allocates resources more effectively and is more energy efficient than other methods.
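The priority rule sketched below mirrors the abstract's description, granting resource blocks to the most important devices first and scaling grants down with priority; the fixed importance scores stand in for the paper's ANN/fuzzy scoring and are assumptions:

```python
# Priority-based block allocation: most important devices served first,
# with grants scaled by importance. Device list and scores are illustrative;
# in the cited work an ANN would produce the importance values.
def allocate_blocks(devices, total_blocks):
    """devices: list of (name, importance in (0, 1], demand_in_blocks)."""
    granted = {}
    for name, importance, demand in sorted(devices, key=lambda d: -d[1]):
        share = min(demand, int(round(demand * importance)), total_blocks)
        granted[name] = share
        total_blocks -= share
    return granted

devices = [("hr-monitor", 1.0, 20), ("smart-meter", 0.5, 20), ("sensor", 0.2, 20)]
print(allocate_blocks(devices, 40))  # {'hr-monitor': 20, 'smart-meter': 10, 'sensor': 4}
```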
... Chuang Song et al. [52] describe a dynamic traffic slicing model based on ML (ML-TADS). This model enables competent traffic management in the network, distributing traffic so that the load is uniform and no base station is congested while another carries no traffic at all. ...
Article
Full-text available
Today, the traffic amount is growing inexorably due to the increase in the number of devices on the network. Researchers analyze traffic by identifying sophisticated dependencies, anomalies, and novel traffic patterns to improve system performance. One of the fast-developing niches in this domain is related to classic and deep machine learning techniques, which are expected to improve network operation in the most complex heterogeneous environments. In this work, we first outline existing applications of Machine Learning in the communications domain and further list the most significant challenges and potential solutions for implementing them. Finally, we compare different classical methods for predicting traffic at the LTE network edge, utilizing such techniques as Linear Regression, Gradient Boosting, Random Forest, Bootstrap Aggregation (Bagging), Huber Regression, Bayesian Regression, and Support Vector Machines (SVM). We develop the corresponding Machine Learning environment based on a public cellular traffic dataset and present a comparison table of the quality metrics and execution time for each model. After the analysis, the SVM method proved to allow much faster training than the other algorithms. Gradient Boosting showed the best prediction quality, as it determines the data most efficiently. Random Forest showed the worst result, since it depends on the number of features, which may be limited. The probabilistic Bayesian Regression method showed slightly worse results than Gradient Boosting, but its training time was shorter. The performance evaluation demonstrated good results for linear models with the Huber loss function, which optimizes the model parameters better. As a standalone contribution, we offer the source code of the analyzed algorithms in Open Access.
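A minimal re-creation of such a comparison loop, timing and scoring several scikit-learn regressors, is shown below; the synthetic series replaces the public cellular dataset, so the numbers are illustrative only:

```python
# Compare several regressors on synthetic "edge traffic" data by RMSE, MAE,
# and training time, mirroring the paper's comparison table in miniature.
import time
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import BayesianRidge, HuberRegressor, LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(0, 24, (2000, 3))           # e.g. hour, cell load, day index
y = 10 + 5 * np.sin(X[:, 0]) + X[:, 1] + rng.normal(0, 1, 2000)
Xtr, Xte, ytr, yte = X[:1500], X[1500:], y[:1500], y[1500:]

models = {
    "Linear": LinearRegression(),
    "Huber": HuberRegressor(),
    "Bayesian": BayesianRidge(),
    "RandomForest": RandomForestRegressor(random_state=0),
    "GradBoost": GradientBoostingRegressor(random_state=0),
    "SVM": SVR(),
}
for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(Xtr, ytr)
    pred = model.predict(Xte)
    rmse = mean_squared_error(yte, pred) ** 0.5
    print(f"{name:12s} RMSE={rmse:.3f} "
          f"MAE={mean_absolute_error(yte, pred):.3f} "
          f"fit={time.perf_counter() - t0:.2f}s")
```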
... Nevertheless, the issue of stable VNE over fiber networks has recently received attention [19]. Some studies have tackled the VN embedding issue over fiber networks with special path coverage while minimizing spectrum usage, the number of regenerations, or energy consumption [20,21]. The problem was formulated as a path-based ILP, where a disjoint backup path is pre-computed for each of the main available paths [22]. ...
Conference Paper
A virtualized multi-domain architecture supports 5G fixed-line communications. Conventional virtual network embedding (VNE) software creates considerable redundancy and overlay communication given the large number of underlying network accesses. At the same time, many internet infrastructure providers are reluctant to disclose specific topology information in consideration of their economic interests. Therefore, based on resource utilization, per-domain virtual network coordination is created to optimize network resources seamlessly for every autonomous domain. Mapping multiple virtual network connections onto a single link, under the premise of the VNE fiber environment and accounting for differences in virtual throughput, can decrease the cost of virtual network embedding. This study found that mapping multiple links onto single-link network connections improved the 5G network-based VNE system, and it explores the advantages of multiple-link-to-single-link mapping as well as static versus dynamic network configuration. First, the various link resources and inter-domain connectivity capabilities of fiber-based VNE networks are considered, and the fiber-connectivity virtual network integration process provides efficient routing based on a purpose-designed predator algorithm. Second, an algorithm with relatively small overhead is built on inter-domain maps. The simulations show that embedding multi-domain virtual networks onto a single-domain network can minimize the cost of virtual network embedding in multi-domain systems while maintaining high stability and scalability.
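As a hedged illustration of the multiple-virtual-links-to-single-substrate-link idea, the sketch below bundles virtual links onto one physical link only while the aggregate bandwidth fits; the link capacity and demands are assumptions:

```python
# Greedy bundling of virtual links onto a single substrate link: accept a
# virtual link only if the aggregate bandwidth still fits the free capacity.
def embed_bundle(substrate_free_bw, virtual_links):
    """virtual_links: list of (name, bw). Returns (accepted names, used bw)."""
    accepted, used = [], 0
    for name, bw in sorted(virtual_links, key=lambda v: v[1]):
        if used + bw <= substrate_free_bw:
            accepted.append(name)
            used += bw
    return accepted, used

links = [("vA", 30), ("vB", 50), ("vC", 40)]
print(embed_bundle(100, links))  # (['vA', 'vC'], 70): vB no longer fits
```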
... Classification maps the input space onto predefined classes, while the regression approach maps the input space onto the real-valued domain [6]. The most commonly used supervised learning algorithms are: ...
Chapter
Full-text available
This paper focuses on comparing the various machine learning (ML) algorithms applicable in wavelength division multiplexing (WDM) optical networks to provide better simulation outcomes. ML, combined with WDM optical networks, helps in network control and resource management, which are useful in service provisioning and resource assignment. This paper gives a comprehensive review of machine learning approaches in WDM optical networks concerning support vector machine (SVM), K-nearest neighbour (K-NN), decision tree, random forest, and neural network algorithms. These algorithms' performances are compared in terms of accuracy and AUC, with average outcomes of 99% and 0.98, respectively. Simulation can be performed on MATLAB and Net2plan tools using different data sets in terms of average accuracy and AUC for WDM optical networks. Future directions for this research include using ML to provide optimal routing and wavelength assignment, increase bandwidth utilization to reduce control overheads, reduce computational complexity, and support security, fault-occurrence, and monitoring schemes for WDM optical networks serving 5G applications.
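A minimal sketch of such an accuracy/AUC comparison with scikit-learn follows, on a synthetic binary task since the WDM datasets are not available here:

```python
# Compare the classifiers the chapter reviews by accuracy and AUC on a
# synthetic binary task; results are illustrative, not the chapter's.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = {
    "SVM": SVC(probability=True, random_state=0),
    "K-NN": KNeighborsClassifier(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(Xtr, ytr)
    acc = accuracy_score(yte, model.predict(Xte))
    auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
    print(f"{name:12s} accuracy={acc:.3f} AUC={auc:.3f}")
```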
... [13]. In optical metro and core networks, time-series modelling and forecasting have been introduced for traffic prediction [14,15]. However, considering the highly dynamic nature of the traffic patterns of cellular sites in a mobile network, it is challenging to predict and characterize these patterns with high accuracy [16], and research on such networks is limited. ...
Article
Full-text available
The extremely high number of services with large bandwidth requirements and the increasingly dynamic traffic patterns of cell sites pose major challenges to optical fronthaul networks, rendering them incapable of coping with the extensive, uneven, and real-time traffic that will be generated in the future. In this paper, we first present the design of an adaptive graph convolutional network with gated recurrent unit (AGCN-GRU) to learn the temporal and spatial dependencies of cell-site traffic patterns and provide accurate traffic predictions, in which the AGCN model captures potential spatial relations according to the similarity of network traffic patterns in different areas. Then, we consider how to deal with unpredicted burst traffic and propose an AI-assisted, intent-based traffic grooming scheme to realise automatic and intelligent cell-site clustering and traffic grooming. Finally, a software-defined testbed for the 5G optical fronthaul network was established, on which the proposed schemes were deployed and evaluated using traffic datasets of existing optical networks. The experimental results showed that the proposed scheme can optimize network resource allocation, increase average effective resource utilization, and reduce the average delay and rejection ratio.
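One idea from this abstract, inferring potential spatial relations from the similarity of cell-site traffic patterns, can be illustrated with a simple correlation-thresholded adjacency matrix; the real AGCN learns this adaptively, so the sketch below is an illustrative stand-in only:

```python
# Build an adjacency matrix from pairwise correlation of cell-site traffic
# patterns. The 0.8 threshold and synthetic traces are assumptions.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(96 * 7)                      # one week of 15-min samples
base = np.sin(2 * np.pi * t / 96)
sites = np.stack([
    50 + 20 * base + rng.normal(0, 2, t.size),   # residential-like
    50 + 20 * base + rng.normal(0, 2, t.size),   # similar residential
    50 - 20 * base + rng.normal(0, 2, t.size),   # office-like (inverted)
])

corr = np.corrcoef(sites)                  # pairwise pattern similarity
adjacency = (corr > 0.8).astype(int)       # threshold is an assumption
np.fill_diagonal(adjacency, 0)
print(adjacency)                           # sites 0 and 1 linked, 2 isolated
```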
... In [7], a microservice-based NS architecture was implemented to design and create slices automatically according to service requirements. Song et al. in [8] applied machine learning in a traffic-aware dynamic slicing framework to dynamically allocate transport network resources for flexible resource sharing. A. Perveen et al. in [9] propose a deep-reinforcement-learning-based autonomous radio resource slicing framework with dynamic resource preservation to serve different types of services. ...
Article
Full-text available
Internet of things (IoT) in a smart city consists of a diversity of public utility and vertical industry services with extremely different performance requirements. As a key technology of 5G networks, network slicing (NS) provides distinct virtual networks and differentiated QoS guarantees on a shared infrastructure. Therefore, it becomes necessary to employ intelligent NS management for IoT in the smart city. This work proposes a machine learning (ML) driven automatic NS framework that can intelligently scale slices according to the network state. A resource preservation and slicing implementation scheme is given to trade off robustness against resource efficiency. Finally, the present study provides preliminary results via simulation and experimentation to justify the effectiveness and efficiency of the presented design.
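A minimal sketch of the scale-by-prediction idea follows: the slice allocation tracks the predicted load inflated by a preservation margin that trades robustness against efficiency; margin, capacity, and loads are assumptions:

```python
# Scale a slice to its predicted load plus a preservation margin, clipped to
# the infrastructure capacity. Margin and loads are illustrative assumptions.
def scale_slice(predicted_load, margin=0.2, capacity=100):
    """Predicted load inflated by a safety margin, clipped to capacity."""
    return min(capacity, predicted_load * (1 + margin))

for predicted in [40, 55, 35, 80]:   # e.g. output of a traffic predictor
    print(f"predicted={predicted:3d} -> allocate {scale_slice(predicted):.1f}")
```

A larger margin means fewer SLA violations under bursts but lower resource efficiency, which is exactly the robustness/efficiency trade-off the article describes.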
... The transport network consists of high-bandwidth optical cores [24] and can be sliced physically or virtually. Physical slicing involves allocating each fiber core to a single tenant, while in virtual slicing more than one tenant shares the same core. ...
Article
Full-text available
The Fifth Generation (5G) communication network is envisioned to provide heterogeneous services tailored to specific user demands. These services are diverse and can generally be categorized by latency, bandwidth, reliability, and connection density requirements. 5G infrastructure providers are expected to employ network function virtualization, software-defined networking, and network slicing for cost-effective and efficient network resource allocation. In the 5G network, when an infrastructure provider receives a slice request, a slice admission control scheme is applied and an optimization algorithm is used to achieve predefined objectives. To this end, a number of slice admission control objectives, strategies, and algorithms have been proposed. However, there is a need for a coherent review that bridges the gap between the many aspects of slice admission control. In this paper, we present the latest developments in this research area. We begin by introducing slice admission control and discussing background concepts associated with slicing. We then extend our discussion to slice admission objectives, followed by the strategies and optimization algorithms. Finally, we conclude with a summary analysis of the optimization algorithms.
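The basic admission decision the survey builds on can be sketched as follows: requests are considered in priority order and admitted only if every requested resource still fits; the resource model and demands are illustrative assumptions:

```python
# Simple slice admission control: admit a request only if all its resource
# demands fit, processing requests in priority order.
def admit(available, request):
    """available/request: dicts of resource -> amount (e.g. bw, cpu)."""
    if all(available[r] >= amt for r, amt in request.items()):
        for r, amt in request.items():
            available[r] -= amt
        return True
    return False

available = {"bw": 100, "cpu": 16}
requests = [  # (priority, slice name, demand); lower value = higher priority
    (0, "URLLC", {"bw": 30, "cpu": 4}),
    (1, "eMBB", {"bw": 80, "cpu": 8}),
    (2, "mMTC", {"bw": 20, "cpu": 2}),
]
for _, name, demand in sorted(requests, key=lambda r: r[0]):
    print(name, "admitted" if admit(available, demand) else "blocked")
```

Real schemes replace this greedy check with the optimization algorithms the paper surveys, trading admission revenue against QoS guarantees.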
... access, core, transport, backhaul) entails slicing both link and node resources. However, the management of link resources is a more critical part of network slicing and, compared to node resources, presents new research challenges to be addressed (e.g., bandwidth allocation along a path, management of prioritization on the links, and isolation between slices in terms of traffic) [4-9]. ...
Article
Effective management and allocation of resources remains a challenging paradigm for future large-scale networks such as 5G, especially under a network slicing scenario where the different services are characterized by differing Quality of Service (QoS) requirements. This makes guaranteeing QoS levels while maximizing resource utilization across such networks a complicated task. Moreover, existing allocation strategies with link sharing tend to suffer from inefficient network resource usage. We therefore focus on prioritized sliced resource management in this work, and the contributions of this paper can be summarized as formally defining and evaluating a self-provisioned resource management scheme through a smart Squatting and Kicking model (SKM) for multi-class networks. SKM can dynamically allocate network resources such as bandwidth, Label Switched Paths (LSPs), fibers, and slots, among others, to different user priority classes. SKM can also guarantee the correct level of QoS (especially for the higher-priority classes) while optimizing resource utilization across networks. Moreover, in network slicing scenarios, the proposed scheme can be employed for admission control. Simulation results show that our model achieves 100% resource utilization in bandwidth-constrained environments while guaranteeing a higher admission ratio for higher-priority classes. From the results, SKM provided a 100% acceptance ratio for the highest-priority class under different input traffic volumes, which, as we articulate, cannot be sufficiently achieved by other existing schemes such as the AllocTC-Sharing model due to priority constraints.
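A hedged sketch of the Squatting-Kicking behaviour as the abstract describes it follows: a demand first squats on unused capacity of other classes and, under congestion, kicks lower-priority allocations; the shares, demands, and the simplified kicking order are assumptions, not the paper's exact algorithm:

```python
# Squatting-Kicking sketch: each class owns a bandwidth share; a demand may
# squat on other classes' free capacity, and under congestion a higher
# priority kicks (preempts) lower-priority usage, lowest priority first.
def skm_admit(shares, used, cls, demand):
    """shares/used: per-class capacity and usage; class 0 = highest priority."""
    free_own = shares[cls] - used[cls]
    squat = sum(shares[c] - used[c] for c in shares if c != cls)
    if demand <= free_own + squat:              # squatting suffices
        used[cls] += demand
        return True
    kickable = sum(used[c] for c in shares if c > cls)  # lower priorities only
    if demand <= free_own + squat + kickable:
        needed = demand - free_own - squat
        for c in sorted(shares, reverse=True):  # kick lowest priority first
            if c > cls and needed > 0:
                kicked = min(used[c], needed)
                used[c] -= kicked
                needed -= kicked
        used[cls] += demand  # usage above own share = squatted/kicked capacity
        return True
    return False

shares = {0: 40, 1: 30, 2: 30}
used = {0: 40, 1: 30, 2: 30}                    # fully congested network
print(skm_admit(shares, used, 0, 20))           # True: 20 units kicked from class 2
print(used)                                     # {0: 60, 1: 30, 2: 10}
```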
... Therefore, it is necessary to assign weights to these attributes and thus evaluate alarm importance quantitatively. Recently, machine learning has been playing an increasingly important role in optical communication research and has been applied in diverse areas such as predicting equipment failure in optical networks [11], reducing nonlinear phase noise [12,13], compensating physical impairments [14], monitoring optical performance [15,16], adaptive nonlinear decision at the receivers [17], adaptive demodulation [18], and traffic-aware bandwidth assignment [19]. ...
Article
Full-text available
Millions of optical-layer alarms may appear in optical transport networks every month, which brings great challenges to network operation, administration, and maintenance. In this paper, we deal with this problem and propose a method of alarm pre-processing and correlation analysis for such networks. During alarm pre-processing, we combine time-series segmentation with a sliding time window to extract alarm transactions, and then combine K-means with a back-propagation neural network to evaluate alarm importance quantitatively. During the alarm correlation analysis, we modify a classic rule-mining algorithm, Apriori, into a Weighted Apriori to find high-frequency chain-alarm sets among the alarm transactions. Using actual optical-layer alarm records from a provincial backbone of China Telecom, we conducted experiments; the results show that our method performs alarm compression, alarm correlation, and chain-alarm mining effectively. By parameter adjustment, the alarm compression rate can vary from 60% to 90%, and the average fidelity of chain-alarm mining stays around 84%. The results show our approach is promising for trivial-alarm identification, chain-alarm mining, and root-fault locating in existing optical networks.
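The alarm-transaction extraction step can be sketched as grouping alarms that fall inside one sliding time window, producing the transaction lists later mined by (Weighted) Apriori; the window length and alarm log are illustrative assumptions:

```python
# Group alarms into transactions by a sliding time window: alarms within
# window_s seconds of a transaction's first alarm belong to that transaction.
def extract_transactions(alarms, window_s=60):
    """alarms: list of (timestamp_s, alarm_type), sorted by timestamp."""
    transactions, current, start = [], [], None
    for ts, kind in alarms:
        if start is None or ts - start <= window_s:
            current.append(kind)
            start = ts if start is None else start
        else:
            transactions.append(current)
            current, start = [kind], ts
    if current:
        transactions.append(current)
    return transactions

log = [(0, "LOS"), (12, "LOF"), (30, "AIS"), (200, "LOS"), (215, "BDI")]
print(extract_transactions(log))  # [['LOS', 'LOF', 'AIS'], ['LOS', 'BDI']]
```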
Article
The widespread application of AI with high computing requirements has driven the rapid development of the computing field. Computing Power Networks (CPNs) have been recognized as solutions for providing on-demand computing services, and their service provisioning can be modeled as a network slicing deployment problem. Elastic Optical Networks (EONs) offer the flexibility to allocate spectrum resources, making them well-suited for network slicing technology. Consequently, EON-based CPNs have attracted considerable attention. However, the unbalanced distribution of computing resources leads to inefficient computing resource utilization. Meanwhile, spectrum resources may become isolated and difficult for other services to use; this phenomenon, known as spectrum fragmentation, leads to inefficient spectrum utilization. To achieve balanced and efficient resource utilization, this paper first analyzes the main reasons for load imbalance and spectrum fragmentation in CPNs: mismatched slicing deployment and inappropriate resource scheduling. Therefore, a dynamic network slicing scheme based on traffic prediction (DNS-TP) is designed. Its core highlight is the cooperative optimization of slicing deployment and resource scheduling based on spectrum-fragmentation awareness. Simulation results show that the proposed scheme improves the network slicing acceptance ratio and the utilization of computing and spectrum resources, while exhibiting strong resource-balancing performance.
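A fragmentation measure of the kind such fragmentation-aware scheduling needs can be sketched as below; the metric chosen (one minus the largest free block over all free slots) is one common choice, not necessarily the paper's:

```python
# Spectrum fragmentation on a link: free slots scattered in small gaps are
# hard to reuse, so we score 1 - largest_free_block / total_free_slots.
def fragmentation(slots):
    """slots: list of 0 (free) / 1 (occupied) spectrum slots on a link."""
    free = slots.count(0)
    if free == 0:
        return 0.0
    largest, run = 0, 0
    for s in slots:
        run = run + 1 if s == 0 else 0
        largest = max(largest, run)
    return 1 - largest / free

print(fragmentation([0, 1, 0, 1, 0, 1, 0, 0]))  # 0.6: free slots scattered
print(fragmentation([1, 1, 1, 0, 0, 0, 0, 1]))  # 0.0: one contiguous gap
```

A fragmentation-aware scheduler would prefer allocations that keep this score low across links.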
Article
Future networks bring higher communication requirements in latency, computation, data quality, etc. Attention to various challenges in the networking field is growing thanks to advances in Artificial Intelligence (AI), Machine Learning (ML), and Big Data analysis. The subject of research in this paper is 4G mobile traffic collected during one year. The amount of data retrieved from devices and network management is motivating the trend toward learning-based approaches. The research method is to compare various ML methods for traffic prediction: in ML terms, to solve a regression problem using the ensemble models Random Forest, Boosting, Gradient Boosting, and Adaptive Boosting (AdaBoost). The comparison was based on the quality indicators RMSE, MAE, and the coefficient of determination. As a result, Gradient Boosting produced the most accurate predictions. Using this ML model for mobile traffic optimization could improve network performance.
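For concreteness, a minimal version of this comparison, scoring the ensemble regressors by RMSE, MAE, and the coefficient of determination on synthetic data (the 4G dataset is unavailable here), might look like:

```python
# Score ensemble regressors by RMSE, MAE, and R^2, mirroring the paper's
# quality indicators on synthetic "mobile traffic" data.
import numpy as np
from sklearn.ensemble import (AdaBoostRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (1500, 4))
y = 3 * X[:, 0] + np.sin(6 * X[:, 1]) + rng.normal(0, 0.1, 1500)
Xtr, Xte, ytr, yte = X[:1000], X[1000:], y[:1000], y[1000:]

for model in (RandomForestRegressor(random_state=0),
              GradientBoostingRegressor(random_state=0),
              AdaBoostRegressor(random_state=0)):
    pred = model.fit(Xtr, ytr).predict(Xte)
    print(f"{type(model).__name__:26s} "
          f"RMSE={mean_squared_error(yte, pred) ** 0.5:.3f} "
          f"MAE={mean_absolute_error(yte, pred):.3f} "
          f"R2={r2_score(yte, pred):.3f}")
```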