Fig. 6. Spatial transfer learning among self-organizing base stations.

Source publication
Article
Full-text available
Artificial intelligence heralds a step-change in wireless networks. However, it may also cause irreversible environmental damage due to its high energy consumption. Here, we address this challenge in the context of 5G and beyond, where there is a complexity explosion in radio resource management (RRM). For high-dimensional RRM problems in a dynam...

Similar publications

Article
Full-text available
Mass autonomy promises to revolutionise a wide range of engineering, service, and mobility industries. Coordinating complex communication between hyper-dense autonomous agents requires new artificial intelligence (AI) enabled orchestration of wireless communication services in beyond fifth generation (5G) and sixth generation (6G) mobile networks....

Citations

... ML can play a crucial role in radio resource management and energy efficiency in wireless networks by optimizing various aspects, such as resource allocation, power control, and network configuration [186]. For instance, DL models can predict user demands and traffic patterns, while deep RL (DRL) techniques can help make intelligent resource-allocation decisions for users. ...
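The citing context above pairs traffic forecasting with learned resource allocation. As a hedged illustration of the forecasting half, the sketch below trains a small PyTorch MLP to predict the next traffic sample from a lag window of synthetic demand; the synthetic sinusoid-plus-noise traffic, lag length, and network size are illustrative assumptions, not details taken from any of the cited papers.

```python
# Minimal sketch: forecasting per-cell traffic demand with a small MLP (PyTorch).
# The synthetic daily-periodic traffic and all hyperparameters are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic traffic: daily periodicity plus noise, sampled hourly for 60 days.
t = torch.arange(0, 24 * 60, dtype=torch.float32)
traffic = 0.5 + 0.4 * torch.sin(2 * torch.pi * t / 24) + 0.05 * torch.randn_like(t)

LAG = 24  # predict the next hour from the previous 24 hours
X = torch.stack([traffic[i:i + LAG] for i in range(len(traffic) - LAG)])
y = traffic[LAG:].unsqueeze(1)

model = nn.Sequential(nn.Linear(LAG, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# One-step-ahead forecast from the most recent window.
with torch.no_grad():
    next_demand = model(traffic[-LAG:].unsqueeze(0)).item()
print(f"predicted next-hour demand: {next_demand:.3f}")
```

A forecast of this kind would then feed an allocation policy (e.g., the DRL agents discussed elsewhere on this page) rather than being an end in itself.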
Preprint
Full-text available
As we transition from the 5G epoch, a new horizon beckons with the advent of 6G, seeking a profound fusion with novel communication paradigms and emerging technological trends, bringing once-futuristic visions to life along with added technical intricacies. Although analytical models lay the foundations and offer systematic insights, we have recently witnessed a noticeable surge in research suggesting machine learning (ML) and artificial intelligence (AI) can efficiently deal with complex problems by complementing or replacing model-based approaches. The majority of data-driven wireless research leans heavily on discriminative AI (DAI) that requires vast real-world datasets. Unlike the DAI, Generative AI (GenAI) pertains to generative models (GMs) capable of discerning the underlying data distribution, patterns, and features of the input data. This makes GenAI a crucial asset in wireless domain wherein real-world data is often scarce, incomplete, costly to acquire, and hard to model or comprehend. With these appealing attributes, GenAI can replace or supplement DAI methods in various capacities. Accordingly, this combined tutorial-survey paper commences with preliminaries of 6G and wireless intelligence by outlining candidate 6G applications and services, presenting a taxonomy of state-of-the-art DAI models, exemplifying prominent DAI use cases, and elucidating the multifaceted ways through which GenAI enhances DAI. Subsequently, we present a tutorial on GMs by spotlighting seminal examples such as generative adversarial networks, variational autoencoders, flow-based GMs, diffusion-based GMs, generative transformers, large language models, autoregressive GMs, to name a few. Contrary to the prevailing belief that GenAI is a nascent trend, our exhaustive review of approximately 120 technical papers demonstrates the scope of research across core wireless research areas, including 1) physical layer design; 2) network optimization, organization, and management; 3) network traffic analytics; 4) cross-layer network security; and 5) localization & positioning. Furthermore, we outline the central role of GMs in pioneering areas of 6G network research, including semantic communications, integrated sensing and communications, THz communications, extremely large antenna arrays, near-field communications, digital twins, AI-generated content services, mobile edge computing and edge AI, adversarial ML, and trustworthy AI. Lastly, we shed light on the multifarious challenges ahead, suggesting potential strategies and promising remedies. Given its depth and breadth, we are confident that this tutorial-cum-survey will serve as a pivotal reference for researchers and professionals delving into this dynamic and promising domain.
... [6] uses Deep Learning Important FeaTures (DeepLIFT), a back-propagation-based XAI approach, for obtaining the importance of neurons for the pruning and quantization of DNNs. [7] simulates a multichannel orthogonal frequency-division multiple access power allocation network using a simple two-layer DNN to show that a two to three times energy reduction can be achieved by compressing the DNN from twenty to five neurons per layer. Compressing DNNs in DL-based RRM schemes through XAI methods reduces the communication overhead for the periodic broadcast of model parameters and increases the convergence speed. ...
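As a hedged illustration of the compression idea in this citing context, the sketch below builds a toy two-layer power-allocation DNN with 20 hidden neurons and prunes the first hidden layer down to 5 neurons using a gradient-times-activation importance score. That score is only a rough stand-in for DeepLIFT, and the toy data, layer sizes, and scoring rule are assumptions, not the implementation in [6] or [7].

```python
# Minimal sketch: pruning a small power-allocation DNN by hidden-neuron importance.
# Gradient-times-activation is used as a rough proxy for DeepLIFT; all sizes and
# data here are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_CH, HIDDEN, KEEP = 8, 20, 5          # channels, neurons per layer, neurons kept

fc1 = nn.Linear(N_CH, HIDDEN)          # channel gains -> hidden features
fc2 = nn.Linear(HIDDEN, N_CH)          # hidden features -> per-channel power
gains = torch.rand(256, N_CH)          # toy batch of channel-gain observations

# Forward pass, keeping the hidden activations so we can attribute to them.
h = torch.relu(fc1(gains))
h.retain_grad()
powers = torch.sigmoid(fc2(h))
powers.sum().backward()                # gradients of the summed output w.r.t. h

# Importance of each hidden neuron: mean |activation * gradient| over the batch.
importance = (h * h.grad).abs().mean(dim=0)
keep = torch.topk(importance, KEEP).indices

# Build the compressed layers by slicing out the retained neurons.
fc1_small = nn.Linear(N_CH, KEEP)
fc2_small = nn.Linear(KEEP, N_CH)
with torch.no_grad():
    fc1_small.weight.copy_(fc1.weight[keep])
    fc1_small.bias.copy_(fc1.bias[keep])
    fc2_small.weight.copy_(fc2.weight[:, keep])
    fc2_small.bias.copy_(fc2.bias)

print("kept hidden neurons:", keep.tolist())
```

In practice the compressed network would be fine-tuned after pruning; the point of the sketch is only how an attribution score can drive the neuron selection.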
Article
Full-text available
Artificial intelligence (AI) is expected to be an integral part of radio resource management (RRM) in sixth-generation (6G) networks. However, the opaque nature of complex deep learning (DL) models lacks explainability and robustness, posing a significant hindrance to adoption in practice as wireless communication experts and stakeholders express reluctance, fearing potential vulnerabilities. To this end, this paper sheds light on the importance and means of achieving explainability and robustness toward trustworthy AI-based RRM solutions for 6G networks. We outline a range of explainable and robust AI techniques for feature visualization and attribution; model simplification and interpretability; model compression; and sensitivity analysis, then explain how they can be leveraged for RRM. Two case studies are presented to demonstrate the application of explainability and robustness in wireless network design. The former case focuses on exploiting explainable AI methods to simplify the model by reducing the input size of deep reinforcement learning agents for scalable RRM of vehicular networks. On the other hand, the latter case highlights the importance of providing interpretable explanations of credible and confident decisions of a DL-based beam alignment solution in massive multiple-input multiple-output systems. Analyses of these cases provide a generic explainability pipeline and a credibility assessment tool for checking model robustness that can be applied to any pre-trained DL-based RRM method. Overall, the proposed framework offers a promising avenue for improving the practicality and trustworthiness of AI-empowered RRM.
... Traditional algorithms usually incur high computational complexity. Deep reinforcement learning (DRL), an emerging artificial intelligence (AI) technique, demonstrates superior computational and learning ability on high-complexity problems and can more readily reach (near-)Pareto-optimal solutions of the MOOP [13]-[15]. ...
Article
Full-text available
Radio access network (RAN) slices can provide various customized services for next-generation wireless networks. Thus, multiple performance metrics of different types of RAN slices need to be jointly optimized. However, existing efforts on the multi-objective optimization problem (MOOP) for RAN slicing are only in scalar form, which makes simultaneous optimization difficult. In this paper, we consider a non-scalar MOOP for RAN slicing with three types of slices, i.e., the high-bandwidth slice, the low-delay slice, and the wide-coverage slice over the same underlying physical network. We jointly optimize the throughput, the transmission delay, and the coverage area by user-oriented dynamic virtual base station (vBS) deployment, and sub-channel and power allocation. An improved multi-agent deep deterministic policy gradient (IMADDPG) algorithm, having the characteristics of centralized training and distributed execution, is proposed to solve the above non-deterministic polynomial-time hard (NP-hard) problem. The rank voting method is introduced in the inference process to obtain near-Pareto optimal solutions. Simulation results verify that the proposed scheme can ensure better performance than the traditional scalar utility method and other benchmark algorithms. The proposed scheme has the advantage of flexibly approaching any point of the Pareto boundary, while the traditional scalar method only subjectively approaches one of the Pareto optimal solutions. Furthermore, our proposal strikes a compelling tradeoff among the three types of RAN slices due to the non-dominance between Pareto optimal solutions.
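The abstract above only names its rank voting step, so the sketch below gives one plain, Borda-style reading of the idea: rank each candidate action per objective, sum the ranks, and pick the candidate with the best total. The candidate set, the three objectives (throughput and coverage to maximize, delay to minimize), and the exact voting rule are assumptions for illustration, not the IMADDPG inference procedure itself.

```python
# Minimal sketch: picking a compromise action from candidates scored on three
# slice objectives via rank voting (Borda-style). Candidates, objectives, and
# the voting rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
# Each row: (throughput [Mbps], delay [ms], coverage [km^2]) for one candidate action.
candidates = rng.uniform([50, 1, 1], [500, 20, 10], size=(8, 3))

def ranks(values, maximize=True):
    """Return per-candidate ranks (0 = best) for one objective."""
    order = np.argsort(-values if maximize else values)
    r = np.empty_like(order)
    r[order] = np.arange(len(values))
    return r

# Vote: sum of ranks across objectives; the lowest total rank wins.
total_rank = (ranks(candidates[:, 0], maximize=True)     # throughput
              + ranks(candidates[:, 1], maximize=False)  # delay
              + ranks(candidates[:, 2], maximize=True))  # coverage
best = int(np.argmin(total_rank))
print("chosen candidate:", best, "metrics:", candidates[best].round(2))
```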
... Reinforcement learning (RL) has been an efficient tool for solving optimization problems with large amounts of data. It balances exploring uncharted actions with exploiting samples that have yielded good reward feedback, and has been widely applied in large-scale scenarios [22,23]. Han et al. [24] proposed a State-Action-Reward-State-Action (SARSA) algorithm for power control to improve throughput. ...
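The SARSA power-control idea mentioned here can be illustrated on a toy single-link simulator: the state is a quantized channel gain, the action is a discrete transmit power, and the reward trades throughput against a power cost. The channel model, discretization, and hyperparameters below are assumptions, not the setup of Han et al. [24].

```python
# Minimal sketch: tabular SARSA choosing a discrete transmit power on a toy
# single-link channel. Channel model, reward, and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
POWERS = np.array([0.1, 0.5, 1.0, 2.0])   # candidate transmit powers (W)
N_STATES, NOISE, LAMBDA = 4, 0.05, 0.5    # gain bins, noise power, power-cost weight
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def draw_state():
    gain = rng.rayleigh(scale=0.5)                       # toy fading gain
    return gain, min(int(gain / 0.4), N_STATES - 1)      # quantize into a state index

def reward(gain, power):
    return np.log2(1 + gain * power / NOISE) - LAMBDA * power

def eps_greedy(Q, s):
    return rng.integers(len(POWERS)) if rng.random() < EPS else int(np.argmax(Q[s]))

Q = np.zeros((N_STATES, len(POWERS)))
gain, s = draw_state()
a = eps_greedy(Q, s)
for _ in range(20000):
    r = reward(gain, POWERS[a])
    gain_next, s_next = draw_state()
    a_next = eps_greedy(Q, s_next)
    # SARSA update: bootstrap on the action actually taken next (on-policy).
    Q[s, a] += ALPHA * (r + GAMMA * Q[s_next, a_next] - Q[s, a])
    gain, s, a = gain_next, s_next, a_next

print("learned power per state:", POWERS[np.argmax(Q, axis=1)])
```

The learned policy tends to spend more power when the quantized gain is high, which is the throughput-versus-energy tradeoff the citing context alludes to.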
Article
Full-text available
Wireless resource utilization is a central concern for future communication systems, since it must constantly alleviate the quality degradation caused by explosive interference as the number of users grows, especially inter-cell interference in multi-cell multi-user systems. To tackle this interference and improve the resource utilization rate, we proposed a joint-priority-based reinforcement learning (JPRL) approach to jointly optimize bandwidth and transmit power allocation. This method aims to maximize the average throughput of the system while suppressing co-channel interference and guaranteeing the quality of service (QoS) constraint. Specifically, we decoupled the joint problem into two sub-problems, i.e., the bandwidth assignment and power allocation sub-problems. A multi-agent double deep Q-network (MADDQN) was developed to solve the bandwidth allocation sub-problem for each user, while a prioritized multi-agent deep deterministic policy gradient (P-MADDPG) algorithm, which deploys a prioritized replay buffer, was designed to handle the transmit power allocation sub-problem. Numerical results show that the proposed JPRL method could accelerate model training and outperform the alternative methods in terms of throughput. For example, the average throughput was approximately 10.4–15.5% better than the homogeneous-learning-based benchmarks, and about 17.3% higher than the genetic algorithm.
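The prioritized replay buffer behind P-MADDPG can be sketched independently of the full algorithm: transitions are sampled with probability proportional to a power of their priority (typically the TD error magnitude), and importance-sampling weights correct the resulting bias. The capacity, exponents, and toy transitions below are assumptions, not the paper's configuration.

```python
# Minimal sketch of a proportional prioritized replay buffer, the component the
# P-MADDPG description above relies on. Exponents and capacity are assumptions.
import numpy as np

class PrioritizedReplay:
    def __init__(self, capacity, alpha=0.6, beta=0.4):
        self.capacity, self.alpha, self.beta = capacity, alpha, beta
        self.data, self.priorities, self.pos = [], np.zeros(capacity), 0

    def add(self, transition, priority=1.0):
        # New transitions get a high priority so they are replayed at least once.
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, rng):
        p = self.priorities[:len(self.data)] ** self.alpha
        p /= p.sum()
        idx = rng.choice(len(self.data), size=batch_size, p=p)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        weights = (len(self.data) * p[idx]) ** (-self.beta)
        return idx, [self.data[i] for i in idx], weights / weights.max()

    def update(self, idx, td_errors, eps=1e-3):
        self.priorities[idx] = np.abs(td_errors) + eps

# Toy usage: store (state, action, reward, next_state) tuples and resample.
rng = np.random.default_rng(0)
buf = PrioritizedReplay(capacity=100)
for i in range(100):
    buf.add((i, 0, rng.normal(), i + 1))
idx, batch, w = buf.sample(8, rng)
buf.update(idx, td_errors=rng.normal(size=8))
```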
... For instance, [9] uses Deep Learning Important FeaTures (DeepLIFT), a back-propagation-based XAI approach, for obtaining the importance of neurons for the pruning and quantization of DNNs. [10] simulates a multichannel orthogonal frequency-division multiple access power allocation network using a simple two-layer DNN to show that a 2-3 times energy reduction can be achieved by compressing the DNN from 20 to 5 neurons per layer. ...
Preprint
Full-text available
Explainable and Robust Artificial Intelligence for Trustworthy Resource Management in 6G Networks. In this paper, we present an overview of explainable and robust AI techniques for radio resource management. We explain how these methods can provide a systematic methodology for interpreting the decisions made by black-box AI models, and improve the robustness of the decisions and the performance of the algorithms by reducing model complexity and convergence time. Besides outlining the core explainability and robustness techniques, we also provide two practical case studies that illustrate the application of these techniques for model simplification and for improving the robustness of radio resource management decisions.
... For instance, [9] uses Deep Learning Important FeaTures (DeepLIFT), a back-propagation-based XAI approach, for obtaining the importance of neurons for the pruning and quantization of DNNs. [10] simulates a multichannel orthogonal frequency-division multiple access power allocation network using a simple two-layer DNN to show that a 2-3 times energy reduction can be achieved by compressing the DNN from 20 to 5 neurons per layer. ...
Preprint
Full-text available
Artificial intelligence (AI) is expected to be an integral part of radio resource management (RRM) in sixth-generation (6G) networks. However, the opaque nature of complex deep learning (DL) models lacks explainability and robustness, posing a significant hindrance to adoption in practice as wireless experts and stakeholders express reluctance, fearing potential vulnerabilities. To this end, this paper sheds light on the importance and means of achieving explainability and robustness toward trustworthy AI-based RRM solutions for 6G networks. We outline a range of explainable and robust AI techniques for feature visualization and attribution; model simplification and interpretability; model compression; and sensitivity analysis, then explain how they can be leveraged for RRM. Two case studies are presented to demonstrate the application of explainability and robustness in wireless network design. The former case focuses on exploiting explainable AI methods to simplify the model by reducing the input size of deep reinforcement learning agents for scalable RRM of vehicular networks. On the other hand, the latter case highlights the importance of providing interpretable explanations of credible and confident decisions of a DL-based beam alignment solution in massive multiple-input multiple-output systems. Analyses of these cases provide a generic explainability pipeline and a credibility assessment tool for checking model robustness that can be applied to any pre-trained DL-based RRM method. Overall, the proposed framework offers a promising avenue for improving the practicality and trustworthiness of AI-empowered RRM.
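The first case study described above reduces the input size of a DRL agent using explainability scores. A hedged sketch of that step: rank the input features of a policy network by mean absolute input gradient over a batch of states and keep only the top-k features. The network, state dimension, and cut-off are assumptions; the paper's actual attribution method and agent are not reproduced here.

```python
# Minimal sketch: ranking the input features of a policy network by a simple
# gradient saliency score and keeping only the most influential ones. The
# network, state dimension, and cut-off are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
STATE_DIM, N_ACTIONS, KEEP = 30, 5, 10

policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                       nn.Linear(64, N_ACTIONS))

states = torch.rand(512, STATE_DIM, requires_grad=True)   # batch of observed states
logits = policy(states)
# Attribute the logit of the greedy action back to the input features.
chosen = logits.gather(1, logits.argmax(dim=1, keepdim=True))
chosen.sum().backward()

saliency = states.grad.abs().mean(dim=0)        # mean |d logit / d feature|
keep = torch.topk(saliency, KEEP).indices.sort().values
print("retained input features:", keep.tolist())

# A smaller agent can then be retrained on the reduced observation vector:
small_policy = nn.Sequential(nn.Linear(KEEP, 64), nn.ReLU(),
                             nn.Linear(64, N_ACTIONS))
reduced_states = states.detach()[:, keep]        # reduced inputs for retraining
```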
... To empower cost-efficient AI/ML solutions, new strategies are required to develop techniques that rely on a reduced amount of data and fewer parameters to reach the desired accuracy level. For instance, the authors in [286] discussed different compression approaches that can be leveraged to reduce the size of a DRL model. Furthermore, it is essential to improve the computational efficiency of hyperparameter tuning, an expensive process requiring several training and testing trials in order to find the optimal set of hyperparameters yielding the highest accuracy. ...
... Although information and communication technology (ICT) has been widely regarded as one of the key technologies for tourism [3], little focus has been put on an in-depth analysis of wireless communication capability and corresponding solutions for tourism-area scenarios. One direct method to improve the capacity of wireless networks is to increase infrastructure investment, that is, to deploy more BSs to densify the radio access network [8,9]. However, this method may not be suitable for temporary network congestion in tourism areas. ...
Article
Full-text available
With the support of wireless networks, tourists in tourism areas can enjoy various tourism information search and smart-tourism-related services. However, due to the limited capacity of wireless networks, crowding in local areas during peak seasons can result in emergency and temporary wireless network congestion. While increasing infrastructure investment (e.g., densifying base stations) is desirable for peak seasons, it can be a waste of resources given the significantly reduced tourist arrivals in off seasons. In response to this temporary network congestion offloading demand, this paper proposes an on-demand coverage solution based on unmanned aerial vehicle (UAV) base stations. Firstly, taking the air-to-ground channel characteristics into account, we define the effective coverage radius, based on which the optimal altitude of the UAV BS is derived. Then, to tackle the inherent challenge of irregular tourist distribution in tourism areas, an automatic UAV BS deployment algorithm is designed to determine the minimal number of UAV BSs and their two-dimensional coordinates simultaneously. Simulation results show that the proposed solution can realize efficient on-demand UAV BS deployment.
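The effective-coverage-radius and optimal-altitude step in the abstract above can be illustrated numerically with the widely used probabilistic air-to-ground LoS model: for each altitude, find the largest ground radius whose mean path loss stays within a budget, then pick the altitude that maximizes that radius. The environment constants, carrier frequency, and path-loss budget below are textbook-style assumptions, not the values derived in the paper.

```python
# Minimal sketch: sweeping UAV altitude to maximize the effective coverage radius
# under a mean path-loss budget, using the common probabilistic air-to-ground
# LoS model. All constants are illustrative assumptions, not the paper's values.
import numpy as np

A, B = 9.61, 0.16                 # urban environment constants of the LoS model
ETA_LOS, ETA_NLOS = 1.0, 20.0     # excess losses in dB for LoS / NLoS
FREQ = 2e9                        # carrier frequency (Hz)
MAX_PATH_LOSS = 103.0             # coverage threshold (dB)
C = 3e8

def mean_path_loss(h, r):
    """Mean air-to-ground path loss (dB) at altitude h and ground distance r (m)."""
    d = np.hypot(h, r)
    theta = np.degrees(np.arctan2(h, r))                 # elevation angle
    p_los = 1.0 / (1.0 + A * np.exp(-B * (theta - A)))   # LoS probability
    fspl = 20 * np.log10(4 * np.pi * FREQ * d / C)       # free-space loss
    return fspl + p_los * ETA_LOS + (1 - p_los) * ETA_NLOS

def coverage_radius(h, r_grid=np.linspace(1, 3000, 3000)):
    """Largest radius whose mean path loss stays within the budget."""
    ok = mean_path_loss(h, r_grid) <= MAX_PATH_LOSS
    return r_grid[ok].max() if ok.any() else 0.0

altitudes = np.linspace(10, 1500, 150)
radii = np.array([coverage_radius(h) for h in altitudes])
best = np.argmax(radii)
print(f"best altitude ~{altitudes[best]:.0f} m, coverage radius ~{radii[best]:.0f} m")
```

Raising the UAV improves the LoS probability but increases the link distance, so the coverage radius peaks at an intermediate altitude, which is the tradeoff the abstract's derivation exploits.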
... Furthermore, as the deployment of intelligent architectures is increasingly in demand for 5G and beyond, [15]-[17] have proposed novel radio resource management mechanisms using fully connected deep neural networks. However, with the increase in such learning-based architecture deployments, there is also a need to optimize the resources they consume, as discussed in [18], in order to move towards green learning-based resource management. ...
Article
Full-text available
Optimal resource provisioning and management of next-generation communication networks are crucial for attaining seamless Quality of Service with reduced environmental impact. In light of this ecological assessment, urban and rural telecommunication infrastructure is moving towards deploying green cellular base stations to cater to the ever-growing traffic demands of heterogeneous networks. In such scenarios, existing learning-based renewable resource provisioning methods lack intelligent and optimal resource management at the Small Cell Base Stations (SCBS). Therefore, in this article, we present a novel machine learning-based framework for intelligent resource provisioning mechanisms for micro-grid-connected green SCBSs with a completely modified ring parametric distribution method. In addition, an algorithmic implementation is proposed for prediction-based renewable resource redistribution with an Energy Flow Control Unit (EFCU) mechanism for grid-connected SCBSs, eliminating the need for centralised hardware. Moreover, this modeling enables the prediction mechanism to estimate the future on-demand traffic provisioning capability of the SCBSs. Furthermore, we present a numerical analysis of the proposed framework, showcasing the system's ability to attain a balanced energy convergence level across all SCBSs at the end of the periodic cycle, signifying our model's merits.
... Although reinforcement learning may represent a powerful tool for radio optimization, it consumes a huge amount of energy over time. Thus, in [55], the authors discussed algorithm and architecture innovations to achieve green Deep Reinforcement Learning (DRL) when addressing Radio Resource Management (RRM). From an architectural point of view, a distributed DRL scheme is proposed to enable distributed decision-making by RRM entities. ...
Article
Full-text available
The Open Radio Access Network (O-RAN) Alliance was recently launched to devise a new RAN architecture featuring open, software-driven, virtual, and intelligent radio access. The O-RAN architecture is based on (1) disaggregated RAN functions that run as Virtual Network Functions (VNFs) and Physical Network Functions (PNFs); and (2) the notion of a RAN controller that centrally runs RAN applications such as mobility management, user scheduling, radio resource allocation, etc. The RAN controller is in charge of enforcing the application decisions by using open interfaces with the RAN functions. One important feature introduced by O-RAN is the heavy usage of Machine Learning (ML) techniques, particularly Deep Learning (DL), to foster innovation and ease the deployment of intelligent RAN applications that are able to fulfill the Quality of Service (QoS) requirements of the envisioned 5G and beyond network services. In this work, we first give an overview of the evolution of RAN architectures toward 5G and beyond, namely C-RAN, vRAN, and O-RAN. We also compare them from various perspectives, such as edge support, virtualization, control and management, energy consumption, and AI support. Then, we review existing DL-based solutions addressing the RAN part and show how they can be integrated into or mapped to the O-RAN architecture, since these works were not initially adapted to it. In addition, we present two case studies for the deployment of DL techniques in O-RAN. Furthermore, we describe how the main steps of DL models deployed in O-RAN can be automated to ensure stable performance of these models, introducing the ML system operations (MLOps) concept in O-RAN. Finally, we identify key technical challenges, open issues, and future research directions related to the Artificial Intelligence (AI)-enabled O-RAN architecture.