Article

New Bridge to Cloud: An Ultra-Dense LEO Assisted Green Computation Offloading Approach


Abstract

Mobile edge computing and cloud computing have emerged as effective technologies to alleviate the increasing computational workload of mobile devices. As a promising enabling 6G technology, the ultra-dense (UD) low earth orbit (LEO) satellite network, with low communication latency and high throughput, is considered a new bridge for cloud computation offloading. In this paper, we investigate energy-efficient cloud and edge computing in UD-LEO-assisted terrestrial-satellite networks. An optimization problem aiming at minimizing the energy consumption of the computation tasks is formulated, which is a mixed-integer non-linear programming problem. To solve this problem, we decompose it into two subproblems, i.e., a joint user association and task scheduling subproblem, and an adaptive computation resource allocation subproblem. For the first subproblem, we use the large-scale information (i.e., channel gains and task arrival rates) as the input of a feedforward neural network (NN) and obtain the optimal solution by transforming the direct output of the NN. For the second subproblem, we introduce a successive convex approximation method to optimize it iteratively. The simulation results show that our proposed user association and task scheduling strategy outperforms two benchmark algorithms in terms of energy consumption under a strict delay bound and high user density.
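As a rough, self-contained illustration of the two-subproblem decomposition described above (not the authors' implementation), the Python sketch below maps large-scale inputs (channel gains and task arrival rates) through a tiny feedforward NN to association scores and then runs an SCA-flavored loop for CPU-frequency allocation; the network sizes, energy constants, and projection step are all assumptions.

```python
# Illustrative sketch only: NN-based association + SCA-style resource allocation.
# Network sizes, the energy model, and the projection are assumptions, not the paper's design.
import numpy as np

rng = np.random.default_rng(0)

def nn_association_scores(channel_gain, arrival_rate, W1, W2):
    """Tiny feedforward NN: large-scale inputs -> per-server association scores."""
    x = np.concatenate([channel_gain, arrival_rate])           # input features
    h = np.tanh(W1 @ x)                                        # hidden layer
    logits = W2 @ h                                            # one score per candidate server
    return np.exp(logits) / np.exp(logits).sum()               # softmax = the NN's "direct output"

def sca_cpu_allocation(task_bits, total_freq, iters=20):
    """SCA-flavored loop: repeatedly take a simple convex-surrogate step, stay feasible."""
    f = np.full(len(task_bits), total_freq / len(task_bits))   # feasible starting allocation
    for _ in range(iters):
        grad = 2e-28 * task_bits * f                           # gradient of a toy dynamic-energy term
        f = np.maximum(f - 0.1 * grad, 1e6)                    # descent step, keep frequencies positive
        f *= total_freq / f.sum()                              # project back onto the frequency budget
    return f

channel_gain = rng.uniform(1e-8, 1e-6, size=4)                  # 4 candidate servers (assumed)
arrival_rate = rng.uniform(0.5, 2.0, size=4)                    # tasks per second (assumed)
W1 = rng.normal(size=(16, 8)); W2 = rng.normal(size=(4, 16))
scores = nn_association_scores(channel_gain, arrival_rate, W1, W2)
chosen = int(np.argmax(scores))                                 # transform soft output into a decision
freqs = sca_cpu_allocation(task_bits=rng.uniform(1e6, 5e6, 3), total_freq=3e9)
print("chosen server:", chosen, "CPU allocation (GHz):", freqs / 1e9)
```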


... Simultaneous task transmission and MEC task execution are not considered in this paper because its focus is on the offloading decision and bandwidth allocation problems [39][40][41][42][43]. In addition, in LEO satellite edge computing, if simultaneous task transmission and MEC task execution were considered, the MEC servers would need to process more tasks in one time slot and would consume more energy per unit time, which would place a heavy burden on the resource-constrained MEC servers on LEO satellites. ...
Article
Full-text available
Huge low earth orbit (LEO) satellite networks can achieve global coverage with low latency. In addition, mobile edge computing (MEC) servers can be mounted on LEO satellites to provide computation offloading services for users in remote areas. A multi-user multi-task system model is built, and the problem of users' offloading decisions and bandwidth allocation is formulated as a mixed-integer programming problem to minimize the system utility function, expressed as the weighted sum of the system energy consumption and delay. However, it cannot be effectively solved by general optimization methods. Thus, a deep learning-based offloading algorithm for LEO satellite edge computing networks is proposed, which generates offloading decisions through multiple parallel deep neural networks (DNNs) and stores the newly generated optimal offloading decisions in memory to improve all DNNs toward near-optimal offloading decisions. Moreover, the optimal bandwidth allocation scheme of the system is theoretically derived for the users' bandwidth allocation problem. The simulation results show that the proposed algorithm converges within a small number of training steps, obtains the best system utility function values among the compared algorithms under different system parameters, and keeps the time cost of the system and the DNNs low.
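The multi-DNN decision-generation idea described above can be sketched roughly as follows; the single-layer "DNNs", the toy utility function, and the memory handling are placeholders, not the paper's design.

```python
# Rough sketch of multi-DNN offloading decision generation (all parameters are assumptions).
import numpy as np

rng = np.random.default_rng(1)
N_USERS, N_DNNS = 5, 3
memory = []                                                      # replay memory of (input, best decision)

def dnn_decision(channel, W):
    """One 'DNN' (here a single random linear layer) mapping channels to a relaxed decision."""
    return 1.0 / (1.0 + np.exp(-(W @ channel)))                  # sigmoid output in [0, 1]

def system_utility(offload, channel):
    """Toy weighted energy+delay utility: smaller is better (placeholder for the paper's model)."""
    local_cost = (1 - offload) * 1.0
    offload_cost = offload * (0.3 + 0.2 / channel)
    return float(np.sum(local_cost + offload_cost))

weights = [rng.normal(size=(N_USERS, N_USERS)) for _ in range(N_DNNS)]
channel = rng.uniform(0.2, 1.0, size=N_USERS)

# Each DNN proposes a decision; quantize and keep the one with the best utility.
candidates = [(dnn_decision(channel, W) > 0.5).astype(float) for W in weights]
best = min(candidates, key=lambda d: system_utility(d, channel))
memory.append((channel, best))                                    # would later be used to retrain the DNNs
print("best offloading decision:", best, "utility:", system_utility(best, channel))
```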
Article
Edge computing is an efficient way to offload computational tasks for user equipment (UE) running computation-intensive and latency-sensitive applications. However, UEs cannot offload to ground edge servers when they are in remote areas. Mounting edge servers on low earth orbit (LEO) satellites can provide remote UEs with task offloading when ground infrastructure is not available. In this paper, we introduce a multi-satellite-enabled edge computing system for offloading UEs' computational tasks with the aim of minimizing system energy consumption by optimizing user association, power control, task scheduling, and computing resource allocation. Specifically, part of a UE's task is executed locally and the rest is offloaded to a satellite for processing. This energy minimization problem is formulated as a mixed-integer nonlinear programming (MINLP) problem. By decomposing the original problem into four sub-problems, we solve each sub-problem with convex optimization methods. In addition, an iterative algorithm is proposed to jointly optimize the task offloading and resource allocation strategy, which achieves a near-optimal solution within several iterations. Finally, the complexity and convergence of the algorithm are verified. In our simulation results, the proposed algorithm is compared with different task offloading and resource allocation schemes in terms of system energy consumption, saving 43% of the energy.
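A minimal skeleton of the iterative decomposition described above might look like the following, assuming a toy separable convex objective with closed-form block minimizers in place of the paper's four sub-problems.

```python
# Skeleton of alternating (block-wise) optimization: each block update is a closed-form
# convex minimizer of a toy objective, not the paper's actual energy model.
import math

A, B = 2.0, 0.5          # transmit-energy vs. delay-penalty weights (assumed values)
C, D = 1e-3, 4.0         # local-computing energy vs. delay-penalty weights (assumed values)

def cost(p, f):
    return A / p + B * p + C * f ** 2 + D / f

p, f = 1.0, 1.0           # feasible starting point
prev = math.inf
for it in range(100):
    p = math.sqrt(A / B)            # argmin over transmit power with f fixed
    f = (D / (2 * C)) ** (1 / 3)    # argmin over CPU frequency with p fixed
    cur = cost(p, f)
    if prev - cur < 1e-9:           # stop when the objective no longer improves
        break
    prev = cur
print(f"converged after {it + 1} iterations, cost = {cur:.4f}")
```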
Article
The deployment of mobile edge computing services in LEO satellite networks achieves seamless coverage of computing services. However, the time-varying wireless conditions of satellite-terrestrial channels and the random arrival characteristics of ground users' tasks bring new challenges for managing the LEO satellite's communication and computing resources. Facing these challenges, a stochastic computation offloading problem of jointly optimizing communication and computing resource allocation and computation offloading decisions is formulated to minimize the long-term average total power cost of the ground users and the LEO satellite, under the constraint of long-term task queue stability. However, the computing resource allocation and the computation offloading decisions are coupled across slots, which makes this problem challenging to address. To this end, we first employ Lyapunov optimization to decouple the long-term stochastic computation offloading problem into a deterministic subproblem in each slot. Then, an online algorithm combining deep reinforcement learning and conventional optimization algorithms is proposed to solve these subproblems. Simulation results show that the proposed algorithm achieves superior performance while ensuring the stability of all task queues in LEO satellite networks.
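The Lyapunov-style per-slot decoupling can be illustrated with a short sketch like the one below; the arrival model, the power-rate curve, and the tradeoff weight V are assumed values, not the paper's.

```python
# Hedged sketch of Lyapunov drift-plus-penalty decoupling into per-slot subproblems.
import numpy as np

rng = np.random.default_rng(2)
V = 5.0                             # tradeoff weight between power cost and queue backlog (assumed)
Q = 0.0                             # task queue backlog (Mb)

for t in range(5):
    arrival = rng.uniform(0.5, 1.5)                          # random task arrival this slot (Mb)
    candidates = np.linspace(0.0, 2.0, 201)                  # candidate service amounts (Mb)
    power = 0.1 * candidates ** 2                            # toy convex power-vs-rate model
    b = candidates[np.argmin(V * power - Q * candidates)]    # per-slot drift-plus-penalty subproblem
    Q = max(Q - b, 0.0) + arrival                            # queue update keeps stability in view
    print(f"slot {t}: served {b:.2f} Mb, backlog {Q:.2f} Mb")
```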
Article
Full-text available
This paper investigates the covert performance of an unmanned aerial vehicle (UAV) jammer assisted cognitive radio network. In particular, the covert transmission of secondary users can be effectively protected by UAV jamming against eavesdropping. For practical consideration, the UAV is assumed to know only partial channel distribution information (CDI) and not the detection threshold of the eavesdropper. To this end, we propose a model-driven generative adversarial network (MD-GAN) assisted optimization framework, consisting of a generator and a discriminator, where the unknown channel information and the detection threshold are learned weights. Then a GAN based joint trajectory and power optimization (GAN-JTP) algorithm is developed to train the MD-GAN optimization framework for covert communication, which yields the joint solution of the UAV's trajectory and transmit power that maximizes the covert rate and the probability of detection errors. Our simulation results show that the proposed GAN-JTP converges rapidly and attains near-optimal solutions of the UAV's trajectory and transmit power for covert communication.
Article
Full-text available
Fog/Edge computing emerges as a novel computing paradigm that harnesses resources in the proximity of Internet of Things (IoT) devices so that, alongside the cloud servers, they can provide services in a timely manner. However, due to the ever-increasing growth of IoT devices with resource-hungry applications, fog/edge servers with limited resources cannot efficiently satisfy the requirements of IoT applications. Therefore, application placement in the fog/edge computing environment, in which several distributed fog/edge servers and centralized cloud servers are available, is a challenging issue. In this article, we propose a weighted cost model to minimize the execution time and energy consumption of IoT applications in a computing environment with multiple IoT devices, multiple fog/edge servers, and cloud servers. Besides, a new application placement technique based on the Memetic Algorithm is proposed to make batch application placement decisions for concurrent IoT applications. Due to the heterogeneity of IoT applications, we also propose a lightweight pre-scheduling algorithm to maximize the number of parallel tasks for concurrent execution. The performance results demonstrate that our technique improves the weighted cost of IoT applications by up to 65% in comparison to its counterparts.
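A compact memetic-algorithm skeleton (genetic search plus local refinement) in the spirit of the placement technique above is sketched below; the makespan-style cost, task sizes, and server speeds are illustrative assumptions rather than the authors' weighted cost model.

```python
# Illustrative memetic-algorithm skeleton for batch placement of tasks onto servers.
import numpy as np

rng = np.random.default_rng(3)
N_TASKS, N_SERVERS = 8, 3
task_load = rng.uniform(1.0, 4.0, N_TASKS)
server_speed = np.array([1.0, 2.0, 4.0])                 # e.g. IoT device, edge server, cloud (assumed)

def cost(placement):
    """Toy cost: makespan of per-server summed execution times."""
    per_server = [task_load[placement == s].sum() / server_speed[s] for s in range(N_SERVERS)]
    return max(per_server)

def local_search(placement):
    """Memetic step: try moving each task to its best server, keep improvements."""
    for i in range(N_TASKS):
        best = min(range(N_SERVERS),
                   key=lambda s: cost(np.where(np.arange(N_TASKS) == i, s, placement)))
        placement[i] = best
    return placement

pop = [rng.integers(0, N_SERVERS, N_TASKS) for _ in range(10)]    # initial random population
for gen in range(20):
    pop.sort(key=cost)                                            # keep the fittest placements first
    parent_a, parent_b = pop[0], pop[1]
    mask = rng.random(N_TASKS) < 0.5                              # uniform crossover
    child = np.where(mask, parent_a, parent_b)
    child[rng.integers(N_TASKS)] = rng.integers(N_SERVERS)        # mutation
    pop[-1] = local_search(child)                                 # refine offspring with local search
print("best placement:", pop[0], "cost:", round(cost(pop[0]), 3))
```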
Article
Full-text available
Mobile edge computing (MEC) is proposed as a new paradigm to meet the ever-increasing computation requirements caused by the rapid growth of Internet of Things (IoT) devices. As a supplement to the terrestrial network, satellites can provide communication to terrestrial devices in harsh environments and natural disasters. Satellite edge computing is becoming an emerging topic and technology. In this paper, a game-theoretic approach to optimizing the computation offloading strategy in satellite edge computing is proposed. The system model for computation offloading in satellite edge computing is established, considering the intermittent terrestrial-satellite communication caused by the orbital motion of satellites. We construct a computation offloading game framework and compute the response time and energy consumption of a task based on queuing theory as the performance metrics to optimize. The existence and uniqueness of the Nash equilibrium is theoretically proved, and an iterative algorithm is proposed to find the Nash equilibrium. Simulation results validate the proposed algorithm and show that the game-based offloading strategy can greatly reduce the average cost of a device.
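The game-theoretic iteration toward a Nash equilibrium can be sketched as a best-response loop, as below; the M/M/1-style response times and the energy weight stand in for the paper's queuing-theoretic cost model and are assumptions.

```python
# Simple best-response iteration sketch for a binary offloading game (toy cost model).
import numpy as np

N = 6                                   # devices
arrival = np.full(N, 0.8)               # task arrival rate per device (assumed)
mu_local, mu_edge = 1.0, 8.0            # service rates of a device and of the satellite MEC server
w_energy = 0.3                          # weight of transmit energy in each device's cost (assumed)

def device_cost(i, offload):
    """Cost of device i given everyone's binary offloading decision."""
    if offload[i] == 0:
        return 1.0 / (mu_local - arrival[i])                      # local M/M/1 response time
    load = arrival[offload == 1].sum()                            # offloaders share the satellite server
    if load >= mu_edge:
        return np.inf                                             # overloaded server: infeasible choice
    return 1.0 / (mu_edge - load) + w_energy                      # queueing delay + transmit energy

offload = np.zeros(N, dtype=int)
for _ in range(20):                                               # iterate best responses until stable
    changed = False
    for i in range(N):
        costs = [device_cost(i, np.where(np.arange(N) == i, a, offload)) for a in (0, 1)]
        best = int(np.argmin(costs))
        if best != offload[i]:
            offload[i], changed = best, True
    if not changed:
        break
print("Nash-equilibrium-like decision:", offload)
```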
Article
Full-text available
Deep learning is currently widely used in a variety of applications, including computer vision and natural language processing. End devices, such as smartphones and Internet-of-Things sensors, are generating data that need to be analyzed in real time using deep learning or used to train deep learning models. However, deep learning inference and training require substantial computation resources to run quickly. Edge computing, where a fine mesh of compute nodes are placed close to end devices, is a viable way to meet the high computation and low-latency requirements of deep learning on edge devices and also provides additional benefits in terms of privacy, bandwidth efficiency, and scalability. This paper aims to provide a comprehensive review of the current state of the art at the intersection of deep learning and edge computing. Specifically, it will provide an overview of applications where deep learning is used at the network edge, discuss various approaches for quickly executing deep learning inference across a combination of end devices, edge servers, and the cloud, and describe the methods for training deep learning models across multiple edge devices. It will also discuss open challenges in terms of systems performance, network technologies and management, benchmarks, and privacy. The reader will take away the following concepts from this paper: understanding scenarios where deep learning at the network edge can be useful, understanding common techniques for speeding up deep learning inference and performing distributed training on edge devices, and understanding recent trends and opportunities.
Article
Full-text available
By performing data processing at the network edge, mobile edge computing can effectively overcome the deficiencies of network congestion and long latency in cloud computing systems. To improve edge cloud efficiency with limited communication and computation capacities, we investigate the collaboration between cloud computing and edge computing, where the tasks of mobile devices can be partially processed at the edge node and at the cloud server. First, a joint communication and computation resource allocation problem is formulated to minimize the weighted-sum latency of all mobile devices. Then, the closed-form optimal task splitting strategy is derived as a function of the normalized backhaul communication capacity and the normalized cloud computation capacity. Some interesting and useful insights for the optimal task splitting strategy are also highlighted by analyzing four special scenarios. Based on this, we further transform the original joint communication and computation resource allocation problem into an equivalent convex optimization problem and obtain the closed-form computation resource allocation strategy by leveraging the convex optimization theory. Moreover, a necessary condition is also developed to judge whether a task should be processed at the corresponding edge node only, without offloading to the cloud server. Finally, simulation results confirm our theoretical analysis and demonstrate that the proposed collaborative cloud and edge computing scheme can evidently achieve a better delay performance than the conventional schemes.
Article
Full-text available
Driven by the visions of Internet of Things and 5G communications, recent years have seen a paradigm shift in mobile computing, from the centralized Mobile Cloud Computing towards Mobile Edge Computing (MEC). The main feature of MEC is to push mobile computing, network control and storage to the network edges (e.g., base stations and access points) so as to enable computation-intensive and latency-critical applications at the resource-limited mobile devices. MEC promises dramatic reduction in latency and mobile energy consumption, tackling the key challenges for materializing 5G vision. The promised gains of MEC have motivated extensive efforts in both academia and industry on developing the technology. A main thrust of MEC research is to seamlessly merge the two disciplines of wireless communications and mobile computing, resulting in a wide-range of new designs ranging from techniques for computation offloading to network architectures. This paper provides a comprehensive survey of the state-of-the-art MEC research with a focus on joint radio-and-computational resource management. We also discuss a set of issues, challenges and future research directions for MEC research, including MEC system deployment, cache-enabled MEC, mobility management for MEC, green MEC, as well as privacy-aware MEC. Advancements in these directions will facilitate the transformation of MEC from theory to practice. Finally, we introduce recent standardization efforts on MEC as well as some typical MEC application scenarios.
Conference Paper
Full-text available
Energy efficiency in cellular mobile radio networks has recently gained great interest in the research community. Apart from the development of more energy efficient hardware and software components, the effect of different deployment strategies on energy efficiency has also been studied in the literature. The latter mainly consists of optimizing the number and the location of different types of base stations in order to minimize the total power consumption. Usually, in the literature, the total network power consumption is restricted to the sum of the power consumption of all base stations. However, the choice of a specific deployment also affects the exact implementation of the backhaul network, and consequently its power consumption, which should therefore be taken into account when devising energy efficient deployments. In this paper, we propose a new power consumption model for a mobile radio network that takes the backhaul into account. We then present a case study comparing the power consumption of three different heterogeneous network deployments, and show that backhaul has a non-negligible impact on total power consumption, which differs across deployments. An energy efficiency analysis is also carried out for different area throughput targets.
Article
With the rapid development of vehicular access networks, three different communication modes have emerged as the fundamental technologies in cellular V2X, i.e., cellular mode, reuse mode, and dedicated mode, through different spectrum sharing strategies. However, how to conduct multi-mode access management for cellular V2X is challenging considering the heterogeneity of network resources and the diverse vehicular service demands. In this paper, we investigate a software defined multi-mode access management framework for cellular V2X, which is capable of providing flexibility and programmability for dynamic spectrum sharing in vehicular access. We first conduct a performance analysis of multi-mode communications in cellular V2X. Then, we design an analytic hierarchy process (AHP) based evolutionary game to optimize the multi-mode access management solution, in which four performance indicators are jointly considered, including transmission rate, interference, transmission delay, and energy consumption. Extensive experimental simulations have validated the superiority of the proposed algorithm and illustrated the impact of different vehicle access schemes, the signal-to-interference-plus-noise ratio (SINR) threshold, and other parameters on the performance of cellular V2X.
Article
While the commercial deployment and promotion of 5G is ongoing, mobile communication networks are still facing three fundamental challenges, i.e., spectrum resource scarcity, especially for low-frequency spectrum, exacerbated by fragmented spectrum allocation, user-centric network service provision when facing billions of personalized user demands in the era of Internet of everything (IoE), and proliferating operation costs mainly due to huge energy consumption of network infrastructure. To address these issues, it is imperative to consider and develop disruptive technologies in the next generation mobile communication networks, namely 6G. In this paper, by studying brain neurons and the neurotransmission, we propose the fully-decoupled radio access network (FD-RAN). In the FD-RAN, base stations (BSs) are physically decoupled into control BSs and data BSs, and the data BSs are further physically split into uplink BSs and downlink BSs. We first review the fundamentals of neurotransmission and then propose the 6G design principles inspired by the neurotransmission. Based on the principles, we propose the FD-RAN architecture, elastic resource cooperation in FD-RAN, and improved transport service layer design. The proposed fully decoupled and flexible architecture can profoundly facilitate resource cooperation to enhance the spectrum utilization, reduce the network energy consumption and improve the quality of user experience. Future research topics in this direction are envisioned and discussed.
Article
In this paper, a massive multiple-input multiple-output (MIMO) relay assisted multi-tier computing (MC) system is employed to enhance the task computation. We investigate the joint design of the task scheduling, service caching and power allocation to minimize the total task scheduling delay. To this end, we formulate a robust non-convex optimization problem taking into account the impact of imperfect channel state information (CSI). In particular, multiple task nodes (TNs) offload their computational tasks either to computing and caching nodes (CCN) constituted by nearby massive MIMO-aided relay nodes (MRN) or alternatively to the cloud constituted by nearby fog access nodes (FAN). To address the non-convexity of the optimization problem, an efficient alternating optimization algorithm is developed. First, we solve the non-convex power allocation optimization problem by transforming it into a linear optimization problem for a given task offloading and service caching result. Then, we use the classic Lagrange partial relaxation for relaxing the binary task offloading as well as caching constraints and formulate the dual problem to obtain the task allocation and software caching results. Given both the power allocation, as well as the task offloading and caching result, we propose an iterative optimization algorithm for finding the jointly optimized results. The simulation results demonstrate that the proposed scheme outperforms the benchmark schemes, where the power allocation may be controlled by the asymptotic form of the effective signal-to-interference-plus-noise ratio (SINR).
Article
In mobile edge computing networks, densely deployed access points are empowered with computation and storage capacities. This brings benefits of enlarged edge capacity, ultra-low latency, and reduced backhaul congestion. This paper concerns edge node grouping in mobile edge computing, where multiple edge nodes serve one end user cooperatively to enhance user experience. Most existing studies focus on centralized schemes that have to collect global information and thus induce high overhead. Although some recent studies propose efficient decentralized schemes, most of them did not consider the system uncertainty from both the wireless environment and other users. To tackle the aforementioned problems, we first formulate the edge node grouping problem as a game that is proved to be an exact potential game with a unique Nash equilibrium. Then, we propose a novel decentralized learning-based edge node grouping algorithm, which guides users to make decisions by learning from historical feedback. Furthermore, we investigate two extended scenarios by generalizing our computation model and communication model, respectively. We further prove that our algorithms converge to the Nash equilibrium with upper-bounded learning loss. Simulation results show that our mechanisms can achieve up to 96.99% of the oracle benchmark.
Article
Green communication is one of the key goals of the beyond 5G networks. However, as more and more delay sensitive applications emerge, the contradiction between task delay requirement and energy conservation becomes more and more prominent on the device side. This paper focuses on a mobile edge computing system where local computing capability and edge computing capability are both limited, which may lead to task discarding due to delay violation. An energy consumption minimization problem is first formulated under the binary offloading and the partial offloading modes. Then a low-complexity heuristic scheme and a Lagrange dual scheme are proposed to jointly optimize the task scheduling and resource allocation under those two modes respectively. Particularly, a task processing priority model is designed to effectively reduce the number of discarded tasks and improve the service performance of the MEC server. The effectiveness of the proposed schemes is validated by extensive simulations with comparison to other baseline schemes.
Article
In response to the growing demand for innovative applications and user experience, computation offloading migrates computation-intensive tasks from user to edge, which is closely coupled with resource allocation. For the rational motivations behind these issues, centralized decision-making may seriously compromise individuals' rationality, and the complexity of this problem is gradually beyond the comfort zone of traditional methods. Therefore, designing a multi-agent resource coordination framework becomes an emerging technical issue of edge computing, which has not been much explored. This paper proposes a learning-based decentralized resource coordination framework, L6C, which enables each decision-maker to pursue its benefit in a socializing and rational manner. We formulate a two-timescale problem of computation offloading and resource allocation, then exploit game theory to discuss the rational properties of users and edges. Specifically, each user wants to minimize the execution cost of its task, and edges try to maximize the experience of task execution cooperatively. Further, we design two explicit information interaction mechanisms based on multi-agent deep reinforcement learning, where interactive contents can be generated dynamically along with resource decisions. Experimental results on a real-world dataset show that the proposed framework achieves the superior performance of edges and users compared with various baselines.
Article
Non-orthogonal multiple access (NOMA) is a key technology to enable massive machine type communications (mMTC) in 5G networks and beyond. In this paper, NOMA is applied to improve the random access efficiency in high-density spatially-distributed multi-cell wireless IoT networks, where IoT devices contend for accessing the shared wireless channel using an adaptive p-persistent slotted Aloha protocol. To enable a capacity-optimal network, a novel formulation of random channel access management is proposed, in which the transmission probability of each IoT device is tuned to maximize the geometric mean of users’ expected capacity. It is shown that the network optimization objective is high dimensional and mathematically intractable, yet it admits favourable mathematical properties that enable the design of efficient data-driven algorithmic solutions which do not require a priori knowledge of the channel model or network topology. A centralized model-based algorithm and a scalable distributed model-free algorithm, are proposed to optimally tune the transmission probabilities of IoT devices and attain the maximum capacity. The convergence of the proposed algorithms to the optimal solution is further established based on convex optimization and game-theoretic analysis. Extensive simulations demonstrate the merits of the novel formulation and the efficacy of the proposed algorithms.
Article
The emerging uplink (UL) and downlink (DL) decoupled radio access network (RAN) has attracted a lot of attention due to its significant gains in network throughput, load balancing, energy consumption, etc. However, due to the diverse vehicular service requirements of different vehicle-to-everything (V2X) applications, providing customized cellular V2X services with diversified requirements in UL/DL decoupled 5G and beyond cellular V2X networks is challenging. To this end, we investigate the feasibility of a UL/DL decoupled RAN framework for cellular V2X communications, including vehicle-to-infrastructure (V2I) communications and relay-assisted cellular vehicle-to-vehicle (RAC-V2V) communications. We propose a two-tier UL/DL decoupled RAN slicing approach. On the first tier, the deep reinforcement learning (DRL) soft actor-critic (SAC) algorithm is leveraged to allocate bandwidth to different base stations. On the second tier, we model the QoS metric of RAC-V2V communications as an absolute-value optimization problem and solve it with the alternative slicing ratio search (ASRS) algorithm, which has global convergence. Extensive numerical simulations demonstrate that UL/DL decoupled access can significantly promote load balancing and reduce C-V2X transmit power. Meanwhile, the simulation results show that the proposed solution can significantly improve network throughput while ensuring the different QoS requirements of cellular V2X.
Article
Macro base stations are overlaid by small cells to satisfy the demands of user equipment in heterogeneous networks. To provide wide coverage, some small cells are not directly connected to macro base stations and thus backhaul connections are required to connect small cells to macro base stations. Millimeter wave backhauls which have high bandwidths are preferred for small cell backhaul communication, since they can increase the capacity of network considerably. In this context, association of user equipment to base stations becomes challenging due to the backhaul architecture. Considering environmental concerns, energy efficiency is a vital criterion in designing user association algorithms. In this paper, we study the user association problem aiming at the maximization of energy efficiency given a specific spectral efficiency target. We develop centralized and distributed user association algorithms based on sequentially minimizing the power consumption. Finally, we evaluate the performance of the proposed algorithms under two scenarios and show that they achieve higher energy efficiency compared to the existing algorithms in the literature, while maintaining high spectral efficiency and backhaul load balancing.
Article
With the advance of unmanned aerial vehicles (UAVs) and low earth orbit (LEO) satellites, the integration of space, air and ground networks has become a potential solution to the beyond fifth generation (B5G) Internet of remote things (IoRT) networks. However, due to the network heterogeneity and the high mobility of UAVs and LEOs, how to design an efficient UAV-LEO integrated data collection scheme without infrastructure support is very challenging. In this paper, we investigate the resource allocation problem for a two-hop uplink UAV-LEO integrated data collection for the B5G IoRT networks, where numerous UAVs gather data from IoT devices and transmit the IoT data to LEO satellites. In order to maximize the data gathering efficiency in the IoT-UAV data gathering process, we study the bandwidth allocation of IoT devices and the 3-dimensional (3D) trajectory design of UAVs. In the UAV-LEO data transmission process, we jointly optimize the transmit powers of UAVs and the selections of LEO satellites for the total uploaded data amount and the energy consumption of UAVs. Considering the relay role and the cache capacity limitations of UAVs, we merge the optimizations of IoT-UAV data gathering and UAV-LEO data transmission into an integrated optimization problem, which is solved with the aid of the successive convex approximation (SCA) and the block coordinate descent (BCD) techniques. Simulation results demonstrate that the proposed scheme achieves better performance than the benchmark algorithms in terms of both energy consumption and total upload data amount.
Article
Terrestrial-satellite networks (TSNs) are envisioned to play a significant role in sixth-generation (6G) wireless networks. In such networks, hot air balloons are useful as they can relay the signals between satellites and ground stations. Most existing works assume that the hot air balloons are deployed at the same height with the same minimum elevation angle to the satellites, which may not be practical due to possible route conflicts with airplanes and other flight equipment. In this paper, we consider a TSN containing hot air balloons at different heights and with different minimum elevation angles, which creates the challenge of non-uniform available serving time for the communication between the hot air balloons and the satellites. Jointly considering caching, computing, and communication (3C) resource management for both the ground-balloon-satellite links and inter-satellite laser links, our objective is to maximize the network energy efficiency. Firstly, by proposing a tapped water-filling algorithm, we schedule the traffic to be relayed among satellites according to the available serving time of the satellites. Then, we generate a series of configuration matrices, based on which we formulate the relationship between the relay time and the power consumption involved in the relay among satellites. Finally, the integrated system model of the TSN is built and solved by geometric programming with Taylor series approximation. Simulation results demonstrate the effectiveness of our proposed scheme.
Article
Low earth orbit (LEO) satellite networks can break through geographical restrictions and achieve global wireless coverage, which makes them an indispensable choice for future mobile communication systems. In this paper, we present a hybrid cloud and edge computing LEO satellite (CECLS) network with a three-tier computation architecture, which can provide ground users with heterogeneous computation resources and enable ground users to obtain computation services around the world. With the CECLS architecture, we investigate the computation offloading decisions to minimize the sum energy consumption of ground users, while satisfying the constraints on the coverage time and the computation capability of each LEO satellite. The considered problem is discrete and non-convex since the objective function and constraints contain binary variables, which makes it difficult to solve. To address this challenging problem, we convert the original non-convex problem into a linear programming problem by using the binary variable relaxation method. Then, we propose a distributed algorithm by leveraging the alternating direction method of multipliers (ADMM) to approximate the optimal solution with low computational complexity. Simulation results show that the proposed algorithm can effectively reduce the total energy consumption of ground users.
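The binary-relaxation step described above can be illustrated with a small linear program (the paper then solves the relaxed problem distributedly via ADMM; here SciPy's centralized LP solver stands in, and the energy costs and capacities are assumptions).

```python
# Sketch of relaxing a binary user-to-satellite assignment into an LP, then rounding.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
U, S = 6, 2                                      # ground users, LEO satellites (assumed sizes)
energy = rng.uniform(0.1, 1.0, size=(U, S))      # energy cost of user u offloading to satellite s

c = energy.flatten()                             # variables x[u, s], relaxed to [0, 1]
A_eq = np.zeros((U, U * S))                      # each user picks exactly one satellite
for u in range(U):
    A_eq[u, u * S:(u + 1) * S] = 1.0
b_eq = np.ones(U)
A_ub = np.zeros((S, U * S))                      # per-satellite computation capacity
for s in range(S):
    A_ub[s, s::S] = 1.0
b_ub = np.full(S, 4.0)                           # at most 4 users per satellite (assumed)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
x_relaxed = res.x.reshape(U, S)
assignment = x_relaxed.argmax(axis=1)            # simple rounding back to binary decisions
print("relaxed solution row sums:", x_relaxed.sum(axis=1).round(2))
print("rounded user-to-satellite assignment:", assignment)
```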
Article
In recent years, edge computing has attracted significant attention because it can effectively support many delay-sensitive applications. Despite such a salient feature, edge computing also faces many challenges, especially for efficiency and security, because edge devices are usually heterogeneous and may be untrustworthy. To address these challenges, we propose a unified framework to provide efficiency and confidentiality by coded distributed computing. Within the proposed framework, we use matrix multiplication, a fundamental building block of many distributed machine learning algorithms, as the representative computation task. To minimize resource consumption while achieving information-theoretic security, we investigate two highly-coupled problems, (1) task allocation that assigns data blocks in a computing task to edge devices and (2) linear code design that generates data blocks by encoding the original data with random information. Specifically, we first theoretically analyze the necessary conditions for the optimal solution. Based on the theoretical analysis, we develop an efficient task allocation algorithm to obtain a set of selected edge devices and the number of coded vectors allocated to them. Using the task allocation results, we then design secure coded computing schemes, for two cases, (1) with redundant computation and (2) without redundant computation, all of which satisfy the availability and security conditions. Moreover, we also theoretically analyze the optimization of the proposed scheme. Finally, we conduct extensive simulation experiments to demonstrate the effectiveness of the proposed schemes.
Article
With the proliferation of compute-intensive and delay-sensitive mobile applications, large amounts of computational resources with stringent latency requirements are required on Internet of Things (IoT) devices. One promising solution is to offload complex computing tasks from IoT devices either to Mobile Edge Computing (MEC) or Mobile Cloud Computing (MCC) servers. MEC servers are much closer to IoT devices and thus have lower latency, while MCC servers can provide flexible and scalable computing capability to support complicated applications. To address the tradeoff between limited computing capacity and high latency, and meanwhile, ensure the data integrity during the offloading process, we consider a blockchain scenario where edge computing and cloud computing can collaborate towards secure task offloading. We further propose a blockchain-enabled IoT-Edge-Cloud computing architecture that benefits both from MCC and MEC, where MEC servers offer lower latency computing services, while MCC servers provide stronger computation power. Moreover, we develop an Energy-Efficient Dynamic Task Offloading (EEDTO) algorithm by choosing the optimal computing place in an online way, either on the IoT device, the MEC server or the MCC server with the goal of jointly minimizing the energy consumption and task response time. The Lyapunov optimization technique is applied to control computation and communication costs incurred by different types of applications and the dynamic changes of wireless environments. During the optimization the best computing location for each task is chosen adaptively without requiring future system information as prior knowledge. Compared with previous offloading schemes with/without MEC and MCC cooperation, EEDTO can achieve energy-efficient offloading decisions with relatively lower computational complexity.
Article
With a massive number of Internet-of-Things (IoT) devices connecting to the Internet via 5G or beyond 5G (B5G) wireless networks, how to support massive access for coexisting cellular users and IoT devices with quality-of-service (QoS) guarantees over limited radio spectrum is one of the main challenges. In this paper, we investigate the multi-operator dynamic spectrum sharing problem to support the coexistence of rate-guaranteed cellular users and massive IoT devices. For spectrum sharing among mobile network operators (MNOs), we introduce a wireless spectrum provider (WSP) that trades spectrum with MNOs through a Stackelberg pricing game. This framework is inspired by the active radio access network (RAN) sharing architecture of 3GPP, which is regarded as a promising solution for MNOs to improve resource utilization and reduce deployment and operation costs. For the coexistence of cellular users and IoT devices under each MNO, we propose coexisting access rules to ensure their QoS and the priority of cellular users. In particular, we prove the uniqueness of the Stackelberg equilibrium (SE) solution, which maximizes the payoffs of the MNOs and the WSP simultaneously. Moreover, we propose an iterative algorithm for the Stackelberg pricing game, which is proved to achieve the unique SE solution. Extensive numerical simulations demonstrate that the payoffs of the WSP and MNOs are maximized and the SE solution can be reached. Meanwhile, the proposed multi-operator dynamic spectrum sharing algorithm can support almost 40% more IoT devices than the existing no-sharing method, and its gap to the exhaustive method is less than about 10%.
Article
Energy efficiency is one of the most important concerns in cloud/edge computing systems. A major benefit of the Dynamic Voltage and Frequency Scaling (DVFS) technique is that a Virtual Machine (VM) can dynamically scale its computation frequency on an on-demand basis, which is helpful in reducing the energy cost of computation when dealing with stochastic workloads. In this paper, we study the joint workload allocation and computation resource configuration problem in distributed cloud/edge computing. We propose a new energy consumption model that considers the stochastic workloads for computation capacity reconfiguration-enabled VMs. We define the Service Risk Probability (SRP) as the probability that a VM fails to process the incoming workloads in the current time slot, and we study the energy-SRP tradeoff problem in a single VM. Without specifying any distribution of the workloads, we prove that, theoretically, there exists an optimal SRP that achieves the minimal energy cost, and we derive the closed form of the condition to achieve this minimal energy point. We also derive the closed form for computing the optimal SRP when the workloads follow a Gaussian distribution. We then study the joint workload allocation and computation frequency configuration problem for the scenario of multiple distributed VMs, and we propose solutions for both Gaussian and unspecified distributions. Our performance evaluation results on both synthetic and real-world workload trace data demonstrate the effectiveness of the proposed model. The closeness between the simulation results and the analytical results proves that our proposed method can achieve lower energy consumption compared with fixed computation capacity configuration methods.
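The energy-SRP tradeoff for Gaussian workloads can be illustrated with the sketch below, which derives the capacity needed for each candidate SRP from the Gaussian quantile and then searches for the cost-minimizing SRP; the DVFS power constant and the risk-penalty weight are assumptions, not the paper's closed-form result.

```python
# Hedged illustration of the energy-SRP tradeoff under a Gaussian workload model.
import numpy as np
from scipy.stats import norm

mu, sigma = 2.0e9, 0.4e9          # mean / std of per-slot workload in CPU cycles (assumed)
slot = 1.0                        # slot length in seconds
kappa = 1e-28                     # effective switched-capacitance constant (typical DVFS value)
penalty = 5.0                     # energy-equivalent cost of one risk event (assumed)

def energy_cost(srp):
    # Capacity such that P(workload > freq * slot) = srp, via the Gaussian quantile.
    freq = (mu + sigma * norm.ppf(1 - srp)) / slot
    return kappa * freq ** 3 * slot + penalty * srp     # computation energy + expected risk cost

srps = np.linspace(0.001, 0.3, 300)
costs = np.array([energy_cost(p) for p in srps])
best = srps[costs.argmin()]
print(f"cost-minimizing SRP in this toy model: {best:.3f}")
```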
Article
Both the edge and the cloud can provide computing services for mobile devices to enhance their performance. The edge can reduce the conveying delay by providing local computing services while the cloud can support enormous computing requirements. Their cooperation can improve the utilization of computing resources and ensure the QoS, and thus is critical to edge-cloud computing business models. This paper proposes an efficient framework for mobile edge-cloud computing networks, which enables the edge and the cloud to share their computing resources in the form of wholesale and buyback. To optimize the computing resource sharing process, we formulate the computing resource management problems for the edge servers to manage their wholesale and buyback scheme and the cloud to determine the wholesale price and its local computing resources. Then, we solve these problems from two perspectives: i) social welfare maximization and ii) profit maximization for the edge and the cloud. For i), we have proved the concavity of the social welfare and proposed an optimal cloud computing resource management to maximize the social welfare. For ii), since it is difficult to directly prove the convexity of the primal problem, we first proved the concavity of the wholesaled computing resources with respect to the wholesale price and designed an optimal pricing and cloud computing resource management to maximize their profits. Numerical evaluations show that the total profit can be maximized by social welfare maximization while the respective profits can be maximized by the optimal pricing and cloud computing resource management.
Article
Ubiquitous sensors and smart devices from factories and communities are generating massive amounts of data, and ever-increasing computing power is driving the core of computation and services from the cloud to the edge of the network. As an important enabler broadly changing people’s lives, from face recognition to ambitious smart factories and cities, developments of artificial intelligence (especially deep learning, DL) based applications and services are thriving. However, due to efficiency and latency issues, the current cloud computing service architecture hinders the vision of “providing artificial intelligence for every person and every organization at everywhere”. Thus, unleashing DL services using resources at the network edge near the data sources has emerged as a desirable solution. Therefore, edge intelligence, aiming to facilitate the deployment of DL services by edge computing, has received significant attention. In addition, DL, as the representative technique of artificial intelligence, can be integrated into edge computing frameworks to build intelligent edge for dynamic, adaptive edge maintenance and management. With regard to mutually beneficial edge intelligence and intelligent edge, this paper introduces and discusses: 1) the application scenarios of both; 2) the practical implementation methods and enabling technologies, namely DL training and inference in the customized edge computing framework; 3) challenges and future trends of more pervasive and fine-grained intelligence. We believe that by consolidating information scattered across the communication, networking, and DL areas, this survey can help readers to understand the connections between enabling technologies while promoting further discussions on the fusion of edge intelligence and intelligent edge, i.e., Edge DL.
Article
As data traffic in terrestrial-satellite systems surges, the integration of power allocation for caching, computing, and communication (3C) has attracted much research attention. However, previous works on 3C power allocation in terrestrial-satellite systems mostly focus on maximizing the overall system throughput. In this paper, we aim to guarantee both throughput fairness and data security in terrestrial-satellite systems. Specifically, we first divide the system implementation into three steps, i.e., data accumulation, blockchain computing, and wireless transmission. Then, we model and analyze the delay and power consumption in each step by proposing several theorems and lemmas regarding 3C power allocation. Based on the theorems and lemmas, we further formulate the problem of 3C power allocation as a Nash bargaining game and construct an optimization model for the game. Last, we solve the optimization problem using dual decomposition and obtain the optimal period of the satellite serving the ground stations as well as the optimal 3C power allocation solution. The optimal solution can provide guidelines for parameter configuration in terrestrial-satellite systems. The performance of the proposed terrestrial-satellite architecture is verified by extensive simulations.
Article
In this work, we consider a mobile edge computing system with both ultra-reliable and low-latency communications services and delay tolerant services. We aim to minimize the normalized energy consumption, defined as the energy consumption per bit, by optimizing user association, resource allocation, and offloading probabilities subject to the quality-of-service requirements. The user association is managed by the mobility management entity (MME), while resource allocation and offloading probabilities are determined by each access point (AP). We propose a deep learning (DL) architecture, where a digital twin of the real network environment is used to train the DL algorithm off-line at a central server. From the pre-trained deep neural network (DNN), the MME can obtain user association scheme in a real-time manner. Considering that real networks are not static, the digital twin monitors the variation of real networks and updates the DNN accordingly. For a given user association scheme, we propose an optimization algorithm to find the optimal resource allocation and offloading probabilities at each AP. Simulation results show that our method can achieve lower normalized energy consumption with less computation complexity compared with an existing method and approach to the performance of the global optimal solution.
Article
To support the explosive growth of wireless devices and applications, various access techniques need to be developed for future wireless systems to provide reliable data services in vast areas. With recent significant advances in ultra-dense low Earth orbit (LEO) satellite constellations, satellite access networks (SANs) have shown their significant potential to integrate with 5G and beyond to support ubiquitous global wireless access. In this article, we propose an enabling network architecture for dense LEO-SANs in which the terrestrial and satellite communications are integrated to offer more reliable and flexible access. Through various physical-layer techniques such as effective interference management, diversity techniques, and cognitive radio schemes, the proposed SAN architecture can provide seamless and high-rate wireless links for wireless devices with different quality of service requirements. Three extensive applications and some future research directions in both the physical layer and network layer are then discussed.
Article
Internet of things (IoT) computing offloading is a challenging issue, especially in remote areas where common edge/cloud infrastructure is unavailable. In this paper, we present a space-air-ground integrated network (SAGIN) edge/cloud computing architecture for offloading computation-intensive applications under the energy and computation constraints of remote areas, where flying unmanned aerial vehicles (UAVs) provide near-user edge computing and satellites provide access to cloud computing. Firstly, for UAV edge servers, we propose a joint resource allocation and task scheduling approach to efficiently allocate the computing resources to virtual machines and schedule the offloaded tasks. Secondly, we investigate the computing offloading problem in SAGIN and propose a learning-based approach to learn the optimal offloading policy from the dynamic SAGIN environments. Specifically, we formulate the offloading decision making as a Markov decision process where the system state considers the network dynamics. To cope with the system dynamics and complexity, we propose a deep reinforcement learning-based computing offloading approach to learn the optimal offloading policy on-the-fly, where we adopt the policy gradient method to handle the large action space and the actor-critic method to accelerate the learning process. Simulation results show that the proposed edge virtual machine allocation and task scheduling approach can achieve near-optimal performance with very low complexity, and that the proposed learning-based computing offloading algorithm not only converges fast but also achieves a lower total cost compared with other offloading approaches.
Article
The high-speed satellite-terrestrial network (STN) is an indispensable alternative in future mobile communication systems. In this article, we first introduce the architecture and application scenarios of STNs, and then investigate possible ways to implement the mobile edge computing (MEC) technique for QoS improvement in STNs. We propose satellite MEC (SMEC), in which a user equipment without a proximal MEC server can also enjoy MEC services via satellite links. We propose a dynamic network virtualization technique to integrate the network resources, and further design a cooperative computation offloading (CCO) model to achieve parallel computation in STNs. Task scheduling models in SMEC are discussed in detail, and an elementary simulation is conducted to evaluate the performance of the proposed CCO model in SMEC.
Article
The emergence of computation-intensive and delay-sensitive applications poses a significant challenge to mobile users in providing the required computation capacity and ensuring latency. Mobile Edge Computing (MEC) is a promising technology that can alleviate the computation limitations of mobile users and prolong their lifetimes through computation offloading. However, computation offloading in an MEC environment faces severe challenges due to the dense deployment of MEC servers. Moreover, a mobile user has multiple tasks with certain dependencies between them, which adds substantial challenge to offloading policy design. To address the above challenges, in this paper we first propose a novel two-tier computation offloading framework in heterogeneous networks. Then, we formulate a joint computation offloading and user association problem for a multi-task mobile edge computing system to minimize overall energy consumption. To solve the optimization problem, we develop an efficient computation offloading algorithm by jointly optimizing user association and computation offloading, where computation resource allocation and transmission power allocation are also considered. Numerical results illustrate the fast convergence of the proposed algorithm and demonstrate its superior performance compared to state-of-the-art solutions.
Article
In this paper, we propose Chimera, a novel hybrid edge computing framework, integrated with the emerging edge cloud radio access network, to augment network-wide vehicle resources for future large-scale vehicular crowdsensing applications, by leveraging a multitude of cooperative vehicles and the virtual machine (VM) pool in the edge cloud via the control of the application manager deployed in the edge cloud. We present a comprehensive framework model and formulate a novel multi-vehicle and multi-task offloading problem, aiming at minimizing the energy consumption of network-wide recruited vehicles serving heterogeneous crowdsensing applications, and meanwhile reconciling both application deadline and vehicle incentive. We invoke Lyapunov optimization framework to design TaskSche, an online task scheduling algorithm, which only utilizes the current system information. As the core components of the algorithm, we propose a task workload assignment policy based on graph transformation and a knapsack-based VM pool resource allocation policy. Rigorous theoretical analyses and extensive trace-driven simulations indicate that our framework achieves superior performance (e.g., 20% – 68% energy saving without overstepping application deadlines for network-wide vehicles compared with vehicle local processing) and scales well for a large number of vehicles and applications.
Article
Mobile Edge Computing (MEC) is an emergent architecture where cloud computing services are extended to the edge of networks by leveraging mobile base stations. As a promising edge technology, it can be applied to mobile, wireless and wireline scenarios, using software and hardware platforms located at the network edge in the vicinity of end-users. MEC provides seamless integration of multiple application service providers and vendors towards mobile subscribers, enterprises and other vertical segments. It is an important component in the 5G architecture which supports a variety of innovative applications and services where ultra-low latency is required. This paper aims to present a comprehensive survey of relevant research and technological developments in the area of MEC. It provides the definition of MEC, its advantages, architectures, and application areas, with particular emphasis on related research and future directions. Finally, security and privacy issues and related existing solutions are also discussed.
Article
In this two-part paper, we propose a general algorithmic framework for the minimization of a nonconvex smooth function subject to nonconvex smooth constraints, and also consider extensions to some structured, nonsmooth problems. The algorithm solves a sequence of (separable) strongly convex problems and maintains feasibility at each iteration. Convergence to a stationary solution of the original nonconvex optimization is established. Our framework is very general and flexible and unifies several existing Successive Convex Approximation (SCA)-based algorithms. More importantly, and differently from current SCA approaches, it naturally leads to distributed and parallelizable implementations for a large class of nonconvex problems. This Part I is devoted to the description of the framework in its generality. In Part II we customize our general methods to several multi-agent optimization problems in communications, networking, and machine learning; the result is a new class of centralized and distributed algorithms that compare favorably to existing ad-hoc (centralized) schemes.
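A minimal successive convex approximation iteration, in the spirit of the framework above (and of the SCA step used in the main paper), is sketched below on a toy one-dimensional nonconvex objective; the grid minimization merely stands in for a convex solver, and the objective is an assumption.

```python
# Minimal SCA (convex-concave style) sketch: linearize the nonconvex term around the current
# iterate, minimize the resulting convex surrogate, repeat while staying feasible.
import numpy as np

def f(x):
    return x ** 4 - 3.0 * x ** 2 + x          # convex part (x^4 + x) plus concave part (-3x^2)

x = 2.0                                        # feasible starting point on the box [-2, 2]
for k in range(50):
    grad_concave = -6.0 * x                    # gradient of the concave term at x_k
    # Convex surrogate: keep x^4 + x, replace -3x^2 by its linearization around x_k.
    grid = np.linspace(-2.0, 2.0, 4001)        # fine grid stands in for a convex solver
    surrogate = grid ** 4 + grid + grad_concave * (grid - x)
    x_new = grid[surrogate.argmin()]
    if abs(x_new - x) < 1e-6:                  # stop at a stationary point of the original problem
        break
    x = x_new
print(f"SCA stationary point x = {x:.4f}, f(x) = {f(x):.4f}")
```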
Article
Mobile edge computing enables the provision of computationally demanding Augmented Reality (AR) applications on mobile devices. AR mobile applications have inherent collaborative properties in terms of data collection in the uplink, computing at the edge, and data delivery in the downlink. In this letter, we propose a resource allocation approach whereby transmitted, received and processed data are shared partially among the users to obtain an efficient utilization of the communication and computation resources. The approach, implemented via Successive Convex Approximation (SCA), is seen to yield considerable gains in mobile energy consumption as compared to the conventional independent offloading across users.
Article
Mobile-edge computation offloading (MECO) offloads intensive mobile computation to clouds located at the edges of cellular networks. Thereby, MECO is envisioned as a promising technique for prolonging the battery lives and enhancing the computation capacities of mobiles. In this paper, we study resource allocation for a multiuser MECO system based on time-division multiple access (TDMA) and orthogonal frequency-division multiple access (OFDMA). First, for the TDMA MECO system with infinite or finite computation capacity, the optimal resource allocation is formulated as a convex optimization problem for minimizing the weighted sum mobile energy consumption under the constraint on computation latency. The optimal policy is proved to have a threshold-based structure with respect to a derived offloading priority function, which yields priorities for users according to their channel gains and local computing energy consumption. As a result, users with priorities above and below a given threshold perform complete and minimum offloading, respectively. Moreover, for the cloud with finite capacity, a sub-optimal resource-allocation algorithm is proposed to reduce the computation complexity for computing the threshold. Next, we consider the OFDMA MECO system, for which the optimal resource allocation is formulated as a non-convex mixed-integer problem. To solve this challenging problem and characterize its policy structure, a sub-optimal low-complexity algorithm is proposed by transforming the OFDMA problem to its TDMA counterpart. The corresponding resource allocation is derived by defining an average offloading priority function and shown to have close-to-optimal performance by simulation.
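The threshold-based policy structure described above can be sketched as follows; the priority proxy, the threshold choice, and the "minimum offloading" fraction are illustrative assumptions, not the derived offloading priority function.

```python
# Sketch of a threshold-structured offloading policy: users ranked by a priority score,
# users above the threshold offload completely, the rest offload only a minimum amount.
import numpy as np

rng = np.random.default_rng(5)
N = 8
channel_gain = rng.uniform(0.1, 1.0, N)          # better channel -> cheaper offloading
local_energy = rng.uniform(0.5, 2.0, N)          # per-bit local computing energy (assumed)

priority = local_energy * channel_gain           # assumed monotone proxy for the offloading priority
threshold = np.median(priority)                  # placeholder threshold (set by the latency budget)

offload_fraction = np.where(priority >= threshold, 1.0, 0.1)   # complete vs. minimum offloading
for i in np.argsort(-priority):
    print(f"user {i}: priority {priority[i]:.2f} -> offload {offload_fraction[i]:.0%}")
```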
Article
With small cell base stations (SBSs) densely deployed in addition to conventional macro base stations (MBSs), the heterogeneous cellular network (HCN) architecture can effectively boost network capacity. To support the huge power demand of HCNs, renewable energy harvesting technologies can be leveraged. In this paper, we aim to make efficient use of the harvested energy for on-grid power saving while satisfying the quality of service (QoS) requirement. To this end, energy-aware traffic offloading schemes are proposed, whereby user associations, ON-OFF states of SBSs, and power control are jointly optimized according to the statistical information of energy arrival and traffic load. Specifically, for the single SBS case, the power saving gain achieved by activating the SBS is derived in closed form, based on which the SBS activation condition and optimal traffic offloading amount are obtained. Furthermore, a two-stage energy-aware traffic offloading (TEATO) scheme is proposed for the multiple-SBS case, considering various operating characteristics of SBSs with different power sources. Simulation results demonstrate that the proposed scheme can achieve more than 50% power saving gain for typical daily traffic and solar energy profiles, compared with the conventional traffic offloading schemes.
Article
With the foreseeable explosive growth of small cell deployment, backhaul has become the next big challenge in the next generation wireless networks. Heterogeneous backhaul deployment using different wired and wireless technologies may be a potential solution to meet this challenge. Therefore, it is of cardinal importance to evaluate and compare the performance characteristics of various backhaul technologies to understand their effect on the network aggregate performance. In this paper, we propose relevant backhaul models and study the delay performance of various backhaul technologies with different capabilities and characteristics, including fiber, xDSL, millimeter wave (mmWave), and sub–6 GHz. Using these models, we aim at optimizing the base station (BS) association so as to minimize the mean network packet delay in a macrocell network overlaid with small cells. Numerical results are presented to show the delay performance characteristics of different backhaul solutions. Comparisons between the proposed and traditional BS association policies show the significant effect of backhaul on the network performance, which demonstrates the importance of joint system design for radio access and backhaul networks.
A distributed deep reinforcement learning technique for application placement in edge and fog computing environments
M. Goudarzi, M. S. Palaniswami, R. Buyya