Figure 2 - available via license: CC BY
One-to-many matching.

Source publication
Article
Full-text available
Mobile edge computing (MEC) is an emerging technology that leverages computing, storage, and network resources deployed in the proximity of users to offload their delay-sensitive tasks. Various existing facilities, including mobile devices with idle resources, vehicles, and MEC servers deployed at base stations or roadside units, could act as edges...

Contexts in source publication

Context 1
... achieve a practical and distributed solution, we realize that the task assignment problem in MEC architectures can also be formulated as a matching game. As shown in Figure 2, computation tasks and ENs are treated as two disjoint sets of agents to be matched together. In particular, one task can be assigned to at most one EN, while one EN can accept multiple tasks. ...
Context 2
... continue until all tasks are matched or the delay requirements of the unmatched tasks cannot be satisfied by any EN. The runtime complexity of this algorithm is O(N²), where N is the number of tasks or ENs. ...
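The one-to-many matching described in these contexts can be sketched as a greedy assignment loop: each task is matched to at most one edge node (EN) that satisfies its delay requirement, while each EN accepts tasks up to its capacity. The data below (delay bounds, capacities, service delays) and the smallest-delay preference rule are illustrative assumptions, not values or rules from the article.

```python
# Greedy sketch of one-to-many task-to-EN matching with delay constraints.
# All numbers and the preference rule are illustrative assumptions.

def match_tasks_to_ens(tasks, ens):
    """tasks: {id: max_delay}; ens: {id: (capacity, service_delay)}.
    Each task goes to at most one EN; an EN accepts up to its capacity."""
    assignment = {}
    load = {e: 0 for e in ens}
    for t, max_delay in tasks.items():
        # Candidate ENs that can still meet this task's delay requirement.
        feasible = [e for e, (cap, d) in ens.items()
                    if d <= max_delay and load[e] < cap]
        if feasible:
            # Prefer the EN with the smallest service delay (illustrative rule).
            best = min(feasible, key=lambda e: ens[e][1])
            assignment[t] = best
            load[best] += 1
    return assignment

tasks = {"t1": 10, "t2": 5, "t3": 3}
ens = {"e1": (2, 4), "e2": (1, 2)}
print(match_tasks_to_ens(tasks, ens))  # t3 stays unmatched: no EN meets its bound
```

Tasks whose delay requirements no EN can satisfy simply remain unmatched, mirroring the termination condition quoted in Context 2.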

Similar publications

Article
Full-text available
Fog computing is considered an emerging technology nowadays. Due to its proximity to the end user, fog computing provides reliable transmission with low latency. In this paper, we have proposed an improved mutual authentication security scheme based on the Advanced Encryption Standard (AES) and a hashed message authentication code (HMAC) in fog computing. Our sch...
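The HMAC side of the AES-plus-HMAC scheme mentioned in this abstract can be illustrated with Python's standard `hmac` module; the key and message below are placeholders, and this sketch covers only the integrity/authenticity tag, not the full mutual-authentication protocol of the cited paper.

```python
import hmac
import hashlib

def tag(key: bytes, message: bytes) -> bytes:
    # HMAC-SHA256 tag over the message; a fog node holding the same
    # shared key recomputes this tag to authenticate the sender.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, received_tag: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(tag(key, message), received_tag)

key = b"shared-secret"          # placeholder pre-shared key
msg = b"sensor-reading:42"
t = tag(key, msg)
print(verify(key, msg, t))          # prints True: valid tag
print(verify(key, b"tampered", t))  # prints False: message was altered
```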

Citations

... The study in [20] modeled the mutually beneficial interaction of work offloading between two sets of agents, with the objective of decreasing overall energy consumption. An agent-to-agent task-scheduling process was then applied, and the effectiveness of the suggested mechanism was confirmed by a simulation experiment. ...
Article
Full-text available
The idea of computational offloading is quickly catching on in the world of mobile cloud computing (MCC). Today’s applications have heavy demands on power and computing resources, creating issues with energy consumption, storage capacity, and mobile device performance. Mobile devices may efficiently offload their calculations to cloud servers using the offloading paradigm and then get the processed results back onto the device. The investigation relates to identifying the specific application components that should be offloaded and run remotely and which parts are supposed to be treated locally. The applications need to be partitioned to differentiate between remote and local codes. In this paper, an agent-based multistage graph partitioning (ABMP) scheme is proposed. The framework of the scheme is based on three-tier architecture that includes mobile, cloudlet, and cloud for the execution of application tasks. The main goal is to provide an efficient partitioning and offloading scheme in the mobile cloud computing area. The results show that incorporating both agent-based multistage graph partitioning and offloading algorithms yields superior performance as compared to previous methods in terms of reducing execution costs and conserving battery life for mobile devices.
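The partitioning decision described in this abstract, which application stages run on the mobile device, a cloudlet, or the cloud, can be viewed as a shortest path through a multistage graph. The sketch below is a generic dynamic-programming stand-in, not the ABMP scheme itself; all costs are hypothetical.

```python
# Choosing an execution site per application stage as a shortest path
# through a multistage graph (mobile / cloudlet / cloud). Costs are
# hypothetical; the paper's actual cost model is richer.

SITES = ["mobile", "cloudlet", "cloud"]

def plan(exec_cost, trans_cost):
    """exec_cost[stage][site]: execution cost of a stage at a site.
    trans_cost[a][b]: cost of moving data from site a to site b.
    Returns (total_cost, site per stage) minimizing the sum."""
    n = len(exec_cost)
    # best[s] = (cost so far, chosen sites) with the current stage at site s
    best = {s: (exec_cost[0][s], [s]) for s in SITES}
    for stage in range(1, n):
        new_best = {}
        for s in SITES:
            prev, path = min(
                ((best[p][0] + trans_cost[p][s], best[p][1]) for p in SITES),
                key=lambda x: x[0])
            new_best[s] = (prev + exec_cost[stage][s], path + [s])
        best = new_best
    return min(best.values(), key=lambda x: x[0])

exec_cost = [{"mobile": 5, "cloudlet": 2, "cloud": 1},
             {"mobile": 4, "cloudlet": 2, "cloud": 1}]
trans_cost = {a: {b: (0 if a == b else 3) for b in SITES} for a in SITES}
print(plan(exec_cost, trans_cost))
```

With these toy numbers the transfer penalty keeps both stages on the same site (the cloud), which is the kind of trade-off a partitioning scheme must weigh.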
... In [2], a genetic algorithm based on a data-aware task allocation strategy has been proposed that considers network congestion control for allocating sub-tasks. In [20], the authors have focused on reducing energy consumption for task assignments by considering the heterogeneity of users, using a heuristic-based greedy approach. An architecture has been proposed in [21] that considers offloading resource-intensive tasks from client devices to the cooperative edge space or to the remote cloud, depending on users' desire and resource availability. ...
Article
Full-text available
The emergence of mobile edge computing (MEC) has brought cloud services to nearby edge servers facilitating penetration of real-time and resource-consuming applications from smart mobile devices at a high rate. The problem of task offloading from mobile devices to the edge servers has been addressed in the state-of-the-art works by introducing collaboration among the MEC servers. However, their contributions are either limited by minimization of service latency or cost reduction. In this paper, we address the problem by developing a multi-objective optimization framework that jointly optimizes the latency, energy consumption, and resource usage cost. The formulated problem is proven to be an NP-hard one. Thus, we develop an evolutionary meta-heuristic solution for the offloading problem, namely WOLVERINE, based on a Binary Multi-objective Grey Wolf Optimization algorithm that achieves a feasible solution within polynomial time, having computational complexity of O(M³), where M is an integer that determines the number of segments in each dimension of the objective space. Our experimental results depict that the developed WOLVERINE system achieves as high as 33.33%, 35%, and 40% performance improvements in terms of execution latency, energy, and resource cost, respectively, compared to the state-of-the-art.
... As shown in Figure 1, this section proposes the communication network architecture of VPP aggregation regulation based on the communication requirements of distributed source (Gu et al., 2018;Zhou et al., 2019;Zhu et al., 2019), load, and storage resources participating in power grid regulation through VPP aggregation. Communication network architecture of VPP adopts hierarchical architecture, which consists of a terminal controller layer, local communication network layer, edge gateway layer, remote communication network layer, and master station layer from bottom to top. ...
Article
Full-text available
Virtual power plant (VPP) plays an important role in improving the balance and regulation abilities of the new power system. Safe and reliable operation is supported by the VPP end-to-end communication network with differentiated multi-service bearing capability. To meet the requirement of a unified and standard VPP end-to-end networking scheme, the VPP service communication metrics, as well as the communication network architecture of VPP aggregation and control, are analyzed. Then, a multi-dimension hierarchical VPP end-to-end network evaluation index system is put forward. In addition, an end-to-end VPP network evaluation method considering the differentiated time-sensitive and granular requirements of multiple services is proposed. Finally, the suitability analysis results of various end-to-end networking schemes and multiple services with differentiated time-sensitive and granular requirements are given, which play a guiding role in establishing a unified standard VPP end-to-end networking scheme.
... Some scholars have conducted some studies on this issue. For example, in [10], a relatively practical IoV scenario was considered, and a matching game method was used to model the task allocation. The simulation results show that the input data transmission delay accounts for 73% of the total task processing time. ...
Article
Full-text available
With the continuous development of the 6G mobile network, computing-intensive and delay-sensitive onboard applications generate task data traffic more frequently. Particularly, when multiple intelligent agents are involved in tasks, limited computational resources cannot meet the new Quality of Service (QoS) requirements. To provide a satisfactory task offloading strategy, combining Multi-Access Edge Computing (MEC) with artificial intelligence has become a potential solution. In this context, we have proposed a task offloading decision mechanism (TODM) based on cooperative game and deep reinforcement learning (DRL). A joint optimization problem is presented to minimize both the overall task processing delay (OTPD) and overall task energy consumption (OTEC). The approach considers task vehicles (TaVs) and service vehicles (SeVs) as participants in a cooperative game, jointly devising offloading strategies to achieve resource optimization. Additionally, a proximal policy optimization (PPO) algorithm is designed to ensure robustness. Simulation experiments confirm the convergence of the proposed algorithm. Compared with benchmark algorithms, the presented scheme effectively reduces delay and energy consumption while ensuring task completion.
... If the mutation probability is established, the node assigned to the IoT service for the current element is updated. This update is done based on Equation (22). ...
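A generic version of the mutation step quoted here, reassigning a service's node with some probability, can be sketched as below. Equation (22) of the cited paper defines the actual update rule; this stand-in simply picks a different candidate node uniformly at random, which is an assumption.

```python
import random

# Illustrative mutation step for a service-placement chromosome: with
# probability p_m, the fog node assigned to each IoT service is replaced
# by another candidate node. The uniform choice is a placeholder for the
# cited paper's Equation (22).

def mutate(placement, candidate_nodes, p_m, rng):
    mutated = list(placement)
    for i, node in enumerate(mutated):
        if rng.random() < p_m:
            # Pick a different node for service i (assumed uniform choice).
            others = [n for n in candidate_nodes if n != node]
            mutated[i] = rng.choice(others)
    return mutated

rng = random.Random(0)
placement = ["fog1", "fog2", "fog1"]
print(mutate(placement, ["fog1", "fog2", "fog3"], p_m=0.5, rng=rng))
```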
Article
Full-text available
Nowadays, fog computing has joined cloud computing as an emerging computing paradigm to provide resources at the network edge. Fog servers process data associated with Internet of Things (IoT) devices independently of cloud computing, thus saving bandwidth, resource reservations, and storage for real‐time applications with lower latency. Besides, cloud computing supports the integration of edge and cloud resources and facilitates the placement of IoT applications at the network edge. Recent researchers focus on how to deploy IoT services as components of IoT applications on fog computing units, where the loss of resources, energy, and bandwidth are minimized. This problem, known as the IoT service placement problem (SPP), is NP‐hard, and meta‐heuristic models are popular to address it. Each IoT service has its own requirements in terms of latency sensitivity, processor, memory, and storage. Meanwhile, fog computing units are heterogeneous and have limited resource capacities. Therefore, SPP should be addressed by considering the features of fog environment, tolerable delay, and network bandwidth. We formulate SPP as a multi‐objective optimization problem with the perspective of throughput, service cost, resource utilization, energy consumption, and service latency. To solve this problem, the learner performance‐based behavior (LPB) algorithm is presented as a meta‐heuristic model that originates from the MAPE‐K autonomous planning model. The proposed approach, LPB‐SPP, considers resource consumption distribution and service deployment prioritization, and also uses the concepts of elitism and balanced resource consumption to improve the placement process. The validation of LPB‐SPP has been done using different performance metrics and the results have been compared against state‐of‐the‐art algorithms. Simulations show that LPB‐SPP performs better in most comparisons.
... Pushing networks and capital next to the MEUs network edge is called mobile edge computing (MEC). The MEC integrates fog computing into the mobile world and monitors the storage, protection, and privacy limitations of mobile devices to reduce latency, increase the quality of experience, and ensure a highly efficient network operation and service delivery [4,5]. ...
Article
Healthcare applications need an immediate response with minimal latency. Fog computing provides real-time monitoring and computation at the network edge so the data collected by Internet of Things (IoT) devices can be analyzed with minimal cost in real-time. However, dealing with end-user mobility becomes even more challenging with fog-assisted IoT healthcare frameworks because low latency in mobility support and fog node locating is a vital prerequisite. In this paper, we propose a fog node (FN) allocation and mobility-adapted fog computing architecture. The architecture is based on an algorithm that offers low latency for location-sensitive operations and service stability with high data broadcast quality for user mobility in fog computing. In our proposed architecture, it is no longer obligatory to either carry out a job on, or redirect it to, the cloud but instead connect with other FNs to continue with a request for a new job. The proposed architecture minimizes average network latency by 40%–50%, bandwidth by 30%–40%, and end-to-end communication by 35%–55%. Additionally, the utilization of the FN computational resources is maximized.
... Sahni et al. [12] proposed a data-aware multistage greedy adjustment algorithm to schedule tasks and network flows together to achieve low latency. Gu et al. [13] designed a distributed and context-aware task assignment mechanism to reduce overall energy consumption while satisfying the heterogeneous delay requirements. Wang et al. [14] presented a latency-aware heterogeneous mobile-edge computing system, where the data are offloaded to the cloud center if the edge cannot process it on time. ...
Article
Full-text available
The strict latency constraints of emerging vehicular applications make it unfeasible to forward sensing data from vehicles to the cloud for processing. To shorten network latency, vehicular fog computing (VFC) moves computation to the edge of the Internet, with the extension to support the mobility of distributed computing entities (a.k.a fog nodes). In other words, VFC proposes to complement stationary fog nodes co-located with cellular base stations with mobile ones carried by moving vehicles (e.g., buses). Previous works on VFC mainly focus on optimizing the assignments of computing tasks among available fog nodes. However, capacity planning, which decides where and how much computing resources to deploy, remains an open and challenging issue. The complexity of this problem results from the spatio-temporal dynamics of vehicular traffic, varying computing resource demand generated by vehicular applications, and the mobility of fog nodes. To solve the above challenges, we propose a data-driven capacity planning framework that optimizes the deployment of stationary and mobile fog nodes to minimize the installation and operational costs under the quality-of-service constraints, taking into account the spatio-temporal variation in both demand and supply. Using real-world traffic data and application profiles, we analyze the cost efficiency potential of VFC in the long term. We also evaluate the impacts of traffic patterns on the capacity plans and the potential cost savings. We find that high traffic density and significant hourly variation would lead to dense deployment of mobile fog nodes and create more savings in operational costs in the long term.
... For a machine learning algorithm, verifying its strengths and weaknesses, and whether it can successfully solve the problem at hand, requires evaluating and testing the model [18]. Under normal circumstances, the overall sample set is split into two categories: samples that are classified correctly and samples that are classified incorrectly. ...
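The correct/incorrect split described in this snippet is the basis of the simplest evaluation metric, accuracy: the fraction of samples whose predicted label matches the true label. The labels below are placeholders.

```python
# Minimal sketch of the correct/incorrect split used in model evaluation.

def accuracy(y_true, y_pred):
    # Count samples classified correctly, divide by the total sample count.
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0]
print(accuracy(y_true, y_pred))  # 4 of 5 correct -> 0.8
```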
Article
Full-text available
Edge computing is an important cornerstone for the construction of 5G networks, but with the development of Internet technology, computing nodes have become extremely vulnerable to attacks, especially clone attacks, which can cause serious damage. In a clone node attack, the attacker captures legitimate nodes in the network, obtains all their legitimate information, copies several nodes with the same ID and key information, and places these clone nodes at different locations in the network to attack edge computing devices, resulting in network paralysis. Quickly and efficiently identifying clone nodes and isolating them is therefore the key to preventing clone node attacks and improving the security of edge computing. To improve the protection of edge computing and identify clone nodes more quickly and accurately, this paper applies machine learning to edge computing, uses case analysis, literature analysis, and other methods to collect data from the database, and uses a parallel algorithm to build a clone node recognition model. The results show that edge computing based on machine learning can greatly improve the efficiency of clone node recognition: the recognition speed is more than 30% faster than traditional edge computing, and the recognition accuracy reaches 0.852, about 50% higher than traditional recognition. The results show that the machine learning-based clone node detection method can improve the detection success rate of clone nodes and reduce the energy consumption and transmission overhead of nodes, which is of great significance to the detection of clone nodes.
... Gu et al. [76] proposed a binary linear programming-based model and heuristics for task assignment in a MEC environment. Their model optimizes overall energy consumption induced by the execution and transmission of tasks, while ensuring that the delay constraint required by all tasks is satisfied. ...
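The binary linear program attributed to Gu et al. [76] assigns each task to a node so that total energy from execution and transmission is minimized while every task's delay constraint holds. The exhaustive search below is only a toy stand-in for a BLP solver, usable at tiny problem sizes; the energy and delay numbers are hypothetical.

```python
from itertools import product

# Toy stand-in for a binary linear program: pick one node per task to
# minimize total energy while every task meets its delay bound.
# Energies and delays are hypothetical.

def assign(tasks, nodes, energy, delay, deadline):
    """energy[t][n], delay[t][n]: cost/latency of task t on node n;
    deadline[t]: delay bound of task t. Exhaustive search (toy sizes only)."""
    best, best_cost = None, float("inf")
    for combo in product(nodes, repeat=len(tasks)):
        if any(delay[t][n] > deadline[t] for t, n in zip(tasks, combo)):
            continue  # a delay constraint is violated
        cost = sum(energy[t][n] for t, n in zip(tasks, combo))
        if cost < best_cost:
            best, best_cost = dict(zip(tasks, combo)), cost
    return best, best_cost

tasks = ["t1", "t2"]
nodes = ["n1", "n2"]
energy = {"t1": {"n1": 3, "n2": 1}, "t2": {"n1": 2, "n2": 5}}
delay = {"t1": {"n1": 2, "n2": 6}, "t2": {"n1": 1, "n2": 1}}
deadline = {"t1": 5, "t2": 4}
print(assign(tasks, nodes, energy, delay, deadline))
```

Note how the delay constraint rules out the cheapest node for t1 (energy 1 on n2, but delay 6 > 5), so the optimum is not simply the per-task energy minimum, which is exactly why the cited work formulates this as a constrained BLP rather than a per-task greedy choice.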
Article
Full-text available
In recent years, fog computing has emerged as a computing paradigm to support the computationally intensive and latency-critical applications for resource limited Internet of Things (IoT) devices. The main feature of fog computing is to push computation, networking, and storage facilities closer to the network edge. This enables IoT user equipment (UE) to profit from the fog computing paradigm by mainly offloading their intensive computation tasks to fog resources. Thus, computation offloading and service placement mechanisms can overcome the resource constraints of IoT devices, and improve the system performance in terms of increasing battery lifetime of UE and reducing the total delay. In this paper, we survey the current research conducted on computation offloading and service placement in fog computing-based IoT in a comparative manner.