Edge Computing Assisted Autonomous Driving Using Artificial Intelligence

Hatem Ibn-Khedher, Mohammed Laroui†‡, Mouna Ben Mabrouk, Hassine Moungla†‡, Hossam Afifi, Alberto Nai Oleari and Ahmed E. Kamal§
ALTRAN Labs, 78140 Velizy-Villacoublay, France.
Emails: {hatem.ibnkhedher,mouna.benmabrouk,albero.naioleari}@altran.com
Université de Paris, LIPADE, F-75006 Paris, France.
Emails: {mohammed.laroui,hassine.moungla}@u-paris.fr
UMR 5157, CNRS, Institut Polytechnique de Paris, Telecom SudParis Saclay, France.
Emails: {hassine.moungla,hossam.afifi}@telecom-sudparis.eu
§Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011-3060, USA.
Email: see http://www.ece.iastate.edu/~kamal/
Abstract—The emergence of new vehicle generations, such as connected and autonomous vehicles, has led to new challenges in vehicular networking and computing management to provide efficient services and guarantee quality of service. The edge computing facility allows the decentralization of processing from the cloud to the edge of the network. In this paper, we design and propose an end-to-end, reliable and low-latency communication architecture that allows the allocation of compute-intensive autonomous driving services, in particular the autopilot, to shared resources on edge computing servers and improves the level of performance for autonomous vehicles. The reference architecture is used to design an Advanced Autonomous Driving (AAD) communication protocol between autonomous vehicles, edge computing servers, and the centralized cloud. Then, a mathematical programming approach using Integer Linear Programming (ILP) is formulated to model the offloading of the autopilot chain resources at the network edge. Further, a deep reinforcement learning (DRL) approach is proposed to deal with dense Internet of Autonomous Vehicles (IoAV) networks. Moreover, several scenarios are considered to quantify the behavior of the optimization approaches. We compare their efficiency in terms of Total Edge Servers Utilization, Total Edge Servers Allocation Time, and Successfully Allocated Edge Autopilots.
Index Terms—Edge Computing, Autonomous Vehicles (AV), Artificial Intelligence (AI), Optimization, Deep Reinforcement Learning (DRL).
I. INTRODUCTION
Autonomous electric vehicles (AEVs) [1]–[3] are a recent generation of self-driving vehicles in which fuel energy is replaced by environmentally friendly electric energy. The main challenge in autonomous electric vehicles is the self-driving management system for advanced autonomous driving (AAD). The evolution of vehicle generations has led to new challenges in vehicular communications, which require high computation resources to decide on safe trajectories and to secure intra- and inter-vehicle communication.
The edge computing paradigm [4] allows the decentralization of processing from the cloud to the network edge, which decreases latency, one of the main issues in cloud computing architectures. It offers a distributed architecture for processing data close to end-users, which facilitates transactions between mobile end-users and the centralized cloud. In addition, edge computing supports the high mobility of connected and autonomous vehicles, which directly impacts the communication link [5] and the continuity of self-driving procedures.
The edge computing layer can be embedded at different points of operation (PoPs) such as the Road Side Unit (RSU), the Next Generation NodeB (gNB), and the autonomous vehicles themselves [6]. Moreover, edge computing can be combined with electric and autonomous vehicles to enhance driving conditions and the quality of service (QoS) in vehicular communications (V2X). In this paper, we propose to assist autonomous vehicles with virtual edge servers for efficient self-driving. Introducing Artificial Intelligence (AI) techniques at the edge layer may replace the current solvers or heuristics. It will ease the network optimization task using recent deep and reinforcement learning approaches for edge/fog/cloud resource allocation and management.
The main contributions of this paper are as follows:
• Design and propose an end-to-end, reliable and low-latency communication architecture that allows the allocation of compute-intensive autonomous driving services, in particular the autopilot, to shared resources on edge servers and improves the level of performance for autonomous vehicles.
• Propose an Advanced Autonomous Driving (AAD) communication protocol that supports dense moving vehicles.
• Use Edge-assisted Integer Linear Programming (ILP) techniques to allocate edge autopilot resources on optimal edge computing servers.
• Introduce Edge Deep Reinforcement Learning (EDRL) as a new approach to automate and optimize the allocation of heterogeneous edge autopilot computing and networking resources, extract knowledge from disseminated data, and recommend edge autopilot allocation strategies according to different metrics. The autopilot's Virtual Network Functions (VNFs) allocation is compared to standard optimization techniques in order to show how deep learning techniques can be used to solve generalized compute-intensive autonomous driving service optimization problems.
The rest of the paper is organized as follows. Section II reviews related work in the field of edge-assisted autonomous driving architectures, protocols, and optimizations. Section III describes our proposed Advanced Autonomous Driving (AAD) communication protocol. Section IV introduces the proposed mathematical programming approach for AAD. Section V enhances the optimization module with an Edge Deep Reinforcement Learning (EDRL) approach. Section VI evaluates the proposed approaches, and the paper is concluded in the final section.
II. RELATED WORK
This section highlights the relevant autopilot offloading
optimization algorithms.
In [7], the authors present the state-of-the-art approaches that leverage the edge computing paradigm in the autonomous driving field. However, the survey lacks a discussion of current edge AI work and of optimal resource allocation design.
In [8], the authors present an edge-cloud computing model for autonomous vehicles using the open-source software platform Autoware [9]. They report that their proposed edge-cloud computing model for Autoware-based autonomous vehicles reduces the execution time and the total deadline misses. Among the main missing modules in their platform, the work considers neither in-vehicle computing resource management nor Vehicle-to-Edge (V2E) communications.
In [10], the authors propose Surrogates, an edge architecture for self-driving cars based on OpenStack and the ETSI open-source MANO. It aims at virtualizing the in-vehicle On-Board Units (OBUs) on a distributed edge platform and at managing Multi-Access Edge Computing (MEC) layers that process real-time vehicle requests. The work lacks optimal virtual OBU (vOBU) management and orchestration algorithms at the virtualized edge surrogates. Moreover, the vOBU manager module needs to take into account solver instances related to the IoAV network scale and driving conditions.
In [11], the authors propose a cloud-based self-driving car that can mitigate in-vehicle data storage issues. They propose to free autonomous vehicles from all data and to download everything from the cloud as needed during travel. Their solution frees vehicles from raw data and relies on a centralized cloud infrastructure for the drive. The authors assume persistent network connectivity to the cloud and sufficient in-vehicle storage to back up the data in case of limited network availability. The proposed cloud infrastructure is not clearly specified and needs to integrate scheduling algorithms that allocate the gathered data to CPU cores and servers. Moreover, it lacks distributed edge computing servers that efficiently process sensitive application data.
In [12], the authors proposed Carcel, a cloud-assisted system for autonomous driving. The cloud platform has access to data from AV sensors and from the roadside infrastructure, and it assists autonomous vehicles in detecting and avoiding obstacles, such as pedestrians and other vehicles, that may not be directly detected by AV sensors. We believe that the work is of practical interest; however, it lacks the virtualization techniques and VNF manager modules that ease the allocation of the autopilot chain on the cloud. Moreover, the edge/fog facility is missing from the overall architecture.

Figure 1: Edge autopilot use case in the Artificial Intelligence Defined Optimization Framework.
III. PROPOSED EDGE AUTOPILOT PROTOCOL
A. Proposed AAD Architecture Using Edge Artificial Intelligence
In Fig. 1, we highlight the proposed Advanced Autonomous Driving (AAD) architecture of our edge autopilot use case. The AAD architecture consists of three main layers or entities:
• Centralized Cloud Computing Layer: it acts as the cloud autopilot and is responsible for processing Non-Real-Time (NRT) edge autopilot VNFs.
• Distributed Network Edges Layer: an intermediate layer that connects OBU-equipped vehicles to the cloud. It consists of distributed edge servers that ensure the cooperation between the virtualized OBUs or vehicles. It is responsible for processing and analyzing offloaded VNFs according to the vehicle request requirements and the available resources in the edge servers. It is worth mentioning that resources include computing (CPU, GPU, FPGA), radio (Resource Blocks, SNR, MCS, CQI), RAM, and storage (see the data-model sketch after this list). The network edge cooperates with distant edges and the cloud, and it can execute VNF migration and outsourcing in case of a local resource shortage.
• Autonomous Vehicles Layer: the layer of autonomous vehicles that request the offloading of autopilot service chains due to local resource scarcity.
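To make the resource vocabulary above concrete, the following minimal Python sketch models the capacities an edge server could advertise during resource discovery. All class and field names are illustrative assumptions; the paper only enumerates the resource types.

# Hypothetical data model for the resources tracked at a network edge node
# (field names are illustrative; the paper only enumerates the resource types).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RadioState:
    resource_blocks: int   # available 4G/5G resource blocks
    snr_db: float          # signal-to-noise ratio
    mcs_index: int         # modulation and coding scheme
    cqi: int               # channel quality indicator

@dataclass
class EdgeServer:
    cpu_cores: int
    vgpu_slots: int        # vGPU slots, i.e., the capacity G_mec used by OVEAP
    fpga_units: int
    ram_gb: int
    storage_gb: int
    radio: RadioState
    allocated_vnfs: List[Dict[str, int]] = field(default_factory=list)

    def can_host(self, required_vgpu: int) -> bool:
        """Check whether this server has enough free vGPU slots for a VNF slice."""
        used = sum(v["vgpu"] for v in self.allocated_vnfs)
        return self.vgpu_slots - used >= required_vgpu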
B. Proposed AAD Communication Protocol
Fig. 1 shows the main modules of the proposed AAD
protocol:
1) Edge Autopilot Slicing: each autonomous vehicle can request the offloading of some autopilot functions. It requests the nearest edge node, represented by a gNB or an RSU (i.e., a 5G base station), to enable local edge resource discovery and slice allocation.

Figure 2: The communication steps between the centralized edge and the distributed connected AD vehicles.
2) Resource Discovery in Connected Edge Nodes: when the access point receives an autopilot function offloading request, it generates VNF components or slices. Then, it selects a set of connected edge nodes that can satisfy each VNF's requirements in terms of GPU resources. The selected set of connected edge nodes is called the Virtual Edge Servers (VES). The resource discovery procedure is based on the computing and networking capabilities of the servers.
3) Autopilot VNF Offloading/Allocation: once the VES is selected, the access point starts the slice offloading process by allocating free device resources to each slice (from the selected VES that can satisfy the slice's requirements). It is worth mentioning that an optimization algorithm is used to select the optimal points of operation where VNFs can be offloaded according to the aforementioned system and network requirements. Still, cloud computing may represent a solution in the case of an edge resource shortage. This case may occur when the access point cannot select a VES that meets the demands of the set of autonomous vehicles.
4) VNF Components Graph: this is the optimization result of the allocation procedure, indicating the placement of each VNF component. After launching the VNFs on the VES/cloud servers, optimal control commands are sent directly to the access point.
5) Results Forwarding: in the last step, the access point forwards the optimal control commands to the autonomous vehicle while satisfying its requirements (a sketch of these messages is given after this list).
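The five steps above can be read as a message exchange. The following Python sketch illustrates one plausible shape for the protocol data units; none of these names or fields are specified by the paper, so they are assumptions made for illustration only.

# Hypothetical message types for the five AAD protocol steps above
# (all names and fields are illustrative assumptions, not a wire format
# defined by the paper).
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class OffloadRequest:            # Step 1: vehicle -> access point (gNB/RSU)
    vehicle_id: str
    vnf_slices: List[str]        # e.g., ["vPerception", "vLocalisation"]
    vgpu_demand: Dict[str, int]  # required vGPU resources per slice

@dataclass
class DiscoveryResult:           # Step 2: access point builds the VES set
    ves: List[str]               # edge node IDs able to satisfy the slices

@dataclass
class PlacementGraph:            # Steps 3-4: optimizer output (the SIG)
    placement: Dict[str, str]    # slice -> selected MEC server ("cloud" fallback)

@dataclass
class ControlCommands:           # Step 5: access point -> vehicle
    vehicle_id: str
    commands: List[str]          # e.g., ["brake", "keep_lane"]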
For the sake of clarity, we show in Fig. 2 the main communication steps between the edge computing layer and the connected autonomous vehicles:
• Connected autonomous vehicles send instantaneous states, such as position, speed, and the next autopilot decision, to the edge/cloud.
• The Edge/Cloud Autonomous Driving (AD) service collects the raw data, creates the world model for each section of the road, and communicates with the Cloud AD Autopilot.
• The Edge/Cloud AD Autopilot sends the global model and generates a high-level request for each autonomous vehicle node, such as a speed request and a lane positioning request.
• The integrated autopilot merges the Edge/Cloud autopilot inputs with the embedded/local inputs in order to anticipate and act locally.

TABLE I: OVEAP Parameters and Decision Variables

Notation        Definition
EA              The set of edge autopilots, in terms of service chains.
VNF             The set of edge autopilot VNFs; each edge autopilot VNF is composed of slices.
AV              The set of autonomous vehicles.
MEC             The set of Mobile Edge Computing servers available at the network edge.
L_ea            The number of VNFs in each edge autopilot ea ∈ EA; it represents the length of the edge autopilot.
G_mec           The maximum vGPU computing capacity available in the MEC server mec ∈ MEC.
g_{ea,vnf}      The vGPU resources required by the VNF slice vnf of the edge autopilot service chain ea.

Decision variable    Definition
a^{mec}_{ea,vnf}     A binary variable that allocates the edge autopilot VNF vnf ∈ VNF of the edge autopilot ea ∈ EA to the MEC server mec ∈ MEC.
b_mec                A binary variable that indicates whether the MEC server mec ∈ MEC is used to process the edge autopilot VNFs.
As explained above, the AAD protocol needs an intelligent optimization algorithm that allocates autopilot VNFs to optimal or near-optimal edge servers.
IV. PROPOSED OPTIMAL VIRTUAL EDGE AUTOPILOT PLACEMENT APPROACH
The optimization of the edge autopilot's Virtual Network Functions (VNFs) placement in edge computing architectures has received increasing attention. It is similar to the placement of Virtual Machines (VMs), where the VNFs are composed of containers or VMs that execute network functions.
We propose a mathematical programming approach for Optimal Virtual Edge-Autopilot Placement (OVEAP) based on an Integer Linear Programming (ILP) technique. It models the offloading of autopilot services in the virtualized edge computing architecture. It takes as input the edge system capacity in terms of computing resources, and it then optimally allocates the autopilot VNFs on the available virtual edge servers. The autopilot VNFs are offloaded to the centralized edge in order to reduce the VNF processing time and increase driving safety. The OVEAP optimization algorithm is modeled in the next subsections.
A. OVEAP Parameters and Decision Variables
We list in Table I the main parameters and decision variables of the proposed OVEAP algorithm.
The binary variable $a^{mec}_{ea,vnf}$ indicates the allocation of the edge autopilot VNF $(ea, vnf) \in EA \times VNF$ on the MEC server $mec \in MEC$. It represents a service instantiation graph (SIG) that defines the optimal points of operation where edge autopilot VNFs should be allocated.

$$a^{mec}_{ea,vnf} = \begin{cases} 1 & \text{if the VNF } vnf \text{ of the edge autopilot } ea \text{ is allocated on the MEC server } mec \\ 0 & \text{otherwise} \end{cases} \quad (1)$$

Further, a binary variable $b_{mec}$ is needed to track the MEC server utilization. It is formulated as follows:

$$b_{mec} = \begin{cases} 1 & \text{if the MEC server } mec \in MEC \text{ is used} \\ 0 & \text{otherwise} \end{cases} \quad (2)$$
B. Exact ILP formulation
The OVEAP objective function (3) maximizes the number of successfully placed edge autopilot chains while minimizing the number of used MEC servers. The general ILP formulation is as follows:

$$\text{Maximize } OBJ = \sum_{ea \in EA} \sum_{mec \in MEC} a^{mec}_{ea,L_{ea}} - \sum_{mec \in MEC} b_{mec} \quad (3)$$

Subject to:

$$\sum_{mec \in MEC} a^{mec}_{ea,vnf} \le 1, \quad \forall ea \in EA, \forall vnf \in VNF \quad (4)$$

$$\sum_{ea \in EA} \sum_{vnf \in VNF} g_{ea,vnf} \times a^{mec}_{ea,vnf} \le G_{mec} \, b_{mec}, \quad \forall mec \in MEC \quad (5)$$

$$\sum_{mec \in MEC} a^{mec}_{ea,vnf} \ge \sum_{mec \in MEC} a^{mec}_{ea,vnf+1}, \quad \forall ea \in EA, \forall vnf \in VNF \setminus \{L_{ea}\} \quad (6)$$

$$b_{mec} \le \sum_{ea,vnf} a^{mec}_{ea,vnf}, \quad \forall mec \in MEC \quad (7)$$

$$0 \le a^{mec}_{ea,vnf} \le 1, \quad \forall mec \in MEC, \forall ea \in EA, \forall vnf \in VNF \quad (8)$$

$$0 \le b_{mec} \le 1, \quad \forall mec \in MEC \quad (9)$$
The OVEAP constraints are presented as follows:
• Constraints (4) guarantee that each VNF slice $vnf \in VNF$ of the edge autopilot $ea \in EA$ is placed on at most a single MEC server $mec \in MEC$.
• Constraints (5) ensure that the selected MEC server has a sufficient amount of vGPU resources to execute the VNF slices of the edge autopilot ($b_{mec} = 1$). If the virtual server is not used ($b_{mec} = 0$), the model does not allocate any edge autopilot to it.
• Constraints (6) ensure that the VNF components of each edge autopilot are executed in series: for an edge autopilot service chain, a VNF may not be allocated to a MEC server unless the previous VNFs have been allocated. This guarantees that, in the end, if a VNF is allocated, then all the previous VNFs are also allocated.
• Constraints (7) turn off unused MEC servers in our exact model.
• Constraints (8) and (9) ensure the non-negativity of the proposed decision variables.
An illustrative encoding of this model is sketched below.
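As an illustration of this formulation, the following sketch encodes objective (3) and constraints (4)-(7) on a toy instance. It uses the open-source PuLP modeler instead of CPLEX (which the paper uses), and the instance sizes and vGPU demands are arbitrary assumptions.

# Illustrative OVEAP ILP sketch using PuLP (the paper solves the model with
# CPLEX; PuLP is swapped in here so the sketch runs without a commercial solver).
import pulp

# Toy instance (hypothetical sizes): 2 edge autopilot chains, 3 MEC servers.
EA = [0, 1]
L = {0: 2, 1: 3}                 # L_ea: number of VNFs per autopilot chain
MEC = [0, 1, 2]
G = {0: 10, 1: 10, 2: 5}         # G_mec: vGPU capacity per MEC server
g = {(ea, v): 2 for ea in EA for v in range(L[ea])}  # vGPU demand per VNF slice

prob = pulp.LpProblem("OVEAP", pulp.LpMaximize)
a = pulp.LpVariable.dicts(
    "a", [(ea, v, m) for ea in EA for v in range(L[ea]) for m in MEC], cat="Binary")
b = pulp.LpVariable.dicts("b", MEC, cat="Binary")

# Objective (3): successfully placed chains (last VNF placed) minus open servers.
prob += (pulp.lpSum(a[(ea, L[ea] - 1, m)] for ea in EA for m in MEC)
         - pulp.lpSum(b[m] for m in MEC))

# (4): each VNF is placed on at most one MEC server.
for ea in EA:
    for v in range(L[ea]):
        prob += pulp.lpSum(a[(ea, v, m)] for m in MEC) <= 1

# (5): vGPU capacity of each server; forces a = 0 on closed servers (b = 0).
for m in MEC:
    prob += (pulp.lpSum(g[(ea, v)] * a[(ea, v, m)] for ea in EA for v in range(L[ea]))
             <= G[m] * b[m])

# (6): precedence -- VNF v+1 can only be placed if VNF v is placed.
for ea in EA:
    for v in range(L[ea] - 1):
        prob += (pulp.lpSum(a[(ea, v, m)] for m in MEC)
                 >= pulp.lpSum(a[(ea, v + 1, m)] for m in MEC))

# (7): close servers that host nothing.
for m in MEC:
    prob += b[m] <= pulp.lpSum(a[(ea, v, m)] for ea in EA for v in range(L[ea]))

prob.solve()
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))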
C. OVEAP complexity and triggers
OVEAP complexity: once the required computing thresholds are available, the algorithm is executed periodically. Further, OVEAP is a non-deterministic polynomial-time approach that is only feasible for a small number of instances, as it follows an exponential time complexity.
V. DEEP REINFORCEMENT LEARNING BASED VIRTUAL EDGE AUTOPILOT PLACEMENT
A. General DVEAP Context
We suppose that we have different types of edge autopilots, each of them having a particular size in terms of the number of VNFs. Further, we consider a central edge server connected to different autonomous vehicles via RSU and gNB gateways. The edge server is considered as a centralized data center, or a computing reservoir, that offers computing resources in terms of vGPU slots per MEC server. Each MEC server has a maximum GPU computing capacity.
We propose a DRL-based Virtual Edge-Autopilot Placement (DVEAP) approach, an AI-defined optimization approach that replaces the tedious placement process (OVEAP) with recent AI techniques. We introduce deep learning modules at the network edge that collect, process, and analyse autonomous vehicle data. Then, the edge sends optimized control commands back to the vehicle. For this purpose, a scalable and cost-efficient algorithm based on AI/DRL is proposed to ease the edge autopilot placement process.
The DVEAP optimization technique aims to:
• determine the near-optimal MEC server where each edge autopilot VNF should be offloaded;
• minimize the number of active MEC servers needed to satisfy the vehicle requests;
• maximize the allocation of incoming edge autopilots;
• guarantee the chaining and precedence constraints between edge autopilot VNFs.
We propose to use the Deep Q-Learning (DQN) technique as the most promising value iteration strategy for discrete action spaces.
B. DVEAP formulation
The DVEAP algorithm leverages deep learning techniques to approximate the action-selection strategy of the RL model.
1) The RL model: we use RL theory to decide on the optimal placement of each edge autopilot VNF. We formulate the optimization problem (3) as an RL model.
Environment design: we design our own environment and consider a centralized edge computing cluster with a single resource, the GPU. It is connected to moving vehicles that
offload autopilot VNFs. Jobs/tasks arrive at the cluster in an online fashion in discrete time steps. At each time step, the edge server places each VNF of an edge autopilot on a MEC server. We assume that the VNF demands are known upon arrival.

Algorithm 1: DRL-based Virtual Edge Autopilot Placement at the Network Edge
1: Input: EA, VNF, L_ea
2: Output: OBJ
3: Initialize a replay memory D and the action-value matrix Q with random weights
4: Observe the initial state s
5: repeat
6:   Select a MEC server a: with probability ε select a random MEC server; otherwise select the MEC server with max_{a'} Q(s, a')
7:   Place the edge autopilot's VNF on the MEC server a
8:   Observe the allocation cost r and the new edge state s'
9:   Store the experience (s, a, r, s') in D
10:  Sample a random transition from D
11:  Calculate the target for each mini-batch transition: r + γ × max_{a'} Q(s', a')
12:  Train the Q-network using the loss function: Loss = (1/2)(r + γ × max_{a'} Q(s', a') − Q(s, a))²
13:  s = s'
14: until no incoming edge autopilot VNFs remain from any ea ∈ EA
State space: we represent the state of the system as the current placement of autopilot VNFs on the MEC server slots.
Action space: an action is the placement of a compute-intensive autopilot VNF on an available MEC server. The placement takes into consideration the computing capacity of the edge server in terms of vGPU slots; the agent will not place an autopilot VNF on a server slot that is occupied by another running autopilot VNF. The action is mono-type: the agent processes the VNFs one by one until all the incoming vehicle autopilots have been processed.
Reward design: the proposed reward is the placement cost of the edge autopilot. It measures the number of active MEC servers used after performing an action. It is formulated as follows:
$$r_t = \frac{|EA_{succ}|}{|MEC_{open}|}$$
where $EA_{succ}$ and $MEC_{open}$ represent the total number of successfully allocated edge autopilots and the number of opened MEC servers, respectively.
Agent function: the placement decision results from an intelligent agent module that interacts with the designed environment. It implements a DQN learning algorithm to decide on the placement of the incoming edge autopilot VNFs. The method estimates the future placement reward, representing the cost of the optimization decision. Then, as indicated by $r_t$, the agent's objective is to maximize the number of successfully placed edge autopilots while minimizing the number of opened MEC servers.
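A minimal sketch of the environment dynamics and the reward $r_t$ described above is given below; the class and method names, and the simple slot accounting, are assumptions made for illustration, not part of the paper.

# Minimal sketch of the DVEAP environment step and reward r_t = |EA_succ| / |MEC_open|
# (names and slot accounting are illustrative assumptions).
from typing import List

class EdgeEnv:
    def __init__(self, vgpu_slots_per_server: List[int]):
        self.free = list(vgpu_slots_per_server)      # free vGPU slots per MEC server
        self.initial = list(vgpu_slots_per_server)   # initial capacities G_mec
        self.successful = 0                          # |EA_succ| so far

    def step(self, server: int, demand: int, last_vnf_of_chain: bool) -> float:
        """Place one VNF on `server`; return the reward after the action."""
        if self.free[server] >= demand:              # only place on free slots
            self.free[server] -= demand
            if last_vnf_of_chain:                    # whole chain placed
                self.successful += 1
        opened = sum(1 for f, c in zip(self.free, self.initial) if f < c)
        return self.successful / opened if opened else 0.0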
2) The DVEAP algorithm: in the DVEAP algorithm, we use Deep Neural Networks (DNNs) to approximate the above RL model. A succession of neural network layers maps the input state to the output action. In Algorithm 1, we describe the pseudo-code of our proposed DVEAP algorithm.
We use the Stochastic Gradient Descent (SGD) algorithm [13] to train the Deep Q-Network (DQN) agent. Then, we tune the main hyper-parameters to decide on the optimal DNN configuration, such as the number of epochs/iterations, the optimizer parameters, and the action selection strategy. As shown in Algorithm 1 (Lines 11 and 12), the SGD algorithm uses the Bellman equation $Q(s, a) = r + \gamma \times \max_{a'} Q(s', a')$ in order to minimize the loss function (squared error) between the target and the current Q-values. The DNN weights are then updated through back-propagation. The DRL-based approach aims at reducing the ILP solving time and the RL state-space complexity.

TABLE II: Small Scale Configuration.

Vehicles     Autopilot Chain
Vehicle 1    vPerception
Vehicle 2    vPerception, vLocalisation
Vehicle 3    vPerception, vLocalisation, vPlanner
Vehicle 4    vPerception, vLocalisation, vControl
Vehicle 5    vPerception, vLocalisation, vPlanner, vControl

Edge         Capacity (servers/slots)
Edge 1       10/5
Edge 2       10/10

TABLE III: Large Scale Configuration.

Parameter               Value
Number of vehicles      150
Number of VNFs          300
Capacity of edge 1      10/10 (servers/slots)
Capacity of edge 2      15/10 (servers/slots)
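Since the paper implements the DRL agent with Keras, the following sketch shows one plausible realization of the DQN update in Algorithm 1 (epsilon-greedy selection, replay sampling, and the squared-error regression toward the Bellman target). The state encoding, layer sizes, and hyper-parameters are assumptions, not values from the paper.

# Minimal Keras sketch of the DQN update in Algorithm 1 (illustrative only:
# state encoding, layer sizes, and hyper-parameters are assumptions).
import random
from collections import deque

import numpy as np
from tensorflow import keras

STATE_DIM = 20      # e.g., flattened occupancy of MEC vGPU slots (assumed)
N_ACTIONS = 4       # one action per candidate MEC server (assumed)
GAMMA = 0.9         # discount factor
EPSILON = 0.1       # exploration probability

q_net = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(STATE_DIM,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(N_ACTIONS, activation="linear"),  # Q(s, a) per MEC server
])
q_net.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01), loss="mse")

replay = deque(maxlen=10000)   # the replay memory D of Algorithm 1

def select_action(state: np.ndarray) -> int:
    """Epsilon-greedy MEC server selection (Algorithm 1, Line 6)."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return int(np.argmax(q_net.predict(state[None, :], verbose=0)[0]))

def train_step(batch_size: int = 32) -> None:
    """Regress Q(s, a) toward r + gamma * max_a' Q(s', a') (Lines 10-12)."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    states = np.array([t[0] for t in batch])
    next_states = np.array([t[3] for t in batch])
    targets = q_net.predict(states, verbose=0)
    next_q = q_net.predict(next_states, verbose=0)
    for i, (_, action, reward, _) in enumerate(batch):
        targets[i][action] = reward + GAMMA * np.max(next_q[i])
    q_net.fit(states, targets, epochs=1, verbose=0)  # SGD minimizes the squared error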
VI. OVEAP VS DVEAP: PERFORMANCE EVALUATION
We evaluate the proposed algorithms (OVEAP and DVEAP) using different optimization tools. CPLEX¹ is used to evaluate the exact ILP model, while Keras² is used to configure and implement the AI DRL algorithm.
As explained above, we consider the following edge autopilot VNF chain: vPerception, vLocalisation, vPlanner, and vControl. Recall that the optimization objective is to place each autopilot service on the edge servers while ensuring the chaining of all the VNFs of the service. Tables II and III show the configurations used in our evaluation for small and large scale networks, respectively.
A. Key Performance Indicators (KPIs)
To assess the efficiency of the proposed approaches (ILP-OVEAP and DRL-DVEAP), we define the following KPIs:
• Total Edge Servers Utilization (TESU): the number of servers allocated for the autopilot service function chain.
• Total Edge Servers Allocation Time (TESAT): the time required to allocate the autopilot service function chain.
• Successfully Allocated Edge Autopilots: the number of successfully allocated edge autopilots.
To better select the appropriate allocation strategy (OVEAP or DVEAP), different network scales (i.e., small and large) are considered as follows.
B. Small scale network
In Figures 3 and 4, we show the total resource utilization at the network edge for the two edge configurations, Edge 1 and Edge 2, where Edge 2 has more computing capacity in terms of GPU slots.

¹https://pypi.org/project/cplex/
²https://keras.io/

Figure 3: Total Resources Utilization of Edge 1. (a) Edge Server Allocation Cost; (b) Edge Server Allocation Time.
Figure 4: Total Resources Utilization of Edge 2. (a) Edge Server Allocation Cost; (b) Edge Server Allocation Time.
Figure 5: Total Resources Utilization in Large Scale Network.
In Figures 3a and 4a, we plot the TESU metric against the autonomous vehicles. The results show the efficiency of the proposed DRL-DVEAP algorithm, which provides an efficient placement and converges to the exact ILP-OVEAP in terms of placement cost. In addition, Figures 3b and 4b show that the DVEAP algorithm achieves a negligible TESAT compared to OVEAP, whose placement time nevertheless remains feasible (a few microseconds).
C. Large scale network
In Fig. 5, we quantify the behavior of the DRL-DVEAP algorithm in large scale networks according to the edge configurations. We plot the TESU against the number of autonomous vehicles, ranging from 20 to 100. The results show that increasing the computing capacity helps in better offloading the edge autopilot functions.
In Fig. 6, we show the limit of the DRL approach in a very dense network constituted by a large number of autonomous vehicles requiring service offloading. The results show that 87.3% of the edge autopilots are successfully allocated on MEC servers.
VII. CONCLUSION
In this paper we have proposed an Artificial Intelligence
approach for edge autopilot offloading at the network edge.
Figure 6: Total Successful/Failed Edge Autopilot Allocation in Large Scale Network (successful: 87.3%; failed: 12.7%).
First, we have proposed an end-to-end architecture for edge-assisted autonomous driving. Then, we have proposed an optimal allocation approach (OVEAP) that decides on the optimal placement of edge autopilot VNFs. Further, to deal with dense IoAV networks, a deep reinforcement learning approach (DRL-DVEAP) has been formulated. Based on different configurations and edge environments, the proposed DVEAP achieves good results in terms of offloading cost and time. In future work, we will consider further networking and computing architectures for edge-assisted autonomous driving.
REFERENCES
[1] R. Hussain and S. Zeadally, "Autonomous cars: Research results, issues, and future challenges," IEEE Communications Surveys & Tutorials, vol. 21, no. 2, pp. 1275–1313, 2018.
[2] M. Ehsani, Y. Gao, S. Longo, and K. Ebrahimi, Modern Electric, Hybrid Electric, and Fuel Cell Vehicles. CRC Press, 2018.
[3] J. Wu, H. Liao, J.-W. Wang, and T. Chen, "The role of environmental concern in the public acceptance of autonomous electric vehicles: A survey from China," Transportation Research Part F: Traffic Psychology and Behaviour, vol. 60, pp. 37–46, 2019.
[4] Z. Ning, J. Huang, and X. Wang, "Vehicular fog computing: Enabling real-time traffic management for smart cities," IEEE Wireless Communications, vol. 26, no. 1, pp. 87–93, 2019.
[5] M. Laroui, A. Sellami, B. Nour, H. Moungla, H. Afifi, and S. B. Hacene, "Driving path stability in VANETs," in 2018 IEEE Global Communications Conference (GLOBECOM), 2018, pp. 1–6.
[6] H. Khedher, S. Hoteit, P. Brown, R. Krishnaswamy, W. Diego, and V. Veque, "Processing time evaluation and prediction in Cloud-RAN," in ICC 2019 - 2019 IEEE International Conference on Communications (ICC), 2019, pp. 1–6.
[7] S. Liu, L. Liu, J. Tang, B. Yu, Y. Wang, and W. Shi, "Edge computing for autonomous driving: Opportunities and challenges," Proceedings of the IEEE, vol. 107, no. 8, pp. 1697–1716, 2019.
[8] H. Chishiro, K. Suito, T. Ito, S. Maeda, T. Azumi, K. Funaoka, and S. Kato, "Towards heterogeneous computing platforms for autonomous driving," in 2019 IEEE International Conference on Embedded Software and Systems (ICESS), 2019, pp. 1–8.
[9] Autoware. (2019). [Online]. Available: https://www.autoware.org/
[10] J. Santa, P. Fernandez, J. Ortiz, R. Sanchez-Iborra, and A. Skarmeta, "Surrogates: Virtual OBUs to foster 5G vehicular services," Electronics, vol. 9, pp. 1–16, 2019.
[11] N. S. Yeshodara, N. S. Nagojappa, and N. Kishore, "Cloud based self driving cars," in 2014 IEEE International Conference on Cloud Computing in Emerging Markets (CCEM), 2014, pp. 1–7.
[12] S. Kumar, S. Gollakota, and D. Katabi, "A cloud-assisted design for autonomous driving," in Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing, 2012, pp. 41–46.
[13] B.-C. Zhou, C.-Y. Han et al., "Convergence of stochastic gradient descent in deep neural network," Acta Mathematicae Applicatae Sinica, English Series, no. 1, pp. 126–136, 2021.