Edge Computing Assisted Autonomous Driving
Using Artificial Intelligence
Hatem Ibn-Khedher∗, Mohammed Laroui†‡ , Mouna Ben Mabrouk∗, Hassine Moungla†‡ , Hossam Afifi‡
Alberto Nai Oleari ∗and Ahmed E. Kamal §
∗ALTRAN Labs,78140 Velizy-Villacoublay, France.
Emails: {hatem.ibnkhedher,mouna.benmabrouk,albero.naioleari}@altran.com
†Université de Paris, LIPADE, F-75006 Paris, France.
Emails: {mohammed.laroui,hassine.moungla}@u-paris.fr
‡UMR 5157, CNRS, Institut Polytechnique de Paris, Telecom SudParis Saclay, France.
Emails: {hassine.moungla,hossam.afifi}@telecom-sudparis.eu
§Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011-3060, USA.
Email: see http://www.ece.iastate.edu/ kamal/
Abstract—The emergence of new vehicle generations, such as connected and autonomous vehicles, has led to new challenges in vehicular networking and computing management: providing efficient services and guaranteeing quality of service. The edge
computing facility allows the decentralization of processing from
the cloud to the edge of the network. In this paper, we design
and propose an end-to-end, reliable and low latency communica-
tion architecture that allows the allocation of compute-intensive
autonomous driving services, in particular autopilot, to shared
resources on edge computing servers and improves the level of performance for autonomous vehicles. The reference architecture
is used to design an Advanced Autonomous Driving (AAD)
communication protocol between autonomous vehicles, edge com-
puting servers, and the centralized cloud. Then, a mathematical
programming approach using Integer Linear Programming (ILP)
is formulated to model the offloading of autopilot chain resources at the network edge. Further, a deep reinforcement learning
(DRL) approach is proposed to deal with dense Internet of
Autonomous Vehicle (IoAV) networks. Moreover, several scenarios
are considered to quantify the behavior of the optimization
approaches. We compare their efficiency in terms of Total Edge
Servers Utilization, Total Edge Servers Allocation Time, and
Successfully Allocated Edge Autopilots.
Index Terms—Edge Computing, Autonomous Vehicles (AV),
Artificial Intelligence (AI), Optimization, Deep Reinforcement
Learning (DRL).
I. INTRODUCTION
Autonomous electric vehicles (AEVs) [1]–[3] are a recent generation of self-driving vehicles in which fuel energy is replaced by environmentally friendly electric energy. The main challenge in autonomous electric vehicles is the self-driving management system for advanced autonomous driving (AAD). The evolution of vehicle generations has led to new challenges in vehicular communications, which require high computation resources to decide on safe trajectories and to secure intra- and inter-vehicle communication.
The edge computing paradigm [4] decentralizes processing from the cloud to the network edge, which decreases latency, one of the main issues in the cloud computing architecture. It offers a distributed architecture for processing data close to end-users, which facilitates the transactions between mobile end-users and the centralized cloud. In addition, edge computing supports the high mobility of connected and autonomous vehicles, which directly impacts the communication link [5] and the continuity of self-driving procedures.
The edge computing layer can be embedded at different points of operation (PoPs) such as the Road Side Unit (RSU), the Next Generation NodeB (gNB), and the autonomous vehicles [6]. Moreover, edge computing can be combined with electric and autonomous vehicles to enhance driving situations and the quality of service (QoS) of vehicular communications (V2X). In this paper, we propose to assist autonomous vehicles with virtual edge servers for efficient self-driving. Introducing Artificial Intelligence (AI) techniques at the edge layer may replace current solvers or heuristics: it eases the network optimization task by using recent deep and reinforcement learning approaches for edge/fog/cloud resource allocation and management.
The main contributions of this paper are as follows:
•Design and propose an end-to-end, reliable and low-latency communication architecture that allows the allocation of compute-intensive autonomous driving services, in particular autopilot, to shared resources on edge servers and improves the level of performance for autonomous vehicles.
•Propose an Advanced Autonomous Driving (AAD) com-
munication protocol that supports dense moving vehicles.
•Use of Edge-assisted Integer Linear Programming (ILP)
Techniques to allocate edge autopilot resources on optimal
edge computing servers.
•Introduce Edge Deep Reinforcement Learning (EDRL) as
a new approach to automate and optimize the allocation of
heterogeneous edge autopilot computing and networking
resources, extract knowledge from disseminated data, and
recommend edge autopilot allocation strategies according
to different metrics. The autopilot's Virtual Network Functions (VNFs) allocation is compared to standard optimization techniques in order to show how deep learning techniques can be used to solve generalized compute-intensive autonomous driving services optimization problems.
The rest of the paper is organized as follows. Section II reviews related work in the field of edge-assisted autonomous driving architectures, protocols, and optimizations. Section III describes our proposed Advanced Autonomous Driving (AAD) communication protocol. Section IV introduces the proposed mathematical programming approach for AAD. Section V enhances the optimization module with an Edge Deep Reinforcement Learning (EDRL) approach. Section VI evaluates the proposed approaches, and the work is concluded in the final section.
II. RELATED WORK
This section highlights the relevant autopilot offloading
optimization algorithms.
In [7], the authors present state-of-the-art approaches that leverage the edge-computing paradigm in the autonomous driving field. However, the survey lacks a discussion of current edge AI work and of optimal resource allocation design.
In [8], the authors present an edge-cloud computing model
for autonomous vehicles using the open-source software platform Autoware [9]. They believe that their proposed edge-cloud computing model for Autoware-based autonomous vehicles reduces the execution time and the number of deadline misses. However, the work considers neither in-vehicle computing resources management nor Vehicle-to-Edge (V2E) communications.
In [10], the authors propose Surrogates, an edge architecture for self-driving cars built with OpenStack and the ETSI open-source MANO. It aims at virtualizing the in-vehicle On-Board Units (OBU) on the distributed edge platform and managing Multi-Access Edge Computing (MEC) layers that process real-time vehicle requests. The work lacks optimal virtual OBU (vOBU) management and orchestration algorithms at the virtualized edge surrogates. Moreover, the vOBU manager module needs to take into account solver instances related to the IoAV network scale and driving conditions.
In [11], the authors propose a cloud-based self-driving car that can mitigate in-vehicle data storage issues. They propose to free autonomous vehicles from all data and to download everything from the cloud as needed during travel. Their solution frees vehicles from raw data and relies on a centralized cloud infrastructure for the drive. The authors assume persistent network connectivity to the cloud and sufficient in-vehicle storage to back up the data in case of limited network availability. The proposed cloud infrastructure is not clearly specified and needs to integrate scheduling algorithms that allocate the gathered data to CPU cores and servers. Moreover, it lacks distributed edge computing servers that efficiently process sensitive application data.
In [12], the authors propose Carcel, a cloud-assisted system for autonomous driving. The cloud platform has access to data from AV sensors and from the roadside infrastructure. It assists autonomous vehicles in detecting and avoiding obstacles, such as pedestrians and other vehicles, that may not be directly detected
Figure 1: Edge autopilot use case in the Artificial Intelligence Defined Optimization Framework
by AV sensors. We believe that the work is of practical interest; however, it lacks virtualization techniques and VNF manager modules that would ease the allocation of the autopilot chain on the cloud. Moreover, the edge/fog facility is missing from the overall architecture.
III. PROPOSED EDGE AUTOPILOT PROTOCOL
A. Proposed AAD Architecture Using Edge Artificial Intelli-
gence
In Fig. 1 we highlight the proposed Advanced Autonomous
Driving (AAD) architecture of our edge autopilot use case. The
AAD architecture consists of three main layers or entities as
follows:
•Centralized Cloud Computing Layer: It acts as the
cloud autopilot and is responsible for processing Non-
Real-Time (NRT) edge autopilot VNFs.
•Distributed Network Edges Layer: It is an intermediate
layer that connects OBU vehicles to the cloud. It consists
of distributed edge servers that assure the cooperation
between the virtualized OBUs or vehicles. It is responsible
for processing and analyzing offloaded VNFs according to
vehicle requests requirements and available resources in
the edge servers. It is worth mentioning that resources
include computing (CPU, GPU, FPGA), radio (Resource
Block, SNR, MCS, CQI), RAM, and storage. The network
edge cooperates with distant edges and the cloud. It can execute VNF migration and outsourcing in case of a local resource shortage.
•Autonomous Vehicles Layer: It is the layer of autonomous vehicles that request the offloading of autopilot service chains due to local resource scarcity.
B. Proposed AAD Communication Protocol
Fig. 1 shows the main modules of the proposed AAD
protocol:
1) Edge Autopilot Slicing: each autonomous vehicle can request the offloading of some autopilot functions. It requests the nearest edge node, represented by a gNB or RSU (i.e., a 5G base station), to enable local edge resource discovery and slice allocation.

Figure 2: The communication steps between the centralized edge and the distributed connected AD vehicles
2) Resources Discovery in Connected Edge Nodes: when the access point receives an autopilot function offloading request, it generates VNF components or slices. Then, it selects a set of connected edge nodes that can satisfy each VNF's requirements in terms of GPU resources. The selected set of connected edge nodes is called Virtual Edge Servers (VES). The resource discovery procedure is based on the computing and networking capabilities of the servers.
3) Autopilot VNFs Offloading/Allocation: once the VES is selected, the access point starts the slice offloading process by allocating to each slice free device resources (from the selected VES that can satisfy the slice requirements). It is worth mentioning that an optimization algorithm is used to select the optimal points of operation where VNFs can be offloaded according to the aforementioned system and network requirements. Still, cloud computing may represent a solution in case of an edge resource shortage. This case may occur when the access point cannot select a VES that meets the demands of the set of autonomous vehicles.
4) VNF Components Graph: this is the optimization result of the allocation procedure, indicating the placement of each VNF component. After launching the VNFs in the VES/cloud servers, optimal control commands are sent directly to the access point.
5) Results Forwarding: in the last step, the access point
forwards optimal control commands to the autonomous
vehicle while satisfying its requirements.
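The discovery and allocation steps above (steps 2 and 3) can be sketched as a toy routine. The server names, vGPU capacities, and the simple first-fit policy are illustrative assumptions made here; the paper's actual allocation uses the optimization algorithms of Sections IV and V.

```python
# Toy sketch of protocol steps 2-3: resource discovery and slice allocation.
# Server names, capacities, and the first-fit policy are illustrative only.

def discover_ves(edge_nodes, slices):
    """Step 2: keep edge nodes able to host at least one requested slice."""
    return {name: gpu for name, gpu in edge_nodes.items()
            if gpu >= min(slices.values())}

def allocate_slices(ves, slices):
    """Step 3: first-fit allocation of each VNF slice onto the VES.

    Returns the placement graph (step 4) or None when the edge
    cannot host the chain and the cloud must take over.
    """
    free = dict(ves)                     # remaining vGPU per server
    placement = {}
    for vnf, demand in slices.items():
        for server, gpu in free.items():
            if gpu >= demand:
                placement[vnf] = server
                free[server] -= demand
                break
        else:
            return None                  # edge resource miss -> cloud fallback
    return placement

# Example: one autopilot chain offloaded over two edge servers.
edge_nodes = {"rsu-1": 4, "gnb-1": 8}
chain = {"vPerception": 3, "vLocalisation": 2, "vPlanner": 2, "vControl": 1}
print(allocate_slices(discover_ves(edge_nodes, chain), chain))
```

A first-fit policy like this gives a feasible but not necessarily optimal placement, which is why the paper replaces it with OVEAP/DVEAP.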
For the sake of clarity, we show in Fig. 2 the main commu-
nication steps between the edge computing and the connected
autonomous vehicle:
•Connected autonomous vehicles send instantaneous states
such as position, speed, and next decision of the autopilot
to the edge/cloud.
•The Edge/Cloud Autonomous Driving (AD) service collects the raw data, creates the world model for each section
TABLE I: OVEAP Parameters and Decision Variables

Notation | Definition
EA | The set of Edge Autopilots, in terms of service chains.
VNF | The set of Edge Autopilot VNFs. Each Edge Autopilot VNF is composed of slices.
AV | The set of Autonomous Vehicles.
MEC | The set of Mobile Edge Computing servers available at the network edge.
L_ea | The number of VNFs in each Edge Autopilot ea ∈ EA. It represents the length of the Edge Autopilot.
G_mec | The maximum vGPU computing capacity available in the MEC server mec ∈ MEC.
g_{ea,vnf} | Required vGPU resources for the VNF slice vnf of the Edge Autopilot service chain ea.

Decision variable | Definition
a^{mec}_{ea,vnf} | A binary variable that allocates the Edge Autopilot VNF vnf ∈ VNF of the Edge Autopilot ea ∈ EA to the MEC server mec ∈ MEC.
b^{mec} | A binary variable that indicates whether the MEC server mec ∈ MEC is used to process edge autopilot VNFs.
of the road, and communicates with Cloud AD Autopilot.
•The Edge/Cloud AD Autopilot sends the global model and generates a high-level request for each autonomous vehicle node, such as a speed request and a lane positioning request.
•The Integrated Autopilot merges the Edge/Cloud autopilot inputs with the embedded/local inputs to decide, anticipate, and act locally.
As explained above, the AAD protocol needs intelligent optimization algorithms that allocate autopilot VNFs to optimal or near-optimal edge servers.
IV. PROPOSED OPTIMAL VIRTUAL EDGE AUTOPILOT PLACEMENT APPROACH
The optimization of the placement of edge autopilot Virtual Network Functions (VNFs) in edge computing architectures has attracted increasing attention. It is similar to the placement of Virtual Machines (VMs), where the VNFs are composed of containers or VMs that execute network functions.
We propose a mathematical programming approach for Optimal Virtual Edge-Autopilot Placement (OVEAP) based on an Integer Linear Programming (ILP) technique. It models the offloading of autopilot services in the virtualized edge computing architecture. It takes as input the edge system capacity in terms of computing resources and then optimally allocates autopilot VNFs on the available virtual edge servers. The autopilot VNFs are offloaded to the centralized edge in order to reduce the VNF processing time and increase driving safety. The OVEAP optimization algorithm is modeled in the next subsections.
A. OVEAP Parameters and Decision Variables
We list in Table I the main parameters and decision variables of the proposed OVEAP algorithm.
The binary variable a indicates the allocation of the edge autopilot VNF (ea, vnf) ∈ EA × VNF on the MEC server mec ∈ MEC. It represents a service instantiation graph (SIG) that defines the optimal points of operation where Edge Autopilot VNFs should be allocated.

a^{mec}_{ea,vnf} = \begin{cases} 1 & \text{if the VNF } vnf \text{ of the Edge Autopilot } ea \text{ is allocated on the MEC server } mec \\ 0 & \text{otherwise} \end{cases}  (1)

Further, a binary variable b is needed to track the MEC server utilization. It is formulated as follows:

b^{mec} = \begin{cases} 1 & \text{if the MEC server } mec \in MEC \text{ is used} \\ 0 & \text{otherwise} \end{cases}  (2)
B. Exact ILP formulation

The OVEAP objective function (3) maximizes the number of successfully placed edge autopilots (counted through the last VNF of each chain) while minimizing the number of used MEC servers. The general ILP formulation is as follows:

\text{Maximize } OBJ = \sum_{ea \in EA} \sum_{mec \in MEC} a^{mec}_{ea,L_{ea}} - \sum_{mec \in MEC} b^{mec}  (3)

Subject to:

\sum_{mec \in MEC} a^{mec}_{ea,vnf} \le 1, \quad \forall ea \in EA, vnf \in VNF  (4)

\sum_{ea \in EA} \sum_{vnf \in VNF} g_{ea,vnf} \cdot a^{mec}_{ea,vnf} \le G_{mec} \, b^{mec}, \quad \forall mec \in MEC  (5)

\sum_{mec \in MEC} a^{mec}_{ea,vnf} = \sum_{mec \in MEC} a^{mec}_{ea,vnf+1}, \quad \forall ea \in EA, vnf \in VNF \setminus \{L_{ea}\}  (6)

b^{mec} \le \sum_{ea,vnf} a^{mec}_{ea,vnf}, \quad \forall mec \in MEC  (7)

0 \le a^{mec}_{ea,vnf} \le 1, \quad \forall mec \in MEC, ea \in EA, vnf \in VNF  (8)

0 \le b^{mec} \le 1, \quad \forall mec \in MEC  (9)
The OVEAP constraints are as follows:
•Constraints (4) guarantee that each VNF slice vnf ∈ VNF of the edge autopilot ea ∈ EA is placed on at most one MEC server mec ∈ MEC.
•Constraints (5) ensure that the selected MEC server has a sufficient amount of vGPU resources to execute the VNF slices of the edge autopilot (b^{mec} = 1). If the server is not used (b^{mec} = 0), the model does not allocate any edge autopilot to it.
•Constraints (6) ensure that the VNF components of each edge autopilot are executed in series: for an edge autopilot service chain, a VNF must not be allocated to a MEC server unless the previous VNFs have been allocated. This guarantees that, in the end, if a VNF is allocated, then the previous VNFs are also allocated.
•Constraints (7) turn off unused MEC servers in our exact model.
•Constraints (8) and (9) ensure the non-negativity of the proposed decision variables.
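The semantics of (3)-(7) can be checked on a toy instance by brute-force enumeration. This is a minimal sketch for sanity-checking the model, not the CPLEX implementation used in the evaluation; the capacities and demands below are arbitrary assumptions.

```python
# Brute-force check of the OVEAP formulation (3)-(7) on a toy instance.
# Enumerates all placements; a sanity check of the model, not the CPLEX solver.
import itertools

G = {"mec1": 8, "mec2": 4}            # vGPU capacity per MEC server (assumed)
# g[ea][k]: vGPU demand of the k-th VNF of edge autopilot ea (assumed)
g = {"ea1": [2, 1], "ea2": [3, 2]}
servers = list(G) + [None]            # None = VNF left unallocated

def feasible(assign):
    # Constraint (5): aggregate demand must fit each server's capacity.
    load = {m: 0 for m in G}
    for (ea, k), m in assign.items():
        if m is not None:
            load[m] += g[ea][k]
    if any(load[m] > G[m] for m in G):
        return False
    # Constraint (6): a chain is allocated entirely or not at all.
    for ea, demands in g.items():
        alloc = [assign[(ea, k)] is not None for k in range(len(demands))]
        if len(set(alloc)) > 1:
            return False
    return True

keys = [(ea, k) for ea in g for k in range(len(g[ea]))]
best = None
for choice in itertools.product(servers, repeat=len(keys)):
    assign = dict(zip(keys, choice))
    if not feasible(assign):
        continue
    used = {m for m in assign.values() if m is not None}   # b_mec via (7)
    # Objective (3): allocated last VNFs minus opened servers.
    obj = sum(assign[(ea, len(g[ea]) - 1)] is not None for ea in g) - len(used)
    if best is None or obj > best[0]:
        best = (obj, assign)

print(best[0])   # optimal objective value for this toy instance
```

On this instance both chains fit on the larger server, so the optimum opens a single MEC server, which is exactly the consolidation behavior the objective (3) rewards. Constraint (4) is enforced implicitly, since each VNF is mapped to exactly one entry of `servers`.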
C. OVEAP complexity and triggers

OVEAP complexity: once the required computing thresholds are available, the algorithm is executed periodically. Further, OVEAP solves a problem in the non-deterministic polynomial-time class: the exact approach is feasible only for a small number of instances and follows an exponential time complexity.
V. DEEP REINFORCEMENT LEARNING BASED VIRTUAL EDGE AUTOPILOT PLACEMENT
A. General DVEAP Context
We suppose that we have different types of edge autopilots, each with a particular size in terms of the number of VNFs. Further, we consider a central edge server connected to different autonomous vehicles via RSU and gNB gateways. The edge server is considered a centralized data center, or a computing reservoir, whose computing resources are vGPU slots per MEC server. Each MEC server has a maximum GPU computing capacity.
We propose DVEAP, a DRL-based Virtual Edge-Autopilot Placement approach: an AI-defined optimization technique that aims to replace the tedious placement process of OVEAP with recent AI techniques. We introduce Deep Learning modules at the network edge that collect, process, and analyze autonomous vehicle data. Then, the edge sends optimized control commands back to the vehicle. For this purpose, a scalable and cost-efficient algorithm based on AI/DRL is proposed to ease the edge autopilot placement process.
The DVEAP optimization technique aims to:
•Determine the near optimal MEC server where each edge
autopilot VNF should be offloaded.
•Minimize the active MEC servers that should satisfy the
vehicle requests.
•Maximize the allocation of incoming edge autopilots.
•Guarantee the chaining and the precedence constraints
between edge autopilot VNFs.
We propose to use the Deep-Q-Network (DQN) technique as the most promising value-iteration strategy for discrete action spaces.
B. DVEAP formulation
The DVEAP algorithm leverages Deep Learning techniques
to approximate the action-selection strategy of the RL model.
1) The RL model: We use RL theory to decide on the optimal placement of each edge autopilot VNF. We formulate the optimization problem (3) as an RL model.
Environment design: We design our own environment and consider a centralized edge computing cluster with a single resource, the GPU. It is connected to moving vehicles that offload autopilot VNFs. Jobs/tasks arrive at the cluster in
Algorithm 1 DRL-based Virtual Edge Autopilot Placement at the Network Edge
1: Input: EA, VNF, L_ea
2: Output: OBJ
3: Initialize a replay memory D and the action-value matrix Q with random weights
4: Observe the initial state s
5: repeat
6:   Select a MEC server a:
       •With probability ε, select a random MEC server
       •Otherwise, select the MEC server with max_{a'} Q(s, a')
7:   Place the Edge Autopilot's VNF on the MEC server a
8:   Observe the allocation reward r and the new edge state s'
9:   Store the experience (s, a, r, s') in D
10:  Sample a random transition from D
11:  Calculate the target for each mini-batch transition: r + γ × max_{a'} Q(s', a')
12:  Train the Q-Network using the loss function Loss = 1/2 (r + γ × max_{a'} Q(s', a') − Q(s, a))²
13:  s = s'
14: until no incoming Edge Autopilot VNFs remain from all the EA
an online fashion in discrete time-steps. At each time step, the edge server places each VNF of an edge autopilot on a MEC server. We assume that the VNF demands are known upon arrival.
State space: We represent the state of the system as the current placement of autopilot VNFs on MEC server slots.
Action space: An action is the placement of a compute-intensive autopilot VNF on an available MEC server. The placement takes into consideration the computing capacity of the edge server in terms of vGPU slots. In fact, the agent will not place an autopilot VNF on a server slot occupied by another running autopilot VNF. The action is mono-type: the agent processes VNFs one by one until all the incoming vehicle autopilots are processed.
Reward design: The proposed reward is the placement cost of the edge autopilot. It measures the number of active MEC servers used after performing an action. It is formulated as follows: r_t = |EA_succ| / |MEC_open|, where EA_succ and MEC_open represent the successfully allocated edge autopilots and the opened MEC servers, respectively.
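The reward above can be computed directly; a minimal sketch follows, in which the guard for the zero-open-server case is our own assumption (the paper does not specify it).

```python
# Reward r_t = |EA_succ| / |MEC_open|, as defined above.
def reward(successful_autopilots, open_servers):
    # Assumption: before any server is opened, no placement reward accrues.
    return 0.0 if open_servers == 0 else successful_autopilots / open_servers

print(reward(4, 2))  # 4 autopilots placed on 2 open servers -> 2.0
```

The ratio grows when more autopilots are packed onto fewer servers, matching the agent objective described next.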
Agent function: The placement decision results from an intelligent agent module that interacts with the designed environment. It implements a DQN learning algorithm to decide on the placement of incoming edge autopilot VNFs. The method estimates the future placement reward, which represents the cost of the optimization decision. Then, as indicated by r_t, the agent's objective is to maximize the number of successfully placed edge autopilots while minimizing the number of opened MEC servers.
2) The DVEAP algorithm: In the DVEAP algorithm, we use Deep Neural Networks (DNNs) to approximate the above RL model. A succession of neural network layers maps the input state to the output action. In Algo. 1, we describe the pseudo-code of our proposed DVEAP algorithm.
We use the Stochastic Gradient Descent (SGD) algorithm [13] to train the Deep-Q-Network (DQN) agent. Then, we tune the main hyper-parameters to decide on the optimal DNN configuration, such as the number of epochs/iterations, the optimizer parameters, and the action selection strategy. As shown in Algorithm 1, Line 12, the SGD algorithm uses the Bellman equation Q(s, a) = r + γ × max_{a'} Q(s', a') in order
TABLE II: Small Scale Configuration.
Vehicles Autopilot Chain
Vehicle 1 vPerception
Vehicle 2 vPerception, vLocalisation
Vehicle 3 vPerception, vLocalisation, vPlanner
Vehicle 4 vPerception, vLocalisation, vControl
Vehicle 5 vPerception, vLocalisation, vPlanner, vControl
Edge Capacity (servers/slots)
Edge 1 10/5
Edge 2 10/10
TABLE III: Large Scale Configuration.
Parameter Value
Number of vehicles 150
Number of VNFs 300
Capacity of edge 1 10/10 (servers/slots)
Capacity of edge 2 15/10 (servers/slots)
to minimize the loss function (squared error) between the target and current Q-values. Then, the DNN weights are updated through back-propagation. The DRL-based approach reduces the ILP time complexity and the RL state-space complexity.
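The Bellman target and loss of Algorithm 1 can be illustrated with a tabular stand-in for the Q-network (the real DVEAP agent approximates Q with a Keras DNN and a replay memory). The two-server environment, rewards, and hyper-parameters below are toy assumptions chosen only to make the update rule visible.

```python
# Tabular stand-in for the DQN update in Algorithm 1: the Bellman target
# r + gamma * max_a' Q(s', a') and a gradient-style step on the squared loss.
# The tiny placement environment and its rewards are illustrative assumptions.
import random

random.seed(0)
GAMMA, ALPHA, EPSILON = 0.9, 0.5, 0.1
Q = {}                                   # Q[(state, action)] -> value

def q(s, a):
    return Q.get((s, a), 0.0)

def select_action(s, actions):
    """Epsilon-greedy MEC server selection (Algorithm 1, step 6)."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q(s, a))

def td_update(s, a, r, s_next, actions):
    """One SGD-like step on Loss = 1/2 (r + gamma max Q(s',a') - Q(s,a))^2."""
    target = r + GAMMA * max(q(s_next, a2) for a2 in actions)
    Q[(s, a)] = q(s, a) + ALPHA * (target - q(s, a))

# Toy episodes: place 3 VNFs in sequence; server "mec1" is assumed cheaper.
actions = ["mec1", "mec2"]
for episode in range(200):
    state = 0
    for step in range(3):
        a = select_action(state, actions)
        r = 1.0 if a == "mec1" else 0.2   # illustrative placement reward
        td_update(state, a, r, state + 1, actions)
        state += 1

print(round(q(2, "mec1"), 3))  # converges toward the Bellman fixed point
```

After training, the greedy policy prefers the cheaper server at every step; swapping `td_update` for a Keras forward/backward pass over a sampled mini-batch recovers the DQN form of Algorithm 1.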
VI. OVEAP VS DVEAP: PERFORMANCE EVALUATION
We evaluate the proposed algorithms (OVEAP and DVEAP) using different optimization tools. CPLEX (https://pypi.org/project/cplex/) is used to evaluate the exact ILP model, while Keras (https://keras.io/) is used to configure and implement the AI DRL algorithm.
As explained above, we consider the following edge autopilot VNF chain: vPerception, vLocalisation, vPlanner, and vControl. Recall that the optimization objective is to place each autopilot service on the edge servers while assuring the chaining of all the VNFs of the service. Tables II and III show the configurations used in our evaluation for small and large scale networks, respectively.
A. Key Performance Indicators (KPIs)
To assess the efficiency of the proposed approaches (ILP-OVEAP and DRL-DVEAP), we propose the following KPIs:
•Total Edge Servers Utilization (TESU): it represents
the number of servers allocated for the autopilot service
functions chain.
•Total Edge Servers Allocation Time (TESAT): it rep-
resents the required time for autopilot service functions
chain allocation.
•Successfully Allocated Edge Autopilots: it represents the
number of successfully allocated edge autopilots.
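The three KPIs can be computed from an allocation log; the log format below (one record per autopilot request) is a hypothetical structure assumed for illustration, not the paper's measurement tooling.

```python
# Computing TESU, TESAT, and the successful-allocation count from a
# hypothetical allocation log (one dict per autopilot request; assumed format).
def kpis(log):
    placed = [e for e in log if e["servers"]]        # successful allocations
    tesu = len({s for e in placed for s in e["servers"]})   # distinct servers
    tesat = sum(e["time"] for e in placed)                  # total time
    return {"TESU": tesu, "TESAT": tesat, "allocated": len(placed)}

log = [
    {"servers": ["mec1"],         "time": 0.004},
    {"servers": ["mec1", "mec2"], "time": 0.006},
    {"servers": [],               "time": 0.001},    # failed allocation
]
print(kpis(log))
```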
To better select the appropriate allocation strategy (OVEAP or DVEAP), different network scales (i.e., small and large) are considered as follows:
B. Small scale network
In Figures 3 and 4, we show the total resource utilization at the network edge for two edge configurations, Edge1 and Edge2, where Edge2 has more computing capacity in
Figure 3: Total Resources Utilization of Edge 1. (a) Edge Server Allocation Cost; (b) Edge Server Allocation Time.
Figure 4: Total Resources Utilization of Edge 2. (a) Edge Server Allocation Cost; (b) Edge Server Allocation Time.
Figure 5: Total Resources Utilization in Large Scale Network.
terms of GPU slots. In Figures 3a and 4a, we plot the TESU metric against the autonomous vehicles. The results show the efficiency of the proposed DRL-DVEAP algorithm, which provides an efficient placement cost and converges to the exact ILP-OVEAP in terms of placement cost. In addition, Figures 3b and 4b show that the DVEAP algorithm yields a negligible TESAT placement time compared to OVEAP, which still has a feasible placement time (a few microseconds).
C. Large scale network
In Fig. 5, we quantify the behavior of the DRL-DVEAP algorithm in large scale networks according to the edge configurations. We plot TESU against the number of autonomous vehicles, ranging from 20 to 100. The results show that increasing the computing capacity helps to better offload edge autopilot functions.
In Fig. 6, we show the limit of the DRL approach in a very dense network constituted by a large number of autonomous vehicles requiring service offloading. The results show that 87.3% of the edge autopilots are successfully allocated on MEC servers.
VII. CONCLUSION
In this paper we have proposed an Artificial Intelligence
approach for edge autopilot offloading at the network edge.
Figure 6: Total Successful/Failed Edge Autopilot Allocation in Large Scale Network (87.3% successful, 12.7% failed).
First, we have proposed an end-to-end architecture for edge-assisted autonomous driving. Then, we have proposed an optimal allocation approach (OVEAP) that decides on the optimal placement of edge autopilot VNFs. Further, to deal with dense IoAV networks, a deep reinforcement learning approach (DRL-DVEAP) was formulated. Across different configurations and edge environments, the proposed DVEAP achieves good results in terms of offloading cost and time. In future work, we will include more networking and computing architectures for edge-assisted autonomous driving.
REFERENCES
[1] R. Hussain and S. Zeadally, “Autonomous cars: Research results, issues,
and future challenges,” IEEE Communications Surveys & Tutorials,
vol. 21, no. 2, pp. 1275–1313, 2018.
[2] M. Ehsani, Y. Gao, S. Longo, and K. Ebrahimi, Modern electric, hybrid
electric, and fuel cell vehicles. CRC press, 2018.
[3] J. Wu, H. Liao, J.-W. Wang, and T. Chen, “The role of environmental
concern in the public acceptance of autonomous electric vehicles: A
survey from China,” Transportation Research Part F: Traffic Psychology
and Behaviour, vol. 60, pp. 37–46, 2019.
[4] Z. Ning, J. Huang, and X. Wang, “Vehicular fog computing: Enabling
real-time traffic management for smart cities,” IEEE Wireless Communi-
cations, vol. 26, no. 1, pp. 87–93, 2019.
[5] M. Laroui, A. Sellami, B. Nour, H. Moungla, H. Afifi, and S. B.
Hacene, “Driving path stability in VANETs,” in 2018 IEEE Global
Communications Conference (GLOBECOM), 2018, pp. 1–6.
[6] H. Khedher, S. Hoteit, P. Brown, R. Krishnaswamy, W. Diego, and
V. Veque, “Processing time evaluation and prediction in cloud-ran,” in
ICC 2019 - 2019 IEEE International Conference on Communications
(ICC), 2019, pp. 1–6.
[7] S. Liu, L. Liu, J. Tang, B. Yu, Y. Wang, and W. Shi, “Edge computing
for autonomous driving: Opportunities and challenges,” Proceedings of
the IEEE, vol. 107, no. 8, pp. 1697–1716, 2019.
[8] H. Chishiro, K. Suito, T. Ito, S. Maeda, T. Azumi, K. Funaoka, and
S. Kato, “Towards heterogeneous computing platforms for autonomous
driving,” in 2019 IEEE International Conference on Embedded Software
and Systems (ICESS), 2019, pp. 1–8.
[9] autoware. (2019) autoware. [Online]. Available:
https://www.autoware.org/
[10] J. Santa, P. Fernandez, J. Ortiz, R. Sanchez-Iborra, and A. Skarmeta,
“Surrogates: Virtual obus to foster 5g vehicular services,” Electronics,
vol. 9, pp. 1–16, 01 2019.
[11] N. S. Yeshodara, N. S. Nagojappa, and N. Kishore, “Cloud based
self driving cars,” in 2014 IEEE International Conference on Cloud
Computing in Emerging Markets (CCEM), 2014, pp. 1–7.
[12] S. Kumar, S. Gollakota, and D. Katabi, “A cloud-assisted design for
autonomous driving,” in Proceedings of the first edition of the MCC
workshop on Mobile cloud computing, 2012, pp. 41–46.
[13] B.-c. Zhou, C.-y. Han et al., “Convergence of stochastic gradient descent
in deep neural network,” Acta Mathematicae Applicatae Sinica, English
Series, no. 1, pp. 126–136, 2021.