Implementation and evaluation of intelligent roadside
infrastructure for automated vehicle with I2V
communication
Abhishek JANDIAL [1], Pierre MERDRIGNAC [1]
Oyunchimeg SHAGDAR [1]
1 Institut VEDECOM, 23 bis Allée des Marronniers, 78000 Versailles, France
firstname.surname@vedecom.fr
Abstract. As smart cities emerge, autonomous vehicles (AVs) can help to reduce congestion and gas emissions. Smart infrastructures are currently undergoing significant improvements in connectivity and communication, and they are foreseen to assist AVs in complex situations such as roundabouts, blind zones, and uphill sections. In particular, smart sensors combined with Infrastructure-to-Vehicle (I2V) communication offer an opportunity to extend the perception of an AV and help it make better decisions. Roadside Infrastructure (RSI) can be equipped with various services, such as SPAT, CAM, and DENM, to exchange specific information over I2V. Complementarily, a Collective Perception Message (CPM) has recently been specified to transfer information from smart sensors. Since the CPM carries detailed information on detected objects, we propose installing such a service on RSI to collect data on blind zones and complex areas. In this paper, we first introduce an intelligent RSI architecture that is compatible with multiple sensors and sensor providers and that processes sensor data for CPM creation and transmission. We then present our methodology for deploying this RSI during on-site experimentations. Finally, we report key evaluations of CPM transmission.
Keywords: Infrastructure-to-Vehicle communication, collective perception, automated vehicle, roadside infrastructure, roadside unit
1 Introduction
Autonomous vehicles (AVs) are rapidly emerging as a major technology to improve
road transportation systems in terms of safety and efficiency. Presently, AVs
rely on the existing road infrastructure to navigate autonomously and must share their
driving environment with other users such as non-AVs, pedestrians, or cyclists. Since AVs must perform maneuvers such as intersection crossing, entering a roundabout, or overtaking in complex areas, cooperation between road users has largely been adopted in so-called Cooperative Intelligent Transportation Systems (C-ITS) [7], [16]. Complementary to the development of AVs, roadside infrastructure (RSI) is also under constant evolution and can host different services to support the new generation of vehicles in particular situations such as intersection crossing assistance [11] or road hazard warning [16].
To ensure interoperability and facilitate the deployment of such services, standardization organizations such as the European Telecommunications Standards Institute (ETSI) and IEEE have made enormous efforts in producing specifications of V2X communication protocols, security and privacy measures, and a set of target C-ITS services. In Europe, Roadside Units (RSU) and vehicles implementing such services should be equipped with a module, called an ITS Station, compliant with the standardized C-ITS architecture [5].
Among the different services offered by the C-ITS architecture, the Cooperative Awareness (CA) [2] and Decentralized Environmental Notification (DEN) [3] basic services were the first ones to be standardized by ETSI in order to provide a basic set of applications for driving assistance [4]. Although CA and DEN services have brought valuable support to connected vehicles, they are insufficient for AVs, which rely on a new generation of C-ITS applications.
AVs rely on their embedded sensors to build a local perception of their environment and plan automated maneuvers. However, as these sensors can have a limited field of view or be occluded by a road obstacle, an AV may struggle to perform some actions, which can lead to counter-productive effects in terms of road safety and traffic. To overcome this issue, sharing embedded sensor information among AVs has been proposed in order to extend the perception range [1], [17].
In this paper, we present an RSI which uses collective perception (CP) to assist AVs at dedicated sites. Fig. 1 illustrates the proposed system at a roundabout where the local perception of an AV is insufficient due to a limited field of view and occluding obstacles. Here, a set of local intelligent sensors, e.g., cameras and lidars, can be installed to provide full perception coverage of the site. An RSU then transmits this perception information via V2X communication using Collective Perception Messages (CPM) as specified by ETSI [6]. The proposed system assists the vehicle in adapting its behavior when approaching an accident-prone zone (e.g., when arriving at a pedestrian crossing, a bus stop, or a crossroad). Indeed, receiving real-time information about the current situation in this zone helps the vehicle anticipate a better trajectory (speed and heading), either with soft braking or a lane change, which satisfies safety requirements for public transportation and corresponds to good driving practices.
Fig. 1. Illustration of Collective Perception by Roadside Infrastructure
The rest of the paper is organized as follows: Section 2 gives details on related works. Section 3 presents our system for roadside-infrastructure-based extended perception for AVs, and Section 4 explains the key steps in the implementation of an intelligent RSI. In Section 5, we provide system evaluations. Finally, Section 6 concludes our work.
2 Related works
Roadside units initially assisted vehicles using infrastructure-to-vehicle (I2V) communication [10] for specific use cases such as event notification by an operator center [16] or Signal Phase and Time (SPAT) of traffic lights for Green Light Optimal Speed Adaptation [11]. Such information can then be displayed on the Human Machine Interface (HMI) of the vehicle for driver assistance [16], or services can be advertised on the service channel of the V2X communication frequency band [13]. Although all these messages offer very useful assistance to drivers of non-AVs, they do not provide a sufficiently precise description of the environment to assist AVs.
Sharing sensor measurements, for example using Cooperative Awareness Messages (CAM) [12], has then been proposed to extend the current capabilities of RSI. Although such an approach relies on existing standards, it can face scalability and security issues.
As automated vehicles are equipped with embedded sensors, various works have proposed sharing the data collected by these sensors within the vehicular network [1], [15], [17]. With such systems, information from local sensors is broadcast to neighboring vehicles, which can then receive and process these data elements. As different kinds of sensors can be used by different manufacturers, a generalization of the data format was necessary to allow interoperability between multiple car makers. Such a format has been introduced in [9] and is under specification at ETSI [6] as the Collective Perception Service.
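To make the discussion concrete, such a generalized object-data format can be sketched as a simplified data structure. The field names below are illustrative stand-ins for the ASN.1-defined CPM elements, not the normative specification:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PerceivedObject:
    """One detected object, in simplified form (field names are illustrative)."""
    object_id: int        # tracker-assigned identifier
    x_distance_m: float   # position relative to the station's reference point
    y_distance_m: float
    x_speed_mps: float    # speed components of the detected object
    y_speed_mps: float

@dataclass
class CollectivePerceptionMessage:
    """Simplified CPM-like container transmitted by the RSU."""
    station_id: int                    # ID of the transmitting ITS station
    generation_time_ms: int            # message generation timestamp
    reference_latitude: float          # absolute position of the station
    reference_longitude: float
    perceived_objects: List[PerceivedObject] = field(default_factory=list)

# An RSU reporting one tracked object 12.5 m ahead of its reference point.
cpm = CollectivePerceptionMessage(
    station_id=42, generation_time_ms=0,
    reference_latitude=48.8049, reference_longitude=2.1204)
cpm.perceived_objects.append(
    PerceivedObject(object_id=1, x_distance_m=12.5, y_distance_m=-3.0,
                    x_speed_mps=1.2, y_speed_mps=0.0))
```

The point of such a neutral container is that any vendor's tracking output can be mapped onto it before serialization.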
With the CP service, common knowledge is shared among road users, and it has been shown that such information can improve the overall situation awareness of AVs [8]. First simulation studies have been carried out with CPMs transmitted by an intelligent RSI and have shown its potential to extend the time horizon of AVs [14].
As the CPM is attracting growing attention, we introduce an intelligent RSI that has been installed at multiple test sites. The main contribution of our work consists in large experimental trials of CPM transmission, together with a study of RSI installation at trial sites to obtain optimal performance with these messages.
3 Intelligent RSI for extending perception of AVs
As autonomous shuttles drive on open roads to serve passengers on regular trips, it is necessary to ensure that the roads are fully compliant with their operational requirements, as illustrated in Fig. 2. As part of a French national project, multiple AVs are conducting on-field experiments covering 10 km of open roads. In this context, we developed an intelligent RSI to cover the autonomous shuttle testing environment. In particular, our intelligent RSI has been installed at 26 sites where the quality of service of autonomous shuttles may be reduced if they drive in complete autonomy.
Fig. 2. Road infrastructure for autonomous shuttle supervision
3.1 Requirements
The proposed system must satisfy the following requirements:
• Time synchronization: As the data exchanged between the AV and the RSI are highly dynamic, e.g., positions and speeds of vehicles, precise time synchronization between these pieces of equipment is essential.
• End-to-end latency: The time taken by the RSI to share roadside information with the AV must be short enough for the information to still reflect reality. Information that arrives too late at the AV is of no use.
• Interoperability between various sensor and V2X vendors: Nowadays, many commercial sensor providers offer very efficient detection algorithms, such as smart cameras by FLIR [18] and lidars by Quanergy [19]. The system should support sensors from any such vendor. Similarly, it should be able to interface with any V2X device or LTE broadcasting router.
• Position precision: As this system shares roadside perception with the AV so that the AV can make decisions and perform automated actions, a precision of a few centimeters is required for transmitted position values.
• Site coverage: The RSI is installed in environments where the perception of the AV can be degraded due to limited visibility. Therefore, the roadside sensors mounted on the RSI should provide a broad perception view in order to reduce the effects of blind zones and assist AVs in anticipating any danger.
• Adaptability to many topology configurations: As every target site for RSI installation may have particularities, the system must be adaptable. By studying installation sites, appropriate decisions can be made regarding the choice of sensors and equipment implantation. For example, sensors cannot be installed everywhere, and existing poles or lamp posts can be chosen to support them.
3.2 Architecture of the system
As shown in Fig. 3, the proposed RSI is composed of three main elements:
• Smart sensors are responsible for roadside scene observation and object detection.
As such smart sensors embed signal processing algorithms, they produce tracking
lists containing information regarding the objects located in the target zone of RSI.
• A central perception unit is responsible for converting the tracking lists obtained from the smart sensors into the CPM format. This module ensures interoperability between multiple sensor vendors and multiple communication technologies. In addition, it fills CPM data fields based on the data elements provided by the sensors currently in use at a given RSI.
• A roadside unit (RSU) sends the CPM on the V2X channel. This RSU must be compliant with the reference C-ITS architecture [5].
Fig. 3. Architecture of the roadside infrastructure
3.3 Central Perception Unit
Currently, many commercial sensors and software products are available on the market to perform automatic object detection and tracking. The purpose of designing this component is to allow the RSI to be configured with sensors and RSUs from multiple vendors. This improves the adaptability of the system when it is installed at sites with different requirements.
The Central Perception Unit flow chart is shown in Fig. 3 and is composed of three main steps:
1) A reader connects to the sensor via standard protocols like TCP, UDP, or WebSocket. Sensor data are then read in various formats, e.g., XML or JSON. In our experiments, TCP and WebSocket readers have been designed.
2) Depending on the received sensor data format, the packets are decoded using either a JSON decoder or an XML decoder, and then converted to fill the CPM format.
3) Once the CPM is prepared, a publisher provides this data structure to the RSU, in our case a V2X device, for broadcasting to the AV. In our experiments, the publisher sends the CPM over the TCP protocol. Additionally, the publisher can be configured to connect to multiple kinds of devices or logging units.
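The three steps above can be sketched as follows. The sensor payload format and field names are hypothetical, and a real deployment would read from a live TCP or WebSocket connection rather than an in-memory byte string:

```python
import json

def decode_tracking_list(payload: bytes) -> list:
    # Step 2: decode a JSON tracking list into a list of object dicts
    # (a hypothetical sensor format; an XML sensor would use an XML decoder).
    return json.loads(payload.decode("utf-8"))["objects"]

def to_cpm(objects: list, rsu_id: int) -> dict:
    # Step 2 (cont.): map decoded objects onto a simplified CPM-like structure.
    return {
        "stationID": rsu_id,
        "perceivedObjects": [
            {"objectID": o["id"], "xDistance": o["x"], "yDistance": o["y"],
             "speed": o["speed"]}
            for o in objects
        ],
    }

def publish(cpm: dict) -> bytes:
    # Step 3: serialize the CPM for delivery to the RSU over TCP.
    return json.dumps(cpm).encode("utf-8")

# Example payload as a sensor might emit it (illustrative field names).
raw = b'{"objects": [{"id": 7, "x": 10.2, "y": -1.5, "speed": 1.3}]}'
frame = publish(to_cpm(decode_tracking_list(raw), rsu_id=42))
```

Keeping the reader, decoder, and publisher as separate stages is what lets the unit swap sensor vendors or V2X devices without touching the rest of the pipeline.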
4 Key concepts in implementation of intelligent RSI
4.1 Site Survey
A perception survey was performed before the installation of the sensors and the RSU. The purpose of this survey was to understand the topology of the sites and their points of interest. Moreover, careful observation was conducted to find blind spots or dead zones for the AV. To ensure that such blind zones can be covered, an appropriate sensor needs to be chosen.
Fig. 4. Roadside topology
SS_ID      | Site Categories              | Site Survey Indicators
SS_RPC_1   | Roadside conditions          | Presence of trees
SS_RPC_2   |                              | Presence of glass buildings
SS_RPC_3   |                              | One-way narrow streets
SS_RPC_4   |                              | Pedestrian zone
SS_RPC_5   |                              | Large roundabout
SS_DOT_1   | Detection object type        | Pedestrian presence
SS_DOT_2   |                              | Bus stop with buses and pedestrians
SS_DOT_3   |                              | One-way vehicle presence
SS_DOT_4   |                              | Roundabout with multiple object types
SS_ROI_1   | Region of interest (ROI)     | Blind/hidden spot detections
SS_ROI_2   | for detections               | Dangerous zone detections
SS_ROI_3   |                              | Key detection areas on the site
Table 1. Categorization of indicators evaluated during site survey
Table 1 lists the indicators that were evaluated during the site survey. These indicators have been organized into three main categories:
1. Physical conditions of the roadside: Roadsides might have trees or heavy structures and buildings which could limit the visibility of the AV. The topology of a site varies: roundabouts, small intersections, blind curves, single or multiple lane roads, etc. Based on the topology, large or limited, a sensor is chosen. Fig. 4 shows two different types of road topologies: Fig. 4 (left) shows a large roundabout where a sensor with a large field of view (FOV), e.g., a lidar, would be required.
2. Detection object type: On certain sites, the main interest for detection could be just pedestrians, just vehicles and buses, or both. Sites with a high number of one specific object type could use the sensor with the best detection performance on that object. For example, in Fig. 4 (right), a site containing an intersection with a pedestrian crossing would mostly involve pedestrian detections. On such sites, a short-range pedestrian detection camera could be the best fit.
3. Region of interest (ROI) for detection: On a site, the regions of interest could be the blind spots or danger zones where the perception of the AV needs to be enhanced. Many blind zones are possible along the trajectory of an AV: an uphill section or a sharp bend in the road, infrastructure such as a roundabout, or even dynamic situations such as a large vehicle blocking visibility, e.g., at a bus station.
In order to choose a sensor for installation, it is important to understand its efficiency and functionality. In our work, we performed a series of test cases on the sensors, as shown in Table 2, to evaluate their strengths and limitations.
Our tests can be divided into two kinds. First, sensor unit test cases, classified into five categories, determine the conditions in which a sensor can be used at an RSI. The results of such tests are binary: yes or no. Second, sensor performance evaluation tests are conducted on multiple parameters. These tests are run on multiple datasets in order to estimate average performance for different parameters, e.g., the accuracy of the position and speed of detected objects.
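The two kinds of tests can be summarized programmatically. The test identifiers below match Table 2, but the pass/fail flags and error values are illustrative, not measured results:

```python
from statistics import mean

# Hypothetical results: unit tests are binary (pass/fail), performance
# tests are averaged over multiple recorded datasets.
unit_tests = {"SV_LCT_1": True, "SV_FOC_1": True, "SV_OIC_2": False}
position_error_m = [0.7, 0.2, 0.9, 0.4]   # per-dataset position error

def summarize(unit_tests: dict, position_error_m: list) -> dict:
    """Aggregate binary unit tests and averaged performance metrics."""
    return {
        "unit_pass_rate": sum(unit_tests.values()) / len(unit_tests),
        "mean_position_error_m": mean(position_error_m),
    }

report = summarize(unit_tests, position_error_m)
```

A summary of this shape gives one comparable record per sensor, which feeds the per-site qualification step described next.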
SV_ID      | Test Category              | Test cases
SV_LCT_1   | Light conditions tests     | Test object detection in night conditions
SV_LCT_2   |                            | Test object detection in weather conditions like rain and snow
SV_FOC_1   | FOV tests                  | Test the horizontal field of view of the sensor
SV_FOC_2   |                            | Test lidar at 90/180/270° and cameras at 45/60/90° for best detection
SV_OIC_1   | Object ID confirmation     | Test detected-object blockage by an external source like a wall, tree, etc.
SV_OIC_2   | and range blockage tests   | Test two or more objects moving closely and then separating
SV_OIC_3   |                            | Test two or more detected objects from different directions converging to one direction and speed
SV_OIC_4   |                            | Test an object entering the detection frame, going out of range, then reappearing
SV_MOT_1   | Masked object tests        | Test a pedestrian with a cap or umbrella
SV_MOT_2   |                            | Test an incomplete or semi-hidden object in the frame of the sensor, e.g., half a vehicle appearing in the FOV
SV_MOT_3   |                            | Test a pedestrian with skates, an electric scooter, or a stroller
SV_MOT1_1  | Multiple object tests      | Test multiple objects entering from different locations and exiting
SV_MOT1_2  |                            | Test two or more pedestrians appearing in the frame at the same time from opposite or different directions
SV_MOT1_3  |                            | Test 20-100 objects appearing at the same time in the frame
SV_PAT_1   | Position accuracy tests    | Test object detection at 0-20 m from the sensor
SV_PAT_2   |                            | Test object detection at 30 m from the sensor
SV_PAT_3   |                            | Test object detection at 40 m from the sensor
SV_ST_1    | Speed tests                | Test an object at various vehicle speeds: 10, 20, 40, 60 km/h
SV_DT_2    |                            | Test a pedestrian at walk, jog, and run
SV_ST_1    | Direction tests            | Test the direction of the object in the following directions:
SV_ST_2    |                            | from the X axis to the -X axis and back
SV_ST_3    |                            | from the Y axis to the -Y axis and back
Table 2. Sensor validation tests
4.2 Sensors qualification per site
After completing the site survey and sensor testing presented in the above sections, a main part of our work is to qualify the sensors that can be installed at every target site. Fig. 5 presents the flow chart of our methodology for qualifying a given sensor for a target installation site.
Fig. 5. Flow chart of the sensor qualification methodology
First, requirements related to the installation site are extracted from the perception survey. These requirements express objectives for the RSI sensors in terms of site coverage of the identified ROIs and of the target objects for detection. In addition, some constraints for sensor implantation are also defined from the perception survey.
Second, every sensor is qualified with respect to the site requirements. In particular, the following criteria are considered in the qualification process:
• Type of sensor: As different sensors have different capabilities, we need to validate the capacity of the sensor to comply with the site coverage and target-object detection requirements.
• Sensor installation constraints: Not all sensors can be mounted at the same height. For example, a lidar cannot be installed at a height of more than 4 meters, while cameras require installation at 6-9 meters depending on their horizontal FOV. An implantation position satisfying these conditions must be ensured.
• Position and angle of the sensor: Depending on its position and view angle, a sensor can have different performance. As these parameters are fixed at installation, it is important to choose a configuration in which the sensor has optimal detection for the required region of interest within the site.
• Then, an expected performance level is evaluated by retrieving the sensor's performance for the configuration of the target site from a recorded dataset. From this evaluation stage, a sensor acceptance score is obtained in order to validate or invalidate the installation of that sensor.
• Finally, once all sensors have been qualified for a given test site, the RSI configuration with the highest acceptance score is selected and applied for installation.
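A minimal sketch of this qualification logic follows. The height limits mirror the constraints listed above, while the FOV values and accuracy scores are hypothetical:

```python
# Candidate sensors with installation constraints and a recorded-dataset
# accuracy score (values here are illustrative, not vendor figures).
candidates = [
    {"name": "lidar",  "min_h": 0.0, "max_h": 4.0, "fov_deg": 270, "accuracy": 0.90},
    {"name": "camera", "min_h": 6.0, "max_h": 9.0, "fov_deg": 60,  "accuracy": 0.85},
]

def qualify(sensor: dict, mount_height_m: float, required_fov_deg: float) -> float:
    """Hard constraints first, then score by expected performance."""
    if not (sensor["min_h"] <= mount_height_m <= sensor["max_h"]):
        return 0.0                    # installation constraint violated
    if sensor["fov_deg"] < required_fov_deg:
        return 0.0                    # cannot cover the region of interest
    return sensor["accuracy"]         # acceptance score for this site

# Large roundabout: wide FOV required, existing pole allows a 3.5 m mount.
scores = {s["name"]: qualify(s, mount_height_m=3.5, required_fov_deg=180)
          for s in candidates}
best = max(scores, key=scores.get)
```

Here the camera is disqualified by the mounting-height constraint before its accuracy is even considered, reflecting the two-stage structure of the methodology.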
4.3 RSI installations
The RSI installation was performed for experimentation with the AV on a commercial road with many non-connected vehicles. Fig. 6 shows the AV on the road next to the installed RSI. In this section, we discuss post-installation tasks and challenges.
Fig. 6. Experiment sites and AV next to RSI
Calibrations: Once a sensor is installed, some intrinsic and extrinsic parameters, such as sensor height, tilt angle, and detection criteria, should be calibrated for best results. We provide here some steps of this procedure. Calibration is sensor dependent: for smart cameras, it involves simply inputting the height of the sensor, whereas for lidars it can become complicated when dealing with the raw laser output. For our experiments, we chose a commercial lidar for which the calibration process was automatic, except when fusion of multiple lidars was involved.
• Position precision: Precise positions of the sensors and the RSU are strictly needed during the installation of this equipment at a site. The central perception unit translates object positions from relative to absolute coordinates using the positions of the RSU and the sensor. In this experiment, these precise positions were obtained using RTK-corrected GNSS. Fig. 6 (right) shows position recording at the experiment site using GPS RTK.
• Defining the height and tilt angle of the installed sensors: this enhances the precision of detection of the sensor.
• Defining the region of interest (ROI): to avoid unwanted detections, we define the ROI of each sensor.
• Stability of the infrastructure: after the installation of the equipment, it is important to ensure its stability. Since the sensors have been calibrated, any movement could invalidate the calibration and result in false detections. Instability can be caused by improper installation or by weather conditions. We observed that an unstable mast can move by about 2 cm, which, in the case of automatic calibration, can change the output data.
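The relative-to-absolute position translation mentioned above can be sketched with a flat-earth approximation. The coordinates and the sensor-frame convention (x forward, y left) are assumptions for illustration; a deployed system would use a proper geodetic library:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius; small-area approximation

def to_absolute(sensor_lat: float, sensor_lon: float,
                sensor_heading_deg: float, x_m: float, y_m: float):
    """Convert a sensor-relative detection (x forward, y left) into
    absolute WGS84 coordinates using the surveyed sensor position."""
    h = math.radians(sensor_heading_deg)   # sensor boresight w.r.t. north
    east = x_m * math.sin(h) - y_m * math.cos(h)
    north = x_m * math.cos(h) + y_m * math.sin(h)
    dlat = math.degrees(north / EARTH_RADIUS_M)
    dlon = math.degrees(east / (EARTH_RADIUS_M * math.cos(math.radians(sensor_lat))))
    return sensor_lat + dlat, sensor_lon + dlon

# An object 20 m straight ahead of a north-facing sensor moves ~20 m north.
lat, lon = to_absolute(48.8049, 2.1204, 0.0, 20.0, 0.0)
```

This also shows why centimeter-level surveyed positions matter: any error in the sensor's own position or heading is transferred directly onto every transmitted object position.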
5 Evaluations
In this section, we provide some key insights related to the performance of our RSI
which were obtained during the evaluation of the system.
5.1 V2X transmission evaluations
Fig. 7. CPM generation time (a) and transmission time (b)
Fig. 7a (left) shows the CPM generation time as a function of the number of objects it contains. The generation time ranges from a minimum of 2.6 milliseconds to a maximum of 9.3 milliseconds. Fig. 7b (right) shows the CPM air time, i.e., the transmission time, as a function of the number of objects in the message. It should be noted that all values in these graphs are averages over at least 100 iterations.
In addition, we made the following observations from our experiments regarding V2X communication:
• Weather conditions affect the radio transmission.
• The maximum size of a broadcast message is limited to 1398 bytes, i.e., approximately 50 CPM objects (IEEE Std 802.11-2007).
• The message transmission range of the RSU is affected by urban infrastructure.
• The RSU's performance degrades when it is overloaded with processing, such as handling security certificates.
• Security certification at the RSU reduces the available message size.
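Given the observed 1398-byte limit of roughly 50 object containers per message, the number of CPMs needed to report all detected objects on a site can be estimated as:

```python
import math

MAX_PAYLOAD_BYTES = 1398    # observed broadcast limit (IEEE Std 802.11-2007)
MAX_OBJECTS_PER_CPM = 50    # approx. 1398 bytes / ~28 bytes per object

def cpms_needed(n_detected_objects: int) -> int:
    """Messages required when one CPM carries at most ~50 object containers."""
    return math.ceil(n_detected_objects / MAX_OBJECTS_PER_CPM)
```

For example, a busy site with 120 tracked objects would require three messages per reporting cycle, which multiplies both channel load and per-cycle latency; the per-object byte cost is an approximation derived from the two limits above.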
5.2 Sensor evaluations
Table 3 shows an example of the position accuracy calculation for a sensor. The first column shows the actual distance of the object, and the remaining columns show the sensor output at various installation heights. According to this table, the evaluated sensor has a coverage of up to 25 meters and accurate detections up to 10-15 meters. Here, we observed that the optimal installation height is between 5 and 5.5 meters.
           | Height of the sensor
Distance   | 4 m      | 5 m      | 5.5 m    | 6.5 m
2 m        | 2.7095   | 3.171    | 3.6187   | 3.5803
5 m        | 4.8455   | 4.8661   | 4.9522   | 3.9772
10 m       | 9.0887   | 9.4445   | 9.5099   | 11.6362
20 m       | 15.8029  | 16.5614  | 17.4739  | 13.0971
25 m       | 18.3565  | 19.5003  | 20.4306  | 15.9728
Table 3. Accuracy of position (in meters) for a camera sensor
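The optimal-height observation can be checked directly from the Table 3 measurements by computing the mean absolute position error per installation height:

```python
# Actual distances and sensor outputs per installation height, from Table 3.
actual = [2, 5, 10, 20, 25]
output_by_height = {
    4.0: [2.7095, 4.8455, 9.0887, 15.8029, 18.3565],
    5.0: [3.1710, 4.8661, 9.4445, 16.5614, 19.5003],
    5.5: [3.6187, 4.9522, 9.5099, 17.4739, 20.4306],
    6.5: [3.5803, 3.9772, 11.6362, 13.0971, 15.9728],
}

# Mean absolute error of the reported distance for each height.
mae = {h: sum(abs(a - o) for a, o in zip(actual, out)) / len(actual)
       for h, out in output_by_height.items()}
best_height = min(mae, key=mae.get)
```

The 5.5 m column yields the lowest mean error (about 1.85 m) and 6.5 m the highest (about 4.03 m), consistent with the 5-5.5 m recommendation.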
5.3 Time synchronization between complete RSI and AV
In our experiments, time was synchronized using an NTP server for the sensors and GNSS antennas on the V2X side. On the AV side, time synchronization was performed using the LTE network and GNSS. For message exchanges like SPAT, the time lag was not a major problem, as they do not require microsecond time precision. For the CPM, however, which requires high time precision, we did not obtain good results. Despite using GNSS for time synchronization at both ends, the RSU and the OBU (on-board unit), we conclude that GNSS does not provide stable time and position accuracy. For our experiment, we decided to use a centralized clock which synchronizes both sides.
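As an illustration of why a shared clock matters, the age of a received message can be computed from its generation timestamp. Following the ETSI generationDeltaTime convention (assumed here), the stamp wraps modulo 65536 ms, which the receiver must handle:

```python
GDT_MODULUS = 65536   # generationDeltaTime wraps modulo 65536 ms

def message_age_ms(generation_delta_time: int, rx_time_ms: int) -> int:
    """End-to-end age of a received message, valid only if the sender and
    receiver clocks agree; handles the 65.536 s wrap-around."""
    return (rx_time_ms % GDT_MODULUS - generation_delta_time) % GDT_MODULUS

# A message stamped at 65530 ms and received at 65545 ms is 15 ms old,
# even though the stamp wrapped around in between.
age = message_age_ms(generation_delta_time=65530 % GDT_MODULUS,
                     rx_time_ms=65545)
```

Any offset between the RSU and OBU clocks is added directly to this computed age, which is why an unstable GNSS time base degrades CPM freshness estimates.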
6 Conclusion
In this paper, we presented an intelligent roadside infrastructure which has been tested and installed at different target sites to assist autonomous vehicles in situations of limited perception. Our study provides major insights for future installations of such RSI: 1) the system must be interoperable with different sensors and communication systems in order to adapt to multiple site topologies; 2) installation sites should be carefully examined in order to choose the most appropriate sensor configuration and offer the best service to AVs; 3) installation calibration should be performed very precisely, and alerts should be provided to operators if calibration is lost, to guarantee a high level of service from the RSI; and 4) current V2X communication systems still have limitations in fully supporting collective perception: they are limited in transmitting information for a large number of detected objects and cannot ensure synchronization between transmitters.
References
1. Bauer, M. A., Charbonneau, K. and Beauchemin, S. S. “V2Eye: Enhancement of visual per-
ception from V2V communication.” 2011 IEEE Consumer Communications and Networking
Conference (CCNC). IEEE, 2011.
2. ETSI. “ETSI EN 302 637-2; Intelligent Transport Systems (ITS); Vehicular Communications;
Basic Set of Applications; Part 2: Specification of Cooperative Awareness Basic Service.”
2014-11.
3. ETSI. “ETSI EN 302 637-3; Intelligent Transport Systems (ITS); Vehicular Communications;
Basic Set of Applications; Part 3 Specification of Decentralized Environmental Notification
Basic Service.” 2014-11.
4. ETSI. “ETSI TR 102 638; Intelligent Transport Systems (ITS); Vehicular
Communications;Basic Set of Applications; Definitions.” 2009-06.
5. ETSI. “ETSI EN 302 665; Intelligent Transport Systems (ITS); Communications
Architecture.” 2010-09.
6. ETSI. “ETSI TS 103 324; Intelligent Transport System (ITS); Collective Perception Service
[Release 2].” 2017.
7. Festag, A. “Cooperative intelligent transport systems standards in Europe.” IEEE communi-
cations magazine 52.12 (2014): 166-172.
8. Günther, H.-J., Trauer, O. and Wolf, L.. “The potential of collective perception in vehicular
ad-hoc networks.” 14th International Conference on ITS Telecommunications (ITST). IEEE,
2015.
9. Gunther, H.-J., Mennenga, B., Trauer, O., Riebl, R., and Wolf L. “Realizing Collective
Perception in a Vehicle.” IEEE Vehicular Networking Conference (VNC), IEEE, 2016.
10. Jerbi, M., Marlier, P. and Senouci, S. M.. “Experimental assessment of V2V and I2V com-
munications.” 2007 IEEE International Conference on Mobile Adhoc and Sensor Systems.
IEEE, 2007.
11. Katsaros, K., Kernchen, R., Dianati, M., Rieck D. "Performance study of a Green Light Opti-
mized Speed Advisory (GLOSA) application using an integrated cooperative ITS simulation
platform." 7th International Wireless Communications and Mobile Computing Conference.
IEEE, 2011.
12. Kitazato, T., Tsukada, M., Ochiai H., and Esaki K. “Proxy cooperative awareness message:
an infrastructure-assisted v2v messaging.” 2016 Ninth International Conference on Mobile
Computing and Ubiquitous Networking (ICMU). IEEE, 2016.
13. Labiod, H., Servel A., Segarra G., Hammi B., Monteuuis J.-P. “A new service advertisement
message for ETSI ITS environments: CAM-Infrastructure.” 2016 8th IFIP International Con-
ference on New Technologies, Mobility and Security (NTMS). IEEE, 2016.
14. Merdrignac, P., Shagdar, O., Tohmé, S., and Franchineau, J.-L.. “Augmented Perception by
V2X Communication for Safety of Autonomous and Non-Autonomous Vehicles.” Proceed-
ings of 7th Transport Research Arena TRA 2018, April 16-19, 2018.
15. Rauch, A., Klanner, F., and Dietmayer K. “Analysis of V2X Communication Parameters for
the Development of a Fusion Architecture for Cooperative Perception Systems.” IEEE
Intelligent Vehicles Symposium (IV), IEEE, 2011.
16. Santa, J., Pereñíguez F., Moragón A., Skarmeta A. F. “Vehicle-to-infrastructure messaging
proposal based on CAM/DENM specifications.” 2013 IFIP Wireless Days (WD). IEEE, 2013.
17. Wender, S. and Dietmayer, K. “Extending onboard sensor information by wireless communication.” IEEE Intelligent Vehicles Symposium (IV), IEEE, 2007, pp. 535-540.
18. FLIR. TrafiOne camera.
19. Quanergy. M8 lidar.