Fig. 1. Hardware assembly of our self-driving car. The Raspberry Pi GPIO pins are extended through an extension board, to which the physical units are connected. The transmitted pulse is bounced back by nearby objects and received on the ECHO pin. Distance is calculated from the time difference between the transmitted pulse and the received pulse using the speed of sound, which is 343 meters per second (34,300 centimeters per second) in air. The time is divided by 2 because the pulse travels to the object and back again: d = speed × (time / 2).

Source publication
Article
Full-text available
Autonomous vehicles rely on sophisticated hardware and software technologies for acquiring holistic awareness of their immediate surroundings. Deep learning methods have effectively equipped modern self-driving cars with high levels of such awareness. However, their application requires high-end computational hardware, which makes utilization infea...

Contexts in source publication

Context 1
... to the obstacle. Table I shows the specifications of the hardware modules used in this prototype car. The specified ultrasonic unit has four pins: trigger pulse (TRIG), echo (ECHO), ground (GND), and power supply (VCC). Power was supplied to the unit from the 5-volt General Purpose Input Output (GPIO) pins of the Raspberry Pi. Fig. 1 shows the assembly of all hardware sensors in detail. The ultrasonic sensor works on the principle that a pulse is sent from the sensor using TRIG. This pulse is ...
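As a rough illustration of the TRIG/ECHO round-trip timing described in this excerpt and in the Fig. 1 caption, here is a minimal sketch using the RPi.GPIO library. The pin numbers are assumptions for illustration; the paper's actual wiring is the one shown in Fig. 1.

```python
# Minimal HC-SR04-style distance measurement on Raspberry Pi GPIO.
# Pin numbers below are assumed, not taken from the paper.
import time
import RPi.GPIO as GPIO

TRIG = 23  # GPIO pin driving the trigger pulse (assumed)
ECHO = 24  # GPIO pin reading the echo pulse (assumed)

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def measure_distance_cm() -> float:
    # Send a 10-microsecond trigger pulse.
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)

    # Time how long the ECHO pin stays high (round-trip duration).
    pulse_start = pulse_end = time.time()
    while GPIO.input(ECHO) == 0:
        pulse_start = time.time()
    while GPIO.input(ECHO) == 1:
        pulse_end = time.time()

    duration = pulse_end - pulse_start
    # Speed of sound: 34,300 cm/s; divide by 2 for the out-and-back trip.
    return 34300 * (duration / 2)

if __name__ == "__main__":
    print(f"Distance: {measure_distance_cm():.1f} cm")
    GPIO.cleanup()
```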
Context 2
... referred to as Optimized OpenCV. Taking advantage of these features (ARM NEON, SIMD, VFPV3, and NaN) on the Raspberry Pi, OpenCV is built in an optimized mode. Further, TensorFlow makes it possible to use a chosen number of processor cores for a task. Leveraging this feature, experiments with the deep model are set up on multiple cores and with different versions of OpenCV. Fig. 10 shows the average execution time of normal OpenCV, Optimized OpenCV, and Optimized OpenCV with the support of the Intel Movidius Neural Compute Stick. Fig. 11 demonstrates the average temperature of the cores during frame processing. The sharp increase in temperature on cores 3 and 4 is because the probability of load shifting is decreased among ...
Context 3
... Further, TensorFlow makes it possible to use a chosen number of processor cores for a task. Leveraging this feature, experiments with the deep model are set up on multiple cores and with different versions of OpenCV. Fig. 10 shows the average execution time of normal OpenCV, Optimized OpenCV, and Optimized OpenCV with the support of the Intel Movidius Neural Compute Stick. Fig. 11 demonstrates the average temperature of the cores during frame processing. The sharp increase in temperature on cores 3 and 4 is because the probability of load shifting among the CPU cores is decreased. In order to achieve the maximum possible accuracy and to reduce the computational cost, the number of frames per second was decreased, thus ...
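The two configuration knobs this excerpt mentions (OpenCV's optimized code paths and TensorFlow's per-task core usage) can be sketched as below. The thread counts are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: enabling OpenCV's optimized (NEON/VFPV3) paths and limiting
# TensorFlow to a chosen number of Raspberry Pi CPU cores.
import cv2
import tensorflow as tf

# Report whether this OpenCV build uses optimized code paths, and enable them.
print("OpenCV optimized code paths in use:", cv2.useOptimized())
cv2.setUseOptimized(True)

# Pin TensorFlow's intra-/inter-op parallelism to assumed core counts.
tf.config.threading.set_intra_op_parallelism_threads(4)  # split each op across 4 cores (assumed)
tf.config.threading.set_inter_op_parallelism_threads(1)  # schedule one op at a time (assumed)
```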
Context 4
... frames as "left" and 51 frames as "non-left", thus attaining an overall accuracy of 98.5%. Distance calculation with monocular vision sensors proved to be a challenging task. A short distance to the sign gives nearly the actual distance, but as the distance from the sign increases, the error in the distance calculation also increases, as shown in Fig. 12. Fig. 12. Distance calculated by a vision sensor. As the model car moves toward the detected traffic sign, the difference between the actual and the predicted values decreases. Fig. 13 shows a sample image of the distance calculated by a vision sensor. The difference between the actual distance and the distance calculated by the vision ...
Context 5
... "left" and 51 frames as "non-left", thus attaining an overall accuracy of 98.5%. Distance calculations with monocular vision sensors became a challenging task. A shorter distance to the sign gives nearly the actual distance, but when the distance from the sign was increased, error in the distance calculations also increased as shown in Fig. 12. Fig. 12. Distance calculated by a vision sensor. As model car moves toward the detected traffic sign, the difference between the actual and the predicted values decreases. Fig. 13 shows a sample image of the distance calculated by a vision sensor. The difference between the actual distance and the distance calculated by the vision sensor may ...
Context 6
... distance to the sign gives nearly the actual distance, but as the distance from the sign increases, the error in the distance calculation also increases, as shown in Fig. 12. Fig. 12. Distance calculated by a vision sensor. As the model car moves toward the detected traffic sign, the difference between the actual and the predicted values decreases. Fig. 13 shows a sample image of the distance calculated by a vision sensor. The difference between the actual distance and the distance calculated by the vision sensor may be due to the following reasons: 1) Error in the measurements of the actual values. 2) Error in the camera calibration. 3) Variation in the object bounding box while ...
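The excerpts do not spell out how the monocular distance is computed; a common approach consistent with the behaviour described (error growing with distance, shrinking as the car approaches the sign) is the pinhole-camera relation between bounding-box width and range. The sketch below assumes that approach; the focal length and sign width are placeholder values, not figures from the paper.

```python
# Pinhole-model range estimate from a detected sign's bounding-box width:
#     distance = (real_width * focal_length_px) / width_in_pixels
KNOWN_SIGN_WIDTH_CM = 10.0   # physical width of the model traffic sign (assumed)
FOCAL_LENGTH_PX = 640.0      # focal length in pixels, from camera calibration (assumed)

def distance_to_sign_cm(bbox_width_px: float) -> float:
    """Estimate camera-to-sign distance from the detected bounding-box width."""
    return (KNOWN_SIGN_WIDTH_CM * FOCAL_LENGTH_PX) / bbox_width_px

# Example: a sign detected 80 pixels wide would be estimated at 80 cm away.
print(distance_to_sign_cm(80.0))
```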
Context 7
... in higher error. 5) The Raspberry Pi camera is a general-purpose camera with average image quality. Ultrasonic sensors have only been used for detecting objects and their distances from the car. An ultrasonic sensor uses sound waves to calculate the distance. Due to this, some errors were experienced in calculating the distance during our demonstrations. Fig. 14 shows the actual distance and the distance calculated by the ultrasonic sensor. The difference between the actual and the measured values may be due to the following reasons: [Flattened table: per-test pairs of actual and measured values, with accuracies of 98.5% (Test 1), 99.1% (Test 2), 98.9% (Test 3), ...] ...
Context 8
... calculated by the ultrasonic sensor. The difference between the actual and the measured values may be due to the following reasons: [Flattened table: four tests, each pairing six actual values with six measured values, with accuracies of 98.5%, 99.1%, 98.9%, and 98.8% for Tests 1-4.] Fig. 13. Sample images showing distance to the traffic sign by using a vision sensor. Each sign is detected (green rectangle), recognized, and the distance is calculated between the traffic sign and the model car. ...
Context 9
... The total energy expended by a system while completing a specific task is known as its energy consumption. To evaluate the total energy consumption of our system with the deep learning model, 5 frames were processed per second. The parameters were measured using a Keweisi device while estimating the total power consumption of our system. Fig. 15 shows sample images of power consumption. The unit power and the energy consumption of the Raspberry Pi when the system is idle and no task is in progress can be observed in Table VI. The total amperage, voltage, time, power, and energy drawn by the Raspberry Pi during processing of a single frame with deep learning can be seen in Table ...
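Since the Keweisi meter reports voltage, current, and elapsed time, the per-frame energy bookkeeping this excerpt describes reduces to E = V × I × t. A minimal sketch follows; the readings are placeholders, not measurements from the paper.

```python
# Energy drawn over an interval from average voltage and current readings.
def energy_joules(voltage_v: float, current_a: float, seconds: float) -> float:
    power_w = voltage_v * current_a      # instantaneous power in Watts
    return power_w * seconds             # energy in Joules

frame_time_s = 1.0 / 5                   # 5 frames processed per second
e = energy_joules(5.1, 1.2, frame_time_s)  # placeholder meter readings (assumed)
print(f"Energy per frame: {e:.2f} J")
```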
Context 10
... the performance of the deep model on the CPU and GPU, a multi-threaded TCP server is used for receiving the video stream and the ultrasonic data from the Raspberry Pi on the computer. Data from the Raspberry Pi is processed on the computer, and only the decisions reached by the deep model are sent back to the Raspberry Pi to take the necessary actions. In Fig. 16, the red bar shows the average power consumption of the GPU while processing the deep model. The total average power consumed by the GPU during frame processing is 440 Watts, as the current increases from 1.5 to 2 amps during execution of the neural network. The power consumption of the GPU is 330 Watts in the idle state. The yellow ...
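A hedged sketch of the multi-threaded TCP server mentioned here: one thread per connection from the Raspberry Pi, length-prefixed payloads (frames or ultrasonic readings) in, a one-byte decision out. The port number and the wire format are assumptions; the excerpt does not specify the protocol.

```python
# One-thread-per-client TCP server: receives length-prefixed payloads from the
# Raspberry Pi and replies with a decision byte computed on the host machine.
import socket
import struct
import threading

HOST, PORT = "0.0.0.0", 9000  # assumed listening address and port

def handle_client(conn: socket.socket) -> None:
    with conn:
        while True:
            header = conn.recv(4)
            if len(header) < 4:
                break
            (length,) = struct.unpack("!I", header)   # 4-byte big-endian payload size
            payload = b""
            while len(payload) < length:
                chunk = conn.recv(length - len(payload))
                if not chunk:
                    return
                payload += chunk
            # Run the deep model on the received frame here, then reply with a
            # one-byte decision code (e.g. steer left / right / stop).
            conn.sendall(b"\x00")

def serve() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        while True:
            conn, _addr = srv.accept()
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    serve()
```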
Context 11
... The power consumption of the GPU is 330 Watts in the idle state. The yellow bar shows the average power consumption of the CPU during execution of the deep model. The total average power consumed by the CPU while executing the deep model is 224.4 Watts. A small amount of power, 6 Watts, is consumed by the Raspberry Pi, as shown in green in Fig. 16, while attaining the same accuracy as obtained by the GPU and ...
Context 12
... energy consumption of different computing platforms is presented in Fig. 17. The average energy consumption of the GPU is 4.4 Joules for a single frame execution. The CPU consumes more energy than the GPU, since it has no graphics-card acceleration. The Raspberry Pi, in contrast to the GPU and CPU, consumes the least energy, on average 1.38 Joules per frame. Fig. 18 shows the time complexity of a single frame on ...
Context 13
... consumption of different computing platforms is presented in Fig. 17. The average energy consumption of the GPU is 4.4 Joules for a single frame execution. The CPU consumes more energy than the GPU, since it has no graphics-card acceleration. The Raspberry Pi, in contrast to the GPU and CPU, consumes the least energy, on average 1.38 Joules per frame. Fig. 18 shows the time complexity of a single frame on different hardware platforms. A total of 1,000 frames were executed on each platform. The time complexity of the GPU on a single frame is 0.01 seconds. The average time consumed by the CPU to execute a single frame is 0.06 seconds. The Raspberry Pi did not perform well as it had limited ...
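The per-frame energy figures quoted above are consistent with multiplying a platform's average power draw during processing by its per-frame execution time. For the GPU case, using the numbers in these excerpts:

```latex
% Per-frame energy = average power during processing x per-frame execution time
E_{\text{frame}} = P_{\text{avg}} \times t_{\text{frame}}
                 = 440\,\mathrm{W} \times 0.01\,\mathrm{s}
                 = 4.4\,\mathrm{J}
```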

Similar publications

Article
Full-text available
In logistics and freight distribution, scheduling and cost efficiency are two crucial issues for transportation companies that look with favour at the innovation introduced by Intelligent Transportation Systems (ITS). Moreover, an infrastructure level of service, safety and environmental defence are important for planners and public administrations...

Citations

... These parameters include the intrinsic matrix, which contains the focal length, optical center, and skew of the camera, and the distortion coefficients, which account for the radial and tangential distortion of the lens. The OpenCV library was used to perform the calibration by capturing several images of the calibration chessboard from different angles, finding the 2D coordinates of the chessboard corners in the images, and computing the camera parameters [15]. Approximately 50 images were taken with varying camera poses, and the translation and rotation vectors were calculated from the calibration. ...
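A minimal sketch of the chessboard calibration step this passage describes, using OpenCV's standard calibration API. The board size (9×6 inner corners) and the image path are assumptions for illustration; the excerpt only states that roughly 50 poses were captured.

```python
# Standard OpenCV chessboard calibration: detect corners in each image,
# then solve for the intrinsic matrix, distortion coefficients, and the
# per-image rotation/translation vectors.
import glob
import cv2
import numpy as np

BOARD_SIZE = (9, 6)  # inner corners per row/column (assumed)

# 3D coordinates of the chessboard corners in the board's own plane (z = 0).
objp = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
gray = None
for path in glob.glob("calibration/*.jpg"):  # assumed location of the ~50 images
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics, distortion coefficients, and rotation/translation vectors per image.
ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None
)
print("Intrinsic matrix:\n", camera_matrix)
print("Distortion coefficients:", dist_coeffs.ravel())
```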
Conference Paper
Full-text available
Dynamic pile load tests are essential for verifying the ultimate limit state for pile design in geotechnical engineering. However, conventional methods for monitoring these tests, such as strain gauges and accelerometers, are expensive and labor-intensive. This paper proposes a novel method that uses computer vision and artificial markers to measure pile head movement during dynamic pile load tests, and a transformer-based deep learning model to predict pile capacity from the movement data. The proposed method is low-cost, easy-to-use, and accurate, with a mean absolute error of 2.4% for pile capacity prediction using K-fold cross-validation. The paper also presents a sensitivity analysis of the transformer model with respect to the number of heads and layers, which indicated the optimal settings to avoid overfitting of the training data. The paper discusses the limitations of the proposed method, such as the dependency on the camera position and suggests future directions of the research, such as incorporating other features and improving the data quality. The proposed method can be applied in real cases of dynamic pile load tests to increase the number of tests on site and to ensure the safety and reliability of pile design.
... Marvy Badr Monir Mansour et al. [47] implemented a project to demonstrate autonomous parallel car parking that can be used efficiently in metropolitan cities. Baramee Thunyapoo et al. [48] proposed a simulation framework for autonomous car parking in moderate-complexity scenarios. Shiva Raj Pokhrel et al. [49] developed (and evaluated) an experience-driven, secure and privacy-aware framework of parking reservations for automated cars. Since autonomous cars can communicate with each other, they can reduce traffic congestion by coordinating their movement on the road. ...
... Simulation tools developed specifically for the requirements of autonomous vehicles are utilized to simulate diverse aspects such as path planning and testing, mobility dynamics, and fuel economy in urban scenarios [97]. Sajjad et al. [49] proposed an efficient and scalable simulation model for autonomous vehicles with economical hardware. A reduced reality gap for testing autonomous vehicles has been proposed by Patel et al. [98]. ...
Article
Full-text available
Autonomous cars have achieved exceptional growth in the automotive industry in the last century in terms of reliability, safety and affordability. Due to significant advancements in computing, communication and other technologies, today we are in the era of autonomous cars. A number of prototype models of autonomous cars have been tested, covering several miles of test drives. Many prominent car manufacturers have started investing huge resources in this technology to commercialize it in the coming years. But to achieve this goal, a number of technical and non-technical challenges still exist in terms of real-time implementation, consumer satisfaction, security and privacy concerns, policies and regulations. In summary, this survey paper presents a comprehensive and up-to-date overview of the latest developments in the field of autonomous cars, including cutting-edge technologies, innovative applications, and testing. It addresses the key obstacles and challenges hindering the progress of autonomous car development, making it a valuable resource for anyone interested in understanding the current state of the art and future potential of autonomous cars.
... An Efficient Model for Autonomous Vehicles [21] is designed to target lightweight devices. The proposed model uses a simple monocular camera to identify traffic signs and an ultrasonic sensor to detect and avoid obstacles. ...
Article
Full-text available
In this paper, we present a two-stage solution to 3D vehicle detection and segmentation. The first stage depends on the combination of the EfficientNetB3 architecture with multi-parallel residual blocks (inspired by the CenterNet architecture) for 3D localization and pose estimation of vehicles in the scene. The second stage takes the output of the first stage as input (cropped car images) to train EfficientNetB3 for the image recognition task. Using predefined 3D models, we substitute each vehicle in the scene with its match using the rotation matrix and translation vector from the first stage to get the 3D detection bounding boxes and segmentation masks. We trained our models on an open-source dataset (ApolloCar3D). Our method outperforms all published solutions in terms of 6 degrees of freedom error (6 DoF err).
... To overcome these issues, the graphics processing unit (GPU) is used with an Inverse Distance Weighted (IDW) interpolation algorithm for obstacle detection and to increase the accuracy and efficiency of the road points. Sajjad et al. (2021) show that deep learning methods effectively support autonomous vehicles and act as a milestone in the field. Using minimal hardware and an effective deep learning technique to capture images and direct the car along its path, together with the simple technique of an ultrasonic sensor, major accidents can be avoided. ...
... This section provides a detailed summary of the current literature in the context of view selection in videos. In the literature, a lot of work has been done on video processing, ranging from detection [21] and quality assessment [22] to autonomous driving [23]. However, in comparison to conventional videos, 360° videos give users an exciting experience through an illusion of being present in the virtual contents. ...
Article
A 360° video stream provides users a choice of viewing their own point of interest inside the immersive contents. Performing head or hand manipulations to view an interesting scene in a 360° video is very tedious, and the user may only briefly view the frame of interest during the head/hand movement or even lose it, while automatically extracting the user's point of interest (UPI) in a 360° video is very challenging because of subjectivity and differences in comfort. To handle these challenges and provide users the best and most visually pleasant view, we propose an automatic approach utilizing two CNN models: an object detector and an aesthetic score of the scene. The proposed framework is threefold: pre-processing, the Deepdive architecture, and a view-selection pipeline. In the first fold, an input 360° video frame is divided into three sub-frames, each with a 120° view. In the second fold, each sub-frame is passed through the CNN models to extract visual features and calculate an aesthetic score. Finally, the decision pipeline selects the sub-frame with the salient object based on the detected object and the calculated aesthetic score. Compared to other state-of-the-art techniques, which are domain-specific approaches (e.g., supporting sports 360° videos), our system supports most genres of 360° video. Performance evaluation of the proposed framework on our own data collected from various websites indicates its performance for different categories of 360° videos.
... In Chen et al. (2021), a novel technique named copula-BN was adopted using a large dataset obtained from a naturalistic driving experiment. In Sajjad et al. (2020), the DL model was implemented using Raspberry Pi and monocular vision for autonomous car development. In Bouhoute et al. (2019), an approach for examining driving performance, such as speed, acceleration and steering angle, amongst others, was implemented using motorised devices. ...
Article
The aim of this article is to review and analyse previous academic articles associated with car behaviour analysis for the period of 2010 to June 10, 2021 and to understand the benefits of using data collection devices. Articles related to car driver behaviour and sensor utilisation were systematically searched. Three major databases – ScienceDirect, IEEE Xplore and Web of Science – were searched. A set of inclusion and exclusion criteria was developed for the search protocol. All articles were coherently classified via a taxonomy. Also, the motives that have led researchers to continue their investigations are explored. The challenges and issues of driver behaviour analysis are illustrated with respect to power consumption, data analysis, detection, cost, security and privacy, sensor usage and individual challenges. The research direction of this review points towards different aspects based on the critical analysis of the different scenarios of driver behaviour studies in real-time situations. Here, the critical behaviour analysis of intelligent transportation system development is addressed. The gaps in the reviewed articles include the following: sensors used during experiments, the effect of thresholds on labelling processes or data balancing and classification accuracy, the thresholds in identifying driving styles in the car-following model, insufficient experiment size (large scale or small scale) and limitations in data pre-processing. An implementation map depicting the steps of the case study is provided to give insights into the procedures and the problems they address. This review is expected to offer valid and clear points, contributing to the enhancement of driver behaviour research.
... Successful detection and classification of traffic signs is one of the major challenges to overcome for fully self-driving cars. For example, Sajjad et al. (2020) developed a method for the detection and avoidance of obstacles. The model recognizes various traffic signs based on visual sensors and avoids obstacles using ultrasonic sensors. ...
Article
Full-text available
Recent advances in Intelligent Transport Systems (ITS) and Artificial Intelligence (AI) have stimulated and paved the way toward the widespread introduction of Autonomous Vehicles (AVs). This has opened new opportunities for smart roads, intelligent traffic safety, and traveler comfort. Autonomous Vehicles have become a highly popular research topic in recent years because of their significant capability to reduce road accidents and human injuries. This paper is an attempt to survey all recent AI-based techniques used to deal with major functions in AVs, namely scene understanding, motion planning, decision making, vehicle control, social behavior, and communication. Our survey focuses solely on deep learning and reinforcement learning based approaches; it does not include conventional (shallow) learning-based techniques, a subject that has been extensively investigated in the past. Our survey builds a taxonomy of DL and RL algorithms that have been used so far to bring solutions to the four main issues in autonomous driving. Finally, this survey highlights the open challenges and points out possible future research directions.
... For computer vision problems, object detection can provide valuable information for semantic understanding of images and videos. It has been used in many applications, including image classification, activity recognition, surveillance, and autonomous driving [17] [18] [19]. ...
Article
Full-text available
Object detection supported by Unmanned Aerial Vehicles (UAVs) has generated significant interest in recent years, including applications such as surveillance, search for missing persons, traffic, and disaster management. Location awareness is a challenging task, particularly in the deployment of UAVs in a Global Positioning System (GPS) restricted environment or under GPS sensor failure. To mitigate this problem, we propose LocateUAV, a novel location-awareness framework to detect a UAV's location by processing the data from the visual sensor in real time using a lightweight Convolutional Neural Network (CNN). Assuming that the drone is in an IoT environment, first, an object detection technique is applied to detect the Object of Interest (OOI), namely a signboard. Subsequently, Optical Character Recognition (OCR) is applied to extract useful contextual information. In the final step, the extracted information is forwarded to the map Application Programming Interface (API) to locate the UAV. We also present a newly created dataset for LocateUAV, which comprises challenging scenarios for context analysis. Moreover, we also compress an existing lightweight model up to 45 MB for efficient processing on the UAV, which is 19.5% when compared with the size of the original model. Finally, an in-depth comparison of various trained and efficient object detection and OCR techniques is presented to facilitate future research on the development of a flex-drone that can extract information from the surroundings of a location in a GPS-restricted environment.
... Conventionally, the majority of simulation tools in this sector have mostly focused on aerodynamics, computer-aided design, vehicle collision, autonomous driving, communication systems, electric drive-trains and energy management [2]. The latest simulation trends in automotive systems deal with advanced functionalities such as scalable models for autonomous-driving cars [3], energy-efficient networking for electric vehicle networks [4] and the internet of vehicles for automation and orchestration [5]. However, regarding the on-board Electrical Distribution Systems (EDS), which are responsible for delivering power supply to the different consumers within a vehicle, only a few commercial tools (such as Harness Studio and Saber RD) and little sustained research on tailored simulation platforms have been reported [2,6,7]. ...
Article
Full-text available
For the validation of vehicular Electrical Distribution Systems (EDS), engineers are currently required to analyze dispersed information regarding technical requirements, standards and datasheets. Moreover, an enormous effort takes place to elaborate testing plans that are representative of most possible EDS configurations. These experiments are followed by laborious data analysis. To diminish this workload and the need for physical resources, this work reports a simulation platform that centralizes the tasks for testing different EDS configurations and assists the early detection of inadequacies in the design process. A specific procedure is provided to develop a software tool intended for this aim. Moreover, the described functionalities are exemplified considering as a case study the main wire harness from a commercial vehicle. A web-based architecture has been employed in alignment with the ongoing software development revolution and thus provides flexibility for both developers and users. Due to its scalability, the proposed software scheme can be extended to other web-based simulation applications. Furthermore, the automatic generation of electrical layouts for EDS is addressed to favor an intuitive understanding of the network. To favor human–information interaction, the visual analytics strategies utilized are also discussed. Finally, full simulation workflows are exposed to provide further insights on the deployment of this type of computer platform.
... A few examples of recent studies do couple the application of statistical machine learning and artificial intelligence with the influence of exogenous factors on sign detection and recognition. For instance, Sajjad et al. [40] developed a deep learning-based sign detection system as part of an autonomous driving demonstration. Although the developed system performed well in a controlled environment, the detection and navigation accuracy are yet to be tested in real-world scenarios with complex challenges. ...
Article
Full-text available
Automatic recognition of traffic signs in complex, real-world environments has become a pressing research concern with rapid improvements of smart technologies. Hence, this study leveraged an industry-grade object detection and classification algorithm (You-Only-Look-Once, YOLO) to develop an automatic traffic sign recognition system that can identify widely used regulatory and warning signs in diverse driving conditions. Sign recognition performance was assessed in terms of weather and reflectivity to identify the limitations of the developed system in real-world conditions. Furthermore, we produced several editions of our sign recognition system by gradually increasing the number of training images in order to account for the significance of training resources in recognition performance. Analysis considering variable weather conditions, including fair (clear and sunny) and inclement (cloudy and snowy), demonstrated a lower susceptibility of sign recognition in the highly trained system. Analysis considering variable reflectivity conditions, including sheeting type, lighting conditions, and sign age, showed that older engineering-grade sheeting signs were more likely to go unnoticed by the developed system at night. In summary, this study incorporated automatic object detection technology to develop a novel sign recognition system to determine its real-world applicability, opportunities, and limitations for future integration with advanced driver assistance technologies.