Fig 5 - uploaded by Panos Liatsis
Wiring diagram for (a) the Arduino NANO with MPU 6050, NRF24L01, and 9 V power source (sender), and (b) the NRF24L01 with Arduino UNO (receiver).


Source publication
Article
Full-text available
Modern cars are equipped with autonomous systems to assist the driver and improve the driving experience. The driving assist system (DAS) is one of the most significant components of a self-driving vehicle (SDV), used to overcome non-autonomous driving challenges. However, most conventional cars are not equipped with a DAS, and high-cost systems are required...

Context in source publication

Context 1
... direct wiring from the steering angle sensor is not recommended because the rotation of the steering wheel would twist and loop the wire. Thus, an NRF24L01 pair, acting as sender and receiver, is used to transmit the information wirelessly, as illustrated in Fig. ...
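The wireless link described above implies a small fixed-size payload sent over the NRF24L01, whose hardware caps payloads at 32 bytes. As an illustration only (the field layout, names, and units below are this sketch's assumptions, not taken from the paper), a sender could frame a steering-angle reading like this:

```python
import struct

# Hypothetical payload layout: packet id, steering angle, 3-axis gyro rate.
# The nRF24L01 caps payloads at 32 bytes, so the frame must fit inside that.
PAYLOAD_FMT = "<Hf3f"   # uint16 + float32 + 3x float32 = 18 bytes, little-endian

def pack_reading(pkt_id, angle_deg, gyro_dps):
    """Frame one sensor reading for transmission."""
    return struct.pack(PAYLOAD_FMT, pkt_id, angle_deg, *gyro_dps)

def unpack_reading(buf):
    """Recover the reading on the receiver side."""
    pkt_id, angle_deg, gx, gy, gz = struct.unpack(PAYLOAD_FMT, buf)
    return pkt_id, angle_deg, (gx, gy, gz)
```

On the Arduino side, the same 18-byte layout would be mirrored by a packed C struct passed to, e.g., the RF24 library's write() call.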

Similar publications

Article
Full-text available
Smoking in public places not only causes potential harm to the health of oneself and others, but also creates hidden dangers such as fires. Therefore, for health and safety considerations, a deep-learning-based detection model is designed for places where smoking is prohibited, such as airports, gas stations, and chemical warehouses, that can qui...

Citations

... PyFirmata is the library that connects the C++ and Python programming languages. The Firmata library implements the Firmata protocol to communicate with software on the host computer [29]-[31]. PyFirmata can thus connect several programming languages with the Arduino IDE language. ...
Article
Full-text available
Viruses can be transmitted in various ways; one is through airborne droplets or by touching shared objects. This can occur in any area, including the entrance to a house or access to a room or deposit box. The spread of viruses that cause diseases such as Covid-19 has caused many human casualties, and similar conditions may appear again in the future. Several measures are needed to reduce the chances of spreading viral disease, including the development of contactless security methods. This paper proposes a security system based on hand gesture recognition for an automatic door system, using squeeze-and-excitation residual networks (SE-ResNet) and the residual network (ResNet).
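As background to the SE-ResNet mentioned above: a squeeze-and-excitation block reweights feature channels with learned gates (global average pool, a bottleneck of two fully connected layers, then a sigmoid). A minimal NumPy sketch, where random weights stand in for trained parameters and the shapes and reduction ratio are illustrative assumptions:

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-excitation over a (C, H, W) feature map."""
    z = x.mean(axis=(1, 2))                # squeeze: global average pool -> (C,)
    h = np.maximum(0.0, w1 @ z)            # excitation: reduction FC + ReLU -> (C//r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))    # expansion FC + sigmoid -> gates in (0, 1)
    return x * s[:, None, None]            # channel-wise reweighting

rng = np.random.default_rng(0)
C, r = 8, 2                                # channels and reduction ratio (assumed)
x = rng.standard_normal((C, 6, 6))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = se_block(x, w1, w2)
```

In SE-ResNet, such a block is inserted after each residual block's convolutions, so the network learns which channels to emphasize per input.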
... An intelligent driving assist system (DAS) is presented in Ref. [9] for real-time prediction of the steering angle using deep learning (DL) and a raw dataset collected from a real environment. A constraint controller based on the Lyapunov function is designed in Ref. [10] to address the constraint control and disturbance problems in the operation of shuttle vehicles. ...
... We can represent equation (9) in the following manner, the input control. ...
Article
Full-text available
This article delves into the intricate world of controlling the longitudinal dynamics of autonomous vehicles. In the first part, we study two distinct controllers: the Super Twisting Sliding Mode Control and a modified version enriched with fuzzy logic, applied to the longitudinal dynamics of an autonomous automobile so that it follows a desired longitudinal speed profile. The two controllers are compared with a Neural Network-Based Non-singular Terminal Sliding-Mode Control; the system takes the throttle and brake as inputs and delivers speed and acceleration as outputs. The overarching objective is to ensure that the controlled vehicle maintains close and precise alignment with the desired speed profile. The second part of our research is dedicated to the development of adaptive cruise control and cruise control under safety conditions. This controller consists of two blocks, a lower and an upper controller. In the upper controller, the inputs are the speed of the vehicle in front, the speed of the autonomous automobile itself, the safety distance, and the measured distance; the output is the desired acceleration. The objective is to keep the distance to the front vehicle greater than or equal to the safety distance. To achieve this task, we implement a Proportional Integral Derivative (PID) controller in the adaptive cruise control system. In the lower controller block, the same controllers as in the first part, the Super Twisting Sliding Mode Control and its fuzzy-logic-based modification, are applied; the inputs are throttle and brake, and the outputs are speed and acceleration.
The system is implemented in MATLAB code. We obtain better results with our proposed controller: the maximum absolute speed error is 0.0144 m/s in the first case of speed tracking, and 0.006 m/s in the second case using adaptive cruise control. The illustrations below show the efficiency and robustness of these controllers.
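To make the upper-controller idea concrete, here is a minimal Python sketch of a PID-based adaptive cruise control loop with a constant-time-headway spacing policy. All gains, limits, and vehicle parameters are illustrative assumptions, not values from the article (which implements its system in MATLAB):

```python
# Simulation constants (illustrative assumptions throughout)
dt = 0.05                        # time step (s)
kp, ki, kd = 0.8, 0.05, 0.6      # PID gains on the spacing error
lead_v = 15.0                    # lead-vehicle speed (m/s), constant
t_headway, d0 = 1.5, 5.0         # safety distance = d0 + t_headway * ego speed
lead_x, ego_x, ego_v = 50.0, 0.0, 20.0

integ, prev_err = 0.0, None
min_gap = lead_x - ego_x
for _ in range(int(120 / dt)):
    gap = lead_x - ego_x
    min_gap = min(min_gap, gap)
    d_safe = d0 + t_headway * ego_v       # required safety distance
    err = gap - d_safe                    # > 0: farther than needed
    integ += err * dt
    deriv = 0.0 if prev_err is None else (err - prev_err) / dt
    prev_err = err
    accel = kp * err + ki * integ + kd * deriv
    accel = max(-3.0, min(2.0, accel))    # comfort/actuator limits
    ego_v = max(0.0, ego_v + accel * dt)  # simple point-mass ego vehicle
    ego_x += ego_v * dt
    lead_x += lead_v * dt
```

Over the run, the ego vehicle settles to the lead vehicle's speed while the measured gap converges to the safety distance, which is the stated objective of the upper controller.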
... FLC has been studied to control an AGV based on a differential-drive mobile robot using kinematic control [17]. In practice, a camera is cheaper than other sensors such as lidar for image detection, and it has also been used in research on modifying a conventional car into an autonomous one [18]. ...
Article
Full-text available
This paper discusses the development of an automated guided vehicle (AGV) model equipped with a navigation system. The AGV employs computer vision and fuzzy logic control for the lane-keeping assist system as steering control. The inputs to the fuzzy logic control are the AGV path-line gradient values for the left and right lanes. The navigation system uses a camera with a high level of light sensitivity. A light intensity that is too dim or too bright will degrade the steering control performance, meaning that only a certain range of light intensity allows the lane-keeping assist to perform well. A path with left and right lanes was built to test the vision-based steering control. The results show that the optimal light intensity for the developed lane-keeping assist is from 110 to 150 lux. The AGV can successfully follow the path under these light intensities, although some deviation still occurs.
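A fuzzy controller of the kind described (lane-line gradients in, steering command out) can be sketched as a zero-order Sugeno system in a few lines of Python. The membership functions, the three-rule base, and the convention that the two gradients cancel when the vehicle is centred are all illustrative assumptions, not the paper's actual design:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(g_left, g_right):
    """Map left/right lane-line gradients to a steering command in [-1, 1]."""
    e = g_left + g_right              # assumed: opposite-sign gradients cancel when centred
    e = max(-1.0, min(1.0, e))        # clamp the lateral error term
    rules = [                         # (rule firing strength, output singleton)
        (tri(e, -2.0, -1.0, 0.0), -1.0),   # error negative -> steer left
        (tri(e, -1.0,  0.0, 1.0),  0.0),   # error near zero -> hold
        (tri(e,  0.0,  1.0, 2.0),  1.0),   # error positive -> steer right
    ]
    num = sum(mu * out for mu, out in rules)
    den = sum(mu for mu, _ in rules)
    return num / den if den else 0.0  # weighted-average defuzzification
```

The weighted-average defuzzification makes the output vary smoothly between the rule singletons, which is why fuzzy steering avoids the bang-bang behaviour of threshold logic.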
... The swift advancement in IoTs' computing, processing, sensing, and wireless communication capabilities, combined with their integration into vehicular onboard units (OBUs), has paved the way for the evolution of mission-critical systems, including autonomous vehicular applications, AR/VR [1], driver assistance [2], object detection [3], and assisted lane change [4]. These applications require high computing, processing, communication, and storage resources beyond a single vehicle's resource capability. ...
Preprint
Full-text available
This paper presents the design and implementation of a multilayered, microservices-centric in-network computation framework tailored for the Internet of Vehicular Networks. The proposed framework integrates a vehicular network layer, an Edge computing layer, and the Cloud to enhance network efficiency and minimize latency for time-critical vehicular applications. In the proposed work, modifications were made to the NDNsim codebase, integrating the C++ Boost Asio library to facilitate NDNsim-Edge communication for designated microservice computation. In addition, a microservice-centric driver assistance application is emulated in Dotnet core, with microservices deployed on various Edge and Cloud servers. Moreover, RESTful APIs have been developed, allowing the NDNsim layer, the physical Edge layer, and Cloud stations to offload a microservice-centric computation request and retrieve the corresponding compute outcomes. Furthermore, the Entity framework is utilized to ensure proper tracking and management of the requests, while an HTML-based user interface is developed for a visual representation of the request pattern. Extensive testbed experimentation revealed that the proposed framework substantially optimizes bandwidth consumption, reduces latency, and improves the computed-results delivery ratio compared with a conventional monolithic system.
... Globally, about 1.3 million people die every year in road traffic crashes and about 50 million are injured [1]. Insufficient road safety is thus a critical issue worldwide, addressed by multidisciplinary research efforts [2][3][4]. Extensive road-safety research has also focused on horizontal curves on two-lane rural roads. The curves have a higher crash risk in comparison to tangents (e.g., Ref. [5]). ...
Article
Full-text available
Three types of road markings (edgelines, a centreline, and no marking) are used on rural Czech roads, despite the fact that their impact on the lateral position of a vehicle in real-life driving behaviour is not completely understood. This study strives to fill this gap for horizontal curves. It considers the type of road marking and other factors in a sample of 68 curves. The modelling results confirm that, in addition to driving speed, road width, and other factors, the road marking type has an impact on the lateral position. Based on an analysis of road axis exceedance, the centreline proved better for curve negotiation because it led to trajectories that were farther from the road axis, which lowers the probability of a head-on crash. Thus, the centreline marking proved to be the better alternative in terms of lateral position, as well as from a practical perspective. This finding provides guidance for road administrators towards increasing the consistency of road marking.
... Context-aware computing refers to the ability of computing systems to automatically recognize and respond to the user's scenario, providing intelligent and personalized services [1][2][3][4][5]. The global market size of context-aware services exceeded USD 36.3 billion in 2020 and is projected to reach over USD 318.6 billion by 2030 [6]. ...
... As shown in Figure 2, the baseband signal has a constant frequency and, thus, SB(t) in Equation (5) can be simplified to Equation (6). ...
Article
Full-text available
Region-function combinations are essential for smartphones to be intelligent and context-aware. The prerequisite for providing intelligent services is that the device can recognize the contextual region in which it resides. Existing region recognition schemes are mainly based on indoor positioning, which requires pre-installed infrastructure, tedious calibration effort, or memorization of precise locations. In addition, location classification methods are limited by recognition granularity that is either too large (room-level) or too small (centimeter-level, requiring training data collected at multiple positions within the region), which constrains applications that provide context-awareness services based on region-function combinations. In this paper, we propose a novel mobile system, called Echo-ID, that enables a phone to identify the region in which it resides without requiring any additional sensors or pre-installed infrastructure. Echo-ID uses Frequency Modulated Continuous Wave (FMCW) acoustic signals as its sensing medium, transmitted and received by the speaker and microphones already available in common smartphones. The spatial relationships among the surrounding objects and the smartphone are extracted with a signal processing procedure. We further design a deep learning model to achieve accurate region identification; it extracts finer features from the spatial relations and is robust to phone placement uncertainty and environmental variation. Echo-ID only requires users to put their phone at two orthogonal angles for 8.5 s each inside a target region before use. We implement Echo-ID on the Android platform and evaluate it with Xiaomi 12 Pro and Honor-10 smartphones. Our experiments demonstrate that Echo-ID achieves an average accuracy of 94.6% for identifying five typical regions, an improvement of 35.5% over EchoTag. The results confirm Echo-ID's robustness and effectiveness for region identification.
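The core of the FMCW sensing that Echo-ID builds on is that mixing the transmitted chirp with its echo yields a beat tone whose frequency is proportional to the round-trip delay, and hence to the reflector's distance. A minimal NumPy simulation of that dechirp step (all parameters are illustrative assumptions; this is not Echo-ID's actual pipeline):

```python
import numpy as np

fs = 48_000                      # sample rate (Hz), typical for phone audio
T = 0.01                         # chirp duration (s)
f0, B = 18_000.0, 4_000.0        # start frequency and sweep bandwidth (Hz)
c = 343.0                        # speed of sound (m/s)
t = np.arange(int(fs * T)) / fs

def chirp(ts):
    # Linear FMCW chirp: instantaneous frequency f0 + (B/T)*ts
    return np.cos(2 * np.pi * (f0 * ts + 0.5 * (B / T) * ts ** 2))

d_true = 0.5                     # reflector distance (m)
tau = 2 * d_true / c             # round-trip delay
tx = chirp(t)
rx = chirp(t - tau)              # idealized echo (no attenuation or noise)

# Dechirp: mixing produces a beat tone at f_b = (B/T) * tau
mixed = tx * rx * np.hanning(len(t))
spec = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
band = (freqs > 200) & (freqs < 3_000)   # beat band, covering ranges up to ~1.3 m
f_b = freqs[band][np.argmax(spec[band])]
d_est = f_b * T * c / (2 * B)    # invert f_b = (B/T) * (2 d / c)
```

The range resolution here is limited by the FFT bin spacing (1/T = 100 Hz, about 4 cm per bin with these parameters), which is why real systems add windowing, zero-padding, and multi-frame averaging.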
... Semantic segmentation is an important task in computer vision; its purpose is to divide the input image into multiple regions with coherent semantic meaning, enabling pixel-dense scene understanding for many real-world applications such as autonomous driving [1] and robot navigation [2]. In recent years, with the rapid development of deep learning [3][4][5][6][7], pixel-based semantic segmentation of RGB images has received increasing attention and achieved remarkable progress in segmentation accuracy [6,7]. However, due to the characteristics of RGB images, current deep semantic segmentation models cannot always extract correct features in some specific cases. ...
Article
Full-text available
Semantic segmentation, the pixel-level classification task of dividing an image into regions by category (i.e., assigning each pixel in the image a class label), is an important task in computer vision. Combining RGB and Depth information can improve the performance of semantic segmentation; however, how to deeply integrate RGB and Depth remains an open problem. In this paper, we propose a cross-modal feature fusion RGB-D semantic segmentation model based on ConvNeXt, which uses ConvNeXt as the backbone network and embeds a cross-modal feature fusion module (CMFFM). The CMFFM performs channel-wise and spectral-wise feature fusion, realizing deep fusion of RGB and Depth. The in-depth multi-modal feature fusion in multiple stages improves the performance of the model. Experiments on the public SUN-RGBD dataset show that our proposed ConvNeXt-CMFFM achieves the best segmentation among the nine compared models, with the highest mIoU score of 53.5%. ConvNeXt-CMFFM also achieves the highest mIoU score and pixel accuracy on our self-built RICE-RGBD dataset among the three compared datasets. The ablation experiment on our rice dataset shows that, compared with ConvNeXt (without CMFFM), the mIoU score of ConvNeXt-CMFFM increases from 71.5% to 74.8% and its pixel accuracy increases from 86.2% to 88.3%, indicating the effectiveness of the added feature fusion module in improving segmentation performance. This study shows the feasibility of applying the proposed model in agriculture.
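To illustrate the general idea of channel-wise cross-modal fusion (an illustrative stand-in, not the paper's actual CMFFM), one can gate each channel between the RGB and depth branches with a per-channel softmax weight derived from the two feature maps. A NumPy sketch with hypothetical shapes:

```python
import numpy as np

def cross_modal_fuse(rgb_feat, depth_feat):
    """Channel-wise gated fusion of two (C, H, W) modality feature maps."""
    zr = rgb_feat.mean(axis=(1, 2))          # per-channel descriptor, RGB branch
    zd = depth_feat.mean(axis=(1, 2))        # per-channel descriptor, depth branch
    er, ed = np.exp(zr), np.exp(zd)          # softmax over the two modalities
    wr = er / (er + ed)                      # weight given to RGB, per channel
    return wr[:, None, None] * rgb_feat + (1 - wr)[:, None, None] * depth_feat

rng = np.random.default_rng(1)
rgb = rng.standard_normal((4, 5, 5))         # hypothetical backbone features
dep = rng.standard_normal((4, 5, 5))
out = cross_modal_fuse(rgb, dep)
```

Because the weights form a convex combination per channel, the fused map stays within the range spanned by the two inputs; learned fusion modules like the CMFFM replace the fixed pooling-and-softmax here with trainable layers.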
Article
Lack or excess of water, moisture, and nutrients may cause diseases in various growing stages of rice. Unlike related studies, this work aims to detect each disease’s symptom separately, rather than just classifying images by a classifier or showing the whole diseased leaf in a single bounding box. In this way, we consider all disease regions and make it possible to observe better the disease progression by considering detected boxes. Our motivation for this hybrid study stems from the fact that more than one disease symptom may occur on a leaf and the detection of symptoms at an early stage can positively affect the harvest yield. The main aim of this study is to classify rice leaf disease images accurately, reduce false detections, and validate the predictions of the classification network utilizing an object detection network. Therefore, we identify two stages for this work. In the first part, the task of classification, and in the second part, the task of determining the location of the disease symptoms is conducted. We use data augmentation and disout techniques to prevent overfitting in the classification process and to improve performance by modifying the classification network. Finally, we discuss how classification robustness can be tested and false predictions can be eliminated using the classification network Inception v3 and the detection network YOLOv5x jointly. As a result of the proposed hybrid model, state-of-the-art results are achieved with 96.67 % accuracy and 98.24 % F1 score on the publicly available rice leaf disease dataset.