Overview of the transformer encoder.

Source publication
Article
There is great interest in automatically detecting road weather and understanding its impacts on the overall safety of the transport network. This can, for example, support road condition-based maintenance or even serve as detection systems that assist safe driving during adverse climate conditions. In computer vision, previous work has demonstrate...

Context in source publication

Context 1
... results with the position embeddings are then fed to a transformer encoder for classification, as shown in Figure 3. The transformer encoder module consists of a Multi-Head Self-Attention (MSA) layer and a Multi-Layer Perceptron (MLP) layer. ...
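
For readers unfamiliar with this module, the following is a minimal PyTorch sketch of one such encoder block (the LayerNorm placement, embedding dimension, head count, and MLP ratio are illustrative assumptions, not values reported in the source publication):

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One ViT-style encoder block: Multi-Head Self-Attention (MSA) plus an MLP,
    each preceded by LayerNorm and wrapped in a residual connection.
    embed_dim, num_heads and mlp_ratio are illustrative defaults only."""

    def __init__(self, embed_dim=768, num_heads=12, mlp_ratio=4.0, dropout=0.1):
        super().__init__()
        self.norm1 = nn.LayerNorm(embed_dim)
        self.msa = nn.MultiheadAttention(embed_dim, num_heads,
                                         dropout=dropout, batch_first=True)
        self.norm2 = nn.LayerNorm(embed_dim)
        hidden = int(embed_dim * mlp_ratio)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, hidden),
            nn.GELU(),
            nn.Dropout(dropout),
            nn.Linear(hidden, embed_dim),
            nn.Dropout(dropout),
        )

    def forward(self, x):                    # x: (batch, num_tokens, embed_dim)
        h = self.norm1(x)
        attn_out, _ = self.msa(h, h, h)      # self-attention over patch tokens
        x = x + attn_out                     # residual around MSA
        x = x + self.mlp(self.norm2(x))      # residual around MLP
        return x

# Patch embeddings combined with position embeddings go in, a same-shaped
# sequence comes out; a classification head then reads the class token.
tokens = torch.randn(2, 197, 768)            # e.g. 196 patches + 1 class token
print(EncoderBlock()(tokens).shape)          # torch.Size([2, 197, 768])
```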

Citations

... An innovative approach is presented in [76], where the authors introduce the usage of Vision Transformers (ViTs) models [77]. Unlike traditional CNNs, ViTs apply attention mechanisms to dynamically assign weights to pixels, focusing the model on relevant image features. ...
Article
The rapid advancement of autonomous vehicle technology has brought into focus the critical need for enhanced road safety systems, particularly in the areas of road damage detection and surface classification. This paper explores these two essential components, highlighting their importance in autonomous driving. In the domain of road damage detection, this study explores a range of deep learning methods, particularly focusing on one-stage and two-stage detectors. These methodologies, including notable ones like YOLO and SSD for one-stage detection and Faster R-CNN for two-stage detection, are critically analyzed for their efficacy in identifying various road damages under diverse conditions. The review provides insights into their comparative advantages, balancing between real-time processing and accuracy in damage localization. For road surface classification, the paper investigates classification techniques based on both environmental conditions and road material composition. It highlights the role of different convolutional neural network architectures and innovations at the neural level in enhancing classification accuracy under varying road and weather conditions. The main contribution of this work is a comprehensive overview of the current state of the art, showcasing significant strides in utilizing deep learning for road analysis in autonomous vehicle systems. The study concludes by underscoring the importance of continued research in these areas to further refine and improve the safety and efficiency of autonomous driving.
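
As a concrete illustration of the two-stage detectors this survey discusses, the sketch below runs torchvision's off-the-shelf Faster R-CNN. The COCO weights, the `weights="DEFAULT"` argument (older torchvision versions use `pretrained=True`), and the confidence threshold are assumptions for demonstration only; a road-damage detector would be fine-tuned on damage classes, which this sketch does not do:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# COCO-pretrained two-stage detector; a road-damage model would instead be
# fine-tuned on damage classes (cracks, potholes, ...), skipped in this sketch.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)              # placeholder road image, CHW, values in [0, 1]
with torch.no_grad():
    pred = model([image])[0]                 # dict with "boxes", "labels", "scores"

keep = pred["scores"] > 0.5                  # simple confidence threshold (assumed)
print(pred["boxes"][keep].shape, pred["labels"][keep])
```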
... [24], for example, uses existing architectures such as SqueezeNet, ResNet-50, and EfficientNet as CNN feature extractors, followed by a single fully connected layer with a Softmax activation function to classify the data. [25] combines CNN layers from well-known architectures with transformers to achieve multi-label weather classification. The work presented in [26] is another CNN-based weather detection algorithm, built on the ResNet18 framework. ...
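
A minimal sketch of the pattern attributed to [24] (a pretrained CNN backbone as feature extractor plus a single fully connected layer with Softmax), assuming a ResNet-50 backbone and five weather classes purely for illustration:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5                              # assumed number of weather classes

# Pretrained ResNet-50 as a frozen feature extractor; only the final fully
# connected layer is replaced and left trainable.
backbone = models.resnet50(weights="DEFAULT")
for p in backbone.parameters():
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

images = torch.randn(4, 3, 224, 224)         # dummy image batch
logits = backbone(images)
probs = torch.softmax(logits, dim=1)         # Softmax over weather classes
print(probs.shape)                           # torch.Size([4, 5])
```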
Conference Paper
We propose 3-GWD, a new and innovative solution for the detection of atmospheric particle disturbances that can be used to weight the data in a fusion process and to adjust sensor reliability. The proposed approach, based on 3D Gray-Level Co-occurrence Matrices (3D-GLCMs), is generic and could be used with a significant number of sensors (LiDAR, IR camera, neuromorphic camera, etc.). We demonstrate that this method identifies very relevant patterns in the symbolic space, which a shallow Artificial Neural Network (ANN) can classify with great accuracy. We also discuss the importance of an all-environment dataset by creating a reality-enriched simulated database with controlled disturbances and numerous environments. Using a single optical camera produces promising results with a high prediction score. We end our study with an attempt to compare our approach with previous works using the DAWN and MCWCD datasets.
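
A rough sketch of the general idea (co-occurrence texture features fed to a shallow classifier), assuming a standard 2D GLCM from scikit-image as a simplified stand-in for the paper's 3D-GLCMs, and sklearn's MLPClassifier as the shallow ANN; the feature set, class count, and dummy data are placeholders, not the 3-GWD setup:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def glcm_features(gray_img, levels=32):
    """Texture descriptor from a 2D gray-level co-occurrence matrix.
    The cited paper builds 3D GLCMs; this 2D version only illustrates the idea."""
    img = (gray_img / 256.0 * levels).astype(np.uint8)   # quantize to `levels` gray levels
    glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Dummy patches and labels standing in for clear / fog / rain / snow samples.
rng = np.random.default_rng(0)
X = np.stack([glcm_features(rng.integers(0, 256, (64, 64))) for _ in range(40)])
y = rng.integers(0, 4, size=40)              # 4 disturbance classes (assumed)

# Shallow ANN: a single small hidden layer classifying the GLCM descriptors.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))
```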
Conference Paper
In this work, we propose a Multi-Label Weather Classification framework (MLWC) designed for embedded computing platforms. The MLWC framework consists of five Convolutional Neural Network (CNN) models, each trained for a specific weather condition, i.e., clear skies, cloudy, rainy, foggy, or snowy. In order to develop lightweight CNN models, we applied the novel Sparse-Split-Parallelism (SSP) framework to DenseNet-121, achieving a model size reduction of 55%. The experiments prove the effectiveness of the multi-label framework by correctly predicting multiple weather conditions in a single image. In particular, the weather classification models, lightened with the SSP method, achieved accuracy rates of 90.95% on clear skies, 59.24% on cloudy weather, 91.54% on foggy weather, 73.33% on rainy weather, and 95.22% on snowy weather. Their performance is then compared with other lightweight models, such as MobileNetV2, MobileNetV3, and EfficientNetB0, confirming the advantages of the SSP framework. Lastly, the proposed MLWC framework is evaluated on the NVIDIA Jetson Nano board, where it achieves multi-label classification results in 4.2 seconds per image. With the accomplished inference time, we show that the MLWC framework is suitable for deployment on small embedded platforms used in autonomous driving systems, such as vehicles or drones, while maintaining good performance.
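
A hedged sketch of the multi-label scheme described above: five independent binary classifiers, one per weather condition, whose thresholded outputs are combined into a multi-label prediction. The SSP-lightened DenseNet-121 is not reproduced here, so a stock DenseNet-121 with a single-logit head stands in, and the 0.5 threshold is an assumption:

```python
import torch
import torch.nn as nn
from torchvision import models

CONDITIONS = ["clear", "cloudy", "rainy", "foggy", "snowy"]

def binary_weather_model():
    """One per-condition binary classifier. The paper uses an SSP-lightened
    DenseNet-121; a stock DenseNet-121 with a 1-logit head stands in here."""
    net = models.densenet121(weights=None)
    net.classifier = nn.Linear(net.classifier.in_features, 1)
    return net

# Five independent models, one per weather condition.
ensemble = {name: binary_weather_model().eval() for name in CONDITIONS}

image = torch.randn(1, 3, 224, 224)          # dummy camera frame
with torch.no_grad():
    scores = {name: torch.sigmoid(m(image)).item() for name, m in ensemble.items()}

# Multi-label output: every condition whose score clears the (assumed) threshold.
print([name for name, s in scores.items() if s > 0.5])
```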