(a) Process of translation along the x-axis and y-axis and rotation around the z-axis by the angle Rz; (b) process of translation along the z-axis. After the transformation, the PDAL library was used to export the point clouds as orthophoto images with a depth of 10 m, i.e., one road cross section per 10 m of road. Each exported image contains a single band with reflectance values and has a spatial resolution of 1 cm × 1 cm. In terms of height, the images cover a range from 15 m above to 10 m below the Lidar sensor. Given this height range, the dimensions of the road segments, and the spatial resolution, the image dimensions are 2500 px × 4000 px (2500 px × 1 cm = 25 m; 4000 px × 1 cm = 40 m). Examples of four exported road cross-section images are shown in Figure 5.
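A minimal sketch of how such an export could be scripted with the PDAL Python bindings is given below. The input file name, the translation offsets (tx, ty, tz), the rotation angle Rz, and the axis-swap step used to rasterize a cross section with writers.gdal are assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch: export one 10 m road segment as a cross-section raster with PDAL.
# Assumed: input file "segment.laz" and placeholder transform parameters.
import json
import math

import pdal  # PDAL Python bindings

tx, ty, tz = 455000.0, 5045000.0, 120.0  # hypothetical sensor position
rz = math.radians(35.0)                  # hypothetical heading angle Rz
c, s = math.cos(-rz), math.sin(-rz)      # rotate by -Rz around the z-axis

stages = [
    "segment.laz",
    # (a) translate so the sensor sits at the origin, then rotate around z
    {"type": "filters.transformation",
     "matrix": f"1 0 0 {-tx}  0 1 0 {-ty}  0 0 1 {-tz}  0 0 0 1"},
    {"type": "filters.transformation",
     "matrix": f"{c} {-s} 0 0  {s} {c} 0 0  0 0 1 0  0 0 0 1"},
    # keep a 10 m slice along the (now y-aligned) road axis
    {"type": "filters.range", "limits": "Y[0:10]"},
    # swap y and z so the top-down x-y raster becomes the x-z cross section
    {"type": "filters.transformation",
     "matrix": "1 0 0 0  0 0 1 0  0 1 0 0  0 0 0 1"},
    # rasterize the reflectance (Intensity) band at 1 cm resolution:
    # 40 m wide, 25 m tall (15 m above / 10 m below the sensor)
    {"type": "writers.gdal",
     "filename": "cross_section.tif",
     "dimension": "Intensity",
     "output_type": "mean",
     "resolution": 0.01,
     "bounds": "([-20, 20], [-10, 15])"},
]
pdal.Pipeline(json.dumps({"pipeline": stages})).execute()
```

The two transformation stages mirror steps (a) and (b) of the figure; writers.gdal grids points in the x-y plane, which is why the sketch swaps y and z before rasterizing.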

Source publication
Article
The United Nations (UN) stated that all new roads and 75% of travel on roads must meet a 3-star or better standard by 2030. The number of stars is determined by the International Road Assessment Programme (iRAP) star rating module, which is based on 64 attributes recorded for each road. In this paper, a framework for highly accurate and fully automatic determination of...

Citations

... However, this process can be time-consuming and subjective, leading researchers to explore the use of computer vision techniques to automate and improve roadside safety ratings. In recent years, deep learning models have shown promising results in various computer vision applications, including roadside safety (Yi et al. 2021; Brkić et al. 2022). However, most existing studies focus on only a limited number of influencing factors, due to data collection problems. ...
Article
The prevalence of run-off-road crashes, particularly in rural areas, underscores the significance of roadside characteristics in safety analysis. This paper proposes a novel approach for automated roadside safety assessment using deep convolutional neural networks (CNNs) and Generative Adversarial Networks (GANs) for data augmentation. The CNN models evaluate roadside features through two-dimensional (2D) image analysis, whereas the GANs expand the data set by generating additional diverse samples. The proposed framework aligns with the standard rating system of the Federal Highway Administration (FHWA) and encompasses four distinct models for guardrail detection, clear zone width assessment, rigid obstacle detection, and sideslope estimation. The performance of each model is compared against non-GAN-augmented models to assess the efficacy of using GANs for data augmentation. The results show that the proposed approach outperforms existing methods in terms of accuracy, achieving 96% in detecting guardrails, 88% in detecting clear zones, 80% in detecting rigid obstacles, and 84% in estimating roadside slopes. Compared with manual approaches, the proposed method offers advantages such as cost-effectiveness, ease of implementation, and the ability to rapidly rank state roads. The developed framework can assist departments of transportation (DOTs) in efficiently identifying problematic road segments and prioritizing safety improvement projects based on the FHWA standard rating system.
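How GAN-generated samples can be folded into the training of such a binary roadside-feature classifier can be illustrated with a short PyTorch sketch; the Generator and classifier architectures, image size, and labeling scheme below are hypothetical stand-ins rather than the authors' implementation.

```python
# Sketch: GAN-augmented training step for a binary roadside-feature
# classifier (e.g., guardrail present / absent). The Generator and
# classifier below are hypothetical stand-ins, not the paper's models.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent vector to a synthetic 3x64x64 roadside image."""
    def __init__(self, z_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 4, 0), nn.Tanh(),  # -> 3x64x64
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

classifier = nn.Sequential(
    nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 1),  # logit: feature present / absent
)

gen = Generator()
real_imgs = torch.rand(8, 3, 64, 64)             # stand-in for a real batch
real_lbls = torch.randint(0, 2, (8, 1)).float()
with torch.no_grad():                            # a pretrained GAN is assumed
    fake_imgs = gen(torch.randn(8, 100, 1, 1))
fake_lbls = torch.ones(8, 1)                     # e.g., synthetic positives

x = torch.cat([real_imgs, fake_imgs])            # mix real and synthetic data
y = torch.cat([real_lbls, fake_lbls])
loss = nn.BCEWithLogitsLoss()(classifier(x), y)
loss.backward()                                  # one augmented training step
```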
... The YOLO algorithm is widely used for target detection in many different kinds of studies, such as estimating the positions of pedestrians from video recordings [48], identifying insect pests that influence agricultural production [49], iceberg and ship discrimination [50], ship detection and recognition in complex-scene SAR images [51], bridge detection [52], underwater object detection [53], automatic roadside feature detection [54], and object detection in automated driving [55]. ...
Article
The Sustainable Development Goals (SDGs) have addressed environmental and social issues in cities, such as insecure land tenure, climate change, and vulnerability to natural disasters. SDGs have motivated authorities to adopt urban land policies that support the quality and safety of urban life. Reliable, accurate, and up-to-date building information should be provided to develop effective land policies to solve the challenges of urbanization. Creating comprehensive and effective systems for land management in urban areas requires a significant long-term effort. However, some procedures should be undertaken immediately to mitigate the potential negative impacts of urban problems on human life. In developing countries, public records may not reflect the current status of buildings. Thus, implementing an automated and rapid building monitoring system using the potential of high-spatial-resolution satellite images and street views may be ideal for urban areas. This study proposed a two-step automated building stock monitoring mechanism. Our proposed method can identify critical building features, such as the building footprint and the number of floors. In the first step, buildings were automatically detected by using the object-based image analysis (OBIA) method on high-resolution spatial satellite images. In the second step, vertical images of the buildings were collected, and the number of building floors was then determined automatically using Google Street View Images (GSVI) via the YOLOv5 algorithm and the kernel density estimation method. The first step of the experiment was applied to high-resolution images of the Pleiades satellite, covering three different urban areas in Istanbul. The average accuracy metrics of the OBIA experiment for Area 1, Area 2, and Area 3 were 92.74%, 92.23%, and 92.92%, respectively. The second step of the experiment was applied to an image dataset containing GSVIs of several buildings on different Istanbul streets. The perspective effect, the presence of more than one building in the photograph, obstacles around the buildings, and different window sizes caused errors in the floor estimations. For this reason, the operator's manual interpretation when obtaining SVIs increases the floor estimation accuracy. The proposed algorithm estimates the number of floors with 79.2% accuracy for the SVIs collected by operator interpretation. Consequently, our methodology can easily be used to monitor and document the critical features of existing buildings. This approach can support an immediate emergency action plan to reduce the possible losses caused by urban problems. In addition, this method can be utilized to analyze a building's previous condition after damage or loss occurs.
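The floor-counting step lends itself to a compact illustration: detect windows with YOLOv5, then treat the peaks of a kernel density estimate over the detections' vertical centers as floors. A sketch under assumed settings (a model fine-tuned on a "window" class, an illustrative bandwidth, a hypothetical input image) follows; the paper's exact configuration may differ.

```python
# Sketch: estimate the number of floors in a street-view image by
# clustering window detections vertically with a kernel density estimate.
import numpy as np
import torch
from scipy.signal import find_peaks
from scipy.stats import gaussian_kde

# A YOLOv5 model fine-tuned on a "window" class is assumed here;
# yolov5s is only a placeholder for the custom weights.
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
results = model("facade.jpg")                  # hypothetical input image
boxes = results.xyxy[0].cpu().numpy()          # columns: x1 y1 x2 y2 conf cls

# Vertical centers of detected windows; one cluster is expected per floor.
y_centers = (boxes[:, 1] + boxes[:, 3]) / 2.0

kde = gaussian_kde(y_centers, bw_method=0.15)  # bandwidth is an assumption
grid = np.linspace(y_centers.min(), y_centers.max(), 500)
peaks, _ = find_peaks(kde(grid))               # each density peak = one floor
print(f"Estimated number of floors: {len(peaks)}")
```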
... Lidar collects large numbers of spatial points as well as other attributes such as color, intensity, number of returns, etc. This makes it suitable for collecting a wide range of road attributes, such as roadside feature detection [25,26], road surface distress detection [27], traffic sign detection [28], intersection detection [29], and determination of road geometry characteristics such as slope and curvature [30][31][32]. In terms of satellite imagery and road attributes, most research is focused on road extraction from optical as well as Synthetic-Aperture Radar (SAR) sensors [33][34][35]. ...
... Although the iRAP coding manual defines a 100 m road segment as the basic unit for road evaluation, smaller dimensions were used in this work to facilitate the fitting of the segment images into the YOLO network. With respect to the iRAP coding process, the conversion of smaller segments into 100 m segments was described in our previously published work [25]. When there were several different types of the same attribute, the riskiest attribute was coded. ...
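The conversion described above reduces to keeping the riskiest attribute code within each group of sub-segments; a minimal sketch with a hypothetical risk ordering (the actual iRAP coding manual defines the ranking per attribute):

```python
# Sketch: fold per-10 m attribute codes into 100 m iRAP segments by keeping
# the riskiest code in each group. The risk ordering here is hypothetical.
from typing import Sequence

RISK_ORDER = {"safety_barrier": 0, "none": 1, "unprotected_slope": 2}

def to_100m_segments(codes_10m: Sequence[str], group: int = 10) -> list[str]:
    """Return one code per 100 m segment (group x 10 m sub-segments)."""
    out = []
    for i in range(0, len(codes_10m), group):
        chunk = codes_10m[i:i + group]
        out.append(max(chunk, key=lambda c: RISK_ORDER[c]))  # riskiest wins
    return out

print(to_100m_segments(["none"] * 9 + ["unprotected_slope"]
                       + ["safety_barrier"] * 10))
# -> ['unprotected_slope', 'safety_barrier']
```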
Article
The European Commission (EC) has published a European Union (EU) Road Safety Framework for the period 2021 to 2030 to reduce road fatalities. In addition, the EC, with the EU Directive 2019/1936, requires a much more detailed recording of road attributes. Therefore, automatic detection of school routes, four classes of crosswalks, and divided carriageways was performed in this paper. The study integrated satellite imagery as a data source with the Yolo object detector. The Pleiades Neo 3 satellite, with a spatial resolution of 0.3 m, was used as the source of the satellite images. The study was divided into three phases: vector processing, satellite imagery processing, and training and evaluation of the You Only Look Once (Yolo) object detector. The training process was performed on 1951 images with 2515 samples, while the evaluation was performed on 651 images with 862 samples. For school zones and divided carriageways, this study achieved accuracies of 0.988 and 0.950, respectively. For crosswalks, this study achieved results similar to or better than those of comparable work, with accuracies ranging from 0.957 to 0.988. The study also provided the standard performance measure for object recognition, mean average precision (mAP), as well as the values for the confusion matrix, precision, recall, and f1 score for each class as benchmark values for future studies.
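The reported precision, recall, and f1 score values follow directly from the confusion-matrix counts; a short sketch of the standard formulas (with illustrative counts, not the paper's values):

```python
# Sketch: precision, recall, and F1 from confusion-matrix counts for one
# class. The counts below are illustrative only.
def prf1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

print(prf1(tp=820, fp=30, fn=42))  # illustrative counts only
```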
... Therefore, LiDAR sensors can function well even in adverse illumination conditions. With robust depth measurements, LiDAR sensors are crucial to many applications such as autonomous vehicles [2,3], classification [4], and instance detection [5,6]. ...
Article
Light Detection and Ranging (LiDAR) systems are novel sensors that provide robust distance and reflection-strength measurements using active pulsed laser beams. They have significant advantages over visual cameras in that their active depth and intensity measurements are robust to ambient illumination. However, limited attention has been paid to intensity measurements, since the output intensity maps of LiDAR sensors differ from those of conventional cameras and are too sparse. In this work, we propose exploiting the information from both intensity and depth measurements simultaneously to complete the LiDAR intensity maps. With the completed intensity maps, mature computer vision techniques can work well on LiDAR data without any specific adjustment. We propose an end-to-end convolutional neural network named LiDAR-Net to jointly complete the sparse intensity and depth measurements by exploiting their correlations. For network training, an intensity fusion method is proposed to generate the ground truth. Experimental results indicate that intensity–depth fusion benefits the task and improves performance. We further apply an off-the-shelf object (lane) segmentation algorithm to the completed intensity maps, which delivers consistently robust performance under varying ambient illumination. We believe that the intensity completion method allows LiDAR sensors to cope with a broader range of practical applications.
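The joint completion idea can be illustrated with a toy two-channel encoder-decoder that maps sparse intensity and depth to dense maps; the architecture below is a hypothetical stand-in for LiDAR-Net, not the network described in the paper.

```python
# Sketch: a toy encoder-decoder that takes sparse intensity + depth as a
# 2-channel input and predicts dense maps for both channels. This is a
# hypothetical stand-in for LiDAR-Net.
import torch
import torch.nn as nn

class IntensityDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),  # dense I + D
        )

    def forward(self, x):            # x: (N, 2, H, W), zeros where no return
        return self.decoder(self.encoder(x))

net = IntensityDepthNet()
sparse = torch.zeros(1, 2, 64, 256)  # channel 0: intensity, channel 1: depth
dense = net(sparse)                  # jointly completed intensity/depth maps
print(dense.shape)                   # torch.Size([1, 2, 64, 256])
```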