Fig. 4. The labeling procedure for pixel-wise segmentation using the PAT tool. (a) Raw image. (b) Pixel-wise masked image. Segmentation classes are colored as follows: green for vegetation, light blue for sky, dark blue for buildings, brown for static obstacles, and purple for roads.
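The class-to-color mapping described in the caption can be expressed as a small lookup table used to render label masks for visualization. A minimal sketch in Python is shown below; the class IDs and RGB values are assumptions for illustration, not the values used by the PAT tool.

```python
import numpy as np

# Assumed class IDs and RGB colors matching the caption's color scheme;
# the actual IDs/colors used by the PAT tool may differ.
CLASS_COLORS = {
    0: ("vegetation",      (0, 128, 0)),      # green
    1: ("sky",             (135, 206, 235)),  # light blue
    2: ("building",        (0, 0, 139)),      # dark blue
    3: ("static obstacle", (139, 69, 19)),    # brown
    4: ("road",            (128, 0, 128)),    # purple
}

def colorize_mask(label_mask: np.ndarray) -> np.ndarray:
    """Convert an HxW array of class IDs into an HxWx3 RGB visualization."""
    rgb = np.zeros((*label_mask.shape, 3), dtype=np.uint8)
    for class_id, (_, color) in CLASS_COLORS.items():
        rgb[label_mask == class_id] = color
    return rgb
```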


Source publication
Conference Paper
Full-text available
Autonomous driving in a pedestrian zone is a challenging task. Technische Universitaet Kaiserslautern (TUK) is currently researching autonomous driving on the university campus for elderly or disabled people. This paper presents a novel campus dataset from the TUK campus, recorded over the span of one year for an autonomous bus project. John Deere'...

Similar publications

Article
Full-text available
Pedestrians are generally among the most vulnerable road user categories, which makes them susceptible to severe injuries in road vehicle crashes. In Sri Lanka, the risk to pedestrian safety near school zones has increased rapidly with growth in areas such as infrastructure development, socioeconomic development, e...
Article
Full-text available
Many possible actions can be taken to improve road safety, thus reducing the number of fatalities and injuries. These include improving the enforcement of existing rules, improving infrastructure, improving driver behavior, and introducing safety technologies in vehicles. Annually in Malaysia, around 500 pedestrians are killed in road traffic crash...
Article
Full-text available
To improve mobility in cities in line with environmental goals, trams represent an increasingly important means of transport in urban traffic. Due to their close interaction with other road users, collisions with trams are fairly frequent. This study has investigated accidents between trams and vulnerable road users resulting in personal inju...
Article
Full-text available
This study aims at developing a solid understanding of the contributing factors to pedestrian fatal and injury collisions at highway-railway grade crossings (HRGC), along with the impact of different warning devices that are commonly used at HRGCs. The study utilized integrated Machine Learning and Bayesian models to analyze the United States HRGC...
Conference Paper
Full-text available
Pedestrian motions and behaviours in evacuations are closely interrelated with indoor environments. According to how pedestrians interact with indoor environments in three-dimensional (3D) space, 3D indoor-pedestrian interaction is defined as five sorts of specific pedestrian motions, i.e., stepping over, crawling, bent-over walking, jumping over and...

Citations

... Another compelling motivation for understanding pedestrian behavior is to communicate the vehicle's intention clearly to pedestrians [6][7][8][9]. In our initial work [10], an interaction strategy between an AV and pedestrians was proposed. ...
Article
Full-text available
As autonomous driving technology develops rapidly, demands for pedestrian safety, intelligence, and stability are increasing. In this situation, there is a need to discern pedestrian location and action, such as crossing or standing, in dynamic and uncertain contexts. The success of autonomous driving in pedestrian zones depends heavily on the capacity to distinguish between safe and unsafe pedestrians. The vehicle must first recognize the pedestrian, then their body movements, and understand the meaning of their actions before responding appropriately. This article presents a detailed explanation of an architecture for 3D pedestrian activity recognition using recurrent neural networks (RNN). A custom dataset was created for behaviors encountered around autonomous vehicles, such as parallel and perpendicular crossing while texting or calling. A model based on Long Short-Term Memory (LSTM) was used for the different experiments. As a result, it is revealed that models trained independently on upper- and lower-body data produced better classification than one trained on whole-body skeleton data. An accuracy of 97% was achieved on lower-body test data and 88–90% on upper-body test data.
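A minimal sketch of the kind of recurrent classifier described above is given here in PyTorch. The joint counts, sequence length, number of classes, and the split into separate upper- and lower-body models are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SkeletonLSTM(nn.Module):
    """LSTM classifier over a sequence of 3D skeleton joints (one body part)."""
    def __init__(self, num_joints: int, num_classes: int, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=num_joints * 3,  # x, y, z per joint
                            hidden_size=hidden,
                            num_layers=2,
                            batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, num_joints * 3) flattened joint coordinates per frame
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])  # logits per activity class

# Hypothetical setup: separate models for upper and lower body, reflecting the
# article's finding that this outperforms a single whole-body model.
upper_model = SkeletonLSTM(num_joints=13, num_classes=4)
lower_model = SkeletonLSTM(num_joints=12, num_classes=4)
logits = lower_model(torch.randn(8, 30, 12 * 3))  # 8 clips, 30 frames each
```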
... This article lists 7 commonly used datasets for SLAM systems. References [76][77][78][79][80][81][82] provide a detailed introduction to each dataset, and Table 4 lists their main features. The sensor type is an important indicator for selecting a dataset. ...
Article
Full-text available
Visual simultaneous localization and mapping (SLAM) is crucial in robotics and autonomous driving. However, traditional visual SLAM faces challenges in dynamic environments. To address this issue, researchers have proposed semantic SLAM, which combines object detection, semantic segmentation, instance segmentation, and visual SLAM. Despite the growing body of literature on semantic SLAM, there is currently a lack of comprehensive research on the integration of object detection and visual SLAM. Therefore, this study aims to gather information from multiple databases and review relevant literature using specific keywords. It focuses on visual SLAM based on object detection, covering different aspects. Firstly, it discusses the current research status and challenges in this field, highlighting methods for incorporating semantic information from object detection networks into odometry, loop-closure detection, and map construction. It also compares the characteristics and performance of various visual SLAM object detection algorithms. Lastly, it provides an outlook on future research directions and emerging trends in visual SLAM. Research has shown that visual SLAM based on object detection offers significant improvements over traditional SLAM in dynamic point removal, data association, point cloud segmentation, and other techniques. It can improve the robustness and accuracy of the entire SLAM system and can run in real time. With continuous optimization of algorithms and improvements in hardware, object-detection-based visual SLAM has great potential for development.
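As an illustration of the dynamic point removal mentioned above, the sketch below (plain Python; the class set and function names are hypothetical) discards feature points that fall inside bounding boxes of detected dynamic objects before they reach the tracking front end.

```python
from typing import List, Tuple

Point = Tuple[float, float]              # (x, y) image coordinates of a feature
Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

# Assumed set of movable object classes whose features should be ignored.
DYNAMIC_CLASSES = {"person", "car", "bicycle"}

def filter_dynamic_points(points: List[Point],
                          detections: List[Tuple[str, Box]]) -> List[Point]:
    """Keep only feature points outside boxes of dynamic-class detections."""
    dynamic_boxes = [box for label, box in detections if label in DYNAMIC_CLASSES]

    def inside(p: Point, b: Box) -> bool:
        return b[0] <= p[0] <= b[2] and b[1] <= p[1] <= b[3]

    return [p for p in points if not any(inside(p, b) for b in dynamic_boxes)]

# Usage: static points feed the pose estimator; dynamic ones are discarded.
static_pts = filter_dynamic_points(
    points=[(10.0, 20.0), (200.0, 150.0)],
    detections=[("person", (180.0, 100.0, 260.0, 300.0))],
)
```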
Chapter
Global warming has increased the frequency of floods around the world. These floods destroy the environment and create navigational problems for rescue operations. This paper presents a novel probabilistic mapping technique that fuses surface and underwater information using OctoMaps to deliver an obstacle map of an unstructured area. An adaptive obstacle detection algorithm retrieves underwater information from a forward-looking high-frequency sonar, whereas a three-dimensional LiDAR maps the surface-water environment. The mapping methodology is tested at several shallow water bodies, and promising results are provided. Keywords: Underwater mapping, Surface water mapping, Sonar, LiDAR, Surface Water Vehicles, USV
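The probabilistic fusion of sonar and LiDAR observations into a single obstacle map can be sketched as a per-voxel log-odds occupancy update. The grid representation, resolution, and sensor increments below are assumptions for illustration, not the authors' OctoMap-based implementation.

```python
import math
from collections import defaultdict

VOXEL_SIZE = 0.25           # meters; assumed grid resolution
L_HIT, L_MISS = 0.85, -0.4  # assumed log-odds increments per observation

# Map from voxel index (i, j, k) to accumulated log-odds of occupancy.
log_odds = defaultdict(float)

def voxel_index(x: float, y: float, z: float):
    return (int(math.floor(x / VOXEL_SIZE)),
            int(math.floor(y / VOXEL_SIZE)),
            int(math.floor(z / VOXEL_SIZE)))

def integrate(points, hit: bool):
    """Fuse one batch of sensor returns (LiDAR above or sonar below water)."""
    for x, y, z in points:
        log_odds[voxel_index(x, y, z)] += L_HIT if hit else L_MISS

def occupancy(idx) -> float:
    """Convert accumulated log-odds back to an occupancy probability."""
    return 1.0 / (1.0 + math.exp(-log_odds[idx]))

# Both sensors write into the same grid, yielding one combined obstacle map.
integrate([(1.0, 2.0, 0.5)], hit=True)    # LiDAR return above the waterline
integrate([(1.1, 2.1, -0.8)], hit=True)   # sonar return below the waterline
```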
Chapter
Climate researchers predict that heavy rain and flooding are becoming more frequent due to global warming. Flooding alters environments by raising the water level, creating new waterways, and destroying buildings and roads. Due to these drastic changes, existing environmental maps become outdated and can only be used to a limited extent as references for planning and navigation tasks in disaster areas. This work proposes deploying semi-autonomous robotic systems to map the environmental conditions below and above water. Rescue forces then use the collected information for operation planning and navigation. The proposed concept is exemplified by a prototypical raft, a commercial water drone, and a crewed pontoon boat. Although the vehicle classes and propulsion systems differ, all three systems use the same control architecture. The architecture is a specialization of the behavior-based REACTiON framework for commercial land vehicles with a focus on off-road scenarios.
Preprint
Full-text available
Visual Simultaneous Localization and Mapping (vSLAM) has achieved great progress in the computer vision and robotics communities, and has been successfully used in many fields such as autonomous robot navigation and AR/VR. However, vSLAM cannot achieve good localization in dynamic and complex environments. Numerous publications have reported that, by combining semantic information with vSLAM, semantic vSLAM systems have in recent years gained the ability to solve these problems. Nevertheless, there is no comprehensive survey of semantic vSLAM. To fill this gap, this paper first reviews the development of semantic vSLAM, explicitly focusing on its strengths and differences. Secondly, we explore three main issues of semantic vSLAM: the extraction and association of semantic information, the application of semantic information, and the advantages of semantic vSLAM. Then, we collect and analyze the current state-of-the-art SLAM datasets that are widely used in semantic vSLAM systems. Finally, we discuss future directions that will provide a blueprint for the future development of semantic vSLAM.