Figure 4 - uploaded by Roland Perko
Figure 4 depicts a screenshot of the 3D common operational picture within a virtual reality system, in which the subsurface operators get a simplified overview of the human detections and classifications.

Source publication
Conference Paper
Full-text available
Compared to surface events, human lives are particularly at risk in critical security situations in underground train stations. Because such subsurface events take place in enclosed spaces, considerable obstacles to the safe and efficient evacuation of people after an attack must be taken into account. Thus, this work presents a computer vision system base...

Similar publications

Article
This paper is concerned with the extraction of faces by applying morphological operations to skin regions extracted using illumination-invariant colorspace models. Because the method is set-theoretic in nature, it has an advantage over others in computational burden and hence processing time. It can be implemented in real time to facilitate...
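The morphological clean-up step this abstract describes can be sketched with plain NumPy: after skin-color thresholding produces a binary mask, an opening removes isolated noise pixels and a closing fills small holes. The square structuring element and its size are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask, pad, constant_values=False)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            # Pixel survives only if every neighbor in the window is set.
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask, pad, constant_values=False)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            # Pixel is set if any neighbor in the window is set.
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def open_close(mask, k=3):
    """Opening (remove speckle) followed by closing (fill small holes)."""
    opened = dilate(erode(mask, k), k)
    return erode(dilate(opened, k), k)
```

These set-theoretic operations reduce to shifted AND/OR passes over the mask, which is why such methods are cheap enough for real-time use.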
Conference Paper
With the touch-panel interface, the operator can not only view environmental information around the robot but also control the remote robot and camera using buttons on the touch screen. In addition, the operator can direct the robot to move along a course drawn on the environmental map on the touch screen. Both touch screen and...
Book
We are grateful for the publication of the 12th edition of the IAFMI Journal, with the theme Data Management in Operation and Maintenance. We also thank the authors and everyone who helped bring this journal to publication. Data management in operation and maintenance in the oil and gas industry is one of the important components in...

Citations

... Previous studies have carried out the recognition of human body postures that can be detected by sensors [12], such as cameras and videos, in a dark room [18,28]. In this paper, the detection begins with 2D or 3D pose evaluation, which includes several aspects: (1) estimation of human poses, such as sports scenes, people facing forward, and people interacting with objects; (2) estimation of poses in group photos; and (3) estimation of poses of people performing synchronized activities [20]. Other studies have carried out pose recognition for activities such as standing, sitting, jumping, running, talking, picking up the phone, yoga, smoking, walking, fighting, and so on [12,[14][15][16][17][18][19][20][21]. However, to obtain the right detection results, special assumptions are needed. ...
Article
Infrastructure development requires various considerations to maintain its continuity. Some public facilities cannot survive due to human indifference and irresponsible actions. Unfortunately, the government has to spend a lot of money, effort, and time to repair the damage. One of the destructive behaviors that can have an impact on infrastructure and environmental problems is littering. Therefore, this paper proposes a device as an alternative for catching littering rule violators. The proposed device can be used to monitor littering and provide warnings to help officers responsible for capturing the violators. In this innovation, the data obtained by the camera are sent to a mini-PC. The device will send warning information to a mobile phone when someone litters. Then, a speaker will turn on and issue a sound warning: “Do not litter”. The device uses pose detection and a recurrent neural network (RNN) to recognize a person’s activity. All activities can be monitored in a more distant place using IoT technology. In addition, this tool can also monitor environmental conditions and replace city guards to monitor the area. Thus, the municipality can save money and time.
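The abstract above describes classifying a person's activity from pose detections with a recurrent neural network. A minimal NumPy sketch of that idea is shown below: an Elman-style RNN consumes a sequence of flattened pose keypoints, one frame at a time, and a linear head on the final hidden state produces class scores. The dimensions, weight initialization, and two-class head are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def rnn_classify(pose_seq, Wxh, Whh, Why, bh, by):
    """Run a minimal Elman RNN over a sequence of flattened pose
    keypoints and score activity classes from the final hidden state."""
    h = np.zeros(Whh.shape[0])
    for x in pose_seq:           # x: flattened (x, y) keypoints for one frame
        h = np.tanh(Wxh @ x + Whh @ h + bh)
    scores = Why @ h + by        # e.g. [littering, not littering]
    return scores
```

In practice the weights would be trained (e.g. with backpropagation through time in a deep-learning framework); this forward pass only illustrates how per-frame pose vectors are folded into a single sequence-level decision.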
... • Multiple-source-integrated (on-board and remote) navigation and positioning system for both GNSS-enabled and GNSS-denied environments, integrating systems such as NIKE BLUETRACK [64] and RASPOS [65]
• Threat detection and pose estimation as provided by NIKE SUBMOVECON [66]
• Drone swarm landing dock ...
Article
In the last decades we have witnessed an increasing number of military operations in urban environments. Complex urban operations require high standards of training, equipment, and personnel. Emergency forces on the ground will need specialized vehicles to support them in all parts and levels of this extremely demanding environment including the subterranean and interior of infrastructure. The development of vehicles for this environment has lagged but offers a high payoff. This article describes the method for developing a concept for an urban operations vehicle by characterization of the urban environment, deduction of key issues, evaluation of related prototyping, science fiction story-typing of the requirements for such a vehicle, and comparison with field-proven and scalable solutions. Embedding these thoughts into a comprehensive research and development program provides lines of development, setting the stage for further research.
Preprint
Monitoring the movement and actions of humans in video in real time is an important task. We present a deep-learning-based algorithm for human action recognition for both RGB and thermal cameras. It is able to detect and track humans and recognize four basic actions (standing, walking, running, lying) in real time on a notebook with an NVIDIA GPU. For this, it combines state-of-the-art components for object detection (Scaled YoloV4), optical flow (RAFT) and pose estimation (EvoSkeleton). Qualitative experiments on a set of tunnel videos show that the proposed algorithm works robustly for both RGB and thermal video.
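The abstract combines a detector, optical flow, and pose estimation into a four-action classifier. Without the heavy models, the final decision step can be sketched as a simple rule: per-track motion speed (e.g. from flow or track displacement) separates standing, walking, and running, while torso inclination from the estimated pose indicates lying. The thresholds below are illustrative assumptions, not the values used in the paper.

```python
def classify_action(speed_mps, torso_angle_deg):
    """Map per-track speed and torso inclination (degrees from
    vertical) to one of the four actions named in the abstract.
    Thresholds are illustrative, not the paper's values."""
    if torso_angle_deg > 60:     # torso far from vertical -> lying
        return "lying"
    if speed_mps < 0.2:          # essentially stationary
        return "standing"
    if speed_mps < 2.0:          # typical walking pace
        return "walking"
    return "running"
```

A learned classifier over the full pose and flow features would be more robust, but this shows how the three components' outputs can be fused into discrete action labels.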