HTC Vive HMD, controllers and base stations (originally from [25]).

Source publication
Chapter
This chapter describes a series of works developed to integrate ROS-based robots with Unity-based virtual reality interfaces. The main goal of this integration is to develop immersive monitoring and commanding interfaces, able to improve the operator's situational awareness without increasing their workload. In order to achieve this, the ava...

Citations

... Our newly developed user interface facilitates the adaptability of cyber-physical robot fleets in high-stakes industrial environments, specifically the nuclear sector, by incorporating immersive human-robot interaction modalities such as virtual reality (Stadler, Cornet, & Frenkler, 2023; Roldán Gómez et al., 2019). Virtual reality technology offers the potential to significantly enhance the situational awareness of nuclear workers, enabling them to experience the nuclear site environment without being physically present. ...
... More on the definition of VR and the use of VR in practice and research can be found in [19]. According to [20], VR is suitable when personnel are away from the mission, the mission is dangerous, the scenario is large, or personnel have to manage many resources (e.g. multi-robot systems). ...
... Unlike the usual VR interface for a single robot as in [20], where the interface is centered on the robot and its payload and a first-person view of the robot is adopted, we have different views in our VR interface, including a first-person view of the robot in Figure 3 and a first-person view of the virtual staff as shown in Figure 4. Using the images from the robot's camera in our virtual environment, we can identify an asset and display its type, maintenance manual, etc. via the interface, display a real-time video stream, and identify the ... (Fig. 4 caption: The VR data center with a virtual inspection robot from the virtual staff's perspective.) ...
Preprint
In the context of Industry 4.0, the physical and digital worlds are closely connected, and robots are widely used to achieve system automation. Digital twin solutions have contributed significantly to the growth of Industry 4.0. Combining various technologies is a trend that aims to improve system performance. For example, digital twinning can be combined with virtual reality in automated systems. This paper proposes a new concept to articulate this combination, which has mainly been implemented in engineering research projects. However, there are currently no guidelines, plans, or concepts to articulate this combination. The concept will be implemented in data centers, which are crucial for enabling virtual tasks in our daily lives. Due to the COVID-19 pandemic, there has been a surge in demand for services such as e-commerce and videoconferencing. Regular maintenance is necessary to ensure uninterrupted and reliable services. Manual maintenance strategies may not be sufficient to meet the current high demand, and innovative approaches are needed to address the problem. This paper presents a novel approach to data center maintenance: real-time monitoring by an autonomous robot. The robot is integrated with digital twins of assets and a virtual reality interface that allows human personnel to control it and respond to alarms. This methodology enables faster, more cost-effective, and higher-quality data center maintenance. It has been validated in a real data center and can be used for intelligent monitoring and management through joint data sources. The method has potential applications in other automated systems.
... In semi-autonomous systems, the operator must coordinate and detect errors, so efficient and intuitive human-robot interaction (HRI) is essential to maximizing such systems' potential [4]. Here, a well-designed interface can improve situational awareness, workload, and decision-making [6]. In recent years, there has been increasing interest in virtual reality (VR) interfaces for multi-robot interaction [6]. ...
... Here, a well-designed interface can improve situational awareness, workload, and decision-making [6]. In recent years, there has been increasing interest in virtual reality (VR) interfaces for multi-robot interaction [6]. However, there is limited research on the usability of VR interfaces for multi-robot interaction in the context of the regular service inspection of cargo ships. ...
Conference Paper
The use of multi-robot systems is a field that can benefit from VR by strengthening understanding of the situation and enabling seamless interaction with the actors involved. This work investigates how the usability of a design for interaction with a multi-robot system for ship maintenance is assessed. Furthermore, comments from the participants are consulted as impulses for improving the design.
... Our newly developed user interface facilitates the adaptability of cyber-physical robot fleets in high-stakes industrial environments, specifically the nuclear sector, by incorporating immersive human-robot interaction modalities such as virtual reality (Stadler, Cornet, & Frenkler, 2023; Roldán et al., 2019). Virtual reality technology offers the potential to significantly enhance the situational awareness of nuclear workers, enabling them to experience the nuclear site environment without being physically present. ...
Preprint
The nuclear energy sector can benefit from mobile robots for remote inspection and handling, reducing human exposure to radiation. Advances in cyber-physical systems have improved robotic platforms in this sector through digital twin technology. Digital twins enhance situational awareness for robot operators, crucial for safety in the nuclear energy sector, and their value is anticipated to increase with the growing complexity of cyber-physical systems. We propose a multimodal immersive digital twin platform for cyber-physical robot fleets based on ROS-Unity3D. The system design integrates building information models, displays mission parameters, visualizes robot sensor data, and provides multi-modal user interaction through traditional and virtual reality interfaces. This enables real-time monitoring and management of fleets with heterogeneous robotic platforms. We performed a heuristic evaluation of our digital twin interface with robot operators from leading nuclear research institutions: Sellafield Ltd and Japan Atomic Energy Agency, performing a simulated robot inspection mission. The three usability themes that emerged from this evaluation and inspired our design recommendations include the flexibility of the interface, the individuality of each robot, and adaptation to expanding sensor visualization capabilities.
... Furthermore, subsequent steps of processing and registering the point clouds, object detection, or 3D reconstruction are usually needed to achieve the required functions [56,57]. [58] and [59] created digital twins of a robot to program, control, and visualize robot motions. Joint state data are exchanged between the physical and virtual robots in real time. ...
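The joint-state exchange described in this snippet can be sketched minimally in Python. The field names mirror ROS's sensor_msgs/JointState message, but the classes and names below are illustrative assumptions, not the implementation used in [58] or [59]:

```python
from dataclasses import dataclass, field
import time

@dataclass
class JointState:
    # Mirrors the core fields of ROS sensor_msgs/JointState.
    name: list
    position: list
    stamp: float = field(default_factory=time.time)

class VirtualTwin:
    """Virtual robot model that tracks the physical robot's joints."""
    def __init__(self, joint_names):
        self.state = JointState(name=list(joint_names),
                                position=[0.0] * len(joint_names))

    def on_joint_state(self, msg: JointState):
        # Called for each new state published by the physical robot;
        # the twin simply adopts the latest joint positions.
        self.state = msg

twin = VirtualTwin(["shoulder", "elbow", "wrist"])
twin.on_joint_state(JointState(name=["shoulder", "elbow", "wrist"],
                               position=[0.1, -0.5, 1.2]))
print(twin.state.position)  # [0.1, -0.5, 1.2]
```

In a real system the callback would be registered as a ROS subscriber, and the same channel would carry commands back from the virtual robot to the physical one.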
Preprint
Robots can greatly alleviate physical demands on construction workers while enhancing both the productivity and safety of construction projects. Leveraging a Building Information Model (BIM) offers a natural and promising approach to drive a robotic construction workflow. However, because of uncertainties inherent on construction sites, such as discrepancies between the designed and as-built workpieces, robots cannot solely rely on the BIM to guide field construction work. Human workers are adept at improvising alternative plans with their creativity and experience and thus can assist robots in overcoming uncertainties and performing construction work successfully. This research introduces an interactive closed-loop digital twin system that integrates a BIM into human-robot collaborative construction workflows. The robot is primarily driven by the BIM, but it adaptively adjusts its plan based on actual site conditions while the human co-worker supervises the process. If necessary, the human co-worker intervenes in the robot's plan by changing the task sequence or target position, requesting a new motion plan, or modifying the construction component(s)/material(s) to help the robot navigate uncertainties. To investigate the physical deployment of the system, a drywall installation case study is conducted with an industrial robotic arm in a laboratory. In addition, a block pick-and-place experiment is carried out to evaluate system performance. Integrating the flexibility of human workers and the autonomy and accuracy afforded by the BIM, the system significantly increases the robustness of construction robots in the performance of field construction work.
... We have developed a realistic simulator using the Unity game engine. This platform was created for developing video games but is becoming popular in robotics for creating simulators (Konrad, 2019), operator interfaces (Roldán et al., 2019), and data sets (Borkman et al., 2021). In the context of this work, Unity offers a flexible environment to develop a realistic yet lightweight simulator. ...
Article
Intervention teams act in hostile scenarios where reducing mission times and accident risks is critical. In these situations, the availability of accurate information about the environment plays a key role in ensuring the well‐being of rescuers and victims. This information required to plan the interventions in indoor emergencies encompasses the location of fires and the presence of dangerous gases. Robotics and remote sensing technologies can help emergency teams to obtain this information in real‐time without exposing themselves. Additionally, the accurate simulation of the environments allows the teams to plan their interventions, creating routes to safely access the affected areas and evacuate the victims. This article presents a robotic solution developed to satisfy the demands of intervention teams. More specifically, it describes an autonomous ground robot that can obtain real‐time location and environmental data from indoor fires, as well as a simulator that reproduces these emergency scenarios and facilitates mission planning. In this way, emergency teams can know the conditions in the scenario before, during, and after the intervention. Thus, risks are minimized by improving their situational awareness and reducing their exposure times during the mission. The system has been developed and validated in collaboration with the end‐users and under realistic harsh environments. During these experiments, the robot was used to detect fire sources and cold smoke and provide environmental information to firefighters. Additionally, the simulator provided alternative routes for accessing and exiting the scene faster and safer by dodging potentially dangerous areas.
... Several groups have investigated coupled ROS-Gazebo-Unity3D environments with some success [22][23][24][25]. Typically, the ROS-Gazebo-Unity3D environment is structured such that the physics simulations are performed in Gazebo while the visuals are captured in Unity3D with the position of the robot mediated through ROS. ...
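When robot poses are mediated from Gazebo to Unity3D through ROS as described here, the frames must be reconciled: ROS uses a right-handed, Z-up convention, while Unity is left-handed and Y-up. The axis mapping below is the one commonly used by ROS-Unity bridges, but the exact convention depends on the bridge in use, so treat it as an assumption to verify:

```python
def ros_to_unity_position(p):
    """Convert a ROS position (right-handed, Z-up, forward-left-up)
    to Unity's left-handed, Y-up frame.
    p is an (x, y, z) tuple in ROS coordinates."""
    x, y, z = p
    # Unity x <- -ROS y (left becomes right), Unity y <- ROS z (up),
    # Unity z <- ROS x (forward).
    return (-y, z, x)

# A robot 2 m ahead and 1 m to the left in ROS coordinates:
print(ros_to_unity_position((2.0, 1.0, 0.0)))  # (-1.0, 0.0, 2.0)
```

Orientations need an analogous quaternion remapping; applying only the position mapping would leave rotations mirrored.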
Article
Simulation has proven to be a highly effective tool for validating autonomous systems while lowering cost and increasing safety. Currently, several dedicated simulation environments exist, but they are limited in terms of environment size, visual quality, and feature sets. As a result, many researchers have begun to consider repurposing game engines as simulators to take advantage of their greater flexibility, scalability, and graphical fidelity. This study investigates a robotics simulation environment based on the Unity3D game engine and Robot Operating System (ROS) middleware, collectively referred to as ROS-Unity3D, and compares it to the popular ROS-Gazebo robotics simulation environment. They are compared in terms of their architecture, environment creation process, resource usage, and accuracy while simulating an autonomous ground robot in real-time. Overall, with its variety of supported file types and powerful scripting interface for creating custom functionality, ROS-Unity3D is found to be a viable alternative to ROS-Gazebo. Test results indicate that ROS-Unity3D scales better to larger environments, has higher shadow quality, achieves the same or better visual-based SLAM performance, and is more capable of real-time LiDAR simulation than ROS-Gazebo. As for its advantages over ROS-Unity3D, ROS-Gazebo has a more streamlined interface between ROS and Gazebo, has more existing sensor plugins, and is more computer resource efficient for simulating small environments.
... Among the different immersive technologies, virtual reality has been chosen over augmented reality (AR), since it is more suitable for the exploration of extensive areas with large amounts of data [42]. VR requires the integration of different elements to deal with data description and management, display and rendering techniques, and the integration of users in simulation loops [43]. ...
Article
Full-text available
Smart cities have emerged as a strategy to solve problems that current cities face, such as traffic, security, resource management, waste, and pollution. Most current approaches are based on deploying large numbers of sensors throughout the city and have limitations in obtaining relevant and up-to-date data. In this paper, as an extension of our previous investigations, we propose a robotic swarm to collect data on traffic, pedestrians, climate, and pollution. This data is sent to a base station, where it is processed to generate maps and presented in an immersive interface. To validate these developments, we use a virtual city called SwarmCity with models of traffic, pedestrians, climate, and pollution based on real data. The whole system has been tested with several subjects to assess whether the information collected by the drones, processed in the base station, and represented in the virtual reality interface is appropriate. Results show that the complete solution, i.e., fleet control, data fusion, and operator interface, allows monitoring the relevant variables in the simulated city.
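The data-fusion step in the pipeline above (swarm readings collected, then processed at a base station into maps) can be illustrated with a toy grid-averaging routine. The function and parameter names are illustrative, not taken from the cited system:

```python
from collections import defaultdict

def build_map(samples, cell=10.0):
    """Aggregate georeferenced readings (e.g. pollution levels) into a
    coarse grid map by averaging all samples falling in the same cell.
    samples: iterable of (x, y, value); cell: cell edge length in metres."""
    acc = defaultdict(list)
    for x, y, value in samples:
        acc[(int(x // cell), int(y // cell))].append(value)
    return {c: sum(v) / len(v) for c, v in acc.items()}

readings = [(3.0, 4.0, 40.0), (7.0, 2.0, 60.0), (25.0, 4.0, 80.0)]
print(build_map(readings))  # {(0, 0): 50.0, (2, 0): 80.0}
```

A real base station would add interpolation between cells and time-windowing, but the core idea is the same reduction from scattered samples to a per-cell map layer.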
... Another recent work that employed ROS with an underwater robot is [41], where the authors proposed a new system design for underwater vehicles, combining the best features of both remotely operated vehicle and autonomous underwater vehicle technologies. A recent survey paper [40] describes a series of works developed to integrate ROS-based robots with Unity-based virtual reality interfaces. Several technologies and resources are analyzed, and multiple ROS packages and Unity assets are applied, such as multimaster_fkie, rosbridge_suite, RosBridgeLib, and SteamVR. ...
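The rosbridge suite mentioned in this snippet exchanges JSON messages over a websocket. The "subscribe" and "publish" operations below follow the rosbridge v2 protocol; the topics and message contents are made-up examples:

```python
import json

def make_subscribe(topic, msg_type):
    # rosbridge v2 protocol: a client subscribes by sending this JSON
    # object over the websocket; rosbridge then forwards each incoming
    # ROS message as {"op": "publish", "topic": ..., "msg": ...}.
    return json.dumps({"op": "subscribe", "topic": topic, "type": msg_type})

def make_publish(topic, msg):
    # Publishing works in reverse: the client sends a "publish" op and
    # rosbridge injects the message into the ROS graph.
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})

sub = make_subscribe("/odom", "nav_msgs/Odometry")
cmd = make_publish("/cmd_vel", {"linear": {"x": 0.2, "y": 0.0, "z": 0.0},
                                "angular": {"x": 0.0, "y": 0.0, "z": 0.3}})
print(sub)
```

On the Unity side, libraries such as RosBridgeLib wrap exactly this serialization, so C# scripts deal with typed message classes rather than raw JSON.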
Article
The availability of frameworks and applications in the robotic domain has in recent years fostered widespread adoption of robots in daily-life activities. Many of these activities include robot teleoperation, i.e., controlling the robot's movements remotely. Virtual Reality (VR) has demonstrated its effectiveness in lowering the skill barrier for such a task. This paper discusses the engineering and implementation of a general-purpose, open-source framework for teleoperating a humanoid robot through a VR headset. It includes a VR interface for articulating different robot actions using the VR controllers, without the need for training. In addition, it exploits the Robot Operating System (ROS) for the control and synchronization of the robot hardware, the distribution of the computation, and its scalability. The framework supports extension for operating other types of robots and using different VR configurations. We carried out a user experience evaluation with twenty users using System Usability Scale questionnaires and with six stakeholders on five different scenarios using the Software Architecture Analysis Method.
... Unity offers an easy user interface and a rich integrated development environment for robot simulations [56]. For example, Unity-based simulators have been linked with the ROS to develop and test navigation and control algorithms of unmanned aerial or ground vehicles [57], [58] as well as multirobot systems [59]. The virtual locomotion test environment and robot models were created in Unity 2018.4.12f1 (Unity Technologies ...
Article
Adaptability is a fundamental yet challenging requirement for mobile robot locomotion. This article presents α-WaLTR, a new adaptive wheel-and-leg transformable robot for versatile multiterrain mobility. The robot has four passively transformable wheels, where each wheel consists of a central gear and multiple leg segments with embedded spring suspension for vibration reduction. These wheels enable the robot to traverse various terrains, obstacles, and stairs while retaining the simplicity in primary control and operation principles of conventional wheeled robots. The chassis dimensions and the center of gravity location were determined by a multiobjective design optimization process aimed at minimizing the weight and maximizing the robot's pitch angle for obstacle climbing. Unity-based simulations guided the selection of the design variables associated with the transformable wheels. Following the design process, α-WaLTR with an embedded sensing and control system was developed. Experiments showed that the spring suspension on the wheels effectively reduced the vibrations when operated in the legged mode and verified that the robot's versatile locomotion capabilities were highly consistent with the simulations. The system-level integration with an embedded control system was demonstrated via autonomous stair detection, navigation, and climbing capabilities.