Post-processed map. Point cloud with corrected edges and objects.

Source publication
Chapter
Full-text available
The current state of technology permits very accurate 3D reconstructions of real scenes, acquiring information through quite different sensors. High-precision modelling that allows any element of the environment to be simulated on virtual interfaces has also been achieved. This paper illustrates a methodology to correctly model a 3D reconstr...

Context in source publication

Context 1
... step increases the correct identification of features, avoiding mapping an already visited area twice, as well as improving the alignment of object edges. Figure 4 presents the improved point cloud with fairly well corrected edges and objects. ...
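
The post-processing the excerpt describes (cleaning the cloud and re-aligning revisited areas so that object edges line up) can be sketched with the open-source Open3D library. This is a minimal sketch under assumed file names and numeric parameters; the paper's actual pipeline is not specified in this excerpt:

```python
import numpy as np
import open3d as o3d

# Two overlapping scan fragments of the mapped area (file names assumed).
source = o3d.io.read_point_cloud("scan_fragment_a.pcd")
target = o3d.io.read_point_cloud("scan_fragment_b.pcd")

def clean(pcd):
    """Downsample and drop statistical outliers, which sharpens object edges."""
    pcd = pcd.voxel_down_sample(voxel_size=0.02)
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
    return pcd

source, target = clean(source), clean(target)

# Point-to-plane ICP re-aligns the fragments so a revisited area does not
# end up mapped twice as a duplicated, misaligned copy.
result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.05, init=np.identity(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
source.transform(result.transformation)
o3d.io.write_point_cloud("map_postprocessed.pcd", source + target)
```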

Citations

... First, parametric 3D modeling or CAD (computer-aided design), such as AutoCAD and SketchUp, is the preferred method for engineers and designers to create models by setting the parameters as the real thing: materials, weight, ... [Fused figure caption: (a) urban building reconstruction (Leotta et al. (2019)); (b) a room scene 3D reconstruction with an RGB-D camera (Navarro et al. (2017)); (c) a remote client-server system for real-time reconstruction (Stotko et al. (2019)).] ...
... For example, Leotta et al. reconstructed urban buildings by combining segmentation methods into mesh models (Leotta et al. (2019), Figure 3a). Navarro et al. used the mesh method to reconstruct indoor rooms as virtual worlds (Navarro et al. (2017), Figure 3b). Some researchers have also studied how to give multiple remote clients real-time reconstruction access in the virtual environment. ...
Article
Full-text available
The metaverse is a visual world that blends the physical and digital worlds. At present, the development of the metaverse is still at an early stage, and a framework for its visual construction and exploration is lacking. In this paper, we propose a framework that summarizes how graphics, interaction, and visualization techniques support the visual construction of the metaverse and user-centric exploration. We introduce three kinds of visual elements that compose the metaverse and two graphical construction methods in a pipeline. We propose a taxonomy of interaction technologies based on interaction tasks, user actions, feedback and various sensory channels, and a taxonomy of visualization techniques that assist user awareness. Current potential applications and future opportunities are discussed in the context of the visual construction and exploration of the metaverse. We hope this paper can provide a stepping stone for further research in the area of graphics, interaction and visualization in the metaverse.
... acquiring the scenario before the mission and detecting and mapping obstacles as it is performed). However, real-time reconstruction and immersion are more a goal than a reality, since the computational cost of reconstructing the environment, improving the 3D map, creating a mesh and importing it into virtual reality is very high [53]. Furthermore, the interfaces must show diverse types of information: from the representation of the robots and their movements in the scenario, to the integration of their state, the readings of their sensors and the images from their cameras. ...
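
As a rough illustration of why that cost is high, the steps the excerpt names (improving the 3D map, creating a mesh, exporting it for a VR engine) can each be sketched with Open3D; the file names and parameters below are assumptions, and a full immersive pipeline adds texturing and engine import on top:

```python
import numpy as np
import open3d as o3d

# Assumed input: the post-processed point cloud of the environment.
pcd = o3d.io.read_point_cloud("map_postprocessed.pcd")
pcd.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Poisson surface reconstruction: typically the dominant cost of the pipeline.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Trim poorly supported surface patches (low-density vertices) before export.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))

# Export to a format a VR engine such as Unity can import.
o3d.io.write_triangle_mesh("scene_mesh.obj", mesh)
```

Poisson reconstruction alone can take from seconds to minutes on a dense indoor cloud, which is why the excerpt treats real-time reconstruction and immersion as a goal rather than a reality.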
Chapter
Full-text available
This chapter describes a series of works developed in order to integrate ROS-based robots with Unity-based virtual reality interfaces. The main goal of this integration is to develop immersive monitoring and commanding interfaces, able to improve the operator's situational awareness without increasing their workload. In order to achieve this, the available technologies and resources are analyzed and multiple ROS packages and Unity assets are applied, such as multimaster_fkie, rosbridge_suite, RosBridgeLib and SteamVR. Moreover, three applications are presented: an interface for monitoring a fleet of drones, another interface for commanding a robot manipulator, and an integration of multiple ground and aerial robots. Finally, some experiences and lessons learned, useful for future developments, are reported.
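
The glue in this integration is rosbridge_suite, which exposes ROS topics over a websocket that Unity (through RosBridgeLib) consumes as JSON. Below is a minimal sketch of the same bridge from the Python side, using the roslibpy client; the topic names and message contents are illustrative assumptions, not the chapter's actual interface:

```python
import roslibpy

# Connect to rosbridge_suite's websocket server (default port 9090).
client = roslibpy.Ros(host='localhost', port=9090)
client.run()

# Publish a velocity command, as a Unity-side interface would over the same socket.
cmd_vel = roslibpy.Topic(client, '/cmd_vel', 'geometry_msgs/Twist')
cmd_vel.publish(roslibpy.Message({
    'linear': {'x': 0.2, 'y': 0.0, 'z': 0.0},
    'angular': {'x': 0.0, 'y': 0.0, 'z': 0.1},
}))

# Subscribe to odometry to drive the pose of the virtual robot model.
def on_odom(message):
    print('robot at', message['pose']['pose']['position'])

odom = roslibpy.Topic(client, '/odom', 'nav_msgs/Odometry')
odom.subscribe(on_odom)
```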
Article
Full-text available
The Metaverse, a virtual reality (VR) space where users can interact with each other and with digital objects, is rapidly becoming a reality. As this new world evolves, Artificial Intelligence (AI) is playing an increasingly important role in shaping its development. Integrating AI with emerging technologies in the Metaverse creates new possibilities for immersive experiences that were previously impossible. This paper explores how AI is integrated with technologies such as the Internet of Things, blockchain, Natural Language Processing, virtual reality, Augmented Reality, Mixed Reality, and Extended Reality. One potential benefit of using AI in the Metaverse is the ability to create personalized experiences for individual users based on their behavior and preferences. Another is the ability to automate repetitive tasks, freeing up time and resources for more complex and creative endeavors. However, there are also challenges associated with using AI in the Metaverse, such as ensuring user privacy and addressing issues of bias and discrimination. By examining the potential benefits and challenges of using AI in the Metaverse, including ethical considerations, we can better prepare for this exciting new era of VR. This paper presents a comprehensive survey of AI and its integration with other emerging technologies in the Metaverse. As the Metaverse continues to evolve and grow, it will be important for developers and researchers to stay up to date with the latest developments in AI and emerging technologies to fully leverage their potential.
Chapter
Throughout the ages, scientists and researchers have wanted to discover and deal with all the objects in our concrete world. Using classical methods alone, restrictions stand in their way as the problems they want to solve become more complicated and overlapping. To achieve their goal, they began searching for new techniques, and one technique that has crystallized in the last decade is reconstructing objects in digital form, where they become much easier to work with. Three-dimensional (3D) reconstruction is one of the most iconic techniques, attracting researchers to visualize a physical object in a 3D representation and make it examinable. 3D reconstruction has applications in many disciplines, such as the medical field, urban areas, civil engineering, games, and virtual reality. Recently, several studies have explored the use of 3D reconstruction and its integration with technologies like Virtual Reality (VR). In this paper, a review of the different approaches used to reconstruct an object in a 3D environment is presented. The challenges encountered in the process of creating a 3D model are also discussed, along with proposals to overcome them. Keywords: 3D reconstruction, Virtual reality, 2D image visualization
Article
Simulations and synthetic datasets have historically empowered research on different service-robotics problems, and they are being revamped nowadays through the use of rich virtual environments. However, with their use, special attention must be paid so that the resulting algorithms are not biased by the synthetic data and can generalize to real-world conditions. These aspects are usually compromised when the virtual environments are manually designed. This article presents Robot@VirtualHome, an ecosystem of virtual environments and tools that allows for the management of realistic virtual environments where robotic simulations can be performed. Here "realistic" means that those environments have been designed by mimicking the room layouts and objects appearing in 30 real houses, hence not being influenced by the designer's knowledge. The provided virtual environments are highly customizable (lighting conditions, textures, objects' models, etc.), accommodate meta-information about the elements appearing therein (objects' types, room categories and layouts, etc.), and support the inclusion of virtual service robots and sensors. To illustrate the possibilities of Robot@VirtualHome we show how it has been used to collect a synthetic dataset, and we also exemplify how to exploit it to successfully face two service-robotics problems: semantic mapping and appearance-based localization.
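
A hypothetical sketch of consuming the kind of per-house meta-information the abstract describes (room categories, object types); the file name and schema are assumptions for illustration, not Robot@VirtualHome's actual format:

```python
import json

# Assumed file and schema, for illustration only.
with open('house_01_meta.json') as f:
    meta = json.load(f)

# Index object types by the room they belong to, e.g. to ground a
# semantic query such as "which objects are in the kitchen?".
objects_by_room = {}
for obj in meta['objects']:  # assumed schema: [{'type': ..., 'room': ...}, ...]
    objects_by_room.setdefault(obj['room'], []).append(obj['type'])

print(objects_by_room.get('kitchen', []))
```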
Article
Full-text available
This paper presents an integrated mapping of motion and visualization scheme based on a Mixed Reality (MR) subspace approach for the intuitive and immersive telemanipulation of robotic arm-hand systems. The effectiveness of different control-feedback methods for the teleoperation system is validated and compared. The robotic arm-hand system consists of a 6-Degrees-of-Freedom (DOF) industrial manipulator and a low-cost 2-finger gripper, which can be manipulated in a natural manner by novice users physically distant from the working site. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time 3D visual feedback from the robot working site. Imitation-based, velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control, and it enables spatial velocity-based control of the robot Tool Center Point (TCP). The user control space and robot working space are overlaid through the MR subspace, and the local user and a digital twin of the remote robot share the same environment in the MR subspace. The MR-based motion and visualization mapping scheme for telerobotics is compared to conventional 2D Baseline and MR tele-control paradigms over two tabletop object manipulation experiments. A user survey of 24 participants was conducted to demonstrate the effectiveness and performance enhancements enabled by the proposed system. The MR-subspace-integrated 3D mapping of motion and visualization scheme reduced the aggregate task completion time by 48% compared to the 2D Baseline module and by 29% compared to the MR SpaceMouse module. The perceived workload decreased by 32% and 22% compared to the 2D Baseline and MR SpaceMouse approaches, respectively.
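
The velocity-centric mapping idea can be illustrated in a few lines of numpy: operator hand displacement over one control cycle becomes a scaled, clamped spatial velocity command for the robot TCP. The gains, rates and clamping policy below are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

DT = 0.02      # control period in seconds (50 Hz, assumed)
GAIN = 1.5     # hand-to-robot velocity scaling factor (assumed)
V_MAX = 0.25   # TCP linear speed limit in m/s (assumed safety envelope)

def tcp_velocity(hand_pos_prev, hand_pos_curr):
    """Map tracked hand motion over one cycle to a clamped TCP velocity command."""
    v = GAIN * (np.asarray(hand_pos_curr) - np.asarray(hand_pos_prev)) / DT
    speed = np.linalg.norm(v)
    if speed > V_MAX:  # rescale so the command stays inside the speed limit
        v *= V_MAX / speed
    return v

# A 4 mm forward, 2 mm upward hand motion within one 20 ms cycle:
print(tcp_velocity([0.0, 0.0, 0.0], [0.004, 0.0, 0.002]))
```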
Article
Full-text available
Semantic maps augment traditional representations of robot workspaces, typically based on their geometry and/or topology, with meta-information about the properties, relations and functionalities of their composing elements. A piece of such information could be: fridges are appliances typically found in kitchens and employed to keep food in good condition. Thereby, semantic maps allow for the execution of high-level robotic tasks in an efficient way, e.g. "Hey robot, store the leftover salad". This paper presents ViMantic, a novel semantic mapping architecture for the building and maintenance of such maps, which brings together a number of features demanded by modern mobile robotic systems, including: (i) a formal model, based on ontologies, which defines the semantics of the problem at hand and establishes mechanisms for its manipulation; (ii) techniques for processing sensory information and automatically populating maps with, for example, objects detected by cutting-edge CNNs; (iii) distributed execution capabilities through a client–server design, making the knowledge in the maps accessible and extendable to other robots/agents; (iv) a user interface that allows for the visualization of and interaction with relevant parts of the maps through a virtual environment; and (v) public availability, hence being ready to use on robotic platforms. The suitability of ViMantic has been assessed using Robot@Home, a vast repository of data collected by a robot in different houses. The experiments carried out consider different scenarios with one or multiple robots, from which we have extracted satisfactory results regarding automatic population, execution times, and the required memory size of the resultant semantic maps.
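
A minimal, hypothetical sketch of the kind of semantic map entry such an architecture maintains: geometry plus meta-information (type, room, properties) that a high-level task like "store the leftover salad" can query. The classes and fields are illustrative, not ViMantic's actual ontology-backed model:

```python
from dataclasses import dataclass, field

@dataclass
class SemanticObject:
    label: str        # e.g. assigned by a CNN object detector
    room: str         # room category the object was observed in
    position: tuple   # (x, y, z) in the map frame
    properties: dict = field(default_factory=dict)

@dataclass
class SemanticMap:
    objects: list = field(default_factory=list)

    def find(self, label, room=None):
        """Return objects matching a label, optionally restricted to a room."""
        return [o for o in self.objects
                if o.label == label and (room is None or o.room == room)]

m = SemanticMap()
m.objects.append(SemanticObject('fridge', 'kitchen', (2.1, 0.4, 0.0),
                                {'keeps': 'food'}))
print(m.find('fridge', room='kitchen'))
```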