The point cloud map of the rescue arena, which is coloured according to the height of each point.

Source publication
Article
Full-text available
Human–robot interaction is a vital part of human–robot collaborative space exploration, bridging the high-level decision-making and path-planning intelligence of humans and the accurate sensing and modelling ability of the robot. However, most conventional human–robot interaction approaches rely on video streams for the operator to understand the robo...

Citations

... Therefore, the key difficulty in creating virtual humans lies in how to simulate the complex processes of human interaction found in real environments [8][9]. Humans, virtual humans, and machines can interact with one another across time, space, and media to build a new kind of interactive relationship network, breaking the boundary between real and virtual space; the mode of interpersonal interaction will be redefined accordingly [10][11]. ...
Article
Full-text available
In the era of rapid technological development, the integration of artificial intelligence and virtual reality technology has gradually become a research hotspot. Interpersonal interaction, as an important part of virtual reality, presents both opportunities and challenges. After exploring the key technologies of virtual interpersonal interaction based on the UGC model, the study builds a virtual interpersonal interaction model using an emotion model with hierarchical emotions as the learning algorithm. Finally, complex interpersonal interaction in virtual reality is simulated to examine how virtual communities are used and the factors affecting interpersonal interaction within them. According to the results, the percentage of female users of virtual communities is 14.28 percentage points higher than that of male users. Users aged 18-40 make up the majority, accounting for 88.01% of virtual community users. In terms of education, bachelor's degree holders use virtual communities the most frequently, accounting for 66.52%. The study concluded that 8 of its 11 proposed hypotheses were valid. Among them, hypotheses 3 and 11 were significant at the 0.01 level, and hypothesis 9 had a significance of 0.017, reaching the 0.05 level.
... Using stereo cameras and virtual reality technology, direct human-robot communication is integrated into the mixed reality interface, allowing the human operator to make changes to the collaboration model online while providing feedback on the execution status of the collaborative task [77], and using multiple sensors to achieve safe and contactless HRC, as shown in Fig. 8 below. An HRC approach based on real-time mapping and online virtual reality visualization was proposed by Xiao et al. [78]. Wang et al. [79] used the HTC VIVE to build a virtual environment that allows a human operator to observe the work site through a motion-tracking headset. ...
Article
Full-text available
Collaborative robots, also known as cobots, are designed to work alongside humans in a shared workspace and provide assistance to them. With the rapid development of robotics and artificial intelligence in recent years, cobots have become faster, smarter, more accurate, and more dependable. They have found applications in a broad range of scenarios where humans require assistance, such as in the home, healthcare, and manufacturing. In manufacturing, in particular, collaborative robots combine the precision and strength of robots with the flexibility of human dexterity to replace or aid humans in highly repetitive or hazardous manufacturing tasks. However, human–robot interaction still needs improvement in terms of adaptability, decision making, and robustness to changing scenarios and uncertainty, especially in the context of continuous interaction with human operators. Collaborative robots and humans must establish an intuitive and understanding rapport to build a cooperative working relationship. Therefore, human–robot interaction is a crucial research problem in robotics. This paper provides a summary of the research on human–robot interaction over the past decade, with a focus on interaction methods in human–robot collaboration, environment perception, task allocation strategies, and scenarios for human–robot collaboration in manufacturing. Finally, the paper presents the primary research directions and challenges for the future development of collaborative robots.
... Embedded hardware architectures play an increasingly important role in many fields [1]. With the rapid development of robot intelligence, ever more data is exchanged between a robot's sensors, actuators, and computing devices, placing higher demands on the reliability and power consumption of the controller [2]. At the same time, most conventional robot control, such as traditional joystick + 2D display control, offers only a single mode of participation: the operating experience is almost "senseless", lacking immersion, and easily causes boredom and distraction, so operational efficiency is low, leading to a series of problems [3,4]. With the rapid development of virtual reality technology in recent years, there is another option for controlling robots, namely using VR devices to assist control [5]. Operators can achieve subjective environment perception in a VR head-mounted display, moving from a single manual control mode to immersive control that matches people's everyday behaviour, and control robots to work in depth. ...
Article
Full-text available
According to the control requirements of the robot, a microcontroller was designed, including the interface for the wireless transmission module NRF24L01, the sampling circuit for collecting pressure-sensing values, the DC motor drive module, and the auxiliary circuitry that ensures normal operation of the controller. After the hardware passed testing, the design of the pan–tilt (PTZ) control system was completed. The virtual reality environment was constructed using Unity and Visual Studio, with an Oculus headset as the head-mounted display; the Oculus pose sensors were then used to analyze and extract pose data for controlling the PTZ unit. An industrial camera sends back video, targets are recognised with an HSV-based algorithm, and the video images are displayed in the virtual environment. These data were sent to the gimbal control system to complete the virtual reality environment. Finally, debugging and verification of the sensing-and-driving hardware architecture and the virtual reality environment were completed, and scenario tests were conducted for gimbal following and for a force-feedback virtual interaction ball, verifying that virtual reality technology can assist in controlling the robot and provide an immersive control experience.
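The HSV-based target recognition step can be illustrated with a short OpenCV sketch. This is a minimal, hypothetical example, not the authors' implementation: the colour bounds, frame source, and minimum-area threshold are assumptions chosen for illustration.

```python
import cv2
import numpy as np

# Hypothetical HSV bounds for a red target; the paper does not specify the colour range.
LOWER_HSV = np.array([0, 120, 70])
UPPER_HSV = np.array([10, 255, 255])

cap = cv2.VideoCapture(0)  # stand-in for the industrial camera stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)  # keep pixels inside the HSV range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:  # ignore small noise blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("target", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```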
... Appropriate map representations can be applied effectively in the implementation of human-robot interaction. Furthermore, to enhance the efficiency and novelty of human-robot interaction, immersive interfaces based on virtual reality (VR) can be developed and utilized to better link the intelligent decision-making of human beings with the precise sensing and high motion ability of robots (Xiao et al., 2020). ...
... WebSocket is a network protocol that defines how servers and clients communicate over the Internet (Wang et al., 2013), and only one handshake is required to establish the communication connection. To better develop immersive human-robot interfaces and instruct robots to perform specific tasks, Xiao et al. (2020) presented a novel interaction method based on real-time mapping and online VR visualization. Dense point-cloud maps were created in real time on the robot side, and the three-dimensional normal distributions transform maps were then transmitted via the wireless network to the remote control station. ...
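As a concrete illustration of the single-handshake communication pattern described above, the following minimal sketch uses Python's websockets library to push serialized map chunks from a robot-side server to a remote client. The message format, host, and port are illustrative assumptions; the cited work does not prescribe this exact implementation.

```python
import asyncio
import websockets

async def map_stream(websocket):
    # After the single WebSocket handshake, the connection stays open,
    # so map updates can be pushed without re-establishing it.
    for chunk_id in range(3):
        payload = f"ndt_map_chunk_{chunk_id}".encode()  # placeholder for serialized map data
        await websocket.send(payload)
        await asyncio.sleep(1.0)

async def main():
    # Hypothetical robot-side endpoint; a remote client connects with ws://<robot>:8765.
    async with websockets.serve(map_stream, "0.0.0.0", 8765):
        await asyncio.Future()  # serve forever

asyncio.run(main())
```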
Article
Full-text available
Navigation and autonomous operation in farmland or greenhouse environments present significant challenges for agricultural mobile robots on account of unstructured scene conditions and complex task requirements. Immersive interfaces based on human–robot interaction have been extensively developed, bridging the high-level decision-making of human beings and the precise sensing and high motion ability of robots. In this study, we propose a human–robot interaction framework based on three-dimensional mapping and virtual reality (VR) visualization. On the construction client, dense three-dimensional point-cloud maps are created with the simultaneous localization and mapping real-time appearance-based mapping (RTAB-Map) algorithm and a Kinect-style depth camera. Subsequently, point-cloud filtering and mesh-creation methods are introduced for postprocessing the map models. A data communication network transmits the optimized models to the VR-based exploration client. Utilizing VR visualization, the operator can gain an intuitive and comprehensive understanding of the environments to be explored. The three-dimensional agricultural scenes are mapped into VR models through the system, which couples the physical world with the VR space more closely. Compared with the video-streaming-based method for three-dimensional mapping, the map models with colors and textures are displayed in their entirety in the VR headset, providing robust and global human–robot interaction interfaces and considerably enhancing immersive exploration for human beings. Experimental results indicate that the three-dimensional map models exhibit relatively sufficient construction integrity and yield favorable optimization effects. Additionally, according to a user questionnaire survey, immersive VR-based interfaces can be utilized more effectively for the three-dimensional mapping of mobile robots. Research evidence and results demonstrate that our proposed system framework offers positive assistance and support for enhancing human–robot interaction and executing specific robotic tasks in unstructured field agricultural environments. Code implementation and related data sets are shared at the link https://github.com/WangDongBUAA/Mapping_Interfacing .
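The postprocessing pipeline the abstract describes (point-cloud filtering followed by mesh creation) can be sketched with Open3D. This is a hedged approximation rather than the authors' code: the input path, voxel size, outlier parameters, and Poisson depth are illustrative assumptions.

```python
import open3d as o3d

# Load a dense point cloud, e.g. one exported from RTAB-Map (path is illustrative).
pcd = o3d.io.read_point_cloud("scene.ply")

# Filtering: downsample on a voxel grid, then drop statistical outliers.
pcd = pcd.voxel_down_sample(voxel_size=0.02)
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Mesh creation: Poisson reconstruction requires consistent normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# Export the optimized model for transmission to the VR client.
o3d.io.write_triangle_mesh("scene_mesh.ply", mesh)
```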
... Industry and Remote Work. As Industry 4.0 progressed, virtual devices have proven their benefits in many sectors: in the design cycle of products and manufacturing systems [13], for programming machines [14], in the teleoperation industry [15], [16] and also for training novices [17], [18]. In any of these applications, virtual technologies provide the operator with a faithful virtual equivalent of the physical environment. ...
Article
Full-text available
Augmented and Virtual Reality (AR, VR), collectively known as Extended Reality (XR), are increasingly gaining traction thanks to their technical advancement and the need for remote connections, recently accentuated by the pandemic. Remote surgery, telerobotics, and virtual offices are only some examples of their successes. As users interact with AR and VR, they generate extensive behavioral data usually leveraged for measuring human activity, which could be used for profiling users' identities or personal information (e.g., gender). However, several factors affect the efficiency of profiling, such as the technology employed, the action taken, the mental workload, the presence of bias, and the sensors available. To date, no study has considered all of these factors together and in their entirety, limiting the current understanding of XR profiling. In this work, we provide a comprehensive study on user profiling in virtual technologies (AR, VR). Specifically, we employ machine learning on behavioral data (i.e., head, controller, and eye data) to identify users and infer their individual attributes (i.e., age, gender). Toward this end, we propose a general framework that can potentially infer any personal information from any virtual scenario. We test our framework on eleven generic actions (e.g., walking, searching, pointing) involving low and high mental loads, derived from two distinct use cases: an AR everyday application (34 participants) and VR robot teleoperation (35 participants). Our framework limits the burden of creating technology- and action-dependent algorithms, also reducing the experimental bias evidenced in previous work, and provides a simple (yet effective) baseline for future works. We identified users with up to 97% F1-score in VR and 80% in AR. Gender and age inference was also facilitated in VR, reaching up to 82% and 90% F1-score, respectively. Through an in-depth analysis of the sensors' impact, we found VR profiling to be more effective than AR, mainly because of the presence of eye sensors.
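To make the profiling pipeline concrete, here is a minimal scikit-learn sketch that classifies users from motion-derived features. The synthetic features, the random-forest choice, and the window counts are assumptions for illustration; the paper's actual features and models may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for per-window behavioral features
# (e.g. head/controller velocities, gaze statistics): 20 users, 50 windows each.
n_users, windows_per_user, n_features = 20, 50, 12
X = rng.normal(size=(n_users * windows_per_user, n_features))
# Give each user a distinctive offset so identities are learnable.
X += np.repeat(rng.normal(scale=0.8, size=(n_users, n_features)), windows_per_user, axis=0)
y = np.repeat(np.arange(n_users), windows_per_user)  # user-identity labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("identification F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```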
... With the advent of Industry 4.0, the benefits of virtual devices have been repeatedly shown in many domains: in the design cycle of products and manufacturing systems [6], for programming machines [7], in the teleoperation industry [8,9], and also for training novices [10,11]. In any of these applications, virtual technologies allow the operator to perform work tasks while immersed in a virtual environment that faithfully emulates the physical one. ...
Preprint
Full-text available
Virtual and Augmented Reality (VR, AR) are increasingly gaining traction thanks to their technical advancement and the need for remote connections, recently accentuated by the pandemic. Remote surgery, telerobotics, and virtual offices are only some examples of their successes. As users interact with VR/AR, they generate extensive behavioral data usually leveraged for measuring human behavior. However, little is known about how this data can be used for other purposes. In this work, we demonstrate the feasibility of user profiling in two different use-cases of virtual technologies: AR everyday application ($N=34$) and VR robot teleoperation ($N=35$). Specifically, we leverage machine learning to identify users and infer their individual attributes (i.e., age, gender). By monitoring users' head, controller, and eye movements, we investigate the ease of profiling on several tasks (e.g., walking, looking, typing) under different mental loads. Our contribution gives significant insights into user profiling in virtual environments.
... Conventional human-robot interaction approaches often lack feedback about the environment around the robot, which can cause stress and fatigue to the operator [38]. On the other hand, the symbiotic HRC approach involves sensors that offer more information about the environment, leading to better planning of tasks, ease of programming the robot, natural form of interaction and decision support [35]. ...
Article
Full-text available
Industry 4.0, as an enabler of smart factories, focuses on flexible automation and customization of products by utilizing technologies such as the Internet of Things and cyber–physical systems. These technologies can also support the creation of virtual replicas which exhibit real-time characteristics of a physical system. These virtual replicas are commonly referred to as digital twins. With the increased adoption of digitized products, processes and services across manufacturing sectors, digital twins will play an important role throughout the entire product lifecycle. At the same time, collaborative robots have begun to make their way onto the shop floor to aid operators in completing tasks through human–robot collaboration. Therefore, the focus of this paper is to provide insights into approaches used to create digital twins of human–robot collaboration and the challenges in developing these digital twins. A review of different approaches for the creation of digital twins is presented, and the function and importance of digital twins in human–robot collaboration scenarios are described. Finally, the paper discusses the challenges of creating a digital twin, in particular the complexities of modelling the digital twin of human–robot collaboration and the exactness of the digital twin with respect to the physical system.
... The remote control is accomplished by using the keyboard to communicate directly with the robot via the Roslibjs library, allowing the user to change both the orientation and the speed of the robot. While the web interface provides the most responsiveness to the operator, it suffers from delays associated with data display and transmission. The work by J. Xiao et al. uses virtual reality to aid robot control [29]; the operator interacts with the robot they control via virtual reality. ...
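The cited interface drives the robot through Roslibjs in the browser; the same rosbridge-based pattern can be sketched in Python with the roslibpy library, publishing velocity commands on /cmd_vel. Host, port, topic, and the drive helper are illustrative assumptions, not the paper's implementation.

```python
import roslibpy

# Connect to a rosbridge server (host/port are illustrative).
ros = roslibpy.Ros(host="localhost", port=9090)
ros.run()

cmd_vel = roslibpy.Topic(ros, "/cmd_vel", "geometry_msgs/Twist")

def drive(linear_x, angular_z):
    # Publish one velocity command, as a keyboard handler might on key press.
    cmd_vel.publish(roslibpy.Message({
        "linear": {"x": linear_x, "y": 0.0, "z": 0.0},
        "angular": {"x": 0.0, "y": 0.0, "z": angular_z},
    }))

drive(0.2, 0.0)  # forward
drive(0.0, 0.5)  # turn left
cmd_vel.unadvertise()
ros.terminate()
```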
... One of these cameras has been modified to include pan and tilt functionality. As a result, the operator has the option of observing in one of two directions: downward, to see the path and nearby obstacles, or forward, to see a greater distance. These images were transmitted using the GStreamer framework, where they were compressed with the H.264 codec and sent over TCP. ...
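The described video path (GStreamer, H.264, TCP) corresponds to a pipeline like the one below, shown via GStreamer's Python bindings. The source element, bitrate, and port are assumptions; the paper's exact pipeline is not given.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Camera -> H.264 encode -> MPEG-TS mux -> TCP server (element choices are illustrative).
pipeline = Gst.parse_launch(
    "v4l2src ! videoconvert ! "
    "x264enc tune=zerolatency bitrate=1000 ! h264parse ! "
    "mpegtsmux ! tcpserversink host=0.0.0.0 port=5000"
)
pipeline.set_state(Gst.State.PLAYING)

try:
    GLib.MainLoop().run()  # stream until interrupted
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)
```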
Article
Full-text available
Recent years have witnessed an increased utilization of robotics systems in agricultural settings. While fully autonomous farming holds great potential, most systems fall short of meeting the demands of present‐day agricultural operations. The use of human labor or teleoperated robots is also limited due to the physiological constraints of humans and the shortcomings of interfaces used to control robots. To harness the strengths of autonomous capabilities and endurance of robots, as well as the decision‐making capabilities of humans, human–robot collaboration (HRC) has emerged as a viable approach. By identifying the various applications of HRC in current research and the infrastructure employed to develop them, interested parties seeking to utilize collaborative robotics in agriculture can gain a better understanding of the possibilities and challenges they may encounter. In this review, an overview of existing HRC applications in the agricultural domain is provided. Additionally, general trends and weaknesses are identified within the research corpus. This review serves as a presentation of the state‐of‐the‐art research of HRC in agriculture for professionals considering the adoption of HRC. Robotics engineers can utilize this review as a resource for easily accessing information on the hardware, software, and algorithms employed in building HRC systems for agriculture.