Fig. 3
Bimanual telemanipulation with the developed framework. The user guides robot arms with VR controllers, receiving visual and haptic feedback. The framework offers grasping and manipulation assistance in the form of an affordance menu, where the user selects the option corresponding to the desired task.

Source publication
Conference Paper
The concept of teleoperation has been studied since the advent of robotics and has found use in a wide range of applications, including exploration of remote or dangerous environments (e.g., space missions, disaster management), telepresence-based time optimisation (e.g., remote surgery) and robot learning. While a significant amount of research ha...

Context in source publication

Context 1
... final set of experiments utilised the entire framework in a remote manipulation setting. The VR system and visual interface were set up in a remote location, streaming the VR controller data and receiving the video feed. The operator was in control of both robot arms and hands, monitoring them on the feedback screen (Fig. 3). Two tasks were performed in this context: 1) Pick and place: A ball was placed at an arbitrary position on the table and the operator was asked to pick it up, choosing a grasp from the different available affordance options. 2) Pouring: A bottle filled with small loose components and a cup were placed in the workspace and the operator ...
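The affordance menu described in this excerpt can be thought of as a mapping from recognized object classes to the grasp and manipulation options offered to the operator. The toy sketch below illustrates the idea; the object names and option labels are invented for illustration and are not taken from the paper's implementation.

```python
# Toy illustration of an affordance menu: a recognized object class maps to
# the grasp/manipulation options presented to the operator in VR.
# All names below are hypothetical, not the paper's actual labels.
AFFORDANCES = {
    "ball":   ["spherical power grasp", "precision pinch"],
    "bottle": ["cylindrical grasp", "tripod grasp", "pour"],
    "cup":    ["handle grasp", "rim grasp"],
}

def menu_for(detected_object):
    """Return the options shown in the affordance menu for this object."""
    return AFFORDANCES.get(detected_object, ["generic power grasp"])

print(menu_for("ball"))  # the operator then picks one option via the controller
```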

Similar publications

Preprint
Many teleoperation tasks require three or more tools working together, which need the cooperation of multiple operators. The effectiveness of such schemes may be limited by communication. Trimanipulation by a single operator using an artificial third arm controlled together with their natural arms is a promising solution to this issue. Foot-control...

Citations

... Beyond training, VR applications extend to various sectors, enhancing internal operations in settings such as operating rooms and hospitals, exemplified by the HoloSurg system developed by Exovite [17]. Furthermore, VR's reach expands into non-industrial domains, such as the Thyssen-Bornemisza Museum, where visitors can explore visual representations of paintings in a realistic virtual environment [18]. ...
Article
Over the past few years, the industry has experienced significant growth, leading to what is now known as Industry 4.0. This advancement has been characterized by the automation of robots. Industries have embraced mobile robots to enhance efficiency in specific manufacturing tasks, aiming for optimal results and reducing human errors. Moreover, robots can perform tasks in areas inaccessible to humans, such as hard-to-reach zones or hazardous environments. However, a key challenge lies in operators' lack of knowledge about the operation and proper use of the robot. This work presents the development of a teleoperation system using HTC Vive Pro 2 virtual reality goggles, which allows individuals to immerse themselves in a fully virtual environment to become familiar with the operation and control of the KUKA youBot robot. The virtual reality experience is created in Unity, through which robot movements are executed via a connection to ROS (Robot Operating System). To prevent potential damage to the real robot, a simulation is conducted in Gazebo, facilitating the understanding of the robot's operation.
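A common way to realize the Unity-to-ROS link described in this abstract is to relay the tracked controller motion to the robot as velocity commands. The minimal rospy node below sketches that pattern; the topic names, message choice, and scaling factor are assumptions for illustration, not details taken from the paper.

```python
#!/usr/bin/env python
# Minimal sketch: relay VR controller motion to a mobile base as velocity
# commands. Topic names and the scaling factor are illustrative assumptions.
import rospy
from geometry_msgs.msg import Twist

SCALE = 0.5  # map controller deflection to base velocity

def on_vr_twist(msg):
    cmd = Twist()
    cmd.linear.x = SCALE * msg.linear.x
    cmd.linear.y = SCALE * msg.linear.y
    cmd.angular.z = SCALE * msg.angular.z
    pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("vr_teleop_bridge")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)  # assumed base topic
    rospy.Subscriber("/vr/controller_twist", Twist, on_vr_twist)
    rospy.spin()
```

In a Gazebo-based setup like the one described, the same node can drive the simulated youBot before any command reaches real hardware.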
... In our previous work, we proposed an assistive, affordances-oriented teleoperation framework to simplify remote manipulation with dexterous hands using commercially available virtual reality controllers [20]. The framework employed object recognition using an external camera to present a set of affordances to the user, who selects the most appropriate grasp type from the available choices to enable successful task execution. ...
Conference Paper
Over the last decades, significant research effort has been put into creating Electromyography (EMG) based controllers for intuitive, hands-free control of robotic arms and hands. To achieve this, machine learning models have been employed to decode human motion and intention using EMG signals as input and to deliver several applications, such as prosthesis control using gesture classification. Despite the advances introduced by new deep learning techniques, real-time control of robot arms and hands using EMG signals as input still lacks accuracy, especially when a plethora of gestures are included as labels in the case of classification. This has been observed to be due to the noise and non-stationarity of the EMG signals and the increased dimensionality of the problem. In this paper, we propose an intuitive, affordances-oriented EMG-based telemanipulation framework for a robot arm-hand system that allows for dexterous control of the device. An external camera is utilized to perform scene understanding and object detection and recognition, providing grasping and manipulation assistance to the user and simplifying control. Object-specific Transformer-based classifiers are employed based on the affordances of the object of interest, reducing the number of possible gesture outputs, dividing and conquering the problem, and resulting in a more robust and accurate gesture decoding system when compared to a single generic classification model. The performance of the proposed system is experimentally validated in a remote telemanipulation setting, where the user successfully performs a set of dexterous manipulation tasks.
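The object-specific classifier idea can be made concrete with a small sketch: one shared Transformer encoder over windowed EMG, with a separate output head per recognized object, so each head only has to distinguish that object's few relevant gestures. The layer sizes, channel count, and object/gesture names below are assumptions for illustration; the paper's exact architecture is not reproduced here.

```python
# Illustrative sketch of object-specific gesture heads on a shared encoder.
# Dimensions and the object/gesture inventory are assumed, not the paper's.
import torch
import torch.nn as nn

class EMGTransformer(nn.Module):
    def __init__(self, channels=8, d_model=64, n_heads=4, n_layers=2,
                 gestures_per_object={"bottle": 3, "cup": 2}):
        super().__init__()
        self.embed = nn.Linear(channels, d_model)   # per-timestep embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # One small output head per recognized object: fewer labels per head
        # ("divide and conquer") than a single generic gesture classifier.
        self.heads = nn.ModuleDict({
            obj: nn.Linear(d_model, k) for obj, k in gestures_per_object.items()
        })

    def forward(self, emg_window, obj):
        # emg_window: (batch, time, channels) of filtered EMG samples
        h = self.encoder(self.embed(emg_window)).mean(dim=1)  # pool over time
        return self.heads[obj](h)  # logits over this object's gestures

logits = EMGTransformer()(torch.randn(1, 200, 8), "bottle")
```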
... These robots are gaining market value due to their capabilities, including executing complex tasks simply and with a fast and precise response. This also makes it easy for the robot to work in hazardous environments [1][2][3]. ...
... In other environments, such as military applications and scenarios like the COVID-19 pandemic, orientation and self-localization play a vital role. ...
Article
Automation in the modern world has become a necessity for humans. Intelligent mobile robots have become necessary to perform various complex tasks in healthcare and industry environments. Mobile robots gained attention during the pandemic, and human–robot interaction has become a vibrant topic. However, there are many challenges in achieving human–robot interaction regarding maneuverability, controllability, stability, drive layout and autonomy. In this paper, we propose a stability and control design for a telepresence robot called auto-MERLIN. The proposed design was simulated and experimentally verified for self-localization and maneuverability in a hazardous environment. A model from Rieckert and Schunck was initially considered to design the control system parameters. The system identification approach was then used to derive the mathematical relationship between the manipulated variable and the robot's orientation. The theoretical model of the robot mechanics and the associated control were developed. A design model was successfully implemented, analyzed mathematically, used to build the hardware and tested experimentally. Each level takes on specific tasks in the operation of auto-MERLIN; a higher level always uses the services of lower levels to carry out its functions. The proposed approach is comparatively simple, less expensive and more easily deployable than previous methods. The experimental results showed that the robot is functionally complete in all aspects. A test drive was performed over a given path to evaluate the hardware, and the results were presented. Simulation and experimental results showed that the target path is maintained quite well.
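The system-identification step mentioned in this abstract, deriving a model between the manipulated variable and the robot's orientation, can be illustrated with a least-squares fit of a first-order lag to recorded input/output data. The model structure, signals, and numbers below are assumptions for illustration; the paper's actual identification procedure is not reproduced.

```python
# Sketch of a system-identification step (assumed form: first-order lag
# between command u and orientation rate y; not the paper's exact model).
import numpy as np

def fit_first_order(u, y, dt):
    # Discrete model y[k+1] = a*y[k] + b*u[k]; solve (a, b) by least squares.
    A = np.column_stack([y[:-1], u[:-1]])
    a, b = np.linalg.lstsq(A, y[1:], rcond=None)[0]
    tau = -dt / np.log(a)    # continuous-time constant
    gain = b / (1.0 - a)     # steady-state gain
    return gain, tau

# Example with synthetic step-response data (true tau=0.2 s, gain=1.5):
dt, n = 0.01, 500
u, y = np.ones(n), np.zeros(n)
for k in range(n - 1):
    y[k + 1] = y[k] + dt / 0.2 * (1.5 * u[k] - y[k])
print(fit_first_order(u, y, dt))  # recovers approximately (1.5, 0.2)
```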
... In addition, VR is generally used for visualization and to enable operators to interact with 3D environments derived from the 3D physical world. Applying such cognitive engineering thinking to the design of HRI interfaces is nothing new [Gorjup et al. 2019], because researchers have, to some degree, already agreed that this type of technology can enhance operators' perception and presence in the virtual environment. Nevertheless, and oddly enough, VR technology has not yet been widely adopted in mainstream HRIs, especially in commercial products. ...
Conference Paper
This paper addresses the relations between the artifacts, tools, and technologies that we make to fulfill user-centered teleoperations in the cyber-physical environment. We explored the use of a virtual reality (VR) interface based on customized concepts of Worlds-in-Miniature (WiM) to teleoperate unmanned ground vehicles (UGVs). Our designed system supports teleoperators in their interaction with and control of a miniature UGV directly on the miniature map. Both moving and rotating can be done via body motions. Our results showed that the miniature maps and UGV represent a promising framework for VR interfaces.
... However, these partially automated tasks can significantly benefit from human-robot cooperation by means of shared-control architectures [35]. In this sense, many contributions have been developed focusing on human-robot interaction in teleoperation tasks [21,[36][37][38][39][40][41][42], as is the case with this work. ...
... Telepresence [22] provides the user with an interface which makes the direct control task less dependent on his or her skills and concentration. Telepresence for direct control teleoperation is a strong trend in recent research developments due to the introduction of visual interfaces [30], virtual and augmented reality [39], haptic devices [43], or a combination of them [31,40,44]. ...
Article
This work proposes a new interface for the teleoperation of mobile robots based on virtual reality that allows a natural and intuitive interaction and cooperation between the human and the robot, which is useful for many situations, such as inspection tasks, the mapping of complex environments, etc. Contrary to previous works, the proposed interface does not seek the realism of the virtual environment but provides all the minimum necessary elements that allow the user to carry out the teleoperation task in a more natural and intuitive way. The teleoperation is carried out in such a way that the human user and the mobile robot cooperate in a synergistic way to properly accomplish the task: the user guides the robot through the environment in order to benefit from the intelligence and adaptability of the human, whereas the robot is able to automatically avoid collisions with the objects in the environment in order to benefit from its fast response. The latter is carried out using the well-known potential field-based navigation method. The efficacy of the proposed method is demonstrated through experimentation with the Turtlebot3 Burger mobile robot in both simulation and real-world scenarios. In addition, usability and presence questionnaires were conducted with users of different ages and backgrounds to demonstrate the benefits of the proposed approach. In particular, the results of these questionnaires show that the proposed virtual reality-based interface is intuitive, ergonomic and easy to use.
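The collision avoidance behaviour here relies on the well-known potential field method: the goal exerts an attractive force, nearby obstacles exert repulsive ones, and the robot follows the resulting field. A minimal sketch follows; the gains and influence radius are illustrative choices, not the paper's tuning.

```python
# Minimal sketch of potential field navigation for collision avoidance.
# Gains (k_att, k_rep) and the obstacle influence radius are assumed values.
import numpy as np

def potential_field_velocity(pos, goal, obstacles,
                             k_att=1.0, k_rep=0.5, influence=1.0):
    v = k_att * (goal - pos)                  # attractive term toward the goal
    for obs in obstacles:
        d_vec = pos - obs
        d = np.linalg.norm(d_vec)
        if 1e-6 < d < influence:              # repulsive term near obstacles
            v += k_rep * (1.0 / d - 1.0 / influence) / d**3 * d_vec
    return v

v = potential_field_velocity(np.array([0.0, 0.0]), np.array([2.0, 1.0]),
                             [np.array([1.0, 0.5])])
```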
... Despite the low cost of IMU-based systems, they are often inappropriate for tasks that require high accuracy. This can be addressed by utilizing Virtual Reality (VR) controllers to capture human motion, as demonstrated in [12] and [13]. VR-based teleoperation systems are often intuitive to use, but they limit the user to the area visible to the lighthouse sensors. ...
Conference Paper
Modern collaborative and industrial robots are typically accompanied by proprietary control interfaces, which may also offer basic teaching functionality. However, many such interfaces are not suited for frequent reconfiguration of the robot system, which is essential in agile manufacturing and research. To flatten the learning curve between different interface variants and efficiently integrate external components into the process, generic robot teaching interfaces can be utilized. This paper proposes a new wearable, open-source, robot teaching interface and focuses on evaluating and comparing it with other affordable generic robot teaching interfaces in assembly task programming. Wireless input devices, including a standard keyboard, a gaming console controller, and a 6D mouse have been considered. The devices are compared in terms of perceived usability, subjective workload, and time efficiency when programming insertion tasks through a waypoint-based teaching scheme.
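The waypoint-based teaching scheme used in this comparison boils down to recording end-effector poses on demand and replaying them in order. The sketch below captures that pattern; the robot API (get_pose/move_to) is a hypothetical placeholder, not the paper's interface.

```python
# Sketch of a waypoint-based teaching scheme. The robot object and its
# get_pose()/move_to() methods are hypothetical placeholders.
class WaypointTeacher:
    def __init__(self, robot):
        self.robot = robot
        self.waypoints = []

    def record(self):
        """Store the current end-effector pose when the user presses 'record'."""
        self.waypoints.append(self.robot.get_pose())

    def replay(self):
        """Execute the taught program by moving through the stored poses."""
        for pose in self.waypoints:
            self.robot.move_to(pose)
```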
... However, these tasks can be partially automated, allowing cooperation between human and robot through shared-control architectures [46]. Hence, many recent contributions have focused on human-robot interaction and, more specifically, on advanced robot teleoperation [16,30,31,[47][48][49][50][51], as is also the case with this paper. ...
... Telepresence [36] allows the user to perform the robot teleoperation task by means of an interface, achieving a result less dependent on their skills. Telepresence is currently a trending research topic thanks to the introduction of new technologies, such as augmented and virtual reality [51], visual interfaces [42], haptic devices [52], or a combination of them [16,30,43], to perform direct control teleoperation. For instance, the authors in [53] proposed a low-cost telerobotic system based on virtual reality technology and the homunculus model of mind. ...
Article
Teleoperation of bimanual robots is being used to carry out complex tasks such as surgeries in medicine. Despite technological advances, current interfaces are not natural to users, who spend long periods learning how to use them. In order to mitigate this issue, this work proposes a novel augmented reality-based interface for teleoperating bimanual robots. The proposed interface is more natural to the user and shortens the interface learning process. A full description of the proposed interface is detailed in the paper, and its effectiveness is shown experimentally using two industrial robot manipulators. Moreover, the drawbacks and limitations of the classic joystick-based teleoperation interface are analyzed in order to highlight the benefits of the proposed augmented reality-based approach.
... Although different objects and instructions were obviously used, the bimanual capability and the intuitive control of our setup would, in all likelihood, be the reason behind the shorter times obtained in our experiment. While bimanual teleoperation has been widely studied [34,35,36,37,38,39], the tracking method used is often wired or dependent on external tracking [40,41], including our previously published video [42], which initiated this study. In this work, we use a wearable, wireless and independent device for both hand movement recognition and upper-body tracking. ...
Article
Objective. Bimanual humanoid platforms for home assistance are nowadays available, both as academic prototypes and commercially. Although they are usually thought of as daily helpers for non-disabled users, their ability to move around, together with their dexterity, makes them ideal assistive devices for upper-limb disabled persons, too. Indeed, teleoperating a bimanual robotic platform via muscle activation could revolutionize the way stroke survivors, amputees and patients with spinal injuries solve their daily home chores. Moreover, with respect to direct prosthetic control, teleoperation has the advantage of freeing the user from the burden of the prosthesis itself, overcoming several limitations regarding size, weight, or integration, and thus enables a much higher level of functionality. Approach. In this study, nine participants, two of whom suffer from severe upper-limb disabilities, teleoperated a humanoid assistive platform, performing complex bimanual tasks requiring high precision and bilateral arm/hand coordination, simulating home/office chores. A wearable body posture tracker was used for position control of the robotic torso and arms, while interactive machine learning applied to electromyography of the forearms helped the robot build an increasingly accurate model of the participant's intent over time. Main results. All participants, irrespective of their disability, were uniformly able to perform the demanded tasks. Completion times, subjective evaluation scores, as well as energy- and time-efficiency show improvement over time in both the short and long term. Significance. This is the first time a hybrid setup, involving myoelectric and inertial measurements, has been used by disabled people to teleoperate a bimanual humanoid robot. The proposed setup, taking advantage of interactive machine learning, is simple, non-invasive, and offers a new assistive solution for disabled people in their home environment. Additionally, it has the potential of being used in several other applications in which fine humanoid robot control is required.
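The interactive machine learning loop described here, where the intent model improves as the user provides corrections, can be sketched with any classifier that supports incremental updates. Below is a minimal illustration using scikit-learn's partial_fit; the feature dimensionality, labels, and update trigger are simplified stand-ins for the paper's actual EMG pipeline.

```python
# Sketch of an interactive-learning loop: the gesture model is refined
# online from user-confirmed labels. Features/labels here are placeholders.
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier()
classes = np.array([0, 1, 2])                 # e.g., rest / open / close

def update(emg_features, confirmed_label):
    """Incrementally refit whenever the user confirms or corrects a gesture."""
    clf.partial_fit(emg_features.reshape(1, -1), [confirmed_label],
                    classes=classes)

update(np.random.rand(16), 1)                 # first call initializes the model
print(clf.predict(np.random.rand(1, 16)))
```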
... Interfaces like joysticks, mechanical buttons, and touchpads have been used, but they usually require a steep learning curve to perform the required task efficiently, as they offer limited functionality. In order to make such interfaces as intuitive as possible, handheld-controller-assisted teleoperation interfaces have been explored to train and control robotic systems [172,173,174]. However, with a diverse range of computing devices, situations often arise in which it is preferable not to hold a physical device while interacting with a robotic or computing device [1]. ...
... Moreover, even in cases of automated tasks with sophisticated AI, autonomous systems can be improved by working together with a human operator, that is, by introducing shared-control architectures (Johnson & Vera, 2019). That being the case, a rich body of contributions in the field of teleoperation, of which this work is part, has been developing, with its focus on deepening and improving human-robot interaction in teleoperation (Clark et al., 2019; Girbés-Juan et al., 2021; Gorjup et al., 2019; Lu et al., 2018; Nicolis et al., 2018; Selvaggio et al., 2018), rather than on solving specific problems such as the ones cited above. ...
... Still in the field of direct control, but with a more assisted approach to teleoperation, telepresence (Niemeyer et al., 2016) provides the human operator with an interface which makes the direct control task less dependent on his or her skills and concentration. Telepresence is a strong trend in recent research developments, with the introduction of virtual or augmented reality (Gorjup et al., 2019; Solanes et al., 2020), visual interfaces (Yoon et al., 2018) and haptic devices (Selvaggio et al., 2019), or the combination of these different elements (Clark et al., 2019; Girbés-Juan et al., 2021; Saracino et al., 2020), into direct control teleoperation. A particular case can be found in (Nicolis et al., 2018), where one arm of a bimanual robot is teleoperated to grasp a target object, while the other performs an automatic visual servoing task to keep the object in sight of a camera and avoid occlusions, thus making the teleoperation easier. ...
Article
This work develops a method to perform surface treatment tasks using a bimanual robotic system, i.e. two robot arms cooperatively performing the task. In particular, one robot arm holds the workpiece while the other robot arm has the treatment tool attached to its end-effector. Moreover, the human user teleoperates all six coordinates of the former robot arm and two coordinates of the latter, i.e. the teleoperator can move the treatment tool on the plane given by the workpiece surface. Furthermore, a force sensor attached to the treatment tool is used to automatically attain the desired pressure between the tool and the workpiece and to automatically keep the tool orientation orthogonal to the workpiece surface. In addition, to assist the human user during teleoperation, several constraints are defined for both robot arms in order to avoid exceeding the allowed workspace, e.g. to avoid collisions with other objects in the environment. The theory used in this work to develop the bimanual robot control relies on sliding mode control and task prioritisation. Finally, the feasibility and effectiveness of the method are shown through experimental results using two robot arms.
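The force regulation described in this abstract, maintaining a desired tool pressure along the surface normal, can be illustrated in one dimension with the sliding mode idea the authors name. The sketch below uses a saturated switching law with a boundary layer; the gains, units, and boundary-layer width are illustrative assumptions, and the paper's full controller additionally handles tool orientation and task prioritisation.

```python
# One-dimensional sketch of sliding mode force regulation along the surface
# normal. Gains (lam, K) and boundary-layer width (phi) are assumed values.
import numpy as np

def smc_force_step(f_measured, f_desired, e_prev, dt,
                   lam=5.0, K=0.02, phi=0.5):
    e = f_desired - f_measured            # force error (N)
    s = lam * e + (e - e_prev) / dt       # sliding variable
    # saturated sign(s) avoids chattering inside a boundary layer of width phi
    dz = K * np.clip(s / phi, -1.0, 1.0)  # tool displacement along the normal
    return dz, e

dz, e = smc_force_step(f_measured=8.0, f_desired=10.0, e_prev=0.0, dt=0.01)
```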