Software components used in the use case.

Source publication
Article
Full-text available
Human–robot collaboration (HRC) is one of the key aspects of Industry 4.0 (I4.0) and requires intuitive modalities for humans to communicate seamlessly with robots, such as speech, touch, or bodily gestures. However, utilizing these modalities alone is usually not enough to ensure a good user experience and consideration of human factors. Therefore...

Contexts in source publication

Context 1
... numbers varying from 0 (straight, over the upper threshold) to 2 (bent, under the lower threshold).

Pose                        Thumb  Index  Middle  Ring  Little
Horns                         2      0      2      2      0
Index and middle straight     2      0      0      2      2
Index and ring straight       2      0      2      0      2
Little straight               2      2      2      2      0

The software of the CaptoGlove, Capto Suite, requires that the operating system (OS) be Windows 10. In addition, in the development phase, there were restrictions recognized ...
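The threshold-based pose lookup described in this context can be sketched in a few lines. The thresholds, normalized sensor range, and function names below are illustrative assumptions, not values or APIs from the paper or the CaptoGlove SDK; only the pose table rows come from the excerpt above.

```python
# Sketch of the pose lookup: each finger's reading is discretized to
# 0 (straight, above the upper threshold), 1 (in between), or
# 2 (bent, below the lower threshold), and the resulting five-tuple
# is matched against the pose table.
UPPER, LOWER = 0.66, 0.33  # hypothetical normalized extension thresholds

# (thumb, index, middle, ring, little) -> pose name, from the table above
POSE_TABLE = {
    (2, 0, 2, 2, 0): "Horns",  # index and little straight
    (2, 0, 0, 2, 2): "Index and middle straight",
    (2, 0, 2, 0, 2): "Index and ring straight",
    (2, 2, 2, 2, 0): "Little straight",
}

def discretize(extension: float) -> int:
    """Map a normalized finger-extension reading to a bend level."""
    if extension > UPPER:
        return 0  # straight
    if extension < LOWER:
        return 2  # bent
    return 1      # intermediate

def classify(readings):
    """Return the pose name for five finger readings, or None if unknown."""
    key = tuple(discretize(r) for r in readings)
    return POSE_TABLE.get(key)

print(classify([0.1, 0.9, 0.2, 0.2, 0.95]))  # Horns
```

The intermediate level 1 acts as a dead band between the two thresholds, which avoids pose flicker when a sensor reading hovers near a single cut-off.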
Context 4
... the use case, two local-level and one global-level software components were employed. The components and their short explanations are presented in Table 2.

Component      Description                                                      Level
Orchestrator   Application that handles process enactment and task assignment   Global
...
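A global orchestrator that enacts a process and assigns its tasks, as named in the Table 2 excerpt, can be sketched minimally. The class, method names, capability sets, and resources below are illustrative assumptions, not the components from the paper.

```python
# Minimal sketch of an orchestrator handling process enactment and
# task assignment: each registered resource (human or robot) advertises
# capabilities, and each task step names the capabilities it requires.
class Orchestrator:
    def __init__(self):
        self.resources = {}  # resource name -> set of capabilities

    def register(self, name, capabilities):
        self.resources[name] = set(capabilities)

    def assign(self, task, required):
        """Return (task, resource) for the first resource covering the task."""
        for name, caps in self.resources.items():
            if required <= caps:
                return (task, name)
        raise LookupError(f"no resource can perform {task!r}")

    def enact(self, process):
        """Assign every (task, required-capabilities) step in order."""
        return [self.assign(task, req) for task, req in process]

orch = Orchestrator()
orch.register("operator", {"inspect", "assemble"})
orch.register("cobot", {"pick", "place"})
plan = orch.enact([("fetch part", {"pick"}), ("mount part", {"assemble"})])
print(plan)  # [('fetch part', 'cobot'), ('mount part', 'operator')]
```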

Citations

... Industry 4.0 merges analogue and digital systems and promises that factories will achieve a greater variety of products with less downtime. Therefore, this trend has been accepted in both research and industry as a means of managing the new consumption paradigm (Nguyen-Ngoc, Lasa & Iriarte, 2022; Rautiainen, Pantano, Traganos, Ahmadi, Saenz, Mohammed et al., 2022). Besides the benefits of this new revolution, some challenges have been highlighted, such as unemployment for low-qualified workers, increasing precariousness in society, and the demand for training to meet the requirements imposed by the labor market (Kowal, Włodarz, Brzychczy & Klepka, 2022; Sony, 2020). ...
Article
Full-text available
Purpose: This paper presents the concept of an Employee Suggestion System (ESS) that integrates a strategy originating from Neuro-Linguistic Programming with application in Coaching (the Disney Strategy) to face the human challenges of Industry 4.0. Design/methodology/approach: A four-phase methodology was followed, starting with a systematic literature review of the ESS to obtain a theoretical perspective on this concept and its characteristics. Subsequently, 30 interviews were carried out to examine the ESSs of three partner companies and to gauge the receptivity to the new ESS concept. Finally, the system was modelled, prototyped and tested; it combines the Japanese (Kaizen Teian) and American (Kaizen Teian adapted to western industry) ESS approaches. Findings: Compared with the existing systems in the partner organizations, the platform presented brings more maturity to the suggestions made (through the Disney Strategy applied in Coaching), greater visibility of their status and evaluation, and greater promotion of workforce engagement (through the promotion of voice behaviour). At the same time, it supports the collection of tacit ideas from employees, preserving organizational knowledge and, therefore, a source of competitive advantage. Originality/value: This paper presents a digital tool with Lean origins, which includes Coaching principles, essential in empowering the workforce (through voice behaviour) and preserving organizational knowledge. It is a platform adapted to today's Lean shop floor and intends to prove itself as a resource to promote happy, engaged and committed employees.
... Multimodal inputs were gathered through a GUI for examining use cases and identifying the most efficient manner of communication for the task. Additionally, multimodal inputs have been developed in tandem using fusion systems (Rossi et al. 2013; Liu et al. 2018; Reddy and Basir 2010) as well as individually (Nuzzi et al. 2021; Drawdy and Yanik 2015; Chen et al. 2018; Wang et al. 2019; Rautiainen et al. 2022). ...
... Indeed, several studies have demonstrated that adaptation and personalization of robot perception systems, and in turn of robot behaviour, to that of the human improves the quality of interaction and leads to greater user acceptance (Churamani et al. 2017; Di Napoli et al. 2018; Caleb-Solly et al. 2018). Apart from that, researchers found that gesture personalization during a collaborative task reduced the mental and physical workload of the humans and was thus increasingly preferred by the participants (Rautiainen et al. 2022). ...
Article
Full-text available
Achieving safe collaboration between humans and robots in an industrial work-cell requires effective communication. This can be achieved through a robot perception system developed using data-driven machine learning. The challenge for human–robot communication is the availability of extensive, labelled datasets for training. Due to the variations in human behaviour and the impact of environmental conditions on the performance of perception models, models trained on standard, publicly available datasets fail to generalize well to domain and application-specific scenarios. Thus, model personalization involving the adaptation of such models to the individual humans involved in the task in the given environment would lead to better model performance. A novel framework is presented that leverages robust modes of communication and gathers feedback from the human partner to auto-label the mode with the sparse dataset. The strength of the contribution lies in using in-commensurable multimodes of inputs for personalizing models with user-specific data. The personalization through feedback-enabled human–robot communication (PF-HRCom) framework is implemented on the use of facial expression recognition as a safety feature to ensure that the human partner is engaged in the collaborative task with the robot. Additionally, PF-HRCom has been applied to a real-time human–robot handover task with a robotic manipulator. The perception module of the manipulator adapts to the user’s facial expressions and personalizes the model using feedback. Having said that, the framework is applicable to other combinations of multimodal inputs in human–robot collaboration applications.
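The feedback-driven personalization loop described in this abstract can be sketched with a toy model: a generic classifier predicts, the human confirms or corrects the prediction through a robust modality, and the confirmed sample adapts the model to that user. A nearest-centroid classifier and the two-feature samples below are illustrative assumptions, not the PF-HRCom implementation.

```python
# Sketch of feedback-enabled personalization: human feedback on a
# robust channel auto-labels the user's sample, which then shifts the
# model toward that user's data.
def nearest(sample, centroids):
    """Predict the label whose centroid is closest (squared distance)."""
    return min(
        centroids,
        key=lambda lbl: sum((a - b) ** 2 for a, b in zip(sample, centroids[lbl])),
    )

def personalize(centroids, sample, true_label, rate=0.5):
    """Shift the true label's centroid toward the user's confirmed sample."""
    c = centroids[true_label]
    centroids[true_label] = tuple(a + rate * (b - a) for a, b in zip(c, sample))

centroids = {"engaged": (1.0, 1.0), "distracted": (-1.0, -1.0)}
user_sample = (-0.2, 0.1)  # this user's "engaged" expression sits off-center

print(nearest(user_sample, centroids))          # generic model: "distracted"
personalize(centroids, user_sample, "engaged")  # human feedback corrects it
print(nearest(user_sample, centroids))          # after adaptation: "engaged"
```

The point of the sketch is the data flow, not the model: any perception model that can be updated from a handful of user-confirmed samples slots into the same loop.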
... Examples are visual systems (e.g., to display text or videos) and haptic systems (e.g., to trigger vibration alerts). Moreover, acoustic signals can be used, such as speech output and other sound signals [4]. Conversely, the communication channel from the human to the machine should enable the machine to take the human's actions into account. ...
Article
Full-text available
This paper presents a novel method for online tool recognition in manual assembly processes. The goal was to develop and implement a method that can be integrated with existing Human Action Recognition (HAR) methods in collaborative tasks. We examined the state-of-the-art for progress detection in manual assembly via HAR-based methods, as well as visual tool-recognition approaches. A novel online tool-recognition pipeline for handheld tools is introduced, utilizing a two-stage approach. First, a Region Of Interest (ROI) was extracted by determining the wrist position using skeletal data. Afterward, this ROI was cropped, and the tool located within this ROI was classified. This pipeline enabled several algorithms for object recognition and demonstrated the generalizability of our approach. An extensive training dataset for tool-recognition purposes is presented, which was evaluated with two image-classification approaches. An offline pipeline evaluation was performed with twelve tool classes. Additionally, various online tests were conducted covering different aspects of this vision application, such as two assembly scenarios, unknown instances of known classes, as well as challenging backgrounds. The introduced pipeline was competitive with other approaches regarding prediction accuracy, robustness, diversity, extendability/flexibility, and online capability.
... Moreover, ROS Noetic, the current distribution of the original ROS, has been under development for over half a decade and will be the last ROS 1 release. ROS2 is also compatible with more complex technologies such as the FIWARE platform [33], an open-source platform for developing Internet of Things (IoT) applications. ...
Article
Full-text available
Multi-agent system research is a hot topic in different application domains. In robotics, multi-agent robot systems (MRS) can realize complex tasks even if the behavior of each individual agent seems simple, thanks to the cooperation between them. Although many control algorithms for MRS have been proposed, few experimental results are validated on real data, making new testbeds essential for conducting MRS research and teaching. Moreover, most existing platforms for experimentation do not offer an overall solution covering both software and hardware design tools. This paper describes the design and operation of Robotic Park, a new indoor experimental platform for research in multi-agent systems. The heterogeneity and flexibility of its configuration are two of its main features. It supports the design of control strategies and the validation of MRS algorithms. Experiments can be carried out in a virtual environment, in a physical environment, or under a hybrid scheme, as digital twins have been developed in Gazebo and Webots. Currently, two types of aerial vehicles, the Crazyflie 2.X and the DJI Tello, are available. It also includes two types of differential mobile robots, the Turtlebot3 and the Khepera IV. Both internal and external positioning systems using different technologies, such as Motion Capture or Ultra-WideBand, are also available. All components are connected through ROS2 (Robot Operating System 2), which enables experiments under a centralized, distributed, or hybrid scheme, and different communication strategies can be implemented. A mixed-reality experiment that addresses the problem of formation control using event-based control illustrates the platform's usage.
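The event-based control the abstract mentions can be illustrated with a toy consensus loop: each agent updates its state from the others' last broadcast values, but only re-broadcasts when its state has drifted past a threshold, reducing communication. The gain, threshold, and initial states below are illustrative assumptions, not values from the Robotic Park experiments.

```python
# Toy event-triggered consensus over an all-to-all network: agents
# converge to a common value while broadcasting only on "events".
def consensus_step(states, broadcast, gain=0.2, threshold=0.05):
    """One synchronous round; returns updated states and the event count."""
    new = [x + gain * sum(b - x for b in broadcast) for x in states]
    events = 0
    for i, x in enumerate(new):
        if abs(x - broadcast[i]) > threshold:  # triggering condition
            broadcast[i] = x                   # agent i re-broadcasts
            events += 1
    return new, events

states = [0.0, 1.0, 2.0, 5.0]
broadcast = states[:]  # last broadcast value per agent
for _ in range(50):
    states, _ = consensus_step(states, broadcast)

spread = max(states) - min(states)
print(round(spread, 3))  # agents end up close to a common value
```

Once every agent's state sits within the threshold of its last broadcast, no further events fire, so the scheme trades a small residual disagreement for far fewer messages.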
Preprint
Full-text available
This paper explores strategies for fostering efficient vocal communication and collaboration between human workers and collaborative robots (cobots) in assembly processes. Vocal communication enables division of the worker's attention, as it frees the visual attention, and the worker's hands, for the task at hand. Speech generation and speech recognition are prerequisites for effective vocal communication. The study focuses on cobot assistive tasks, where the human is in charge of the work and performs the main tasks while the cobot assists the worker in various peripheral jobs, such as bringing tools, parts, or materials, returning or disposing of them, or screwing or packaging the products. A nuanced understanding of how human-robot interactions can be optimized is necessary to enhance overall productivity and safety. Through a comprehensive review of relevant literature and empirical studies, this manuscript identifies key factors influencing successful vocal communication and proposes practical strategies for implementation.
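Once a speech recognizer has transcribed an utterance, mapping it to one of the peripheral jobs the abstract names (bringing, returning, disposing, screwing, packaging) is a small parsing step. The vocabulary and grammar below are illustrative assumptions, not the paper's command set.

```python
# Sketch of vocal command handling for cobot assistive tasks: a
# transcribed utterance is matched against keyword synonyms and the
# remainder is treated as the object of the command.
COMMANDS = {
    ("bring", "fetch", "get"): "bring_item",
    ("return", "take back"): "return_item",
    ("dispose", "throw away"): "dispose_item",
    ("screw",): "screw",
    ("pack", "package"): "package",
}

def parse_command(utterance: str):
    """Map a transcribed utterance to (action, object) or None."""
    text = utterance.lower()
    for keywords, action in COMMANDS.items():
        for kw in keywords:
            if kw in text:
                # everything after the keyword is treated as the object
                obj = text.split(kw, 1)[1].strip() or None
                return (action, obj)
    return None  # unrecognized: ask the worker to repeat

print(parse_command("Please bring the torque wrench"))
# ('bring_item', 'the torque wrench')
```

Returning None for unrecognized utterances matters in practice: prompting the worker to repeat is safer than acting on a low-confidence guess.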
Article
Gesture control is an effective and flexible communication method between humans and robots. However, it typically depends on complex hardware and configurations in human-robot collaboration systems. Simplifying the design of gesture-interaction systems and avoiding miscommunication are challenging problems. In this paper, we propose a method that utilizes an RGB sensor to realize spatial human-robot collaboration. A random-forest-based depth estimator is presented to supply the additional spatial information for hand gesture recognition. Additionally, we demonstrate the construction of secure human-robot collaboration scenarios in Unity and validate our approach in real-world settings, on which basis a digital twin system oriented to human-machine collaboration is constructed to realize rapid human-machine task simulation, safety-specification testing, and real-scene application development.
Chapter
In the present paper we propose an approach to model the interaction of a humanoid service robot with a human, based on two mathematical formalisms: the Intuitionistic Fuzzy Sets (IFSs) and the Generalized Nets (GNs). Moreover, in the present work we use one of the extensions of the ordinary GNs, the so-called Intuitionistic Fuzzy GNs of the first type (IFGN1). The input data for the proposed model come from the embedded sensors and peripherals of the robot, which enable multi-modal interaction. The IFGN1 model allows the development of a more detailed and complex model for optimization and improvement of the human-robot interaction in industrial, service, co-manipulation, medical and healthcare applications.

Keywords: Human-robot interaction; Intuitionistic fuzzy estimation; Generalized nets; Intuitionistic fuzzy generalized net of first type