Conference Paper

Towards Vertical and Horizontal Extension of Shared Control Concept

... Other literature proposes frameworks for the description of human-machine cooperation models (Abbink et al., 2018; Flemisch et al., 2016; Pacaux-Lemoine and Itoh, 2015; Rothfuß et al., 2020). However, these works attempt to describe human-machine cooperation by a one-dimensional model, therefore missing a clear description of the various dimensions influencing the interaction and in particular of how the dimensions relate to or enable a high quality of the cooperation. ...
... It is also deemed an important basis for human-machine cooperation and collaboration (Flemisch et al., 2016). However, it only provides a basic framework for the perception-action cycle in human-machine haptic interaction which is not sufficient for a full description of all aspects in human-machine interaction, e.g. the influence of the joint actions and decisions and their respective representations (Pacaux-Lemoine and Itoh, 2015). For example, besides the coupling depicted in Fig. 3, the overall system behavior is affected by an upper level of interaction, usually not included in shared control approaches. ...
... This frame does not specify how the various tasks in the different levels are shared nor does it explain the various effects each partner has on their/its counterpart at each level (cf. Know-How-To-Cooperate concept in Pacaux-Lemoine and Itoh, 2015). The layer model for the task dimension defines an interplay between each of the levels. ...
Article
The notion of symbiosis has been increasingly mentioned in research on physically coupled human-machine systems. Yet, a uniform specification on which aspects constitute human-machine symbiosis is missing. By combining the expertise of different disciplines, we elaborate on a multivariate perspective of symbiosis as the highest form of interaction in physically coupled human-machine systems, characterized by a oneness of the human and the machine. Four dimensions are considered: Task, interaction, performance, and experience. First, human and machine accomplish a common objective by completing tasks conceptualized on a decomposition, a decision and an action level (task dimension). Second, each partner possesses an internal representation of the oneness they form, including the partner’s inner states (e. g. experiences) and their joint influence on the environment. This representation constitutes the “symbiotic understanding” between both partners, being the basis of a joint and highly coordinated action (interaction dimension). Third, the symbiotic interaction leads to synergetic effects regarding the complementary strengths of the partners, resulting in a higher overall performance (performance dimension). Fourth, symbiotic systems specifically change the user’s experiences, like flow, acceptance, sense of agency, and embodiment (experience dimension). Our multivariate perspective allows a clear description of symbiotic human-machine systems and helps to bridge barriers between different disciplines.
... Other literature proposes frameworks for the description of human-machine cooperation models (Abbink et al., 2018; Flemisch et al., 2016; Pacaux-Lemoine and Itoh, 2015; Rothfuß et al., 2020). However, these works attempt to describe human-machine cooperation by a one-dimensional model, therefore missing a clear description of the various dimensions influencing the interaction and in particular of how the dimensions relate to or enable a high quality of the cooperation. ...
... It is also deemed an important basis for human-machine cooperation and collaboration (Flemisch et al., 2016). However, it only provides a basic framework for the perception-action cycle in human-machine haptic interaction which is not sufficient for a full description of all aspects in human-machine interaction, e.g. the influence of the joint actions and decisions and their respective representations (Pacaux-Lemoine and Itoh, 2015). For example, besides the coupling depicted in Fig. 3, the overall system behavior is affected by an upper level of interaction, usually not included in shared control approaches. ...
... This frame does not specify how the various tasks in the different levels are shared nor does it explain the various effects each partner has on their/its counterpart at each level (cf. Know-How-To-Cooperate concept in Pacaux-Lemoine and Itoh, 2015). The layer model for the task dimension defines an interplay between each of the levels. ...
Preprint
The notion of symbiosis has been increasingly mentioned in research on physically coupled human-machine systems. Yet, a uniform specification on which aspects constitute human-machine symbiosis is missing. By combining the expertise of different disciplines, we elaborate on a multivariate perspective of symbiosis as the highest form of physically coupled human-machine systems. Four dimensions are considered: Task, interaction, performance, and experience. First, human and machine work together to accomplish a common task conceptualized on both a decision and an action level (task dimension). Second, each partner possesses an internal representation of own as well as the other partner's intentions and influence on the environment. This alignment, which is the core of the interaction, constitutes the symbiotic understanding between both partners, being the basis of a joint, highly coordinated and effective action (interaction dimension). Third, the symbiotic interaction leads to synergetic effects regarding the intention recognition and complementary strengths of the partners, resulting in a higher overall performance (performance dimension). Fourth, symbiotic systems specifically change the user's experiences, like flow, acceptance, sense of agency, and embodiment (experience dimension). This multivariate perspective is flexible and generic and is also applicable in diverse human-machine scenarios, helping to bridge barriers between different disciplines.
... Also different forms of cooperation can be distinguished. Schmidt [199] described three forms of cooperation, namely integrative (different strengths complement each other), augmentative (compensates human limitations), and debative (find the best solution together) [197,198]. Pacaux-Lemoine and Itoh [200] distinguish between an agent's know-how (internal ability to solve problems and external ability to get information and act) and ...
... know-how-to-cooperate (the internal ability to be cooperative and external ability to communicate). They propose a general theoretical framework of sharing functions using a horizontal and vertical extension of shared control [200]. ...
... We proposed a framework helping designers and developers to explore trust-supporting features in a more structured way and from a more generic perspective. The framework is inspired by the horizontal and vertical extension of shared/cooperative control [200] and another framework for trust calibration by de Visser et al. [221]. It combines the levels of task abstraction (strategic, tactical, operational) with the (slightly modified) stages of information processing. ...
Thesis
Full-text available
Automated vehicles are gradually entering the market and the technology promises to increase road safety and comfort, amongst other advantages. An important construct guiding humans' interaction with safety-critical systems is trust, which is especially relevant as most drivers are consumers rather than domain experts, such as pilots in aviation. The successful introduction of automated vehicles on the market requires raising the trust of technology skeptics while at the same time preventing overtrust. Overtrust is already suspected of having contributed to a couple of - even fatal - accidents with existing driving automation systems. Consequently, there is a need to investigate the topic of trust in the context of automated vehicles and design systems which maintain safety by preventing both distrust and overtrust, a process also called "trust calibration". As the possibility to engage in non-driving related tasks is an important consumer desire, this work proposes to consider drivers' multitasking demands already in the vehicle design process to prevent emerging trust issues. Therefore, a framework integrating theoretical considerations from the domains of trust, human-machine cooperation, and multitasking is proposed. By aligning overall goals between the operator and the system whilst supporting drivers in tasks at the strategic, tactical, and operational levels of control, a more trustworthy cooperation should be achieved. A series of studies was conducted to identify important dimensions of trust in driving automation as well as scenarios leading to distrust and overtrust. Those scenarios were then used to demonstrate how the structured approach provided by the framework allows for designing in-vehicle interfaces. Three interaction concepts aiming to support drivers at the different levels of automation were designed and evaluated in driving simulator studies. Results highlight the potential of multimodal as well as attentive user interfaces (interruption management) to deal with overtrust, and of augmented reality visualizations to raise acceptance of drivers distrusting the automation. All approaches were confirmed to improve the subjective trust of the operator and demonstrate that the structured approach provided by the framework can assist in designing more trustworthy in-vehicle interfaces, which is important for a successful and safe implementation of driving automation systems.
... Considering shared control as cooperation on the control (operational) level, this cooperation could be considered as interaction based on results of problem-solving processes that were conducted by both agents individually [19]. Parasuraman et al. [21] suggested that automation can be applied to four functions, in particular to functions that reside not necessarily at the end of the decision-making process: information acquisition, information analysis, decision and action selection, and action implementation. ...
... Parasuraman et al. [21] suggested that automation can be applied to four functions, in particular to functions that reside not necessarily at the end of the decision-making process: information acquisition, information analysis, decision and action selection, and action implementation. Pacaux-Lemoine and Itoh [19] refer to cooperation within these functions as horizontal extension of shared control. Cooperative interaction between different levels (strategic, tactical, and operational), e.g. the driver decides when to perform a lane change (tactical level) and the vehicle performs the lane change (operational level), is considered as vertical extension of shared control [19]. ...
... Pacaux-Lemoine and Itoh [19] refer to cooperation within these functions as horizontal extension of shared control. Cooperative interaction between different levels (strategic, tactical, and operational), e.g. the driver decides when to perform a lane change (tactical level) and the vehicle performs the lane change (operational level), is considered as vertical extension of shared control [19]. This is in line with the previously described concept of cooperative human-machine interaction on the strategic, tactical and operational level. ...
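To make the structure described in these excerpts concrete, the following is a minimal Python sketch (an illustration only, not code from the cited papers) of how the vertical levels and the horizontal functions could be combined into a function-allocation grid; the enum names and the example allocation are assumptions.

```python
from enum import Enum

class Level(Enum):
    # Vertical dimension: levels of the control task
    STRATEGIC = "strategic"
    TACTICAL = "tactical"
    OPERATIONAL = "operational"

class Function(Enum):
    # Horizontal dimension: the four functions after Parasuraman et al.
    INFORMATION_ACQUISITION = "information acquisition"
    INFORMATION_ANALYSIS = "information analysis"
    DECISION_SELECTION = "decision and action selection"
    ACTION_IMPLEMENTATION = "action implementation"

class Agent(Enum):
    HUMAN = "human"
    MACHINE = "machine"
    SHARED = "shared"

# Hypothetical allocation matching the lane-change example quoted above:
# the driver decides when to change lanes (tactical level), the vehicle
# performs the lane change (operational level).
allocation = {
    (Level.TACTICAL, Function.DECISION_SELECTION): Agent.HUMAN,
    (Level.OPERATIONAL, Function.ACTION_IMPLEMENTATION): Agent.MACHINE,
}

def who_does(level: Level, function: Function) -> Agent:
    """Return which agent handles a function at a level (default: shared)."""
    return allocation.get((level, function), Agent.SHARED)

print(who_does(Level.TACTICAL, Function.DECISION_SELECTION).value)        # -> human
print(who_does(Level.OPERATIONAL, Function.ACTION_IMPLEMENTATION).value)  # -> machine
```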
Conference Paper
Full-text available
The role and respective tasks of human drivers are changing due to the introduction of automation in driving. Full automation, where the driver is only a passenger, is still far off. Consequently, both academia and industry investigate what the interaction between automated vehicles and their drivers could look like and how responsibilities could be allocated. Different approaches have been proposed to deal with the shortcomings of automated vehicles: control shifts (handovers and takeovers), shared control, and cooperation. While there are models and frameworks for the individual areas, a big picture is still missing in the literature. We propose a first overview that aims to relate the three areas based on their particular differences (presence of mode changes, duration of interaction, and level of interaction).
...  Human-machine cooperation: that is "how was the cooperation between participants and the system?" This is evaluated using the answers of the participants to a questionnaire based on the human-machine cooperation model [27]. ...
... A questionnaire has been developed using a seven-point Likert scale where a value of 1 corresponded to "Not at all" and a value of 7 to "Totally". The questions were related to the participant and shuttle Know-How (KH) and Know-How-to-Cooperate (KHC) levels [27]. The results reveal a significant effect of the organization on the human operator KHC regarding the control of the shuttles (p-value = 0.049). ...
Article
Full-text available
Humans are currently experiencing the fourth industrial revolution called Industry 4.0. This revolution came about with the arrival of new technologies that promise to change the way humans work and interact with each other and with machines. It aims to improve the cooperation between humans and machines for mutual enrichment. This would be done by leveraging human knowledge and experience, and by reactively balancing some complex or complicated tasks with intelligent systems. To achieve this objective, methodological approaches based on experimental studies should be followed to ensure a proper evaluation of human-machine system design choices. This paper proposes an experimental study based on a platform that uses an intelligent manufacturing system made up of mobile robots, autonomous shuttles using the principle of intelligent products, and manufacturing robots in the context of Manufacturing 4.0. Two experiments were conducted to evaluate the impact of teamwork on human-machine cooperation, performance, and the workload of the human operator. The results showed a lower level of participants' assessment of time demand and physical demand in teamwork conditions. It was also found that teamwork improves the human operator's subjective Know-how-to-cooperate when controlling the autonomous shuttles. Moreover, the results showed that in addition to the work organization, other personal parameters, such as the frequency of playing video games, could affect the performance and state of the human operator. They raised the importance of further analysis to determine cooperative patterns in a group of humans that can be adapted to improve human-machine cooperation.
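As an illustration of how the KH/KHC questionnaire comparison mentioned above (seven-point Likert items, effect of the work organization on KHC) could be analyzed, here is a small Python sketch. The scores and the choice of a Mann-Whitney U test are assumptions for illustration, not the authors' actual data or analysis script.

```python
# Hypothetical seven-point Likert ratings (1 = "Not at all", 7 = "Totally")
# of the operator's Know-How-to-Cooperate, one score per participant,
# compared between two work organizations with a nonparametric test.
from scipy.stats import mannwhitneyu

khc_individual_work = [3, 4, 2, 4, 3, 5, 3, 4]   # assumed scores, organization A
khc_teamwork        = [5, 6, 4, 6, 5, 7, 5, 6]   # assumed scores, organization B

stat, p_value = mannwhitneyu(khc_individual_work, khc_teamwork,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")  # p < 0.05 would indicate an organization effect
```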
... Common Work Space. Fig. 2. Multi-level cooperation [11]. Models of cooperation are very useful to identify nearly all the points of interaction between agents, and then to identify the impact of such points on trust. The next part proposes to use these models to extract some dimensions of trust. ...
... Model of cooperation of a human operator and an assistance system (adapted from [11]) ...
Conference Paper
Better human-machine cooperation is based on human trust in the machine. However, a system design that assures the machine is trusted by human operators in an appropriate manner has not yet been established. This is because the notion of trust is still vague, since it has several aspects. The paper proposes to adopt a human-machine cooperation framework that the authors have developed. Based on this framework, each aspect of trust is identified. Examples are given to help readers understand each aspect.
... Fig. 1 contains an illustration of this articulation. In this figure, the cooperative agent model is used to highlight the interaction between Human and machine [26]. KH functions of agents are in interaction by means of a Common Work Space represented by the blue area in the middle of the figure. ...
... The function allocation can be predefined and updated according to the information on the current situation. Model of cooperative activities (adapted from [26]). ...
... This model of cooperation is presented above for one type of problem solving and one type of activity, but it can also be used for several levels of abstraction, and several levels of activity such as the usual strategic, tactical and operational levels. Cooperation and shared control between the Human and the assistance system are thus extended to all levels with the multi-level cooperation approach [13]. This approach is detailed in the next part. ...
... Fig. 1. Human-Machine cooperation model [13]. ...
Conference Paper
Since the start of industrialization, machine capabilities have increased in such a way that the control of processes by humans is becoming increasingly complex. This is especially the case in Intelligent Manufacturing Systems, for which processes tend to be so autonomous that humans are more and more unaware of the processes running, particularly when humans may need to intervene to update the manufacturing plan or to modify the process configuration if machines or intelligent entities need assistance. The present paper proposes solutions based on the use of Human(s)-Machine(s) Cooperation (HMC) principles to support humans in process control. The aim of these principles is to adopt a human-centered approach for the design and evaluation of assistance systems and processes, as well as their interaction with humans. Two main complementary features of HMC, the know-how and the know-how-to-cooperate, are detailed. They provide a very useful approach to design task allocation, support for mutual understanding and communication between one human operator and one Artificial Self-Organizing system. An assistance system resulting from this approach was evaluated and the first results highlighted the improvement of global performance and acceptability.
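A minimal Python sketch of the know-how / know-how-to-cooperate articulation discussed in this entry, with agents linked through a Common Work Space acting as a shared exchange area. The class and method names are hypothetical illustrations, not the implementation of the cited work.

```python
from dataclasses import dataclass, field

@dataclass
class CommonWorkSpace:
    """Shared space through which cooperating agents exchange information."""
    entries: list = field(default_factory=list)

    def post(self, author: str, content: str) -> None:
        self.entries.append((author, content))

    def read(self, reader: str) -> list:
        # An agent reads what the other agents have posted.
        return [(author, content) for author, content in self.entries
                if author != reader]

@dataclass
class CooperativeAgent:
    """Agent with Know-How (task ability) and Know-How-to-Cooperate."""
    name: str
    cws: CommonWorkSpace

    def know_how(self, observation: str) -> str:
        # Internal problem solving (placeholder for the agent's own ability).
        return f"{self.name} proposes an action for: {observation}"

    def know_how_to_cooperate(self, observation: str) -> None:
        # External ability: share the outcome of the KH with the team via the CWS.
        self.cws.post(self.name, self.know_how(observation))

cws = CommonWorkSpace()
operator = CooperativeAgent("operator", cws)
machine = CooperativeAgent("machine", cws)
operator.know_how_to_cooperate("blocked shuttle on line 2")
print(machine.cws.read("machine"))   # the machine sees the operator's proposal
```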
... If cooperation goes beyond the control level towards guidance of maneuvers, Flemisch et al. (2011, 2015) speak of cooperative guidance and control as part of a cooperative automation. Lemoine and Itoh (2015) called this a 'vertical extension' of the shared control concept, and related this to their 'horizontal extension' along the ...
Conference Paper
Shared Control, where the machine and the human share tasks and control the situation together, and its extension cooperative automation are promising approaches to overcome automation-induced problems, such as lack of situation awareness and degradation of skill. However, the design of Shared Controllers and/or cooperative human-machine systems should be done in a very careful manner. One of the major issues is conflicts between the human and the machine: how to detect these conflicts, and how to resolve them, if necessary? A complicating factor is that when the human is right, conflicts are undesirable (resulting in nuisance, degraded performance, etc.), but when the machine is right, conflicts are desirable (warning the operator, or proper assistance or overruling). Research has pointed out several types and causes of conflicts, but offers no coherent framework for design and evaluation guidelines. In this paper, we propose such a theoretical framework in order to structure and relate different types of conflicts. The framework is inspired by a hierarchical task analysis, and identifies five possible sources of conflicts: intent, information gathering, information processing, decision-making and action implementation. Examples of conflicts in several application domains such as automobile, telerobotics, and surgery are discussed to illustrate the applicability of this framework.
... Cooperation is commonly understood as "the action or process of working together towards common goals" [72], where each person is responsible for a portion of the problem to solve [82,85]. In order for it to be called "cooperative," an artificial agent must have the ability to solve a given problem and be able to cooperate with other agents by, for example, producing a common plan [73]. Cooperators also have to commit to the joint activity, show mutual responsiveness, and provide support to each other [13]. ...
Conference Paper
Full-text available
A central problem for chatbots in the customer care domain revolves around how people collaborate with the agent to achieve their own situated goals. The majority of the previous research, however, relied on experiments within artificial settings, rather than on observation of real-world interactions. Moreover, such research mostly analyzed users’ responses to communication breakdowns, rather than the wider collaboration strategies utilized during a conversation. In this paper, we qualitatively analyzed 12,477 real-world exchanges with a task-based chatbot using a Grounded Theory approach as a rigorous coding method to analyze the data. We identified two main aspects of collaboration, behavioral and conversational, and for each aspect we highlighted the different strategies that users perform to “work together” with the agent. These strategies may be utilized from the very beginning of the conversation or in response to misunderstandings in the course of ongoing interactions and may show different evolving dynamics.
... Shared control methods involving multiple humans and/or multiple robots (Crandall et al. 2017; Gao et al. 2005; Musić and Hirche 2017; Shang et al. 2017; Tso et al. 1999), or methods where human and automation jointly arrive at a plan, decision, or strategy have been conceived as well (e.g. Kaber and Endsley 2004; McCourt et al. 2016; Pacaux-Lemoine and Itoh 2015). It is further noted that shared control, like traded control, does not need to refer to the entire task but can also be applied to separate control inputs. ...
Article
Full-text available
A major question in human-automation interaction is whether tasks should be traded or shared between human and automation. This work presents reflections—which have evolved through classroom debates between the authors over the past 10 years—on these two forms of human-automation interaction, with a focus on the automated driving domain. As in the lectures, we start with a historically informed survey of six pitfalls of automation: (1) Loss of situation and mode awareness, (2) Deskilling, (3) Unbalanced mental workload, (4) Behavioural adaptation, (5) Misuse, and (6) Disuse. Next, one of the authors explains why he believes that haptic shared control may remedy the pitfalls. Next, another author rebuts these arguments, arguing that traded control is the most promising way to improve road safety. This article ends with a common ground, explaining that shared and traded control outperform each other at medium and low environmental complexity, respectively. Practitioner summary: Designers of automation systems will have to consider whether humans and automation should perform tasks alternately or simultaneously. The present article provides an in-depth reflection on this dilemma, which may prove insightful and help guide design. Abbreviations: ACC: Adaptive Cruise Control: A system that can automatically maintain a safe distance from the vehicle in front; AEB: Advanced Emergency Braking (also known as Autonomous Emergency Braking): A system that automatically brakes to a full stop in an emergency situation; AES: Automated Evasive Steering: A system that automatically steers the car back into safety in an emergency situation; ISA: Intelligent Speed Adaptation: A system that can limit engine power automatically so that the driving speed does not exceed a safe or allowed speed.
... A part of these recommendations was translated into design proposals and developed, and another part was treated through a Human-Machine Cooperation (HMC) approach. To that aim, as a first step, a model of HMC for remote driving was defined in [7], adapted from [8]. This model is composed of 4 agents (1 human and 3 technical systems) and of a Common Workspace (CWS). ...
Conference Paper
Full-text available
Automatic train control is already operational for metro applications for different Grades of Automation (GoA1 up to GoA4). Compared to urban systems, the situation of other rail line systems (mainlines, regional lines and freight) is more complex, due to a larger and interconnected rail network, a wide fleet of rolling stock and a complex and diversified operating system in a completely open environment. One of the technological building blocks necessary for the future Autonomous Train is Railways Remote Driving, which will allow remote control of rolling stock to handle certain modes. The main objective of the TC-Rail project is to demonstrate the possibility of safely driving a convoy from a remote site, without a driver in the train cab, with a globally at least equivalent safety level (the GAME principle) to that obtained in the presence of a driver in the cab. Railways Remote Driving is based on the setup of a secured adaptable communication system offering the required train-to-ground connectivity, as well as the development of Human Machine Interfaces and a Sensing System initially including video. The project tackles the safety and the security challenges of such a system.
... This section provides an overview of the concept of shared control, based on an analysis of the major studies of the last twenty years dealing with human-machine cooperation and automated driving [8,9,34,39,55-72]. The aim of this section is to clarify the concept of shared control, to find consensus in the research community, and to avoid misuse of the term, especially in the automotive industry. ...
Thesis
Full-text available
Automated vehicles (AVs) have emerged as a technological solution to compensate for the shortcomings of manual driving. However, this technology is not yet mature enough to completely replace the driver, as this raises technical, social, and legal issues. Meanwhile, accidents continue to happen and new technological solutions are needed to improve road safety. In this context, the shared control approach, in which the driver remains in the control loop and, together with automation, forms a well-coordinated team that continuously collaborates on the tactical and control levels of the driving task, is a promising solution to improve manual driving performance by taking advantage of the latest advances in automated driving technology. This strategy aims to promote the development of more advanced and more cooperative driver assistance systems compared to those available in commercial vehicles. In this sense, automated vehicles will be the supervisors that drivers need and not the other way around. This thesis addresses in depth the issue of shared control in automated vehicles, both from a theoretical and practical perspective. First, a comprehensive review of the state of the art is presented to provide an overview of the concepts and applications that researchers have been working on for the last two decades. Then, a hands-on approach is taken by developing a controller to assist the driver in the lateral control of the vehicle. This controller and its associated decision-making system (Arbitration Module) will be integrated into the general automated driving framework and validated on a simulation platform with real drivers. Finally, the developed controller is applied to two systems: the first to assist a distracted driver, and the other to implement a safety feature in overtaking maneuvers on two-way roads. At the end, the most relevant conclusions and future research perspectives for shared control in automated driving are presented.
... Frameworks. Many recent works [27,30,55,65,66,72,74,75] have iteratively constructed a human-machine interaction model composed of layers of shared and cooperative control, assistance, and automation [73]. As explained by Pacaux-Lemoine & Flemisch, human-machine cooperation involves a human and AI agent communicating with each other and controlling a system, via a Common Work Space [66,72]. ...
Conference Paper
Full-text available
Shared control is an emerging interaction paradigm in which a human and an AI partner collaboratively control a system. Shared control unifies human and artificial intelligence, making the human’s interactions with computers more accessible, safe, precise, effective, creative, and playful. This form of interaction has independently emerged in contexts as varied as mobility assistance, driving, surgery, and digital games. These domains each have their own problems, terminology, and design philosophies. Without a common language for describing interactions in shared control, it is difficult for designers working in one domain to share their knowledge with designers working in another. To address this problem, we present a dimension space for shared control, based on a survey of 55 shared control systems from six different problem domains. This design space analysis tool enables designers to classify existing systems, make comparisons between them, identify higher-level design patterns, and imagine solutions to novel problems.
... We refer to review papers for additional consideration and detail (Ghasemi et al., 2019;Marcano et al., 2020;Wang et al., 2020). Shared control methods involving multiple humans and/or multiple robots (Crandall et al., 2017;Gao et al., 2005;Musić & Hirche, 2017;Shang et al., 2017;Tso et al., 1999), or methods where human and automation jointly arrive at a plan, decision, or strategy have been conceived as well (e.g., Abbink et al., 2018;Kaber & Endsley, 2004;McCourt et al., 2016;Pacaux-Lemoine & Itoh, 2015). It is further noted that shared control, like traded control, does not need to refer to the entire task but can also be applied to separate control inputs. ...
Preprint
Full-text available
A major question in human-automation interaction is whether tasks should be traded or shared between human and automation. This work presents reflections—which have evolved through classroom debates between the authors over the past ten years—on these two forms of human-automation interaction, with a focus on the automated driving domain. As in the lectures, we start with a historically informed survey of six pitfalls of automation: 1. Loss of situation and mode awareness, 2. Deskilling, 3. Unbalanced mental workload, 4. Behavioral adaptation, 5. Misuse, and 6. Disuse. Next, one of the authors explains why he believes that haptic shared control may remedy the pitfalls. Next, another author rebuts these arguments, arguing that traded control is the most promising way to improve road safety. This article ends with a common ground, explaining that shared and traded control outperform each other at medium and low environmental complexity, respectively.
... Cooperation is presented from different perspectives such as levels of task [28] and the type of function shared between humans and machines [29], [30]. The vertical dimension (operational, tactical, and strategic levels) and horizontal extension (information gathering, information analysis, decision-making, and action implementation) of shared control have been extended in recent work [31], [32]. Abbink et al. [17] proposed a hierarchical framework with communication of symbols, signs, and signals at four levels, strategic, tactical, operational, and executional (STOE). ...
Article
Full-text available
Artificial intelligence (AI) technology has greatly expanded human capabilities through perception, understanding, action, and learning. The future of AI depends on cooperation between humans and AI. In addition to a fully automated or manually controlled machine, a machine can work in tandem with a human with different levels of assistance and automation. Machines and humans cooperate in different ways. Three strategies for cooperation are described in this article, as well as the nesting relationships among different control methods and cooperation strategies. Based on human thinking and behavior, a hierarchical human–machine cooperation (HMC) framework is improved and extended to design safe, efficient, and attractive systems. We review the common methods of perception, decision-making, and execution in the HMC framework. Future applications and trends of HMC are also discussed.
... All the steps of the descending phase use the models of cooperative agents, levels and layers of cooperation, as well as the notion of Common Work Space defined some years ago in works dealing with Human-Machine System design. For a detailed presentation, the reader can refer to the following documents [15], [16], [17]; however, those definitions are briefly presented and explained throughout the description of the method, highlighting their usefulness with a focus on industry. ...
Preprint
Full-text available
This overview article describes the design and use space of integrating humans and cyber-physical systems in Industry 4.0, with special regard to the interplay of analysis, design and evaluation methods and phases. Starting with an introduction to the challenges of Industry 4.0 and an overview of existing methods of system design, the design and use space of method models is described and exemplified with examples from Industry 4.0. An extended U-Method of iterative analysis, design and evaluation is derived, described in theory and exemplified with practical examples. An outlook identifies potential roadmaps of future design and evaluation methods, especially for Industry 4.0.
... Human-machine cooperation model (adapted from [16]) ...
Conference Paper
Levels of automation implemented in the railway domain are already high, especially with automated metros. Nevertheless, high levels of automation in the urban sector cannot be directly transposed to train control in the mainline or freight sectors, since the train driving environment is more likely to face various unexpected events. The autonomous train is still the final objective; nevertheless, before reaching such a high level of automation, the first step will be to design intermediate levels where the human operator is kept in the loop of train control. Within the scope of the TC-Rail project, presented in this paper, the design of remote driving is a necessary first stage that serves two objectives: the optimization of freight railway traffic and a way to recover an autonomous train, as a degraded driving mode. The railway literature rarely addresses these topics, as remote driving may lead to highly risky and complex situations. The goal of this paper is to identify how such risks can be addressed through a Human-Machine cooperation approach. A first analysis has been conducted and some reflections are presented. The different steps proposed by this approach are illustrated by the use case stemming from the framework of the TC-Rail project.
... The support of the KHC, the so-called Common Work Space, enables agents to be aware of the environment, but is also enriched by the team situation awareness dealing with past, current and future activities of other agents [9]. Human-machine cooperation model [10]. ...
Conference Paper
Today, technology provides many ways for humans to exchange their points of view about pretty much everything. Visual, audio and tactile media are most commonly used by humans, and they support communication in such a natural way that we don't even actively think about using them. But what about people who have lost motor or sensory capabilities, for whom it is difficult or impossible to control or perceive the output of such technologies? In this case, perhaps the only way to communicate might be to use brain signals directly. The goal of this study is therefore to provide people with tetraplegia, who may be confined to their room or bed, with a telepresence tool that facilitates the daily interactions so many of us take for granted. In our case, the telepresence tool is a robot that is remotely controlled. It can act as a medium for the user in their everyday life, providing a virtual link with friends and relatives located in remote rooms or places, or with different environments to explore. Therefore, the objective is to design a Human-Machine System that enables the control of a robot using thoughts alone. The technological part is composed of a brain-computer interface and a visual interface to implement an "emulated haptic shared control" of the robot. Shared motion control is implemented between the user and the robot, as well as an adaptive function allocation to manage the difficulty of the situation. The control schema that exploits this "emulated haptic feedback" has been designed and evaluated using a Human-Machine Cooperation framework, and the benefit of this type of interaction has been evaluated with five participants. Initial results indicate better control and cooperation with the "emulated haptic feedback" than without.
... A model of human-machine cooperation is proposed in [11]. The KH and KHC of each agent from the same team are linked through the Common Work Space (CWS). ...
... Regarding this information, task sharing at the control layer could be modified. A complementary model has been proposed by Pacaux-Lemoine and Itoh (2015). The model used at each level is the one presented in the previous part. ...
Article
Full-text available
Over the last centuries, we have experienced scientific, technological, and societal progress that enabled the creation of intelligently assisted and automated machines with increasing abilities and that require a conscious distribution of roles and control between humans and machines. Machines can be more than either fully automated or manually controlled, but can work together with the human on different levels of assistance and automation in a hopefully beneficial cooperation. One way of cooperation is that the automation and the human have shared control over a situation, e.g., a vehicle in an environment. Another way of cooperation is that they trade control. Cooperation can include shared and traded control. The objective of this paper is to give an overview of the development towards a common meta-model of shared and cooperative assistance and automation. The meta-models, based on insights from the h(orse)–metaphor and Human–Machine Cooperation principles, are presented and combined to propose a framework and criteria to design safe, efficient, ecological, and attractive systems. Cooperation is presented from different points of view such as levels of activity (operational, tactical and strategic levels) as well as the type of function shared between human and machine (information gathering, information analysis, decision selection, and action implementation). Examples will be provided in the aviation domain, in the automotive domain with the automation of driving, as well as in robotics and in manufacturing systems, highlighting the usefulness of new automated functions but also the increase in system complexity.
... Agents gather and analyze information about others in order to infer their KH and KHC. The model of cooperative activities that we use [9]. ...
Conference Paper
Full-text available
The study presented in this paper is in the context of providing a telepresence platform for people with tetraplegia, who may be confined to their room or bed. The eventual aim is to provide these people with a system that allows them to remotely control a robot, which can act as a medium for them in their everyday life, e.g. by enabling interactions with friends and relatives who may be located in other rooms or even remote places and exploring different environments. In this paper, we deal with the specific challenge of cooperation between a robot and a human who is only able to control the device using thoughts alone. The system is therefore composed of a brain-computer interface (BCI) and a visual interface to implement an "emulated haptic shared control" of the robot. The aim is to share motion control between the human and the robot according to the difficulty of the situation. The control schema that exploits this "emulated haptic feedback" has been designed and evaluated using human-machine cooperation (HMC) and has been compared with more standard controls. We report on an initial experiment that has been conducted to test the feasibility of the approach. Preliminary results highlight the interest of the approach but also the challenges that remain to be overcome.
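One common way to realize this kind of shared motion control is to blend the human's (here BCI-decoded) command with the robot's autonomous command according to an allocation weight. The sketch below is a hypothetical illustration of that idea in Python; the variable names and the difficulty-based rule are assumptions, not the controller of the study.

```python
def blend_commands(human_cmd: float, robot_cmd: float, difficulty: float) -> float:
    """Share motion control between the human (BCI-decoded) and the robot.

    Illustrative only: the allocation rule and its inputs are assumptions.
    `difficulty` lies in [0, 1]; the harder the situation, the more the
    robot's command dominates.
    """
    robot_share = min(max(difficulty, 0.0), 1.0)
    return (1.0 - robot_share) * human_cmd + robot_share * robot_cmd

# Open corridor: the human command dominates; narrow doorway: the robot takes over.
print(blend_commands(human_cmd=0.8, robot_cmd=0.1, difficulty=0.2))  # mostly human
print(blend_commands(human_cmd=0.8, robot_cmd=0.1, difficulty=0.9))  # mostly robot
```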
... They aimed to capture shared and traded control at different levels in a framework referred to as "shared and cooperative control" [61]. Recent work in conceptual modeling [134], [135] has linked a hierarchical dimension of control (operational, tactical, and planning levels) to a "horizontal" extension in terms of information gathering, analysis, decision-making, and execution. ...
Article
Full-text available
Shared control is an increasingly popular approach to facilitate control and communication between humans and intelligent machines. However, there is little consensus in guidelines for design and evaluation of shared control, or even in a definition of what constitutes shared control. This lack of consensus complicates cross fertilization of shared control research between different application domains. This paper provides a definition for shared control in context with previous definitions, and a set of general axioms for design and evaluation of shared control solutions. The utility of the definition and axioms are demonstrated by applying them to four application domains: automotive, robot-assisted surgery, brain–machine interfaces, and learning. Literature is discussed for each of these four domains in light of the proposed definition and axioms. Finally, to facilitate design choices for other applications, we propose a hierarchical framework for shared control that links the shared control literature with traded control, co-operative control, and other human–automation interaction methods. Future work should reveal the generalizability and utility of the proposed shared control framework in designing useful, safe, and comfortable interaction between humans and intelligent machines.
... Human-machine cooperation model (Pacaux & Itoh, 2015) ...
Article
Human-machine systems are omnipresent in our everyday life. These systems are very important; they allow the human and the machine to understand each other and work together efficiently. But poor choices during their design can have important impacts, principally in terms of safety, security and performance. The design of human-machine systems takes into account human factors, and both human and machine limitations and abilities, through the levels of automation. Therefore, it is necessary, when designing such systems, to conduct a preliminary study of the possible interactions between human and machine. In this context, we are interested in the robotic domain, especially in the supervision and control of a self-organized swarm of robots adapted to human needs. This paper illustrates a methodological approach based on a human-machine cooperation model used to identify the different interactions between human and robot and then to define adequate levels of automation, taking into account the capacities and abilities of both the human and the robot according to the situation.
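As a hypothetical illustration of the kind of outcome such an approach could produce (the scale, inputs, and thresholds below are assumptions, not the paper's method), a level of automation could be selected from the capacities of the human and the robot and the current situation:

```python
def select_level_of_automation(human_capacity: float, robot_capacity: float,
                               criticality: float) -> int:
    """Pick a level of automation (1 = manual ... 5 = fully autonomous).

    Illustrative sketch only: the scale, the inputs and the thresholds are
    assumptions. Inputs are normalized to [0, 1].
    """
    # Give more autonomy to the robot when it is more capable than the human
    # for the current situation, but keep the human in the loop when the
    # situation is critical.
    advantage = robot_capacity - human_capacity
    if criticality > 0.8:
        return 2          # human decides, robot assists
    if advantage > 0.5:
        return 5          # robot acts autonomously
    if advantage > 0.2:
        return 4          # robot acts, human supervises
    if advantage > -0.2:
        return 3          # shared control
    return 1              # manual control

print(select_level_of_automation(human_capacity=0.3, robot_capacity=0.9,
                                 criticality=0.3))   # -> 5
```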
... It has been suggested that these human factors issues can be addressed by establishing an effective collaboration among the human drivers and the automation systems. Shared and cooperative controls were introduced as methods to keep interacting agents, human and automation, always in the direct control loop [18], [19]. ...
Article
Full-text available
Automation systems that regard humans as the final authority have been found to be efficient and are therefore widely accepted. However, it has been argued that automation needs to be allowed to act autonomously in some time-critical situations, such as road traffic accidents. Possible interactions might occur between humans, especially those not well trained as car drivers, and authorised autonomous systems. A study using a driving simulator was designed to examine human-machine interactions when driving with two types of assistance systems: sharing of steering control that provides haptic control guidance through the steering wheel to resist hazardous lane changes, and an automatic cooperative system that acts autonomously to avoid hazardous lane changes. Whilst the drivers were in charge of steering in all circumstances when sharing the steering control, they were unable to steer their vehicles during the autonomous control. Results showed that increasing automation authority does not necessarily lead to improved safety. Other factors like human-machine cooperation need to be considered when the assistance system experiences functional limitations. Although lane change crashes were significantly reduced when the drivers were supported by the autonomous system, drivers reacted earlier and more appropriately when supported by the haptic system.
... Adaptive automation (14) is one of the promising approaches to address these problems by assisting the human depending on the situation (15). Dynamic allocation of control authority allows an assistance system to have more than one level of automation (8,16). ...
Article
Full-text available
This paper discusses the design of control authority shifting between a driver and an assistance system while changing lanes under shared control. Two means are distinguished for the system's intervention into the steering maneuver during a lane change: (i) increasing the steering friction torque to resist a hazardous lane-changing maneuver (haptic system), and (ii) cancelling the steering input by the driver in order to prevent a hazardous lane change (mixed system). Results of an experiment with a driving simulator showed: 1) drivers reacted earlier and were less error-prone when supported by the haptic system, 2) both systems improved safety up to a certain level, 3) more collisions were observed when drivers had to avoid sudden traffic changes while receiving assistance, and 4) there was a positive relationship between drivers' willingness to cooperate with a system and their ability to regain control.
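A simplified Python sketch contrasting the two intervention means described above. It is illustrative only: the attenuation rule, the gains and the hazard flag are assumptions, not the controllers used in the experiment.

```python
def assist_steering(driver_torque: float, hazard: bool, mode: str,
                    resistance: float = 0.7) -> float:
    """Net steering torque under each assistance mode (illustrative only).

    - "haptic": an opposing friction torque resists a hazardous lane change,
      but the driver stays in charge and can still push through.
    - "mixed":  the driver's steering input is cancelled to prevent the lane change.
    """
    if not hazard:
        return driver_torque                         # no intervention
    if mode == "haptic":
        return driver_torque * (1.0 - resistance)    # attenuated, driver keeps authority
    if mode == "mixed":
        return 0.0                                   # input cancelled by the system
    raise ValueError(f"unknown mode: {mode}")

print(assist_steering(1.0, hazard=True, mode="haptic"))  # -> ~0.3 (resisted)
print(assist_steering(1.0, hazard=True, mode="mixed"))   # -> 0.0 (cancelled)
```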
... These can be called the horizontal extension and the vertical extension, respectively, of the shared control concept in (M.-P. Pacaux-Lemoine & Itoh, 2015). ...
Article
Full-text available
Since the start of industrialization, machine capabilities have increased in such a way that human control of processes has evolved from simple (with mechanization) to cognitive (with computerization), and even emotional (with semi/full automation). The processes have also evolved from simple to complicated, and now complex systems. This is notably the case with Intelligent Manufacturing Systems, in which processes have become so autonomous that humans are unaware of the processes running, while they may need to intervene to update the manufacturing plan or modify the process configuration if a machine breaks down, or to assist process-intelligent entities when they find themselves in a deadlock. This paper highlights the lack of attention paid to the correct integration of humans in Intelligent Manufacturing Systems and provides solutions based on Human-Machine Cooperation principles to retain humans in the process control loop with different levels of involvement identified by the levels of automation. The aim of these principles is to propose a human-centered approach to design and evaluate systems, processes, and their interactions with humans. Herein, these principles are detailed and applied to Intelligent Manufacturing Systems using Artificial Self-Organizing systems (ASO) as an example. An assistance system was designed to support cooperation between ASO and human operators. Experiments were conducted to evaluate the system and its utility in improving the performance of Human-Machine Systems, as well as its acceptability with regard to human factors. The results presented highlight the advantages of the approach.
... Human-machine cooperation model (Pacaux & Itoh, 2015) ...
Conference Paper
Human-machine systems are omnipresent in our everyday life. These systems are very important; they allow the human and the machine to understand each other and work together efficiently. But poor choices during their design can have important impacts, principally in terms of safety, security and performance. The design of human-machine systems takes into account human factors, and both human and machine limitations and abilities, through the levels of automation. Therefore, it is necessary, when designing such systems, to conduct a preliminary study of the possible interactions between human and machine. In this context, we are interested in the robotic domain, especially in the supervision and control of a self-organized swarm of robots adapted to human needs. This paper illustrates a methodological approach based on a human-machine cooperation model used to identify the different interactions between human and robot and then to define adequate levels of automation, taking into account the capacities and abilities of both the human and the robot according to the situation.
... Sharing acts and exchanging information between the human operator and the system can improve task performance [13]. Furthermore, keeping the driver in the direct control loop may reduce the feeling of lost control and skills degradation associated with traded control systems that put the human in a supervisory position [14], [15]. ...
... Fig. 11. Model of cooperative activities [25]. This paper has presented an empirical study related to the effects of the adaptability of LoA on human operator performance and SA in a dynamic control task. ...
... Other aspects still remain to be aligned. Some ideas have been described in (Pacaux-Lemoine & Itoh 2015), and propose horizontal and vertical extensions of the cooperation concept. Horizontal extension concerns cooperation between layers, and vertical extension proposes ...
Conference Paper
As an introduction to the session of shared and cooperative control, this article will briefly look into the history, start with definitions and sketch a common framework of shared and cooperative control that sees the two phrases not as different concepts, but as different perspectives or foci on a common design space of shared intentionality, control and cooperation between humans and machines. One working hypothesis which the session will explore is that shared control can be understood as cooperation at the control level, while human machine cooperation can include shared control, but can also extend towards cooperation at higher levels, e.g. of guidance and navigation, of maneuvers and goals. We propose to view the relationship between shared control and human-machine cooperation as being similar to the relationship between the sharp, pointy tip and the (blunt) shaft of a spear. Shared control is where cooperation comes sharply into effect at the control level, but to be truly effective it should be supported by cooperation on all levels of interaction beyond the control level, e.g. on the guidance and navigation level.
Thesis
Nowadays, in agriculture, robotics is seen as the coming revolution. Having already proven itself in industry (automotive, electronics, etc.), its application in the agricultural world is beginning through the creation of companies specialized in this field and various public investments to support its development. However, the most widespread image of robotics, that of machines fully autonomous in their actions, has not yet been reached. A robot can be supervised remotely, supervised locally, perform a task in parallel with the farmer's work, cooperate with the farmer, or be directly teleoperated via a remote control in its every action. To get the best out of robots in agriculture, adapting these different configurations to the situation and to the farmer is the key to successful human-robot cooperation. This thesis contributes to this by proposing to consider the skill level of an operator in the evolution of the robot's level of autonomy. This contribution addresses two problems: the assessment of an operator's skill level and the evolution of a robot's autonomy linked to that skill. Identifying an operator's skill while they perform a task requires recognizing the activities carried out by the operator during the cooperation. Under the working hypothesis that an operator's activity reflects their skill, the chosen approach consists in assessing an operator's skill through the three primary activities they carry out to perform the task: observation, planning and execution. The identification of the skill level related to the task is then performed using different approaches such as fuzzy inference systems. To test this approach, the activity model was applied to a picking task. Each primary activity was evaluated through three original indicators relating to the sequence of actions performed by the operator, the order of execution of the sub-tasks, and the adequacy of the operator's behavior to the identified work situation. The study of the results on a panel of about 20 subjects made it possible to define skill levels, which were then validated by comparing them with the results of an expert analysis. Regarding the evolution of the level of autonomy, an autonomy adaptation system is proposed, aiming, on the one hand, to compensate for a lack or loss of operator skill and, on the other hand, to help the operator use the robot in the best possible way with regard to the situation in order to perform the requested task. The system consists of two main blocks. The first concerns the assessment of skill, based on the previously mentioned approach, on the assessment of the operator's learning, and finally on the identification of the operator's skill level. The second takes the result of the first block and defines the robot's level of autonomy by considering the skill level and its evolution. An experiment comparing the evolution of acceptability and performance between a control group and an experimental group was carried out. The results validate the correct operation of the system in real conditions and show a higher level of acceptability for the robot with adaptive autonomy than for a conventional robot.
Chapter
In this chapter, the authors focus on the design and evaluation of industrial cyber-physical systems that will be interacting or even cooperating with a human operator. Since they are dealing with systems dedicated to Industry 4.0, these human operators are called Operators 4.0. The designer must thus identify the tasks and sub-tasks of the human operators and of the Human-Industrial Cyber-Physical System (HICPS) at each level of activity, also known as the decision-making level, such as the operational, tactical and strategic levels regularly mentioned in industrial systems organizations. The definition and organization of the functions of the human operators and of the HICPS are important steps in the design process, but the design of the interface, the external representation of the Common Workspace and thus the support of the cooperative activity, is also a crucial step for the success of the new HICPS.
Chapter
This work presents a new approach to evaluate operators' skills with regard to their activities. This approach is based on an activity model composed of three primary activities. For each primary activity, an indicator has been proposed. The method has been applied in the case of a picking task. Results are compared with expert analysis and seem consistent. The approach shows there is no clear link between performance and skills.
Chapter
The aim of this paper is to study to what extent the current state of the art in human-machine cooperation can be applied or adapted to the emerging context of Industry 4.0, where the "machines" are autonomous cyber-physical systems with which the cooperation takes place. A review of 20 papers was carried out. A discussion follows, pointing out the advances and limits of the existing state of the art when applied to autonomous cyber-physical systems. An illustration in the domain of the maintenance phase of an autonomous cyber-physical system is provided to explain the conclusions of our review.
Article
Full-text available
The last decade has shown an increasing interest in advanced driver assistance systems (ADAS) based on shared control, where automation continuously supports the driver at the control level with an adaptive authority. A first look at the literature reveals two main research directions: 1) an ongoing effort to advance the theoretical comprehension of shared control, and 2) a diversity of automotive system applications, with an increasing number of works in recent years. Yet, a global synthesis of these efforts is not available. To this end, this article covers the complete field of shared control in automated vehicles with an emphasis on these aspects: 1) concept, 2) categories, 3) algorithms, and 4) status of technology. Articles from the literature are classified into theory- and application-oriented contributions. From these, a clear distinction emerges between coupled and uncoupled shared control. Model-based and model-free algorithms from these two categories are evaluated separately, with a focus on systems using the steering wheel as the control interface. Model-based controllers tested by at least one real driver are tabulated to evaluate the performance of such systems. Results show that the inclusion of a driver model helps to reduce conflicts at the steering wheel. Variables such as driver state, driver effort, and safety indicators also have a high impact on the calculation of the authority. Concerning the evaluation, driver-in-the-loop simulators are the most common platforms, with few works performed in real vehicles. Implementation in experimental vehicles is expected in the upcoming years.
Article
Full-text available
With the increase in technological capabilities, system designers can imagine new possibilities for Humans to interact with smart systems and delegate some of their tasks to these systems. The interactions and support provided by the systems would be so useful that some authors already propose considering them as symbiotic. The objective of this paper is to examine the ethical risks relevant to designing symbiotic systems in the context of Industry 4.0. These risks are presented and discussed. The paper addresses these risks using the human-machine cooperation approach, which allows a detailed analysis of all the kinds of interactions between humans and machines. Three use cases derived from Industry 4.0 objectives were studied within the framework of the ANR HUMANISM project (Human-Machine cooperation for flexible production Systems). These use cases are detailed to underline the interest of this approach regarding the identification, suppression or mitigation of ethical risks.
Chapter
This chapter presents an intelligent wearable system based on the design of an instrumented garment specially designed for firefighters and focuses on the context of crisis management. A powerful application of intelligent garments is related to risk management in hostile environments. By learning from data measured by the garment's sensors, a local decision support system was created in order to relate fatigue and stress indicators to the measured physiological signals. The chapter outlines the design process of the intelligent garment for firefighters, including the general architecture of the wearable system, choice of sensors and microcontroller, textile design and integration of the electronic components. It discusses signal processing methods used for extracting information about fatigue and stress conditions and provides an analysis of the results obtained. The chapter proposes a cooperation plan between a robot and a firefighter wearing the smart garment.
Article
Full-text available
To introduce this special issue on shared and cooperative control, we look into the history of tools in cooperation between humans and aim to unify the plethora of related concepts and definitions that have been proposed in recent years, such as shared control, human–machine cooperation and cooperative guidance and control. Concretely, we provide definitions to relate these concepts and sketch a unifying framework of shared and cooperative control that sees the different concepts as different perspectives or foci on a common design space of shared intentionality, control and cooperation between humans and machines. One working hypothesis which the article explores is that shared control can be understood as cooperation at the control layer, while human–machine cooperation can include shared control, but can also extend towards cooperation at higher layers, e.g. of guidance and navigation, of maneuvers and goals. The relationship between shared control and human–machine cooperation is compared to the relationship between the sharp, pointy tip and the (blunt) shaft of a spear. Shared control is where cooperation comes sharply into effect at the control layer, but to be truly effective it should be supported by cooperation on all layers beyond the operational layer, e.g. on the tactical and strategic layers. A fourth layer addresses the meta-communication about the cooperation and supports the other three layers in a transversal way.
Article
Full-text available
Increasingly sophisticated and robust automotive automation systems are being developed and applied to all aspects of driving. Benefits such as improvements in safety, task performance, and workload have been reported. However, several critical accidents involving automation assistance have also been reported. Although automation systems may work appropriately, human factors such as driver errors, overtrust in and overreliance on automation due to a lack of understanding of automation functionalities and limitations, as well as distrust caused by automation surprises, may trigger inappropriate human–automation interactions that lead to negative consequences. Several important methodologies and efforts for improving human–automation interactions follow the concept of human-centered automation, which claims that the human must have the final authority over the system; this concept has been proposed as a more cooperative automation approach to reduce the likelihood of human–machine misunderstanding. This study argues that, especially in critical situations, the way control is handed over between agents can improve human–automation interactions even when the system has the final decision-making authority. As ways of improving human–automation interactions, the study proposes adaptive sharing of control, which allows dynamic control distribution between human and system within the same level of automation while the human retains the final authority, and adaptive trading of control, in which control and authority shift dynamically between human and system while changing levels of automation. Authority and control transition strategies are discussed, compared and clarified in terms of levels and types of automation. Finally, design aspects for determining how and when control and authority can be shifted between human and automation are proposed, with recommendations for future designs.
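One way to make the contrast between the two strategies concrete is sketched below; the signals, thresholds and function names are assumptions for illustration, not the authors' implementation.

    # Adaptive *sharing*: both agents act at once and their commands are blended
    # within one level of automation. Adaptive *trading*: exactly one agent is in
    # control at a time, and the level of automation changes with the handover.
    def shared_command(human_cmd: float, auto_cmd: float, authority: float) -> float:
        """authority in [0, 1] weights the automation; the human always contributes."""
        return (1.0 - authority) * human_cmd + authority * auto_cmd

    def traded_command(human_cmd: float, auto_cmd: float,
                       automation_engaged: bool, human_override: bool) -> float:
        """Full handover, but the human keeps the final authority via override."""
        if human_override:
            return human_cmd
        return auto_cmd if automation_engaged else human_cmd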
Article
The purpose of this paper was to understand how an agent's performance is affected when interaction workflows are incorporated into its information model and decision-making process. Our expectation was that this incorporation could reduce errors and faults in the agent's operation, improving its interaction performance. We based this expectation on the existing challenges in designing and implementing artificial social agents, where an approach based on predefined user scenarios and action scripts is insufficient to account for uncertainty in perception or unclear expectations from the user. We therefore developed a framework that captures the expected behavior of the agent in descriptive scenarios, translated these into the agent's information model, and used the resulting representation in probabilistic planning and decision making to control the interaction. Our results indicated an improvement in terms of specificity while maintaining precision and recall, suggesting that the hypothesis proposed in our approach is plausible. We believe the presented framework will contribute to the field of cognitive robotics, e.g. by improving the usability of artificial social companions, thus overcoming the limitations imposed by approaches that use predefined static models of an agent's behavior and result in non-natural interaction.
Article
Over the last centuries we have experienced scientific, technological and societal progress that has enabled the creation of intelligent assisted and automated machines with increasing abilities, and that requires a conscious distribution of roles and control between humans and machines. Machines can be more than either fully automated or manually controlled: they can work together with the human at different levels of assistance and automation in a hopefully beneficial cooperation. One form of cooperation is that the automation and the human have shared control over a situation, e.g. a vehicle in an environment. The objective of this paper is to provide a common meta-model of shared and cooperative assistance and automation. The meta-models, based on insights from the H(orse)-metaphor (Flemisch et al., 2003; Goodrich et al., 2006) and from Human-Machine Cooperation principles (Hoc and Lemoine, 1998; Pacaux-Lemoine and Debernard, 2002; Pacaux-Lemoine, 2014), are presented and combined in order to propose a framework and criteria to design safe, efficient, ecological and attractive systems. Cooperation is presented from different points of view, such as the levels of activity (operational, tactical and strategic levels) (Lemoine et al., 1996) and the types of functions shared between human and machine (information gathering, information analysis, decision selection, action implementation) (Parasuraman et al., 2000). Examples are provided in the aviation domain (e.g. Goodrich et al., 2012) and in the automotive domain with the automation of driving (Hoeger et al., 2008; Flemisch et al., 2016; Tricot et al., 2004; Pacaux-Lemoine et al., 2004; Pacaux-Lemoine et al., 2015).
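The combination of activity levels and function types summarised in this abstract can be read as a design-space grid; the allocation values below are illustrative assumptions, not recommendations from the paper.

    # Cooperation design space: levels of activity x types of functions.
    ACTIVITY_LEVELS = ("strategic", "tactical", "operational")
    FUNCTION_TYPES = ("information_gathering", "information_analysis",
                      "decision_selection", "action_implementation")

    # Who handles each (level, function) cell: "human", "machine" or "shared".
    allocation = {(lvl, fn): "shared" for lvl in ACTIVITY_LEVELS for fn in FUNCTION_TYPES}
    allocation[("strategic", "decision_selection")] = "human"         # human keeps the goals
    allocation[("operational", "action_implementation")] = "machine"  # machine stabilises control

    for (lvl, fn), agent in sorted(allocation.items()):
        print(f"{lvl:12s} | {fn:24s} -> {agent}")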
Conference Paper
Full-text available
Car manufacturers and automotive suppliers design more and more Advanced Driving Assistance Systems (ADAS) that are quickly installed in cars. Such ADAS address different types of driving activities and driver behaviors; they may concern longitudinal and lateral control, lane changes and navigation, and try to take into account the driver's state and behavior. But what about the complementarity of these different ADAS if they are all implemented together in one car? And what about their interactions with the driver? Interactions have to be defined according to the levels of automation selected by the car manufacturer. In the French national project CoCoVeA (French acronym for Cooperation between Driver and Automated Vehicle), three levels of automation have been studied involving several existing ADAS. This paper proposes a methodological approach to identify the competences and capacities of the driver and the ADAS across many driving situations, in order to check complementarity, function allocation, authority management and cooperative aspects, and thus to assess the reliability of such a human-machine system and the acceptance, and even attractiveness, of a highly automated vehicle.
Presentation
Full-text available
Presentation and use of the Human-Machine Cooperation Model to identify adequate levels of automation to support Human-Robot Cooperation.
Conference Paper
Full-text available
Especially in life-critical systems, decision-making entails cognitive functions such as monitoring, as well as fault prevention and recovery. People involved in the control and management of such systems play two kinds of roles: a positive one, thanks to their unique involvement and capacity to deal with the unexpected; and a negative one, through their ability to make errors. But they are also able to detect and correct these errors and to learn from them. Human-machine system designers can therefore enable innovative human behavior, helping humans to be "aware" of and to cope with unknown situations by enhancing Situation Awareness (SA). As humans are more and more involved in collective work, the constructs of team SA are important. However, the literature shows great variety and some incoherence in their definitions, which makes it difficult to build a design methodology favoring human SA. In parallel, human-machine cooperation models have been developed over the last two decades and validated in different dynamic application fields: air traffic control, fighter aircraft cockpits, reconnaissance robots. These studies showed an increase in problem-solving capabilities and a decrease in workload when the tasks are performed by cooperative teams. In this paper we first synthesize the main team-SA constructs, then present the principles of human-machine cooperation and a Common Work Space as a medium that allows cooperation. We propose to extend it in order to enrich the team-SA constructs.
Article
Full-text available
This paper gives a theoretical framework to describe, analyze, and evaluate the driver's overtrust in and overreliance on ADAS. Although "overtrust" and "overreliance" are often used as if they were synonyms, this paper differentiates the two notions rigorously. To this end, two aspects are introduced: (1) a situation diagnostic aspect and (2) an action selection aspect. The first aspect describes overtrust and has three axes: (1-1) dimension of trust, (1-2) target object, and (1-3) chances of observation. The second aspect describes overreliance on the ADAS and has three other axes: (2-1) type of action selected, (2-2) benefits expected, and (2-3) time allowance for human intervention.
Article
Full-text available
Independent mobility is core to being able to perform activities of daily living by oneself. However, powered wheelchairs are not an option for a large number of people who are unable to use conventional interfaces due to severe motor disabilities. Non-invasive brain–computer interfaces (BCIs) offer a promising solution to this interaction problem, and in this article we present a shared control architecture that couples the intelligence and desires of the user with the precision of a powered wheelchair. We show how four healthy subjects are able to master control of the wheelchair using an asynchronous motor-imagery-based BCI protocol, and how this results in higher overall task performance compared with alternative synchronous P300-based approaches.
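A minimal sketch of the kind of shared-control filtering such an architecture implies is given below; the intention labels, confidence handling and safety thresholds are assumptions for illustration, not the published controller.

    # Shared control for a BCI-driven wheelchair: follow the decoded intention
    # unless the environment requires the machine to soften or veto it.
    def shared_velocity(intent: str, intent_confidence: float,
                        obstacle_distance_m: float,
                        v_max: float = 0.8) -> tuple:
        """Return (linear, angular) velocity from a coarse BCI intention."""
        base = {"forward": (v_max, 0.0), "left": (0.2, 0.6), "right": (0.2, -0.6)}
        v, w = base.get(intent, (0.0, 0.0))
        v *= intent_confidence              # low-confidence commands are softened
        if obstacle_distance_m < 0.5:       # safety layer has the last word on speed
            v = min(v, 0.1)
        return v, w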
Article
Full-text available
This paper extends previous research on two approaches to human-centred automation: (1) intermediate levels of automation (LOAs) for maintaining operator involvement in complex systems control and facilitating situation awareness; and (2) adaptive automation (AA) for managing operator workload through dynamic control allocations between the human and machine over time. Some empirical research has been conducted to examine LOA and AA independently, with the objective of detailing a theory of human-centred automation. Unfortunately, no previous work has studied the interaction of these two approaches, nor has any research attempted to systematically determine which LOAs should be used in adaptive systems and how certain types of dynamic function allocations should be scheduled over time. The present research briefly reviews the theory of human-centred automation and the LOA and AA approaches. Building on this background, an initial study is presented that attempts to address the conjunction of these two approaches to human-centred automation. An experiment was conducted in which a dual-task scenario was used to assess the performance, SA and workload effects of low, intermediate and high LOAs, which were dynamically allocated (as part of an AA strategy) during manual system control for various cycle times comprising 20, 40 and 60% of task time. The LOA and automation allocation cycle time (AACT) combinations were compared to completely manual control and fully automated control of a dynamic control task performed in conjunction with an embedded secondary monitoring task. Results revealed LOA to be the driving factor in determining primary task performance and SA. Low-level automation produced superior performance and intermediate LOAs facilitated higher SA, but this was not associated with improved performance or reduced workload. The AACT was the driving factor in perceptions of primary task workload and secondary task performance. When a greater percentage of primary task time was automated, operator perceptual resources were freed up and monitoring performance on the secondary task improved. Longer automation cycle times than have previously been studied may have benefits for overall human–machine system performance. The combined effect of LOA and AA on all measures did not appear to be 'additive' in nature. That is, the LOA producing the best performance (low-level automation) did not do so at the AACT which produced superior performance (maximum cycle time). In general, the results are supportive of intermediate LOAs and AA as approaches to human-centred automation, but each appears to provide different benefits to human–machine system performance. This work provides additional information for a developing theory of human-centred automation.
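The interplay of a level of automation (LOA) with an automation allocation cycle time (AACT) can be sketched as a simple schedule; the durations, fractions and level names below are illustrative assumptions, not the experimental conditions themselves.

    # Adaptive automation schedule: a chosen LOA is engaged for a fraction of
    # each allocation cycle, the remainder is manual control.
    def build_schedule(task_time_s: float, automated_fraction: float,
                       cycle_s: float, loa: str) -> list:
        """Return (start, end, mode) segments alternating automated and manual control."""
        segments, t = [], 0.0
        auto_s = cycle_s * automated_fraction
        while t < task_time_s:
            end_auto = min(t + auto_s, task_time_s)
            if end_auto > t:
                segments.append((t, end_auto, loa))
            end_manual = min(t + cycle_s, task_time_s)
            if end_manual > end_auto:
                segments.append((end_auto, end_manual, "manual"))
            t = end_manual
        return segments

    for start, end, mode in build_schedule(300.0, 0.4, 60.0, "intermediate_LOA"):
        print(f"{start:6.1f}-{end:6.1f} s: {mode}")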
Article
Full-text available
Semi-autonomous operation with shared control between the human operator and control computer has been developed and examined for a large-scale manipulator for gripping and lifting heavy objects in unstructured dynamical environments. The technique has been implemented on an electro-hydraulic actuated crane arm with redundant kinematic structure. Several modes of automation and interaction were evaluated. Experiments show satisfactory smoothness in the transitions between autonomous, shared and manual control, doubled performance in log loading for inexperienced operators while experienced operators reported reduced workload.
Article
Full-text available
Haptic shared control was investigated as a human-machine interface that can intuitively share control between drivers and an automatic controller for curve negotiation. As long as automation systems are not fully reliable, a role remains for the driver to be vigilant to the system and the environment to catch any automation errors. The conventional binary switches between supervisory and manual control has many known issues, and haptic shared control is a promising alternative. A total of 42 respondents of varying age and driving experience participated in a driving experiment in a fixed-base simulator, in which curve negotiation behavior during shared control was compared to during manual control, as well as to three haptic tunings of an automatic controller without driver intervention. Under the experimental conditions studied, the main beneficial effect of haptic shared control compared to manual control was that less control activity (16% in steering wheel reversal rate, 15% in standard deviation of steering wheel angle) was needed for realizing an improved safety performance (e.g., 11% in peak lateral error). Full automation removed the need for any human control activity and improved safety performance (e.g., 35% in peak lateral error) but put the human in a supervisory position. Haptic shared control kept the driver in the loop, with enhanced performance at reduced control activity, mitigating the known issues that plague full automation. Haptic support for vehicular control ultimately seeks to intuitively combine human intelligence and creativity with the benefits of automation systems.
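A stripped-down version of a haptic shared-control law of this kind is sketched below; the gains, torque limit and error signals are illustrative assumptions, not the controller tunings evaluated in the study.

    # Haptic shared control on the steering wheel: a bounded guidance torque is
    # added to the driver's own torque, so the driver can comply or override.
    def assistance_torque(lateral_error_m: float, heading_error_rad: float,
                          k_lat: float = 2.0, k_head: float = 8.0,
                          max_torque_nm: float = 3.0) -> float:
        """Guidance torque felt on the wheel, bounded so the driver stays in charge."""
        torque = -(k_lat * lateral_error_m + k_head * heading_error_rad)
        return max(-max_torque_nm, min(max_torque_nm, torque))

    def wheel_torque(driver_torque_nm: float, lateral_error_m: float,
                     heading_error_rad: float) -> float:
        """Driver and assistance act on the same physical interface: torques sum."""
        return driver_torque_nm + assistance_torque(lateral_error_m, heading_error_rad)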
Article
Full-text available
Colle, "Remote control of a biomimetic robot assistance system for disabled persons", Modelling, Measurement and Control, 2002, to appear. The goal of the ARPH project is to restore autonomy to disabled people by increasing their field of intervention. The process involves an assistive system composed of a mobile robot-mounted arm and a control station from which it can be remote-controlled. The human-machine cooperation takes place through a client-server computational architecture. The ergonomic question is therefore to identify the cognitive problems involved when an operator carries out a remote-control action on the environment. We then examine how behavioural neuroscience can bridge the existing gap between humans and machines, a gap categorised as "disembodiment". In the course of our research, the reduction of disembodiment was studied in two ways: firstly, from the robot to the human, by evaluating how implementing a human-like behaviour of visual anticipation in the steering can improve robot control; secondly, from the human to the robot, by testing whether signs of appropriation of the machine can be observed in the operator's body schema. All the results are discussed in terms of the relevance of the neuroscientific approach for the design of the physical and functional architecture of a teleoperated rehabilitation robot.
Conference Paper
Full-text available
This paper sketches the concept of haptic-multimodal coupling between operator, co-automation, base system and environment. Haptic-multimodal couplings mainly use the haptic interaction resource, e.g. the combination of hands and feet with active inceptors like active sidesticks or steering wheels, and complement it with, e.g., visual and acoustic feedback. Haptic-multimodal couplings can serve as a basis for shared control and, if the co-automation has a minimum of understanding of and reactivity to the human operator, for cooperative control between operator and automation. The paper gives a brief introduction to shared and cooperative control, starting with examples from the non-technical world, and sketches the basic structure of the couplings and coupling schemes. While much of the design space is yet to be explored and described more systematically, some combinations of haptic-multimodal couplings can already be applied, e.g. to the cooperative control of an intelligent ground vehicle or in telerobotics. The paper briefly describes examples of an automation-initiated de-coupling of a driver and of a helicopter pilot in case of an emergency maneuver, and the coupling between an operator and a satellite control for a berthing maneuver.
Article
Full-text available
Literature points to persistent issues in human-automation interaction, which are caused either when the human does not understand the automation or when the automation does not understand the human. Design guidelines for human-automation interaction aim to avoid such issues and commonly agree that the human should have continuous interaction and communication with the automation system and its authority level and should retain final authority. This paper argues that haptic shared control is a promising approach to meet the commonly voiced design guidelines for human-automation interaction, especially for automotive applications. The goal of the paper is to provide evidence for this statement, by discussing several realizations of haptic shared control found in literature. We show that literature provides ample experimental evidence that haptic shared control can lead to short-term performance benefits (e.g., faster and more accurate vehicle control; lower levels of control effort; reduced demand for visual attention). We conclude that although the continuous intuitive physical interaction inherent in haptic shared control is expected to reduce long-term issues with human-automation interaction, little experimental evidence for this is provided. Therefore, future research on haptic shared control should focus more on issues related to long-term use such as trust, overreliance, dependency on the system, and retention of skills.
Article
Full-text available
Technical developments in computer hardware and software now make it possible to introduce automation into virtually all aspects of human-machine systems. Given these technical capabilities, which system functions should be automated and to what extent? We outline a model for types and levels of automation that provides a framework and an objective basis for making such choices. Appropriate selection is important because automation does not merely supplant but changes human activity and can impose new coordination demands on the human operator. We propose that automation can be applied to four broad classes of functions: 1) information acquisition; 2) information analysis; 3) decision and action selection; and 4) action implementation. Within each of these types, automation can be applied across a continuum of levels from low to high, i.e., from fully manual to fully automatic. A particular system can involve automation of all four types at different levels. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design using our model. Secondary evaluative criteria include automation reliability and the costs of decision/action consequences, among others. Examples of recommended types and levels of automation are provided to illustrate the application of the model to automation design.
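The model lends itself to a simple data representation: one level per function class on a 1 (fully manual) to 10 (fully automatic) scale. The profile below is an illustrative assumption, not a recommendation from the article.

    # Types and levels of automation: each function class gets its own level.
    from dataclasses import dataclass

    @dataclass
    class AutomationProfile:
        information_acquisition: int
        information_analysis: int
        decision_selection: int
        action_implementation: int

        def check(self) -> None:
            for name, level in vars(self).items():
                if not 1 <= level <= 10:
                    raise ValueError(f"{name}: level {level} outside 1..10")

    profile = AutomationProfile(information_acquisition=7, information_analysis=5,
                                decision_selection=3, action_implementation=2)
    profile.check()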
Article
This paper deals with the research of means to build human-machine cooperation in air traffic control. The experiment described aims at evaluating a principle of dynamic allocation of conflict resolution on a large-scale air traffic control simulator. Artificial and human agents have to cooperate to exchange information, make decisions and control the air traffic. To analyse and define cooperation between agents, the know-how and the know-how-to-cooperate are defined. These concepts provide a structure which makes it possible to build a Common Work Space to support the cooperation more efficiently.
Article
When humans interface with machines, the control interface is usually passive and its response contains little information pertinent to the state of the environment. Usually, information flows through the interface from human to machine but not so often in the reverse direction. This work proposes a control architecture in which bi-directional information transfer occurs across the control interface, allowing the human to use the interface to simultaneously exert control and extract information. In this alternative control architecture, which we call shared control, the human utilizes the haptic sensory modality to share control of the machine interface with an automatic controller. We present a fixed-base driving simulator experiment in which subjects take advantage of a haptic steering wheel, which aids them in a path following task. Results indicate that the haptic steering wheel allows a significant reduction in visual demand while improving path following performance.
Conference Paper
Deep-sea mining is an envisioned solution to cope with the fast increasing demand for rare-earth metals and decreasing supplies from conventional mines. It could involve a hydraulically actuated suspended grab to excavate metal-rich minerals from the seabed. Due to environmental uncertainties such an operation cannot be automated and should therefore be controlled by teleoperation, which traditionally suffers from suboptimal performance and limited situation awareness. The current study proposes two methods of haptic feedback, natural force feedback and haptic shared control, to improve the control of a grab in deep-sea mining. Natural force feedback is offered to improve the transparency of the system, which is hypothesized to improve situation awareness of the operation. Secondly, haptic shared control is hypothesized to reduce control effort by guiding the operator. Besides their individual effects, combining both haptic feedback methods should also improve the overall task performance of the operation. A deep-sea mining simulation experiment was conducted to investigate the effect of these two haptic feedback methods and their combination on operator control behaviour. The results show an improvement in situation awareness (i.e. control errors) when offering natural force feedback and a reduction in control effort (i.e. control inputs) when offering haptic shared control. However, the results do not show an increase in task performance (i.e. excavated rock production) for either method, although the reduction of control error and effort is expected to eventually yield long-term performance benefits. Combining both methods is therefore the best haptic feedback approach for improving a deep-sea mining teleoperation using a grab.
Article
An experiment is described that aimed at evaluating a principle of dynamic task allocation (DTA) of conflict resolution between aircraft in air-traffic control on a large scale simulator. It included three cognitive agents: the radar controller (RC), in charge of conflict detection and resolution; the planning controller (PC), in charge of entry-exit coordination and of workload regulation; and a conflict resolution computer system (SAINTEX), able to manage only simple conflicts. Within this three-agent paradigm, three conditions were compared: (a) a control condition (without computer assistance); (b) an explicit condition (PC and RC in charge of the allocation); and (c) an assisted explicit condition (SAINTEX proposed allocations which could be changed by PC). Comparisons were made on the basis of a detailed cognitive analysis of verbal protocols. The more the assistance, the more anticipative the mode of operation in controllers and the easier the human-human cooperation (HHC). These positive effects of the computer support are interpreted in terms of decreased workload and increased shared information space. In addition, the more the controllers felt responsible for task allocation, the more they criticized the machine operation.
Article
The introduction of information technology based on digital computers for the design of man-machine interface systems has led to a requirement for consistent models of human performance in routine task environments and during unfamiliar task conditions. A discussion is presented of the requirement for different types of models for representing performance at the skill-, rule-, and knowledge-based levels, together with a review of the different levels in terms of signals, signs, and symbols. Particular attention is paid to the different possible ways of representing system properties which underlie knowledge-based performance and which can be characterised at several levels of abstraction: from the representation of physical form, through functional representation, to representation in terms of intention or purpose. Furthermore, the role of qualitative and quantitative models in the design and evaluation of interface systems is mentioned, and the need to consider such distinctions carefully is discussed.
Article
The major categories of models of the past two decades are reviewed in order to pinpoint their strengths, and perhaps their weaknesses, in that framework. This review includes such models as McKnight and Adams' task analysis, Kidd and Laughery's early behavioral computer simulations, the linear control models (such as McRuer and Weir's), as well as some more recent concepts such as Näätänen and Summala's, Wilde's and Fuller's risk coping models, which already carry some cognitive weight. Having proposed answers to these questions, an attempt is made to formulate an alternative approach, based on production systems as developed by J. R. Anderson.
Article
This paper proposes a semi-autonomous collision avoidance system for the prevention of collisions between vehicles and pedestrians and objects on a road. The system is designed to be compatible with the human-centered automation principle, i.e., the decision to perform a maneuver to avoid a collision is made by the driver. However, the system is partly autonomous in that it turns the steering wheel independently when the driver only applies the brake, indicating his or her intent to avoid the obstacle. With a medium-fidelity driving simulator, we conducted an experiment to investigate the effectiveness of this system for improving safety in emergency situations, as well as its acceptance by drivers. The results indicate that the system effectively improves safety in emergency situations, and the semi-autonomous characteristic of the system was found to be acceptable to drivers.
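The human-centred trigger logic summarised above can be sketched as follows; the thresholds and the evasive steering value are assumptions for illustration, not the system's actual parameters.

    # The system only steers automatically once the driver has expressed the
    # intent to avoid the obstacle by braking; otherwise it does not intervene.
    def avoidance_steering(obstacle_ahead: bool, driver_braking: bool,
                           time_to_collision_s: float,
                           ttc_threshold_s: float = 2.0) -> float:
        """Return an automatic steering command (rad); 0.0 means no intervention."""
        emergency = obstacle_ahead and time_to_collision_s < ttc_threshold_s
        if emergency and driver_braking:   # the avoidance decision stays with the driver
            return 0.1                     # illustrative evasive steering command
        return 0.0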
Article
This paper deals with the research of means to build human–machine cooperation in air traffic control. The experiment described aims at evaluating a principle of dynamic allocation of conflict resolution on a large-scale air traffic control simulator. Artificial and human agents have to cooperate to exchange information, make decisions and control the air traffic. To analyse and define cooperation between agents, the know-how and the know-how-to-cooperate are defined. These concepts provide a structure which makes it possible to build a common work space to support the cooperation more efficiently.
Pacaux-Lemoine M.-P., Ordioni J., Popieul J.-C., Debernard S., Millot P. (2004). Conception and evaluation of an advanced cooperative driving assistance tool. Proceedings of the IEEE International Conference on Vehicle Power and Propulsion, Paris, France, October.
Inagaki T., Sheridan T. (2008). Authority and responsibility in human-machine systems: is machine-initiated trading of authority permissible in the human-centered automation framework? Proceedings of Applied Human Factors and Ergonomics.
Michon J. A. (1985). What do we know, what should we do? In Evans L., Schwing R. (Eds.), Human Behavior and Traffic Safety. Plenum Press, New York.
Goodrich M. A., Quigley M. (2004). Learning haptic feedback for guiding driver behavior. Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, vol. 3, pp. 2507-2512.
Mulder M., Abbink D. A., Boer E. R. (2012). Sharing control with haptics: Seamless driver support from manual to automatic control. Human Factors: The Journal of the Human Factors and Ergonomics Society, vol. 54, no. 5, pp. 786-798.
Jung S.-Y., Brown D. S., Goodrich M. (2013). Shaping Couzin-like torus swarms through coordinated mediation. Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, pp. 1834-1839.
Millot P., Lemoine M.-P. (1998). An attempt for generic concepts toward human-machine cooperation. IEEE SMC Conference, San Diego, California, USA, October.
Pacaux-Lemoine M.-P., Debernard S., Godin A., Rajaonah B., Anceaux F., Vanderhaegen F. (2011). Levels of automation and human-machine cooperation: Application to human-robot interaction. Proceedings of the 18th IFAC World Congress, Milan, Italy, September.
Pacaux-Lemoine M.-P., Vanderhaegen F. (2013). Towards Levels of Cooperation. IEEE SMC Conference, Manchester, UK, October.
Schmidt K. (1991). Cooperative work: a conceptual framework. In Rasmussen J., Brehmer B., Leplat J. (Eds.), Distributed Decision Making: Cognitive Models for Cooperative Work, pp. 75-110. John Wiley and Sons, Chichester.