Fig. 1: Wearable robot limb system used in experiments, including a four degree-of-freedom arm, control electronics, and vibrotactile feedback sleeve.

Source publication
Article
Full-text available
Many people suffer from the loss of a limb. Learning to get by without an arm or hand can be very challenging, and existing prostheses do not yet fulfil the needs of individuals with amputations. One promising solution is to provide greater communication between a prosthesis and its user. Towards this end, we present a simple machine learning interface to supplement the control of a robotic limb with feedback to the user about what the limb will be experiencing in the near future.

Contexts in source publication

Context 1
... experimental platform used in this work was a custom-designed robotic arm called the Extra Robotic Manipulator (XRM, Fig. 1), wearable by able-bodied subjects. ...
Context 2
... velocity, motor temperature, voltage, and load. To communicate feedback about these sensors to the user, we designed a custom sleeve embedded with four vibration motors (termed tactors) similar to those used in a cellphone or pager. With the sleeve donned, one tactor each was located over the person's shoulder, elbow, wrist, and hand, as shown in Figs. 1 and 2. The platform therefore emulated the capacity for actuation and vibrotactile feedback found in many common prosthetic ...
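The sleeve-to-sensor mapping described in this excerpt lends itself to a short illustration. Below is a minimal sketch, assuming a simple proportional mapping from each joint's measured load to its tactor; the `set_tactor_intensity` driver call, the load range, and the scaling are hypothetical stand-ins, not the platform's actual interface.

```python
# Illustrative sketch: mapping joint sensor readings to the four tactors
# described above (shoulder, elbow, wrist, hand). The threshold-free
# proportional scaling and the set_tactor_intensity() driver call are
# hypothetical, not the platform's actual API.

JOINTS = ["shoulder", "elbow", "wrist", "hand"]

def set_tactor_intensity(joint: str, intensity: float) -> None:
    """Placeholder for the hardware driver: drive one vibration motor
    at a duty cycle in [0, 1]."""
    print(f"tactor[{joint}] -> {intensity:.2f}")

def update_feedback(loads: dict[str, float], max_load: float = 1023.0) -> None:
    """Scale each joint's measured electrical load into a vibration
    intensity and send it to the matching tactor."""
    for joint in JOINTS:
        intensity = min(loads.get(joint, 0.0) / max_load, 1.0)
        set_tactor_intensity(joint, intensity)

# Example: the wrist motor is under heavy load, so its tactor vibrates hardest.
update_feedback({"shoulder": 80.0, "elbow": 120.0, "wrist": 900.0, "hand": 60.0})
```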

Similar publications

Preprint
Full-text available
Code comprehension has been recently investigated from physiological and cognitive perspectives through the use of medical imaging. Floyd et al. (i.e., the original study) used fMRI to classify the type of comprehension tasks performed by developers and relate such results to their expertise. We replicate the original study using lightweight biometr...

Citations

... For example, a human user trains a pattern-recognizing prosthesis with knowledge that the device is adapting to their signals. [Figure caption fragments: (a) robotic upper-limb prosthetic, (b) human using a research prosthesis [28], (c) human using a supernumerary limb [29]] ...
... One fruitful avenue for experimentation, as explored in Parker et al. [29], is to deliberately reduce the agency of the human by removing control options and/or sensory inputs as they complete a task. In this way, the authors were able to elucidate how different levels of agency in the machine contribute to the performance of the partnership. ...
Article
Full-text available
In this work, we present a perspective on the role machine intelligence can play in supporting human abilities. In particular, we consider research in rehabilitation technologies such as prosthetic devices, as this domain requires tight coupling between human and machine. Taking an agent-based view of such devices, we propose that human–machine collaborations have a capacity to perform tasks which is a result of the combined agency of the human and the machine. We introduce communicative capital as a resource developed by a human and a machine working together in ongoing interactions. Development of this resource enables the partnership to eventually perform tasks at a capacity greater than either individual could achieve alone. We then examine the benefits and challenges of increasing the agency of prostheses by surveying literature which demonstrates that building communicative resources enables more complex, task-directed interactions. The viewpoint developed in this article extends current thinking on how best to support the functional use of increasingly complex prostheses, and establishes insight toward creating more fruitful interactions between humans and supportive, assistive, and augmentative technologies.
... At each point in time, the copilot queried its learned value function V(h, s) with the current colour values of each of the six fruits and, if the value of V(h, s) was positive for a fruit, triggered an audio cue that was unique to that fruit; each fruit had a characteristic sound. In other words, feedback from the copilot to the human pilot was based on a pre-determined function that mapped learned predictions to specific actions (an example of Pavlovian control and communication; cf. Modayil et al. (2014) and Parker et al. (2014)). ...
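The feedback rule described in this excerpt is a fixed mapping from learned predictions to cue actions. Below is a minimal sketch of such a Pavlovian mapping, assuming a generic value-function query; `play_cue`, the "ripeness" feature, and the toy value function are illustrative stand-ins, not the study's implementation.

```python
# Sketch of the Pavlovian feedback rule described above: a fixed mapping
# from learned predictions to cue actions. V(h, s), the fruit colour
# features, and play_cue() are illustrative stand-ins.

def play_cue(fruit_id: int) -> None:
    """Placeholder: play the audio cue characteristic of this fruit."""
    print(f"cue for fruit {fruit_id}")

def pavlovian_feedback(value_fn, history, fruit_states) -> None:
    """Trigger each fruit's cue whenever the learned value estimate
    for that fruit's current features is positive."""
    for fruit_id, state in enumerate(fruit_states):
        if value_fn(history, state) > 0.0:
            play_cue(fruit_id)

def toy_value_fn(h, s):
    """Toy learned value: positive once 'ripeness' exceeds a learned offset."""
    return s["ripeness"] - 0.5

# Example: only the first (ripe) fruit triggers its cue.
pavlovian_feedback(toy_value_fn, history=None,
                   fruit_states=[{"ripeness": 0.9}, {"ripeness": 0.2}])
```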
Preprint
Humans make decisions and act alongside other humans to pursue both short-term and long-term goals. As a result of ongoing progress in areas such as computing science and automation, humans now also interact with non-human agents of varying complexity as part of their day-to-day activities; substantial work is being done to integrate increasingly intelligent machine agents into human work and play. With increases in the cognitive, sensory, and motor capacity of these agents, intelligent machinery for human assistance can now reasonably be considered to engage in joint action with humans, i.e., two or more agents adapting their behaviour and their understanding of each other so as to progress in shared objectives or goals. The mechanisms, conditions, and opportunities for skillful joint action in human-machine partnerships are of great interest to multiple communities. Despite this, human-machine joint action is as yet under-explored, especially in cases where a human and an intelligent machine interact in a persistent way during the course of real-time, daily-life experience. In this work, we contribute a virtual reality environment wherein a human and an agent can adapt their predictions, their actions, and their communication so as to pursue a simple foraging task. In a case study with a single participant, we provide an example of human-agent coordination and decision-making involving prediction learning on the part of the human and the machine agent, and control learning on the part of the machine agent, wherein audio communication signals are used to cue its human partner in service of acquiring shared reward. These comparisons suggest the utility of studying human-machine coordination in a virtual reality environment, and identify further research that will expand our understanding of persistent human-machine joint action.
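To make the machine agent's control-learning side concrete, here is a minimal bandit-style Q-learning sketch of an agent learning when to sound a cue so that shared reward increases; the two states, two actions, and reward values are illustrative assumptions, not the study's formulation.

```python
# Sketch of the control-learning side described above: learning *when*
# to sound an audio cue so that shared reward increases. The two-state,
# two-action bandit setup and the reward values are illustrative
# assumptions, not the study's exact formulation.

import random

ACTIONS = ("cue", "stay_silent")
STATES = ("fruit_ripe", "fruit_unripe")
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.1

def reward(state: str, action: str) -> float:
    """Toy shared reward: cueing on ripe fruit helps; false cues cost."""
    if action == "cue":
        return 1.0 if state == "fruit_ripe" else -0.5
    return 0.0

for _ in range(500):
    state = random.choice(STATES)
    if random.random() < epsilon:                      # explore
        action = random.choice(ACTIONS)
    else:                                              # exploit
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    Q[(state, action)] += alpha * (reward(state, action) - Q[(state, action)])

print({k: round(v, 2) for k, v in Q.items()})  # cue ripe fruit, else stay silent
```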
... Examples of advanced assistive devices intended to support human activities include semi-autonomous wheelchairs (Millán et al. 2010; Viswanathan et al. 2014), exoskeletons for both paraplegics and their caregivers (Herr 2009), and smart living environments (Rashidi and Mihailidis 2013). [Figure caption fragments: (b) a participant with an amputation using the University of Alberta Bento Arm (Dawson et al. 2014) with conventional myoelectric control to complete a manipulation task; (c) control of a supernumerary limb by a non-amputee subject (Parker et al. 2014).] ...
... With increasing agency on the prosthetic side, this feedback has the potential to contain more than just direct sensory information. An adaptive or goal-seeking assistant is also able to communicate what it has learned about the environment, the task, or the director, e.g., the prediction-based feedback demonstrated by Parker et al. (2014) and Edwards et al. (2016). ...
... The impact of feedback from an adaptive prosthetic assistant is quantified in work by Parker et al. (2014). In their work, three different kinds of feedback were used to supply a director with information about how best to control the movements of a wearable robot in the form of a supernumerary limb (Fig. 1c): no feedback, mechanistic feedback, and adaptive feedback in the form of predictions (with the last two cases representing the agency combinations in Fig. 3a and Fig. 3b, respectively). ...
Article
Full-text available
This work presents an overarching perspective on the role that machine intelligence can play in enhancing human abilities, especially those that have been diminished due to injury or illness. As a primary contribution, we develop the hypothesis that assistive devices, and specifically artificial arms and hands, can and should be viewed as agents in order for us to most effectively improve their collaboration with their human users. We believe that increased agency will enable more powerful interactions between human users and next generation prosthetic devices, especially when the sensorimotor space of the prosthetic technology greatly exceeds the conventional control and communication channels available to a prosthetic user. To more concretely examine an agency-based view on prosthetic devices, we propose a new schema for interpreting the capacity of a human-machine collaboration as a function of both the human's and machine's degrees of agency. We then introduce the idea of communicative capital as a way of thinking about the communication resources developed by a human and a machine during their ongoing interaction. Using this schema of agency and capacity, we examine the benefits and disadvantages of increasing the agency of a prosthetic limb. To do so, we present an analysis of examples from the literature where building communicative capital has enabled a progression of fruitful, task-directed interactions between prostheses and their human users. We then describe further work that is needed to concretely evaluate the hypothesis that prostheses are best thought of as agents. The agent-based viewpoint developed in this article significantly extends current thinking on how best to support the natural, functional use of increasingly complex prosthetic enhancements, and opens the door for more powerful interactions between humans and their assistive technologies.
Conference Paper
Learning to get by without an arm or hand can be very challenging, and existing prostheses do not yet fill the needs of individuals with amputations. One promising solution is to improve the feedback from the device to the user. Towards this end, we present a simple machine learning interface to supplement the control of a robotic limb with feedback to the user about what the limb will be experiencing in the near future. A real-time prediction learner was implemented to predict impact-related electrical load experienced by a robot limb; the learning system's predictions were then communicated to the device's user to aid in their interactions with a workspace. We tested this system with five able-bodied subjects. Each subject manipulated the robot arm while receiving different forms of vibrotactile feedback regarding the arm's contact with its workspace. Our trials showed that using machine-learned predictions as a basis for feedback led to a statistically significant improvement in task performance when compared to purely reactive feedback from the device. Our study therefore contributes initial evidence that prediction learning and machine intelligence can benefit not just control, but also feedback from an artificial limb. We expect that a greater level of acceptance and ownership can be achieved if the prosthesis itself takes an active role in transmitting learned knowledge about its state and its situation of use.
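A real-time prediction learner of the kind described here is commonly built on linear temporal-difference methods. Below is a minimal TD(λ) sketch for predicting a load-like cumulant; the feature encoding, step size, discount, and trace decay are assumed values, not the paper's reported configuration.

```python
import numpy as np

# Sketch of an online TD(lambda) learner for a general value function
# predicting near-future electrical load, in the spirit of the system
# described above. Feature construction, gamma, alpha, and lambda are
# illustrative choices, not the paper's reported settings.

class TDLambdaPredictor:
    def __init__(self, n_features: int, alpha: float = 0.1,
                 gamma: float = 0.9, lam: float = 0.9):
        self.w = np.zeros(n_features)   # learned weights
        self.e = np.zeros(n_features)   # eligibility trace
        self.alpha, self.gamma, self.lam = alpha, gamma, lam

    def predict(self, x: np.ndarray) -> float:
        """Current estimate of the discounted sum of future cumulants."""
        return float(self.w @ x)

    def update(self, x: np.ndarray, cumulant: float,
               x_next: np.ndarray) -> float:
        """One online TD(lambda) step; here the cumulant would be the
        robot arm's measured electrical load."""
        delta = cumulant + self.gamma * self.predict(x_next) - self.predict(x)
        self.e = self.gamma * self.lam * self.e + x
        self.w += self.alpha * delta * self.e
        return delta

# Example: learn to anticipate a load signal tied to one active feature.
rng = np.random.default_rng(0)
learner = TDLambdaPredictor(n_features=8)
x = np.zeros(8); x[3] = 1.0
for _ in range(100):
    load = 5.0 * x[3] + rng.normal(0, 0.1)   # toy load signal
    learner.update(x, load, x)
print(f"predicted discounted load: {learner.predict(x):.2f}")  # approaches 50
```

In a feedback loop like the one described in the abstract, the learner's prediction, rather than the raw load signal, would drive the vibrotactile cue, letting the user react before contact occurs.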
Conference Paper
Powered prosthetic arms with numerous controllable degrees of freedom (DOFs) can be challenging to operate. A common control method for powered prosthetic arms, and other human-machine interfaces, involves switching through a static list of DOFs. However, switching between controllable functions often entails significant time and cognitive effort on the part of the user when performing tasks. One way to decrease the number of switching interactions required of a user is to shift greater autonomy to the prosthetic device, thereby sharing the burden of control between the human and the machine. Our previous work with adaptive switching showed that it is possible to reduce the number of user-initiated switches in a given task by continually optimizing and changing the order in which DOFs are presented to the user during switching. In this paper, we combine adaptive switching with a new machine learning control method, termed autonomous switching, to further decrease the number of manual switching interactions required of a user. Autonomous switching uses predictions, learned in real time through the use of general value functions, to switch automatically between DOFs for the user. We collected results from a subject performing a simple manipulation task with a myoelectric robot arm. As a first contribution of this paper, we describe our autonomous switching approach and demonstrate that it is able to both learn and subsequently unlearn to switch autonomously during ongoing use, a key requirement for maintaining human-centered shared control. As a second contribution, we show that autonomous switching decreases the time spent switching and number of user-initiated switches compared to conventional control. As a final contribution, we show that the addition of feedback to the user can significantly improve the performance of autonomous switching. This work promises to help improve other domains involving human-machine interaction—in particular, assistive or rehabilitative devices that require switching between different modes of operation such as exoskeletons and powered orthotics.
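The adaptive-switching idea, reordering the DOF list so that the option the user is most likely to want comes first, can be sketched compactly. In the sketch below, a simple exponential-average estimate stands in for the general value functions used in this work; the DOF names and step size are illustrative.

```python
# Sketch of adaptive switching: present degrees of freedom (DOFs) in an
# order that tracks predicted next use, so fewer manual switches are
# needed. The exponential-average "prediction" is a simple stand-in for
# the general value functions used in the work described above.

class AdaptiveSwitcher:
    def __init__(self, dofs: list[str], step: float = 0.3):
        self.dofs = dofs
        self.step = step
        # Predicted likelihood that each DOF is the one the user wants next.
        self.pred = {d: 1.0 / len(dofs) for d in dofs}

    def switching_order(self) -> list[str]:
        """Offer the DOFs most likely to be wanted first."""
        return sorted(self.dofs, key=lambda d: -self.pred[d])

    def observe_selection(self, chosen: str) -> None:
        """Online update toward the DOF the user actually selected."""
        for d in self.dofs:
            target = 1.0 if d == chosen else 0.0
            self.pred[d] += self.step * (target - self.pred[d])

switcher = AdaptiveSwitcher(["shoulder", "elbow", "wrist", "hand"])
for choice in ["hand", "hand", "wrist", "hand"]:
    switcher.observe_selection(choice)
print(switcher.switching_order())   # "hand" is now offered first
```

Autonomous switching, as described in the abstract, goes one step further: when the learned prediction for a DOF is confident enough, the system switches on the user's behalf rather than merely reordering the list.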