Figure - available from: Minds and Machines
Components of a brain–computer interface with tactile feedback. The solid lines show the neuronal signal as it travels between the brain and the device and is converted into movement. The dotted lines show the digital feedback signal as it travels from the device to the brain where it is received as tactile experience. The initial neuronal activity is recorded by a signal acquisition device. The signal acquisition device consists of a glass electrode filled with neurotrophic factor that is implanted directly onto specific neurons in the brain. The neurotrophic factors cause neurites to fill the glass electrode, which relays the signal along gold wires to a transmitter. The signal is then amplified and transmitted to the BCI Control. The BCI Control consists of a feature extractor, feature translator, control interface, and device controller, which act to translate the raw signal into physical device commands. Thus, the BCI control converts the neuronal signal into movement of the device. As the device moves and interacts with the environment, sensors attached to the device detect specified aspects of the environment (e.g., whether a surface is hard or soft). Those sensors then relay a digital signal back through the BCI Control, which converts the information into a signal that can be interpreted by the brain. The signal is then received by the brain as, in this case, tactile feedback. Note that the brain also receives visual feedback from observing the movement of the device.
Adapted from Mason and Birch (2003)
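To make the data flow described above easier to follow, a minimal Python sketch of the pipeline is given below. It is only an illustration of the functional stages named in the caption (signal acquisition feeding a feature extractor, feature translator, control interface, and device controller, plus the tactile feedback path); the class names, thresholds, and signal values are assumptions of this sketch, not part of any real BCI implementation.

    # Minimal sketch of the BCI pipeline in the figure caption.
    # All names, thresholds, and values here are illustrative assumptions.

    class FeatureExtractor:
        def extract(self, raw_signal):
            # Reduce the amplified neural signal to one summary feature.
            return sum(raw_signal) / len(raw_signal)

    class FeatureTranslator:
        def translate(self, feature):
            # Map the feature to an abstract intent.
            return "close_hand" if feature > 0.5 else "idle"

    class ControlInterface:
        def to_command(self, intent):
            # Convert the abstract intent into a logical device command.
            return {"grip": 1.0} if intent == "close_hand" else {"grip": 0.0}

    class DeviceController:
        def actuate(self, command):
            # Drive the prosthesis and return readings from its touch sensors.
            touching_hard_surface = command["grip"] > 0.5
            return {"hardness": 0.8 if touching_hard_surface else 0.0}

    class BCIControl:
        """Forward path (solid lines) plus the tactile feedback path (dotted lines)."""

        def __init__(self):
            self.extractor = FeatureExtractor()
            self.translator = FeatureTranslator()
            self.interface = ControlInterface()
            self.controller = DeviceController()

        def step(self, raw_signal):
            feature = self.extractor.extract(raw_signal)   # feature extraction
            intent = self.translator.translate(feature)    # feature translation
            command = self.interface.to_command(intent)    # control interface
            sensors = self.controller.actuate(command)     # device controller
            # Feedback path: encode sensor readings as a stimulation pattern
            # that the brain can receive as tactile experience.
            return {"stimulation_amplitude": sensors["hardness"]}

    if __name__ == "__main__":
        bci = BCIControl()
        # A strong recorded signal yields a grip command and "hard surface" feedback.
        print(bci.step([0.9, 0.7, 0.6]))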


Source publication
Article
I use a hypothetical case study of a woman who replaces her biological arms with prostheses controlled through a brain–computer interface to explore how a BCI might interpret and misinterpret intentions. I define pre-veto intentions and post-veto intentions and argue that a failure of a BCI to differentiate between the two could lead to some trou...

Similar publications

Article
Hybrid neurophysiological signals, such as the combination of electroencephalography (EEG) and electromyography (EMG), can be used to reduce road traffic accidents by obtaining the driver's intentions in advance and accordingly applying appropriate auxiliary controls. However, whether they can be used in combination and can achieve better results i...
Article
Brain-computer interfaces (BCIs) based on rapid serial visual presentation (RSVP) have been widely used to categorize target and non-target images. However, it is still a challenge to detect single-trial event-related potentials (ERPs) from electroencephalography (EEG) signals. Moreover, the variability of the EEG signal over time may cause difficulties...
Article
Brain-computer interfaces (BCIs) are tools that allow users to make requests to computerized systems by directly processing brain signals. To carry out the required procedures, brain signals must be classified. For this purpose, many classification algorithms have been tried with machine learning. The purpose of this study is to talk about bo...
Article
Background Brain–computer interfaces (BCIs) have been proven to be effective for hand motor recovery after stroke. Given the varied dysfunctions of the paretic hand, the motor tasks used by BCIs for hand rehabilitation are relatively limited, and the operation of many BCI devices is too complex for clinical use. Therefore, we proposed a functional-oriented, port...
Article
To enable multimedia remote interactive operation for locked-in patients, this paper proposed a novel BCI system constructed from EEG signals with a convolutional neural network. To construct the remote interactive operations, the temporal and spatial features of electroencephalogram (EEG) signals were extracted by long...

Citations

... Other kinds of robot-involved death include death by autonomous vehicle (King, 2019), and death by medical instrument failure or accident (Tucker, 2018). Death with the involvement of robotic appendages and prostheses, or "killer robot arms," is another emerging venue (Gurney, 2018). Robot-related lethal incidents can also be on a large, societal scale as well as in the context of the individual. ...
Chapter
Robots, artificial intelligence, and autonomous vehicles are associated with substantial narrative and image-related legacies that often place them in a negative light. This chapter outlines the basics of the “dramaturgical” and technosocial approaches that are used throughout this book to gain insights about how these emerging technologies are affecting deeply seated social and psychological processes. The robot as an “other” in the workplace and community, an object of attention and discussion, has been a frequently utilized theme of science fiction as well as a topic for research analysis, with many people “acting out” their anxieties and grievances. Human-AI contests and displays of robotic feats are often used to intimidate people and reinforce that individuals are not in control of their own destinies, which presents unsettling prospects for the future.
... But active BCIs, and BCIs more generally, may soon be used by individuals with various other pathologies, or even healthy individuals [5][6][7]. This possibility merits focusing our attention on potential ethical problems related to the use of BCIs [5,[8][9][10][11][12][13][14]. I will be focusing on a problem that Allan McCay [16] has posed: how are we able to find BCI users criminally responsible for crimes that they commit using BCIs in common law jurisdictions (such as England and Wales, Australia, the United States, and Canada) which require the satisfaction of actus reus for criminal responsibility? Throughout, I will assume that the BCI always measures neural activity accurately, such that if an agent intends the BCI to perform some particular process (and imagines what he believes is required to bring about that process), then the BCI will perform it. This assumption enables me to avoid dealing with whether there is a responsibility gap, where BCIs perform processes that lead to outcomes for which someone is seemingly responsible but where no one can in fact be reasonably held responsible [18][19][20][21][22][23][24]. ...
... That is, they may measure an intention and produce some output due to this measurement prior to the agent having the ability to cancel this intention. If agents typically have the ability to cancel intentions before they issue in action, but BCIs preclude this ability, agents may not satisfy mens rea for the putatively criminal events their BCIs cause [14]. Because the purpose of this paper is to investigate actus reus, I will ignore any such complications regarding mens rea and BCIs. ...
Article
Brain-computer interfaces allow agents to control computers without moving their bodies. The agents imagine certain things and the brain-computer interfaces read the concomitant neural activity and operate the computer accordingly. But the use of brain-computer interfaces is problematic for criminal law, which requires that someone can only be found criminally responsible if they have satisfied the actus reus requirement: that the agent has performed some (suitably specified) conduct. Agents who affect the world using brain-computer interfaces do not obviously perform any conduct, so when they commit crimes using brain-computer interfaces it is unclear how they have satisfied actus reus. Drawing on a forthcoming paper by Allan McCay, I suggest three potential accounts of the conduct that satisfies actus reus: the agent’s neural firings, his mental states, and the electronic activity in his brain-computer interface. I then present two accounts which determine how actus reus may be satisfied: one a counterfactual account and the other a minimal sufficiency account. These accounts are lent plausibility because they are analogous to the but-for and NESS (Necessary Element in a Sufficient Set) tests, which are generally accepted tests for causation in legal theory. I argue that due to the determinations of these accounts and considerations regarding the relationship between the mind and brain, actus reus is satisfied by either the agent’s neural activity or the electrical activity in his brain-computer interface. Which of these satisfies actus reus is determined by how well the brain-computer interface is functionally integrated with the agent.
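The but-for and NESS tests mentioned in this abstract can be made concrete with a short illustration. The Python sketch below is a hypothetical aid, not material from the article: it encodes each test over a set of conditions and uses the classic two-shooter overdetermination case, where but-for fails for each shooter while NESS succeeds.

    # Hypothetical illustration of the but-for and NESS causation tests.
    # The condition names and the overdetermination example are assumptions,
    # not taken from the article.
    from itertools import combinations

    def but_for(conditions, outcome, candidate):
        """Candidate is a but-for cause if removing it prevents the outcome."""
        return outcome(conditions) and not outcome(conditions - {candidate})

    def ness(conditions, outcome, candidate):
        """Candidate is a NESS cause if it is a necessary element of at least
        one subset of the actual conditions that is sufficient for the outcome."""
        for size in range(1, len(conditions) + 1):
            for subset in map(set, combinations(conditions, size)):
                if candidate in subset and outcome(subset) and not outcome(subset - {candidate}):
                    return True
        return False

    # Overdetermination: either shot alone is sufficient for the death.
    conditions = {"shooter_A_fires", "shooter_B_fires"}
    outcome = lambda s: "shooter_A_fires" in s or "shooter_B_fires" in s

    print(but_for(conditions, outcome, "shooter_A_fires"))  # False: removing A changes nothing
    print(ness(conditions, outcome, "shooter_A_fires"))     # True: A is necessary within {A}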
... Other kinds of robot-involved death include death by autonomous vehicle (King 2018), and death by medical instrument failure or accident (Tucker 2018). Death with the involvement of robotic appendages and prostheses, or "killer robot arms," is another emerging venue (Gurney 2018). Robot-related lethal incidents can also be on a large, societal scale as well as in the context of the individual. ...
Article
Abstract: Manufacturing-, hospital-, and transportation-related deaths and injuries are, unfortunately, often among the grim effects of production and mobility. These deaths and injuries have not ceased to be a factor despite decades of efforts on safety and risk management. Deaths and injuries that are associated with robots and other autonomous entities are often placed in a different light than other sorts of incidents, with themes and images of murder, domination, and malice introduced from science fiction and popular discourse into these events. Potentials for anti-robot backlash and security breaches have apparently also increased, despite extensive research on how to make robots more palatable and attractive to human workers. This article explores how deaths and injuries by robots and autonomous systems have been distinguished from other kinds of lethal incidents; it examines the implications of these assignments for how the incidents are handled in terms of safety and risk assessment, as well as in discourse on work itself. The kinds of methodical and detailed after-crash analyses applied to airline accidents are needed to analyze incidents related to autonomous entities. As the kinds and numbers of robots and autonomous systems (including self-driving vehicles and individuals’ prostheses) increase, variations in the narrative themes associated with these deaths are developing, and efforts to foster more accepting attitudes on the part of humans to robots have expanded, despite the value of the human survival tendencies that are linked with healthy distrust and distance. The article discusses how “death by robot” narratives are employed in efforts to characterize workplace and infrastructure automation issues, including the prospects for subsequent anti-robot sabotage or destruction on the part of workers. Promotional rather than protective design strategies and social discourse could unfortunately influence workers to be at ease with robots that are potentially unpredictable and dangerous. The article explores how the robot-human death connection has the potential to shape the character of many workplaces, and projects futures in which the “wildness” and eccentricities of both robots and humans can coexist safely and be respected.
Keywords: Automation, Psychological Research, Ethics, Death, Safety, Robots, Unemployment, Change Management, Privacy, Artificial Intelligence, Human Systems
... Neurotechnologies that seek to enhance people could produce inequalities between people who can afford the devices and those who cannot. Furthermore, integrating neurotechnologies into human beings to enhance valued abilities or introduce new ones challenges fundamental understandings of what grounds human dignity, such as autonomy or agency (Gilbert, 2015; Goering et al., 2017; Gurney, 2018), authenticity (Kraemer, 2013b; Mackenzie and Walker, 2015), and virtue (Jotterand, 2011). ...
Article
The development of novel neurotechnologies, such as brain-computer interface (BCI) and deep-brain stimulation (DBS), is very promising in improving the welfare and life prospects of many people. These include life-changing therapies for medical conditions and enhancements of cognitive, emotional, and moral capacities. Yet there are also numerous moral risks and uncertainties involved in developing novel neurotechnologies. For this reason, the progress of novel neurotechnology research requires that diverse publics place trust in researchers to develop neural interfaces in ways that are overall beneficial to society and responsive to ethical values and concerns. In this article, we argue that researchers and research institutions have a moral responsibility to foster and demonstrate trustworthiness with respect to broader publics whose lives will be affected by their research. Using Annette Baier’s conceptual analysis of trust, which takes competence and good will to be its central components, we propose that practices of ethical reflexivity could play a valuable role in fostering the trustworthiness of individual researchers and research institutions through building and exhibiting their moral competence and good will. By ethical reflexivity, we mean the reflective and discursive activity of articulating, analyzing, and assessing the assumptions and values that might be underlying their ethical actions and projects. Here, we share an ethics dialog tool—called the Scientific Perspectives and Ethics Commitments Survey (or SPECS)—developed by the University of Washington’s Center for Neurotechnology (CNT) Neuroethics Thrust. Ultimately, the aim is to show the promise of ethical reflexivity practices, like SPECS, as a method of enhancing trustworthiness in researchers and their institutions that seek to develop novel neurotechnologies for the overall benefit of society.
Chapter
Brain co-processors are brain–computer interfaces that use artificial intelligence (AI) to convert brain activity patterns into brain stimulation patterns for restoring or augmenting brain function. Brain co-processors can be used for medical applications such as rewiring neural circuits for rehabilitation after brain injury, modulating brain activity to treat depression, and reanimating paralyzed limbs. They can also be used for human augmentation, e.g., accelerating learning or enhancing memory. We explore the ethical, moral, and social justice issues arising from the development of brain co-processor technologies. The interaction between AI and the brain in these technologies brings up some unique challenges not previously considered in neurotechnology.
Keywords: Brain–machine interface, Artificial intelligence, Brain co-processor, Human augmentation, Prosthesis, Brain-to-brain interface, Neuromarketing, Consent, Identity, Agency
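As a rough aid to the first sentence of this abstract, the toy Python sketch below shows the closed loop that a brain co-processor implies (record, decode, apply an AI policy, stimulate). The function names and the trivial stand-in "policy" are assumptions of the sketch, not the chapter's design.

    # Toy sketch of a brain co-processor loop; the decoder, policy, and encoder
    # below are placeholders for recording hardware, an AI model, and a stimulator.

    def decode_activity(neural_samples):
        # Summarize recorded activity as a single activity level.
        return sum(neural_samples) / len(neural_samples)

    def ai_policy(activity_level, target_level=0.5):
        # Trivial stand-in for a learned model: stimulate in proportion
        # to how far activity falls short of a desired target.
        return max(0.0, target_level - activity_level)

    def encode_stimulation(amplitude):
        # Convert the policy output into a stimulation pattern.
        return {"pulse_amplitude": amplitude}

    # One pass of the closed loop: record -> decode -> AI -> stimulate.
    recorded = [0.2, 0.3, 0.1]
    print(encode_stimulation(ai_policy(decode_activity(recorded))))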
Chapter
Brain–Computer Interface (BCI) technology is a promising research area in many domains. Brain activity can be interpreted through both invasive and noninvasive monitoring devices, allowing for novel, therapeutic solutions for individuals with disabilities and for other non-medical applications. However, a number of ethical issues have been identified from the use of BCI technology. In previous work published in 2020, we reviewed the academic discussion of the ethical implications of BCI technology in the previous 5 years by using a limited sample to identify trends and areas of concern or debate among researchers and ethicists. In this chapter, we provide an overview on the academic discussion of BCI ethics and report on the findings for the next phase of this work, which systematically categorizes the entire sample. The aim of this work is to collect and synthesize all the pertinent academic scholarship into the ethical, legal, and social implications (ELSI) of BCI technology. We hope this study will provide a foundation for future scholars, ethicists, and policy makers to understand the landscape of the relevant ELSI concepts and pave the way for assessing the need for regulatory action. We conclude that some emerging applications of BCI technology—including commercial ventures that seek to meld human intelligence with AI—present new and unique ethical concerns.
Keywords: Brain–computer interface (BCI); Brain–machine interface (BMI); Ethical, legal, and social issues (ELSI); Neuroethics; Scoping review
Chapter
Brain–Computer Interface (BCI) technology is a promising and rapidly advancing research area. It was initially developed in the context of early government-sponsored futuristic research in biocybernetics and human–machine interaction in the United States (US) [1]. This inspired Jacques Vidal to suggest providing a direct link between the inductive mental processes used in solving problems and the symbol-manipulating, deductive capabilities of computers, and to coin the term “Brain-Computer Interface” in his seminal paper published in 1973 [2]. Recent developments in BCI technology, based on animal and human studies, allow for the restoration and potential augmentation of faculties of perception and physical movement, and even the transfer of information between brains. Brain activity can be interpreted through both invasive and noninvasive monitoring devices, allowing for novel, therapeutic solutions for individuals with disabilities and for other non-medical applications. However, a number of ethical and policy issues have been identified in the context of the use of BCI technology, with the potential for near-future advancements in the technology to raise unique new ethical and policy questions that society has never grappled with before [3, 4]. Once again, the US is leading in the field with many commercial enterprises exploring different realistic and futuristic applications of BCI technology. For instance, a US company named Synchron recently received FDA approval to proceed with first-in-human trials of its endovascularly implanted BCI device [5].
Chapter
Robot-inflicted deaths and injuries are often, unfortunately, among the grim side effects of production and mobility despite decades of efforts on safety and risk management. Deaths and injuries that are associated with robots and other autonomous entities are often placed in a different light than other sorts of incidents in dramaturgical perspective; the sense of these deaths as being engendered outside of human control often intensifies their personal and social impacts. Themes and images of murder, domination, and malice introduced from science fiction and popular discourse often emerge along with expressed feelings of creepiness that are akin to those associated with monsters and zombies. Trauma associated with these robotic attacks for onlookers, associates, and first responders can be devastating and have lasting impacts. Potentials for associated anti-robot backlash and security breaches have also increased, despite extensive research on how to make robots more palatable and attractive to humans.
Article
Brain-computer interfaces (BCIs) are an emerging technology that can read the brain signals of users, derive behavioral intentions, and manifest them in the control of electronic technologies. While there are potentially great benefits of this technology, there may also be risks associated with its use. However, it is not clear if these risks are being considered in the early BCI literature. This systematic review aimed to identify the scope and types of BCI risks discussed in the literature and the methods used to identify these risks. Following the PRISMA protocol, 1184 articles were initially identified with the final selection of 58 published articles following systematic exclusion. Analysis of the included articles derived 20 different risks, which were categorized into seven risk themes spanning physical health risks through to legal and societal concerns. Only one study in the review used a method of risk assessment, with most articles identifying risks through discussion and opinion pieces. The findings highlight a lack of an empirical and comprehensive understanding of the risks that BCI technology could pose. It is concluded that further work is necessary to proactively identify BCI risks using formal risk assessment methods to inform early users and to direct risk control measures for BCI developers and regulators.