Figure - uploaded by Markus Kneer
Ethical Theory Types Taxonomy Dimension

Source publication
Article
Full-text available
Increasingly complex and autonomous systems require machine ethics to maximize the benefits and minimize the risks to society arising from the new technology. It is challenging to decide which type of ethical theory to employ and how to implement it effectively. This survey provides a threefold contribution. First, it introduces a trimorphic taxono...

Contexts in source publication

Context 1
... was clear that both a dimension referring to the implementation object (the ethical theory; cf. Table 1) and a dimension regarding the technical aspects of implementing those theories (cf. Table 4) were necessary. ...
Context 2
... on the distinct types of ethical theories introduced above, this sub-section develops a simple typology of ethical machines, summarized in Table 1. ...
Context 3
... In the context of machine ethics, the focus is solely on agent-relative duties. Hence, no distinction is made between agent-centered and patient-centered theories of deontological ethics in the taxonomy summarized in Table 1. ...

Citations

... Bottom-up ethics, in which AI systems learn values from interaction with other moral subjects and then seek to imitate ethical behavior, in contrast with "top-down ethics" that are specified by AI system designers [1, 18, 19]. Top-down ethics would not classify as social choice unless the designers select a social choice framework. ...
... The challenge of social choice manipulation, alongside other concerns about social choice, could be resolved by limiting the scope of what social choice is used for, with some other type(s) of ethics used elsewhere. AI systems can be designed with a hybrid of social choice and non-social choice frameworks [19,22]. For example, an AI system could be designed to focus mainly on maximizing total experienced utility, and to also use social choice wherever doing so does not significantly reduce total experienced utility. ...
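For readers wanting to see how such a hybrid might look in practice, here is a minimal sketch of a decision rule along the lines the excerpt describes: utility maximization as the default, with a social-choice vote only among actions whose utility is close to the maximum. The candidate actions, utility figures, ballots, and tolerance threshold are hypothetical illustrations, not the cited paper's design.

```python
# Hypothetical sketch of a hybrid decision rule: maximize estimated total
# utility by default, but defer to a social-choice (plurality) vote among the
# actions whose utility is within a small tolerance of the maximum, i.e. where
# honouring the vote does not significantly reduce total utility.
from collections import Counter

def hybrid_choice(utilities, ballots, tolerance=0.05):
    """utilities: {action: estimated total experienced utility}
    ballots: each individual's most-preferred action
    tolerance: maximum fractional utility loss accepted to honour the vote"""
    best_action = max(utilities, key=utilities.get)
    best_utility = utilities[best_action]
    # Actions whose estimated utility is "close enough" to the maximum.
    acceptable = {a for a, u in utilities.items()
                  if u >= best_utility * (1 - tolerance)}
    # Plurality vote restricted to acceptable actions; fall back to pure
    # utility maximization if no ballot names an acceptable action.
    votes = Counter(b for b in ballots if b in acceptable)
    return votes.most_common(1)[0][0] if votes else best_action

# Illustrative usage (all names and numbers are invented):
utilities = {"policy_a": 100.0, "policy_b": 97.0, "policy_c": 60.0}
ballots = ["policy_b", "policy_b", "policy_a", "policy_c"]
print(hybrid_choice(utilities, ballots))  # -> "policy_b"
```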
Article
Full-text available
Work on AI ethics often calls for AI systems to employ social choice ethics, in which the values of the AI are matched to the aggregate values of society. Such work includes the concepts of bottom-up ethics, coherent extrapolated volition, human compatibility, and value alignment. This paper describes a major challenge that has previously gone overlooked: the potential for aggregate societal values to be manipulated in ways that bias the values held by the AI systems. The paper uses a “red teaming” approach to identify the various ways in which AI social choice systems can be manipulated. Potential manipulations include redefining which individuals count as members of society, altering the values that individuals hold, and changing how individual values are aggregated into an overall social choice. Experience from human society, especially democratic government, shows that manipulations often occur, such as in voter suppression, disinformation, gerrymandering, sham elections, and various forms of genocide. Similar manipulations could also affect AI social choice systems, as could other means such as adversarial input and the social engineering of AI system designers. In some cases, AI social choice manipulation could have catastrophic results. The design and governance of AI social choice systems needs a separate ethical standard to address manipulations, including to distinguish between good and bad manipulations; such a standard affects the nature of aggregate societal values and therefore cannot be derived from aggregate societal values. Alternatively, designers of AI systems could use a non-social choice ethical framework.
... While scholars in machine ethics and ethical artificial intelligence are focussing on defining ethical agents able to make ethical choices based on automated decision procedures (cf. Alonso [4]; Bostrom and Yudkowsky [5]; Moor [7]; Tolmeijer et al. [6]), rational choice theory is generally presented as a standard way of assessing and managing decision making under risks and uncertainty in artificial intelligence (Bales [30]; Kochenderfer [15]; Russell and Norvig [10]). Returning to the Artificial Rescue Coordination Center, how would a machine decide what ought to be done on the grounds of expected utility in the long run? ...
... For a given set of parameters (x, pr, and n), we studied the percentage of sequences that resulted in a total utility that was below the benchmark value and, following Thoma, we took this percentage as the probability (understood here as the relative frequency given 10,000 simulations) of saving fewer lives in the long run than if one always went for Accident 1. For the sake of the thought experiment and as in Thoma's analysis, we considered 0.5% as the cut-off point below which it would be irrational to not always choose Accident 2.⁶ Table 2 shows variations on length n for different values of x and Thoma's initial parameter pr = 0.5.
... Indeed, given a fixed probability pr, greater values of n are required for smaller values of x to reach the benchmark value. Similarly, in ... ⁶ While from the perspective of automated decision making it would be crucial to determine the cut-off point below which it would be irrational not to choose Accident 2, this value is (for the moment) instrumental here and will only be used to exemplify the importance of the parameters in the simulations (see Peterson and Broersen [31]). ...
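To make the simulation procedure described in these excerpts concrete, here is a hedged Monte Carlo sketch. It assumes, purely for illustration, that Accident 1 saves one life for certain while Accident 2 saves x lives with probability pr and none otherwise, and that the benchmark is the n lives saved by always choosing Accident 1; these modelling choices are assumptions, not a reproduction of Thoma's or the cited paper's exact setup.

```python
# Hedged Monte Carlo sketch of the estimate described in the excerpts.
# Assumed (hypothetical) setup: Accident 1 saves 1 life for certain, while
# Accident 2 saves x lives with probability pr and 0 lives otherwise; the
# benchmark is the n lives that always choosing Accident 1 would save.
import random

def prob_worse_than_benchmark(x, pr, n, runs=10_000, seed=0):
    """Relative frequency of simulated sequences in which always choosing
    Accident 2 saves fewer lives over n choices than always choosing Accident 1."""
    rng = random.Random(seed)
    benchmark = n  # lives saved by always choosing Accident 1 (1 per choice)
    worse = 0
    for _ in range(runs):
        total = sum(x if rng.random() < pr else 0 for _ in range(n))
        if total < benchmark:
            worse += 1
    return worse / runs

# Illustrative usage with Thoma's initial parameter pr = 0.5:
estimate = prob_worse_than_benchmark(x=3, pr=0.5, n=100)
print(estimate)  # compare against the 0.5% cut-off discussed in the excerpt
```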
Article
Full-text available
Part of the literature on machine ethics and ethical artificial intelligence focuses on the idea of defining autonomous ethical agents able to make ethical choices and solve dilemmas. While ethical dilemmas often arise in situations characterized by uncertainty, the standard approach in artificial intelligence is to use rational choice theory and maximization of expected utility to model how algorithms should choose given uncertain outcomes. Motivated by the moral proxy problem, which proposes that the appraisal of ethical decisions varies depending on whether algorithms are considered to act as proxies for higher- or for lower-level agents, this paper introduces the moral prior problem, a limitation that, we believe, has been genuinely overlooked in the literature. In a nutshell, the moral prior problem amounts to the idea that, beyond the thesis of the value-ladenness of technologies and algorithms, automated ethical decisions are predetermined by moral priors during both conception and usage. As a result, automated decision procedures are insufficient to produce ethical choices or solve dilemmas, implying that we need to carefully evaluate what autonomous ethical agents are and can do, and what they aren’t and can’t.
... 5 For a dated but still helpful overview of the top-down approach (and its associated challenges), see Wallach and Allen (2008). For a more recent discussion of these challenges with the top-down approach, see Wallach et al. (2020); Tolmeijer et al. (2020). In fact, the challenges associated with the top-down implementation of evaluative constraints have proven so difficult that few if any modern approaches to implementing evaluative constraints in AI systems attempt to use the top-down approach, though some do use top-down rules in a "hybrid" system. ...
Article
Full-text available
The Morality First strategy for developing AI systems that can represent and respond to human values aims to first develop systems that can represent and respond to moral values. I argue that Morality First and other X-First views are unmotivated. Moreover, if one particular philosophical view about value is true, these strategies are positively distorting. The natural alternative according to which no domain of value comes “first” introduces a new set of challenges and highlights an important but otherwise obscured problem for e-AI developers.
... Advances in logical reasoning have raised expectations for ethical behaviour in artificial agents [2,3]. Progress has been notable in utilizing various logics to mechanize moral problem-solving [21,30], including addressing classic dilemmas like the Trolley Problem [29]. This dilemma involves a runaway trolley heading towards several individuals, and an observer must decide whether to divert it to save five lives at the expense of causing harm to one person on an alternate track. ...
... Fourth, a contribution is made to machine ethics [7]. The paper goes beyond previous studies that have considered different forms of normative ethics [8], i.e., normative statements about what should be done, by focusing on the behavioral ethics of what people actually do when under pressure [9-12], such as driving faster than speed limits and/or across road junctions as traffic lights turn from amber to red [13]. Behavioral ethics is fundamentally an ecological analytical framework within which people might not adhere to regulations, etc., if adhering would be unfair because it would undermine their survival in their preferred states. ...
Article
Full-text available
Hybrid machine learning encompasses predefinition of rules and ongoing learning from data. Human organizations can implement hybrid machine learning (HML) to automate some of their operations. Human organizations need to ensure that their HML implementations are aligned with human ethical requirements as defined in laws, regulations, standards, etc. The purpose of the study reported here was to investigate technical opportunities for representing human ethical requirements in HML. The study sought to represent two types of human ethical requirements in HML: locally simple and locally complex. The locally simple case is road traffic regulations. This can be considered a relatively simple case because human ethical requirements for road safety, such as stopping at red traffic lights, are defined clearly and have limited scope for personal interpretation. The locally complex case is diagnosis procedures for functional disorders, which can include medically unexplained symptoms. This case can be considered locally complex because human ethical requirements for functional disorder healthcare are less well defined and are more subject to personal interpretation. Representations were made in a type of HML called Algebraic Machine Learning. Our findings indicate that there are technical opportunities to represent human ethical requirements in HML because of its combination of human-defined top-down rules and bottom-up data-driven learning. However, our findings also indicate that there are limitations to representing human ethical requirements, irrespective of what type of machine learning is used. These limitations arise from fundamental challenges in defining complex ethical requirements, and from the potential for opposing interpretations of their implementation. Furthermore, locally simple ethical requirements can contribute to wider ethical complexity.
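A generic sketch of the hybrid pattern the abstract describes, i.e. human-defined top-down rules wrapped around a bottom-up learned policy, is given below. It is not Algebraic Machine Learning; the learned_policy stub, the red-light and speed-limit rules, and the observation fields are illustrative assumptions only.

```python
# Generic sketch of the hybrid pattern the abstract describes: human-defined
# top-down rules wrapped around a bottom-up learned policy. This is NOT
# Algebraic Machine Learning; the learned_policy stub, the two rules, and the
# observation fields are illustrative assumptions only.

def learned_policy(observation):
    """Placeholder for a data-driven driving policy (assumed to exist)."""
    return "proceed" if observation.get("clear_road") else "slow_down"

RULES = [
    # (condition, forced_action): clearly defined, locally simple requirements.
    (lambda obs: obs.get("traffic_light") == "red", "stop"),
    (lambda obs: obs.get("speed", 0) > obs.get("speed_limit", float("inf")),
     "slow_down"),
]

def hybrid_decide(observation):
    """Top-down rules take precedence; otherwise defer to the learned policy."""
    for condition, forced_action in RULES:
        if condition(observation):
            return forced_action
    return learned_policy(observation)

print(hybrid_decide({"traffic_light": "red", "clear_road": True}))    # -> "stop"
print(hybrid_decide({"traffic_light": "green", "clear_road": True}))  # -> "proceed"
```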
... Despite these existing technical approaches and the (inevitable) emergence of AMAs, the true compatibility of morality and computing remains challenged [42], leaving some barriers to successfully implementing computational ethics. In particular, it is questionable how exactly we would program ethical decision-making into machines and whether "ethics [is] the sort of thing that is amenable to programming" [23, p. 1], among others, due to its abstract nature [43] and context-dependency [18]. In addition to (or maybe precisely because of) these existing challenges, scholars have raised the question of whether we should pursue codifying ethics and the development of AMAs at all [28]. ...
Article
Full-text available
AI systems are increasingly put into contexts where computed decisions must be guided by ethical considerations. To develop ethically grounded algorithms and technologies, scholars have suggested computational ethics as an essential frontier, which aims to translate ethical principles into computer code. However, computational ethics has received little attention in academic literature so far, with existing work mainly focusing on its technical implementation, while many open questions concerning its (societal and ethical) implications still need to be resolved. Therefore, in this study, we interviewed 12 experts from philosophy, AI and cognitive sciences to shed light on computational ethics beyond a technical perspective. Findings suggest that the supporting and opposing arguments indicated by the experts can be clustered into pragmatic/practical, societal and epistemic reasons, all of which need to be contemplated when engaging in computational ethics and developing resulting artificial moral agents. Furthermore, the recommendations mentioned for companies’ technological design and development, for industry’s governance measures and for academia’s research endeavors are recapitulated and summarized in a holistic framework that aims to facilitate a reflected implementation of ‘ethics in and by design’ in the future.
... Two recent reviews in the psychological literature took stock of some of the garnered insights (Bonnefon et al., 2024; Ladak et al., 2023), and several other reviews have surveyed some of the core questions and initial answers (Bigman et al., 2019; Malle, 2016; Misselhorn, 2018; Pereira & Lopes, 2020). The range of questions is broad: how to design machines that follow norms and make moral judgments and decisions (Cervantes et al., 2020; Malle & Scheutz, 2019; Tolmeijer et al., 2021) and how humans do and will perceive such (potential) moral machines (Malle et al., 2015; Shank & DeSanti, 2018; Stuart & Kneer, 2021); legal and ethical challenges that come with robotics (Lin et al., 2011), such as challenges posed by social robots (Boada et al., 2021; Salem et al., 2015), autonomous vehicles (Bonnefon et al., 2016; Zhang et al., 2021), autonomous weapons systems (Galliott et al., 2021), and large language models (Harrer, 2023; Yan et al., 2024); deep concerns over newly developed algorithms that perpetuate sexism, racism, or ageism; and tension over the use of robots in childcare, eldercare, and health care, which is both sorely needed and highly controversial (Sharkey & Sharkey, 2010; Sio & Wynsberghe, 2015). ...
Chapter
The term moral psychology is commonly used in at least two different senses. In the history of philosophy, moral psychology has referred to a branch of moral philosophy that addresses conceptual and theoretical questions about the psychological basis of morality, often (but not always) from a normative perspective (Tiberius, 2015). In the empirical investigations of psychology, anthropology, sociology, and adjacent fields, moral psychology has examined the cognitive, social, and cultural mechanisms that serve moral judgment and decision making, including emotions, norms, and values, as well as biological and evolutionary contributions to the foundations of morality. Since 2010, over 6,000 articles in academic journals have investigated the nature of morality from a descriptive-empirical perspective, and this is the perspective the current handbook emphasizes. Our overarching goal in this volume, however, is to bring philosophical and psychological perspectives on moral psychology into closer contact while maintaining a commitment to empirical science as the foundation of evidence. Striving toward this goal, we have tried to cast a wide net of questions and approaches, but naturally we could not cover all topics, issues, and positions. We offer some guidance to omitted topics later in this introduction, which we hope will allow the reader to take first steps into those additional domains. The chapters try to strike a balance between being up to date in a fast-moving field and making salient insights that have garnered attention for an extended time.
... But, even these authors do not provide any substantial arguments for accepting these claims. ⁵ Tolmeijer et al. (2020) are an exception with regard to predictability, but they merely mention that a machine that is trained via a top-down approach would have predictable behavior. They do not discuss ... seems worth pointing out, and I will explore some ways of thinking about how these qualities could be particularly advantageous in ethical AI. ...
Article
Full-text available
This paper concerns top-down approaches in machine ethics. It is divided into three main parts. First, I briefly describe top-down design approaches, and in doing so I make clear what those approaches are committed to and what they involve when it comes to training an AI to behave ethically. In the second part, I formulate two underappreciated motivations for endorsing them, one relating to predictability of machine behavior and the other relating to scrutability of machine decision-making. Finally, I present three major worries about such approaches, and I attempt to show that advocates of top-down approaches have some plausible avenues of response. I focus most of my attention on what I call the ‘technical manual objection’ to top-down approaches, inspired by the work of Annas (2004). In short, the idea is that top-down approaches treat ethical decision-making as being merely a matter of following some ethical instructions in the same way that one might follow some set of instructions contained in a technical manual (e.g., computer manual), and this invites sensible skepticism about the ethical wisdom of machines that have been trained on those approaches. I respond by claiming that the objection is successful only if it is understood as targeting machines that have certain kinds of goals, and it should not compel us to totally abandon top-down approaches. Such approaches could still be reasonably employed to design ethical AI that operate in contexts that include fairly noncontroversial answers to ethical questions. In fact, we should prefer top-down approaches when it comes to those types of context, or so I argue, due to the advantages I claim for them.
... Machine ethics is thus the study of generating ethically permissible action plans for autonomous agents. While recent studies of machine ethics typically assume a fully observable environment [23], such an assumption is not realistic for decision-making because real-world implementations will always include noise and uncertainty. ...
Chapter
Full-text available
Recent developments in computational machine ethics have adopted the assumption of a fully observable environment. However, such an assumption is not realistic for the ethical decision-making process. Epistemic reasoning is one approach to deal with a non-fully observable environment and non-determinism. Current approaches to computational machine ethics require careful designs of aggregation functions (strategies). Different strategies to consolidate non-deterministic knowledge will result in different actions determined to be ethically permissible. However, recent studies have not tried to formalise a proper evaluation of these strategies. On the other hand, strategies for a partially observable universe are also studied in the game theory literature, with studies providing axioms, such as Linearity and Symmetry, to evaluate strategies in situations where agents need to interact with the uncertainty of nature. Despite the resemblance, strategies from game theory have not been applied to machine ethics. Therefore, in this study, we propose to adopt four game-theoretic strategies in three approaches to machine ethics with epistemic reasoning so that machines can navigate complex ethical dilemmas. With our formalisation, we can also evaluate these strategies using the proposed axioms and show that a particular aggregation function is more volatile in a specific situation but more robust in others.
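As an illustration of what aggregation strategies for a non-fully observable environment can look like, the sketch below implements four classic decision-under-uncertainty rules (Wald/maximin, maximax, Laplace, and Hurwicz) over hypothetical permissibility scores. These textbook rules are stand-ins; the chapter's own four game-theoretic strategies and its evaluation axioms may differ.

```python
# Illustrative decision rules for acting under a non-fully observable
# environment: Wald/maximin, maximax, Laplace, and Hurwicz. These textbook
# strategies are stand-ins; the chapter's own four game-theoretic strategies
# and its evaluation axioms may differ. The payoff numbers are invented.

def wald(payoffs):        # pessimistic: pick the best worst-case outcome
    return max(payoffs, key=lambda a: min(payoffs[a]))

def maximax(payoffs):     # optimistic: pick the best best-case outcome
    return max(payoffs, key=lambda a: max(payoffs[a]))

def laplace(payoffs):     # treat all states of nature as equally likely
    return max(payoffs, key=lambda a: sum(payoffs[a]) / len(payoffs[a]))

def hurwicz(payoffs, alpha=0.5):  # blend optimism and pessimism
    return max(payoffs,
               key=lambda a: alpha * max(payoffs[a]) + (1 - alpha) * min(payoffs[a]))

# Ethical permissibility scores of each action across possible, unobserved world states.
payoffs = {
    "warn_and_wait": [3, 3, 2],
    "intervene":     [5, 1, 4],
    "do_nothing":    [2, 2, 2],
}
for strategy in (wald, maximax, laplace, hurwicz):
    print(strategy.__name__, strategy(payoffs))
```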
... When regulating societies - be they human or multi-agent - it is essential to acknowledge that actions carry ethical implications. Along with the Machine Ethics literature (Anderson & Anderson, 2011; Tolmeijer et al., 2021; Bostrom & Yudkowsky, 2011; Svegliato et al., 2021), there are different initiatives considering these ethical implications and advocating for beneficial and trustworthy Artificial Intelligence (Russell et al., 2015; Chatila et al., 2021). For instance, the European Commission has proposed both Ethics Guidelines for Trustworthy AI (European Commission, 2019) and the Artificial Intelligence Act (European Commission, 2023). Additionally, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (IEEE Standards Association, 2016), with a committee devoted to "Embedding Values into Autonomous Intelligent Systems", considers moral values as a first-class criterion. ...
Article
Full-text available
Norms have been widely enacted in human and agent societies to regulate individuals’ actions. However, although legislators may have ethics in mind when establishing norms, moral values are only sometimes explicitly considered. This paper advances the state of the art by providing a method for selecting the norms to enact within a society that best align with the moral values of such a society. Our approach to aligning norms and values is grounded in the ethics literature. Specifically, from the literature’s study of the relations between norms, actions, and values, we formally define how actions and values relate through the so-called value judgment function and how norms and values relate through the so-called norm promotion function. We show that both functions provide the means to compute value alignment for a set of norms. Moreover, we detail how to cast our decision-making problem as an optimisation problem: finding the norms that maximise value alignment. We also show how to solve our problem using off-the-shelf optimisation tools. Finally, we illustrate our approach with a specific case study on the European Value Study.
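To illustrate the optimisation framing in the abstract, here is a toy sketch that enumerates subsets of candidate norms and keeps the one with the highest weighted value-alignment score. The value weights, norm promotion scores, and exclusivity constraint are invented placeholders rather than the paper's value judgment and norm promotion functions, and a brute-force search stands in for the off-the-shelf optimisation tools mentioned.

```python
# Toy sketch of the optimisation framing: choose the subset of candidate norms
# with the highest weighted value-alignment score. The weights, promotion
# scores, and exclusivity constraint are invented placeholders, not the paper's
# value judgment and norm promotion functions; brute force stands in for an
# off-the-shelf solver.
from itertools import combinations

VALUE_WEIGHTS = {"fairness": 0.5, "security": 0.3, "freedom": 0.2}

# norm -> how strongly it promotes (+) or demotes (-) each value
NORM_PROMOTION = {
    "mandatory_id_checks": {"fairness": 0.1, "security": 0.8, "freedom": -0.6},
    "open_data_access":    {"fairness": 0.6, "security": -0.2, "freedom": 0.7},
    "progressive_fines":   {"fairness": 0.8, "security": 0.1, "freedom": 0.0},
}

MUTUALLY_EXCLUSIVE = [{"mandatory_id_checks", "open_data_access"}]

def alignment(norms):
    """Weighted sum of value promotion over the selected norms."""
    return sum(VALUE_WEIGHTS[v] * NORM_PROMOTION[n][v]
               for n in norms for v in VALUE_WEIGHTS)

def feasible(norms):
    return not any(pair <= norms for pair in MUTUALLY_EXCLUSIVE)

def best_norm_set(candidates):
    subsets = (set(c) for r in range(len(candidates) + 1)
               for c in combinations(candidates, r))
    return max((s for s in subsets if feasible(s)), key=alignment)

print(best_norm_set(NORM_PROMOTION))  # -> {'open_data_access', 'progressive_fines'}
```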