Overview of Bentham's felicific calculus variables and example application to starting antimicrobial treatment.

Source publication
Preprint
Artificial intelligence (AI) assisting with antimicrobial prescribing raises significant moral questions. Utilising ethical frameworks alongside AI-driven systems, while considering infection-specific complexities, can support moral decision-making to tackle antimicrobial resistance.

Citations

... A great deal of effort is currently being expended on developing risk prediction models for individuals and patient groups using a variety of approaches, ranging from genomics and metabonomics through to socioeconomic phenotyping [1–6]. In the domain of healthcare, the expansion in predictive modelling research is paired with rapidly emerging concerns about the ethical use of such methods, particularly artificial intelligence (AI) [7–9]. These include concerns around data privacy, algorithmic fairness, bias, safety, informed consent, and transparency [7, 9–12], which the medical profession may be unprepared to navigate [12]. Accordingly, international bodies have started taking action to address concerns around medical AI and automation. ...
... Furthermore, algorithms developed from specific patient cohorts may not translate well to populations in different parts of the world, with different demographics or baseline medical conditions [22, 25, 27–31]. These are complex concepts and not necessarily intuitive, even to experts in clinical and technical disciplines [9]. The ethical implications of this knowledge may be uncertain [125] and, hence, the appropriateness of disclosing this information may not be straightforward. ...
Article
An appropriate ethical framework around the use of Artificial Intelligence (AI) in healthcare has become a key desideratum with the increasingly widespread deployment of this technology. Advances in AI hold the promise of improving the precision of outcome prediction at the level of the individual. However, the addition of these technologies to patient–clinician interactions, as with any complex human interaction, has potential pitfalls. While physicians have always had to carefully consider the ethical background and implications of their actions, detailed deliberations may not have kept up with fast-moving technological progress. We use a common but key challenge in healthcare interactions, the disclosure of bad news (likely imminent death), to illustrate how the philosophical framework of the 'Felicific Calculus', developed in the eighteenth century by Jeremy Bentham, may have a timely quasi-quantitative application in the age of AI. We show how this ethical algorithm can be used to assess, across seven mutually exclusive and exhaustive domains, whether an AI-supported action can be morally justified.
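For illustration only, the sketch below encodes Bentham's seven felicific calculus variables (intensity, duration, certainty, propinquity, fecundity, purity, extent) as numeric scores and aggregates them with a simple unweighted sum. The -1 to +1 scale, the aggregation rule, the threshold, and the example scores for starting antimicrobial treatment are assumptions of this sketch, not the scoring scheme used in the article or the figure above.

```python
from dataclasses import dataclass, astuple

@dataclass
class FelicificScores:
    """Bentham's seven variables, each scored here on an assumed -1..+1 scale."""
    intensity: float    # how strong the resulting pleasure or pain is
    duration: float     # how long it lasts
    certainty: float    # how likely it is to occur
    propinquity: float  # how soon it occurs
    fecundity: float    # chance of being followed by sensations of the same kind
    purity: float       # chance of NOT being followed by opposite sensations
    extent: float       # how many people are affected

    def net_utility(self) -> float:
        """Unweighted sum across the seven domains (simplest possible aggregation)."""
        return sum(astuple(self))


def morally_justified(scores: FelicificScores, threshold: float = 0.0) -> bool:
    """Provisionally justify an action if its net utility exceeds the threshold."""
    return scores.net_utility() > threshold


# Hypothetical example: starting empirical antimicrobial treatment on AI advice.
start_treatment = FelicificScores(
    intensity=0.6, duration=0.4, certainty=0.5, propinquity=0.8,
    fecundity=0.3,
    purity=-0.2,   # risk of adverse effects / resistance counts against purity
    extent=-0.1,   # population-level resistance affects people beyond the patient
)
print(morally_justified(start_treatment))  # True in this illustrative case
```

In practice the seven domains would need clinically grounded weights and scales rather than the placeholder values used here; the sketch only shows how a quasi-quantitative, domain-by-domain assessment could be organised.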
... Another example is that it could infer subgoals, such as nefariously avoiding being turned off, like HAL in "2001: A Space Odyssey". Even if we tried to overcome MIs by programming a moral framework based on a rule-based positive account, whether deontological, consequentialist, or otherwise, we would still encounter such mistakes (as we show in Bolton et al. 2022). This is because there might be many circumstances we will not have accounted for (as we show in Post et al. 2022), but most importantly, remember the consequences of IP (above and in our work from 2017): different interpretations are always possible and thus, the spectre of satisfying the literal specification given while not bringing about the intended outcome always looms darkly above us. ...
Preprint
We present what we call the Interpretation Problem, whereby any rule in symbolic form is open to infinite interpretation in ways that we might disapprove of, and argue that any attempt to build morality into machines is subject to it. We show how the Interpretation Problem in Artificial Intelligence is an illustration of Wittgenstein's general claim that no rule can contain the criteria for its own application, and that the risks created by this problem escalate in proportion to the degree to which the machine is causally connected to the world, in what we call the Law of Interpretative Exposure. Using game theory, we attempt to define the structure of normative spaces and argue that any rule-following within a normative space is guided by values that are external to that space and which cannot themselves be represented as rules. In light of this, we categorise the types of mistakes an artificial moral agent could make into Mistakes of Intention and Instrumental Mistakes, and we propose ways of building morality into machines by getting them to interpret the rules we give in accordance with these external values, through explicit moral reasoning, the Show, not Tell paradigm, the adjustment of causal power and structure of the agent, and relational values. The ultimate aim is that the machine develop a virtuous character and that the impact of the Interpretation Problem is minimised.