Figure 1. AI Knowledge Map (AIKM). Source: Corea (2019), reproduced with permission courtesy of F. Corea.

Contexts in source publication

Context 1
... argument stems not from the belief that all healthcare needs will soon be taken care of by "robot doctors" (Chin-Yee & Upshur, 2019). Instead, the argument rests on the classic counterfactual definition of AI as an umbrella term for a range of techniques (summarised in Figure 1 below) that can be used to make machines complete tasks in a way that would be considered intelligent were they to be completed by a human. For example,¹ as mapped by Harerimana, Jang, Kim, and Park (2018), decision tree techniques can be used to diagnose breast cancer tumours (Kuo, Chang, Chen, & Lee, 2001); Support Vector Machine techniques can be used to classify genes (Brown et al., 2000) and diagnose Diabetes Mellitus (Barakat, Bradley, & Barakat, 2010); ensemble learning methods can predict outcomes for cancer patients (Kourou, Exarchos, Exarchos, Karamouzis, & Fotiadis, 2015); and neural networks can be used to recognise human movement (Jiang & Yin, 2015). ...
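To make the umbrella concrete, here is a minimal sketch of the first technique mentioned above applied to the task it is cited for: a decision tree classifier trained on scikit-learn's bundled Wisconsin breast cancer dataset. It is purely illustrative and assumes scikit-learn is installed; it is not the method or data of Kuo, Chang, Chen, and Lee (2001).

```python
# Illustrative sketch only: a decision tree on scikit-learn's bundled
# Wisconsin breast cancer dataset, standing in for the kind of
# tumour-diagnosis task cited above.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

# A shallow tree keeps the learned decision rules small enough to inspect.
clf = DecisionTreeClassifier(max_depth=4, random_state=42)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test),
                            target_names=["malignant", "benign"]))
```

The deliberately shallow depth is one reason decision trees remain attractive in clinical settings: the resulting rules can be printed and audited, unlike the opaque models discussed later in the citations below.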
Context 2
... example,¹ as mapped by Harerimana, Jang, Kim, and Park (2018), decision tree techniques can be used to diagnose breast cancer tumours (Kuo, Chang, Chen, & Lee, 2001); Support Vector Machine techniques can be used to classify genes (Brown et al., 2000) and diagnose Diabetes Mellitus (Barakat, Bradley, & Barakat, 2010); ensemble learning methods can predict outcomes for cancer patients (Kourou, Exarchos, Exarchos, Karamouzis, & Fotiadis, 2015); and neural networks can be used to recognise human movement (Jiang & Yin, 2015). From this perspective, AI represents a growing resource of interactive, autonomous, and often self-learning (in the machine learning sense, see Figure 1) agency that can be used on demand, presenting the opportunity for potentially transformative cooperation between machines and doctors (Bartoletti, 2019). ¹ For a full overview of all supervised and unsupervised Machine Learning techniques and their applications in healthcare, see Harerimana, Jang, Kim, and Park (2018); for a detailed look at the number of papers related to AI techniques and their clinical applications, see Tran et al. (2019). ...
Context 3
... Coupling: patients and their data are so strictly and interchangeably linked that the patients are their genetic profiles, latest blood results, personal information, allergies, etc. (Floridi, 2017a). What the legislation calls "data subjects" become "data patients"; ...

Similar publications

Preprint
Full-text available
We introduce an approach for training Variational Autoencoders (VAEs) that are certifiably robust to adversarial attack. Specifically, we first derive actionable bounds on the minimal size of an input perturbation required to change a VAE's reconstruction by more than an allowed amount, with these bounds depending on certain key parameters such as...
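The bounds themselves are only alluded to in this abstract, but the quantity they certify can be probed numerically. The toy sketch below is built on assumptions throughout: an invented, untrained PyTorch VAE architecture and an L2 notion of reconstruction change. It bisects for the smallest perturbation step that moves the reconstruction by more than an allowed amount tau, illustrating the question the preprint answers analytically, not its method.

```python
# Toy sketch (assumed architecture and norms): empirically estimate the
# smallest perturbation size that changes a VAE's reconstruction by more
# than an allowed amount tau. The cited preprint derives *analytic* bounds
# on this quantity; here we only probe it numerically.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, d_in=16, d_z=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU())
        self.mu = nn.Linear(32, d_z)  # use the posterior mean only (deterministic)
        self.dec = nn.Sequential(nn.Linear(d_z, 32), nn.ReLU(),
                                 nn.Linear(32, d_in))

    def reconstruct(self, x):
        return self.dec(self.mu(self.enc(x)))

@torch.no_grad()
def min_perturbation(vae, x, direction, tau, lo=0.0, hi=10.0, iters=30):
    """Bisect for the smallest eps along `direction` such that
    ||recon(x + eps*d) - recon(x)||_2 > tau."""
    base = vae.reconstruct(x)
    d = direction / direction.norm()
    for _ in range(iters):
        mid = (lo + hi) / 2
        delta = (vae.reconstruct(x + mid * d) - base).norm()
        lo, hi = (mid, hi) if delta <= tau else (lo, mid)
    return hi  # upper estimate of the minimal perturbation size

torch.manual_seed(0)
vae = TinyVAE()
x = torch.randn(16)
eps = min_perturbation(vae, x, torch.randn(16), tau=0.1)
print(f"estimated minimal perturbation size: {eps:.4f}")
```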

Citations

... Meta-analyses specific to ethical AI values in healthcare, for example (Goirand et al., 2021; Karimian et al., 2022; McLennan et al., 2018; Murphy et al., 2021; Six Dijkstra et al., 2020), overlap greatly with the general AI values. Only Morley et al. (2019) distinguish ethical patterns rather than values. Literature on HCAI features similar principles such as transparent, explainable, ethical, fair, trustworthy, responsible, and sustainable AI (Hartikainen et al., 2022). ...
... A domain of growing interest within this field has been argumentative dialogue systems, which are mostly designed to persuade users to adopt a particular point of view [12][13][14]. Although some topics may have a definitive scientific or societal consensus, ethically challenging issues often remain subjective and lack a unanimous position [15,16]. ...
Article
Full-text available
This paper explores the potential of a German-language chatbot to engage users in argumentative dialogues on ethically sensitive topics. Utilizing an argumentative knowledge graph, the chatbot is equipped to engage in discussions on the ethical implications of autonomous AI systems in hypothetical future scenarios in the fields of medicine, law, and self-driving cars. In a study with 178 student participants, we investigated the chatbot’s argumentation effect—its ability to offer new perspectives, gain user acceptance, and broaden users’ viewpoints on complex issues. The results indicated a substantial argumentation effect, with 13–21% of participants shifting their opinions to more moderate stances after interacting with the chatbot. This shift demonstrates the system’s effectiveness in fostering informed discourse and increasing users’ understanding of AI ethics. While the chatbot was well-received, with users acknowledging the quality of its arguments, we identified opportunities for improvement in its argument recognition capabilities. Despite this, our results indicate the chatbot’s potential as an educational tool in engaging users with the ethical dimensions of AI technology and promoting informed discourse.
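The paper does not publish its graph schema, so the following is a hypothetical sketch of the shape such an argumentative knowledge graph could take: claims as nodes linked by support and attack edges, with one chatbot move surfacing the counterarguments to a claim. All node texts, field names, and the traversal are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an argumentative knowledge graph: claims as nodes,
# with "supported_by" and "attacked_by" edges. Everything here is an
# illustrative assumption about how such a graph might be structured.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    supported_by: list["Claim"] = field(default_factory=list)
    attacked_by: list["Claim"] = field(default_factory=list)

root = Claim("Autonomous AI should assist triage in emergency medicine.")
con = Claim("Opaque triage decisions undermine patient trust.")
rebuttal = Claim("Explainability interfaces can expose the triage rationale.")

root.supported_by.append(Claim("Algorithmic triage can reduce waiting-time errors."))
root.attacked_by.append(con)
con.attacked_by.append(rebuttal)  # the rebuttal attacks the objection

def counterarguments(claim: Claim) -> list[str]:
    """One possible chatbot move: surface the claims attacking the current one."""
    return [c.text for c in claim.attacked_by]

print(counterarguments(root))
# -> ['Opaque triage decisions undermine patient trust.']
```

A dialogue turn that walks attack edges in this way is one plausible mechanism behind the "argumentation effect" the study measures: the system can always present a user with a perspective opposing their current stance.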
... In contemporary healthcare, several ADMs are used (Hass, 2019), including health monitoring systems (Rejab et al., 2014), treatment decision-making systems (Agarwal et al., 2010; Behera et al., 2019; Morley et al., 2019) and contact tracing applications (Martens et al., 2021). Specifically in this context, a differentiation can be made between ADMs tailored to medical staff (Aljaaf et al., 2015; Garg et al., 2005; H. Lee, 2014) and those tailored to laypeople (K. Lee et al., 2017; Lupton & Jutel, 2015). ...
Article
Full-text available
Algorithmic decision-making systems (ADMs) support an ever-growing number of decision-making processes. We conducted an online survey study in Flanders (n = 1,082) to understand how laypeople perceive and trust health ADMs. Inspired by the ability, benevolence, and integrity trustworthiness model (Mayer et al., 1995), this study investigated how trust is constructed in health ADMs. In addition, we investigated how trust construction differs between ADA Health (a self-diagnosis medical chatbot) and IBM Watson Oncology (a system that suggests treatments for cancer in hospitals). Our results show that accuracy and fairness are the biggest predictors of trust in both ADMs, whereas control plays a smaller yet significant role. Interestingly, control plays a bigger role in explaining trust in ADA Health than IBM Watson Oncology. Moreover, how appropriate people evaluate data-driven healthcare and how concerned they are with algorithmic systems prove to be good predictors for accuracy, fairness, and control in these specific health ADMs. The appropriateness of data-driven healthcare had a bigger effect with IBM Watson Oncology than with ADA Health. Overall, our results show the importance of considering the broader contextual, algorithmic, and case-specific characteristics when investigating trust construction in ADMs.
... By adhering to such ethical considerations, it is possible to ensure that clinical data results generated by AI technologies used in healthcare are accurate and credible for informed decision-making by medical practitioners (Dash et al., 2019). Promoting such ethical considerations makes it possible to achieve an informed clinical decision-making process that improves patient satisfaction with the care provided, thus boosting their trust and enhancing public confidence (Morley et al., 2019). A regulatory dimension associated with using AI in health data governance is the detection of risks that can lead to the loss of patients' data (Morley et al., 2020). ...
... A regulatory dimension associated with using AI in health data governance is the detection of risks that can lead to the loss of patients' data (Morley et al., 2020). It implies that healthcare facilities without sufficient capital resources to implement measures to protect patients' data, such as firewalls and encryption, may lose patients' trust and public confidence (Morley et al., 2019). The need for data privacy is a regulatory dimension that mandates medical professionals to exercise caution in the use and sharing of patients' healthcare data during care delivery (Morley et al., 2019). ...
... It implies that healthcare facilities without sufficient capital resources to implement measures to protect patients' data, such as firewalls and encryption, may lose patients' trust and public confidence (Morley et al., 2019). The need for data privacy is a regulatory dimension that mandates medical professionals to exercise caution in the use and sharing of patients' healthcare data during care delivery (Morley et al., 2019). Promoting patients' health data privacy when using AI can enhance their satisfaction level, thus boosting their trust and public confidence in the care delivery process. ...
Article
Full-text available
This paper examines the transformative impact of Artificial Intelligence (AI) in healthcare, delving into its potential and challenges. It analyses the integration of AI into medical practices, focusing on how it revolutionizes diagnostics, treatment planning, and patient care. AI deployment's ethical and legal implications in healthcare are critically assessed, highlighting the need for robust frameworks to safeguard patient privacy and data security. The paper advocates for interdisciplinary collaboration among healthcare professionals, ethicists, and legal experts to optimize AI's benefits while mitigating risks. It underscores the importance of continual education and policy development to adapt to the evolving landscape of AI in healthcare, aiming for improved patient outcomes and efficient healthcare delivery.
... Digital life has presented artificial intelligence as a solution to challenges in the healthcare sector. Morley et al. (2019) concluded that artificial intelligence raises further challenges of ethical consideration, regulation, and legal frameworks [14]. Ethical problems arise at six levels: individual, interpersonal, group, institutional, sectoral, and societal. ...
... Ethical problems arise at six levels: individual, interpersonal, group, institutional, sectoral, and societal. These levels of ethical challenges are classified as epistemic, normative, and overarching [14]. In the digital era, the use of artificial intelligence in medicine has facilitated a wide range of tasks in the prevention, diagnosis, and management of diseases. ...
... However, besides these contributions, the use of artificial intelligence has raised challenges concerning patient privacy. It also raises questions of the legal accountability of machines, mistaken decisions, and unfair behaviour [14]. The COVID-19 pandemic outbreak accelerated the use of digital access to medical consultation in the healthcare sector. ...
... More advanced ADM systems in healthcare are promoted by arguments that algorithm-driven systems can free up time for overworked professionals (Topol, 2019), reduce the risk of errors (Paredes, 2018), provide predictive analysis based on historical and real-time data (Pryce et al., 2018), and increase overall efficiency in the public sector (Accenture, 2018). Algorithms are said to make more objective, robust, and evidence-based clinical decisions (in terms of diagnosis, prognosis, or treatment recommendations) than humans can ever provide (Morley et al., 2019). ...
... Belief in the superiority of AI and technological solutions produced using ADM systems, including many semi-automated chatbots, can amplify the project of rationality and automation in clinical practices and alter traditional decision-making practices based on epistemic probability and prudence. AI and complex algorithmic systems represent a growing resource of interactive, autonomous, and often self-learning (in the machine-learning sense) agency, potentially transforming cooperation between machines and professionals by emphasizing the agency of machines (Morley et al., 2019). ...
Article
Full-text available
Phenomenologies are an important dimension of Management and Organization Studies (MOS). They are particularly helpful to understand organizing processes as experiences instead of mere representations or objectivations of the world. Yet several misunderstandings still pervade discussions about what they are and what they could bring. This Handbook offers a description of phenomenologies, post-phenomenologies, and anti-phenomenologies, and how they (could potentially) relate to ongoing debates in MOS. Rarely has a field of thought developed itself in such a paradoxical attempt to both extend and overcome its own seminal assumptions and directions. In this movement, phenomenologies have contributed to many external debates in cognitive sciences, interactional sociologies, process studies, economics, and geography. Beyond that, phenomenologies have often been a counterpoint, a reactive material to develop other thoughts. In the end, phenomenologies, post-phenomenologies, and anti-phenomenologies have contributed to descriptions far beyond the traditional views of organizations as pre-defined entities already there in the world. In this direction, after introducing the thoughts of several key phenomenologists, our book explores various phenomenological issues for MOS, including new ways of organizing, entrepreneurship, decentred management, robots, artificial intelligence, algorithms, alternative organizations, communities and communalization, managerial techniques, cinematographic organizing, among others. At this stage, numerous post-phenomenologists and anti-phenomenologists are also brought into a critical conversation with phenomenological constructs. Core conceptual issues, such as space, temporality, events, depth, ethics, embodiment, materiality, topology, imagination, techniques, emotions, or affects, are also included in this discussion.
... More recently, however, these debates about trust have been playing out in the context of medical AI [22,35,50,58]. Many interesting ethical issues are at stake in the deployment of automated technologies in a domain such as care [60,61]. Several factors make the medical field the paradigmatic setting for instances of trust: it is where we are most vulnerable and where we trust others with the most essential aspects of our lives, such as our bodies. ...
Article
Full-text available
In this paper, I argue that the only kind of trust that should be allocated to AI technologies, if any at all, is epistemic trust. Epistemic trust is defined by Wilholt (Br J Philos Sci 64:233–253, 2013) as the kind of trust that is allocated strictly in virtue of the capacities of the receiver as a provider of information or as a conveyer of knowledge. If, as Alvarado (2022, http://philsci-archive.pitt.edu/id/eprint/21243) argues, AI is first and foremost designed and deployed as an epistemic technology—a technology that is designed, developed and deployed to particularly and especially expand our capacities as knowers—then it follows that we trust AI, when we trust it, exclusively in its capacities as a provider of information and epistemic enhancer. Trusting it otherwise may betray conceptual confusion. As I will show, it follows that trust in AI cannot be modeled after any general kind of interpersonal trust, it cannot be modeled after trust in other technologies such as pharmaceuticals, and it cannot be modeled after the kind of trust we allocate to medical practitioners in their capacities as providers of care. It also follows that, even after it is established that epistemic trust is the only legitimate kind of trust to allocate to epistemic technologies, whether or not AI can, in fact, be trusted remains an open question.
... Despite its promises and benefits, the increasing relevance and adoption of AI in LTC and other domains of society has encouraged debate over the societal and ethical implications of introducing and scaling AI (Good, 1966; Morley et al., 2019; Rubeis, 2020; Russell et al., 2015; Tsamados et al., 2021; Zuboff, 2015). It is recognized that the use of AI can lead to more effective, efficient, and sometimes more transparent decisions than those made by human beings. ...
... Second, this paper does not argue for the exclusion (or inclusion) of AI systems from the diagnostic processes: this is a subject of a separate inquiry.⁴⁰ The goal here is only to contribute to the discussion of their proper use and responsible employment. Various pattern recognition techniques may be invaluable for certain parts of this process, when they are used with full comprehension of their role, possibilities, and limitations. ...
... For an overview of ethical questions about AI in healthcare see, e.g., Trocin et al. [56], Burr et al. [11], Morley et al. [40], and Stahl and Coeckelbergh [50].³ On the difference between moral and non-moral responsibility see, e.g., Tigard [54] and [4]. ...
... On the impotence of contesting AI diagnosis in the patient-centric paradigm, see Ploug and Holm [42]. ⁴⁰ For example, Tigard [54] presents arguments against the deployment of such high-risk systems in health care. ...
Article
Full-text available
Responsible professional use of AI implies the readiness to respond to and address—in ethically appropriate manner—harm that may be associated with such use. This presupposes the ownership of mistakes. In this paper, I ask if a mistake in AI-enhanced decision making—such as AI-aided medical diagnosis—can be attributed to the AI system itself, and answer this question negatively. I will explore two options. If AI systems are merely tools, then we are never justified to attribute mistakes to them, because their failing does not meet rational constraints on being mistaken. If, for the sake of the argument, we assume that AI systems are not (mere) tools, then we are faced with certain challenges. The first is the burden to explain what this more-than-a-tool role of an AI system is, and to establish justificatory reasons for the AI system to be considered as such. The second is to prove that medical diagnosis can be reduced to the calculations by AI system without any significant loss to the purpose and quality of the diagnosis as a procedure. I will conclude that the problem of the ownership of mistakes in hybrid decision making necessitates new forms of epistemic responsibilities.
... With advances in machine learning (ML) and artificial intelligence (AI), systems learn and adapt by themselves. This means that retrospectively, it can be difficult for people to understand how scores and goals were classified (Morley et al., 2019). Equally, it makes it difficult to evaluate the implications of automated decisions. ...
... This suggests that people have set views of what they consider the self to be, causing conflict when these views don't match up with current representations of the self. AI may offer a solution through personalised recommendations, but relying on quantitative data patterns, even if specific to a person, may lack meaning, as these recommendations are shaped by design decisions, goals, and norms that privilege some selves over others (Morley et al., 2019). These many versions of the self show a disparity between what the self is and what people or society expect the self to be. ...
... There is a problem with not knowing this purpose in self-tracking, as the purpose changes how the self is represented in a system. This will only get worse with the introduction of AI, as people tend to over-trust data, even if it is not accurate (Morley et al., 2019). People will blame their interpretation of the data, or their failure to reach the goal, as a personal fault rather than the system's fault for misrepresenting the self. ...