Faculty of Computational Sciences & Informatics - Academic City University College, Accra, Ghana
Society for Multidisciplinary & Advanced Research Techniques (SMART)
Trinity University, Lagos, Nigeria
SMART Scientific Projects & Research Consortium (SMART SPaRC)
Sekinah-Hope Foundation for Female STEM Education
Harmarth Global Educational Services
ICT University Foundations USA
IEEE Computer Society Nigeria Chapter
Proceedings of the 36th iSTEAMS Accra Bespoke Multidisciplinary Innovations Conference
Ethical Implications of Artificial Intelligence in the
Healthcare Sector
Amedior, Nutifafa Cudjoe
Information Technology and Law Graduate Programme
Ghana Institute of Management and Public Administration
Greenhill, Accra, Ghana
E-mail: nutifafacudjoe@gmail.com
Phone: +233502477709
ABSTRACT
This research paper examines the ethical implications of AI in healthcare, covering the benefits
and risks of using AI in healthcare services and provision. The paper highlights the applications
of AI in healthcare, which can improve efficiency and accuracy of providing healthcare services
by health professionals. The benefits of AI include reducing the need for human intervention and
increasing productivity through automation, delivering personalised experiences through
recommendations, assisting with informed decision-making by providing real-time data analysis
and insights, predicting outcomes or identifying potential threats, and improving healthcare and
overall customer satisfaction. The paper highlights the ethical implications of the use of AI in
healthcare, including privacy and security, bias and discrimination, transparency and
explainability, responsibility and accountability, informed consent, and human interaction and
empathy. The paper recommends that, as AI becomes more prevalent in healthcare,
establishing clear guidelines for responsible use and maintaining the importance of human
interaction and empathy in patient care will enhance healthcare outcomes while safeguarding
patient rights and welfare. As further work, continued research and development on the ethical
implications of AI in healthcare for low-income countries can promote the ethical use of AI in
healthcare worldwide.
Keywords: Artificial Intelligence, Ethics, Ethical Implication, Healthcare, Privacy
Proceedings Citation Format
Amedior, N.C. (2023): Ethical Implications of Artificial Intelligence in the Healthcare Sector.
Proceedings of the 36th iSTEAMS Accra Bespoke Multidisciplinary Innovations Conference. University of Ghana/Academic
City University College, Accra, Ghana. 31st May – 2nd June, 2023. Pp 1-12. https://www.isteams.net/ghanabespoke2023
dx.doi.org/10.22624/AIMS/ACCRABESPOKE2023P1
1. INTRODUCTION
1.1 What is Artificial Intelligence?
Artificial intelligence (AI) can be described as “a field of study that combines computer science,
engineering and related disciplines to build machines capable of behaviour that would be said
to require intelligence were it to be observed in humans” (United Kingdom: Authority of the
House of Lords, 2018). Some of these behaviours include the ability to visually perceive images,
recognize speech, translate language, and learn from and adapt to new information (United
Kingdom: Authority of the House of Lords, 2018). AI employs a variety of techniques and
methods, including those from arithmetic, logic, and biology, as well as other disciplines (Berg,
2018). The ability of modern AI systems to progressively make sense of various and
unstructured types of data, such as natural language text and photos, is a key characteristic of
these technologies (Berg, 2018). Machine learning has proven to be the most successful type
of AI in recent years and serves as the underlying approach of many of the applications currently
in use (Berg, 2018). Machine learning allows systems to discover patterns and derive their own
rules when presented with data and new experiences, rather than following pre-programmed
instructions (Berg, 2018).
1.2 Use and benefits of Artificial Intelligence
Business, healthcare, education, finance, transportation, entertainment, and manufacturing are
just a few of the industries where artificial intelligence (AI) has a wide range of applications. AI
can be used to automate tasks and enhance business decision-making, analyse patient data to
diagnose diseases and create treatment plans, detect fraud and analyse credit risk in
transactions, curate individualised lesson plans and interactive learning experiences, analyse
traffic patterns and change traffic lights to relieve congestion, optimise manufacturing
processes, and enhance product quality (O’Keefe et al., 2020; Ransbotham et al., 2021). AI
provides the benefits of reducing the need for human intervention and increasing productivity
through automation, delivering personalised experiences through recommendations, assisting with
informed decision-making by providing real-time data analysis and insights, predicting outcomes
or identifying potential threats, enhancing artistic creativity, improving healthcare and overall
customer satisfaction (Nadimpalli, 2007; Naim, 2022; O’Keefe et al., 2020; Ransbotham et al.,
2021; Yeasmin, 2019). Overall, AI has the power to revolutionise a wide range of industries and
enhance the standard of living for people globally.
1.3 Investment Trends in Artificial Intelligence
Artificial intelligence (AI) is a rapidly growing field, as reflected by continuous investment in the
sector, which is projected to contribute $15.7 trillion to the global economy by 2030 (Murphy et
al., 2021). Venture capital firms are investing heavily in AI startups, reaching record levels, with
a majority of these investments going to early-stage startups (Tricot, 2021). Corporate
investment in AI is also substantial, occurring either through acquisitions of startups or through
in-house AI initiatives (Babina et al., 2020; Wiggers, 2023). Alphabet, Google’s parent company,
has made AI acquisitions including DeepMind and Kaggle; Microsoft made a $1 billion
investment in OpenAI in 2019 (Feiner, 2019). AI-focused investment funds are also a growing
trend, giving investors targeted exposure to the AI market by investing specifically in AI startups
and technologies (International Banker, 2022). Governments globally are likewise investing
heavily in AI research and development; the US government and the European Union have set
up an initiative and a fund, respectively, to support AI research and development (European
Commission, 2021). AI-related companies are increasingly going public: Snowflake, Palantir,
and Unity were among the high-profile AI companies that went public in 2020, giving retail
investors and public markets opportunities to invest in the AI market (Voronova & Lukina,
2022).
1.4 Healthcare
The term "healthcare" refers to procedures and services concerned with the identification,
mitigation, and treatment of diseases and injuries as well as the maintenance and promotion of
one's physical and mental well-being (Endeshaw, 2021). Doctors, nurses, pharmacists,
therapists, and other allied health professionals, among others, can all deliver healthcare, and
healthcare services can be provided in hospitals, clinics, outpatient facilities, long-term care
facilities, and homes (Rutala & Weber, 2019).
Medical technologies and apparatus, such as prostheses, medical imaging systems, and
diagnostic tools, can also be included in healthcare services (De Maria et al., 2020).
The goal of healthcare is to improve and maintain the health and wellbeing of individuals and
communities, by preventing and treating illnesses and injuries, and promoting healthy lifestyles
(Darzi et al., 2023). Access to healthcare is considered a basic human right, and efforts are
made to ensure that healthcare services are available and affordable to all individuals,
regardless of their background, income, or geographic location (Da Silva, 2023).
Figure 1: Artificial Intelligence in Healthcare (Thompson, 2019)
1.5 Artificial Intelligence Applications in Healthcare
The potential use of artificial intelligence in health care delivery spans planning, resource
allocation, research, clinical care, patient-facing applications, health administration and public
health (Berg, 2018). Examples of artificial intelligence use in healthcare include, but are not
limited to:
1) Imaging AI technologies assisting radiologists to automatically localise and
delineate the boundaries of anatomical structures to concentrate on images that
are most likely to be abnormal;
2) AI used in tandem with digital pathology can contribute to predicting disease
diagnosis and prognosis as well as evaluating disease severity and outcome with
similar level of accuracy to that of pathologists;
3) Emergency medicine can benefit from AI to improve patient prioritisation during
triage;
4) AI tools in surgery can overcome surgical decision-making challenges of bias,
error, and preventable harm by using diverse sources of information, such as
patient risk factors and anatomic information, to make diagnoses and
predict response to treatment;
5) AI has great potential to help people manage chronic illnesses and illnesses
that affect the elderly. Medication administration, diet modification, and health
device management are all examples of self-management activities. By
monitoring physical space and falls, home monitoring offers the potential to
improve ageing at home and boost independence;
6) In cardiology, AI improves the diagnostic capacity of echocardiography in
identifying diseases such as asymptomatic left ventricular dysfunction and silent
atrial fibrillation;
7) In nephrology, AI has been applied through a deep learning model for
ultrasound kidney imaging that non-invasively classifies chronic kidney disease,
with the potential to aid diagnosis of kidney cancer and thereby reduce the
global burden of kidney disease;
8) In neuropsychiatry, artificial intelligence tools are in development for digital
tracking of depression and mood, to lend support to mental health patients and
to mitigate the effects of a paucity of health personnel dedicated to mental
health conditions (European Parliament Directorate-General for Parliamentary
Research Services, 2022).
1.6 Ethics of Artificial Intelligence in Healthcare
The ethical implications of AI in healthcare are complex and multifaceted. Key concerns include
issues related to privacy, transparency, bias, and accountability. For example, the use of AI in
healthcare requires access to large amounts of sensitive patient data, raising concerns about
privacy and data security (Rigby, 2019). Additionally, AI systems can be opaque, making it
difficult to understand how they arrive at their decisions, which can raise questions about
transparency and accountability (Prakash et al., 2022). Finally, AI systems can be biased,
leading to inequitable treatment of certain patient groups (Karimian et al., 2022).
1.7 Purpose of this paper
This research paper examines the ethical implications of AI in healthcare, covering the benefits
and risks of using AI in healthcare services and provision. The paper draws on relevant literature
to highlight key challenges and offer recommendations for future work in this area. The paper
also considers the responsibility of healthcare professionals, technology developers, and
policymakers in ensuring that AI is used ethically in healthcare.
2. RELATED LITERATURE
The study conducted a comprehensive search of the academic databases to gather and
evaluate information from peer-reviewed articles that explored the ethical implications of
artificial intelligence in healthcare. The search terms employed were 'artificial intelligence' or
'artificial intelligence in healthcare' combined with 'ethics' and ‘ethical implications of AI in
healthcare’. The study focused on articles published between 2019 and 2023 in order to
capture recent and relevant literature on AI. Murphy et al. (2021) examine the ethical
considerations surrounding the use of AI in healthcare, particularly carer robots, diagnostics,
and precision medicine. The literature highlights concerns regarding privacy, trust, accountability and
responsibility, and bias, but largely ignores the ethics of AI in public and population health, and
in low- and middle-income countries (LMICs). The review concludes that while AI holds promise
for improving health systems, its introduction should be approached with caution and further
research is needed to ensure its development and implementation is ethical for everyone,
everywhere.
The European Parliament Directorate-General for Parliamentary Research Services (2022) provides
an overview of the potential benefits of artificial intelligence (AI) in healthcare, including
improving diagnosis and treatment, increasing efficiency, and optimising resource allocation.
However, it also highlights the clinical, social, and ethical risks associated with AI in healthcare,
including errors and patient harm, bias and health inequalities, lack of transparency and trust,
and data privacy breaches. The report proposes mitigation measures and policy options to
minimise these risks, including stakeholder engagement, transparency, clinical validation, and
education and training.
Prakash et al. (2022) review the ethical concerns and legal framework surrounding the
application of artificial intelligence (AI) in healthcare. The authors conducted a search of
electronic databases and identified 16 articles that met their inclusion and exclusion criteria.
They found that while AI has the potential to revolutionise medical practice, there are numerous
ethical and legal issues that need to be addressed. The study emphasises the need for a
multifaceted approach involving policymakers, developers, healthcare providers, and patients
to develop a feasible solution to mitigate these concerns.
Rigby (2019) discusses the current and potential use of artificial intelligence (AI) in healthcare
and the ethical complexities it brings. While AI can improve the efficiency of healthcare delivery
and quality of patient care, it also raises concerns about patient privacy and confidentiality,
informed consent, and patient autonomy. The article addresses some of the ethical dilemmas
that arise when AI is used in healthcare and medical education, including balancing the benefits
and risks of AI technology and the role AI can play in medical education. It also explores legal
and health policy conflicts that arise with the use of AI in healthcare. The article concludes that
there is a need for more dialogue on these concerns to improve physician and patient
understanding of the role AI can play in health care and to develop a realistic sense of what AI
can and cannot do.
Abdullah et al. (2021) explore the bioethical implementation of AI in medicine and
ophthalmology, and classifies ethical issues into six main categories: machine training ethics,
machine accuracy ethics, patient-related ethics, physician-related ethics, shared ethics, and
roles of regulators. The review suggests that attention to the various aspects of ethics related
to AI is important, especially with the expanding use of AI, and that solutions to ethical problems
are multifactorial.
Karimian et al. (2022) examined the ethical issues surrounding the increasing use of artificial
intelligence (AI) in healthcare. The review identified five ethical principles that should be
considered when designing or deploying AI in healthcare: respect for human autonomy,
prevention of harm, fairness, explicability, and privacy. However, the study found limited
consideration of these ethical principles in most retrieved studies, with the principle of
prevention of harm being the least explored. The review also noted a lack of practical tools for
testing and upholding ethical requirements across the lifecycle of AI-based technologies, as well
as a lack of perspective from different stakeholders.
Nasim et al. (2022) discuss the importance of ethical considerations in AI design and
implementation. It presents statistics on AI incidents and areas where unethical use of AI has
been identified, such as language and computer vision models, intelligent robots, and
autonomous driving. The paper also highlights various forms of ethical issues, including
incorrect use of technology, racism, non-safety, and malicious algorithms with biasness. Data
collection has helped identify AI ethical issues based on time, geographic locations, application
areas, and classifications.
3. FINDINGS
Artificial intelligence (AI) has the potential to revolutionise healthcare by improving patient
outcomes, reducing costs, and increasing efficiency. However, the use of AI in healthcare also
raises ethical concerns that need to be addressed to ensure that patients are not harmed and
that AI is used ethically.
Figure 2: Ethical Crossroad of Artificial Intelligence
The following are the main findings of this research paper for some of the ethical implications
of AI in healthcare:
I. Privacy and security: The importance of patient privacy in AI-based healthcare research
is discussed by Abdullah et al. (2021), Karimian et al. (2022), and Rigby (2019).
Some researchers argue that patients should have greater control over their data and
how it is used, while others explore current regulations and the need for obtaining
patient consent before sharing information. In the USA, the Health Insurance Portability and
Accountability Act (HIPAA) allows sharing of protected health information for certain
purposes without patient consent, while in the UK, patient consent is required for
sharing information with any third party not involved in direct patient care (Karimian et
al., 2022). Data privacy is important to protect against discrimination, mental health
consequences, erosion of trust, and other harms. Confidentiality protection solutions
include governmental regulations, technological advances, and cybersecurity measures.
Data sharing can facilitate interoperability, scientific discovery, and equity in healthcare,
but must be balanced with concerns about privacy and confidentiality. As healthcare
providers collect and use more patient data to train AI algorithms, there is a risk of data
breaches and the unauthorised use of personal health information. Patients need to
trust that their data is being used ethically and that their privacy is protected.
II. Bias and discrimination: AI systems can replicate and even amplify existing biases and
discrimination in healthcare (Ayling & Chapman, 2022). For example, if an AI system is
trained on data that is biased against a particular race or gender, it may produce
biased results. This can lead to unequal treatment and outcomes for different groups
of patients. Discrimination can also occur when AI systems are used to make decisions
about patient care, such as diagnosing illnesses or recommending treatments (Abdullah
et al., 2021). If an AI system is trained on data that reflects biases against certain
groups, it may make decisions that are unfair or discriminatory towards those groups.
This can lead to disparities in healthcare outcomes and a lack of trust in the healthcare
system. This requires ongoing vigilance and a commitment to addressing bias and
discrimination in healthcare.
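The "ongoing vigilance" described above can take the form of routine fairness audits. The sketch below (hypothetical function and data, not from the paper) checks one simple fairness criterion, demographic parity, by comparing the rate of favourable outcomes across patient groups.

```python
# Illustrative sketch: auditing a model's decisions for demographic parity,
# i.e. comparing the rate of favourable outcomes across patient groups.
# A large gap is a warning sign that the system may be replicating bias
# present in its training data.

from collections import defaultdict

def favourable_rate_by_group(decisions):
    """decisions: list of (group, outcome) pairs, outcome 1 = favourable."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += outcome
    return {g: favourable[g] / totals[g] for g in totals}

# Hypothetical triage recommendations (1 = referred for treatment).
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]
rates = favourable_rate_by_group(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                      # group_a: 0.75, group_b: 0.25
print(f"parity gap = {gap:.2f}")  # a gap this large would merit investigation
```

Demographic parity is only one of several fairness criteria; which criterion is appropriate depends on the clinical context, which is precisely why such monitoring must be continuous rather than one-off.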
III. Transparency and explainability: Patients and healthcare providers need to understand
how AI systems make decisions in order to trust their results. AI systems need to be
transparent and explainable, so that patients and providers can understand how they
arrived at their conclusions. Transparency is crucial for ensuring accountability and trust
in AI systems, particularly in healthcare where the consequences of AI errors can be
severe. Transparent AI systems allow stakeholders to understand how decisions are
made, what factors are taken into account, and how any biases or limitations are
addressed (Abdullah et al., 2021; Rigby, 2019). Explainability is necessary for
stakeholders, such as patients and healthcare providers, to understand how and why a
decision was made, and to determine whether it was appropriate (Abdullah et al., 2021).
It is important to ensure that AI systems are designed and implemented in a transparent
and explainable way to promote trust, accountability, and ethical decision-making.
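One minimal form of the explainability described above can be sketched for a linear risk score (the model, weights, and patient values below are hypothetical, not from the paper): each feature's contribution is simply its coefficient times its value, so a clinician can see exactly why a score is high.

```python
# Illustrative sketch: for a linear risk score, each feature's signed
# contribution is coefficient * value, giving a directly explainable output.

def explain_linear_score(weights, bias, patient):
    """Return the score and each feature's signed contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in patient.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical readmission-risk model and patient record.
weights = {"age": 0.02, "prior_admissions": 0.30, "bmi": 0.01}
bias = -1.0
patient = {"age": 70, "prior_admissions": 3, "bmi": 28}

score, contributions = explain_linear_score(weights, bias, patient)
print(f"risk score: {score:.2f}")  # risk score: 1.58
for name, c in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name:>17}: {c:+.2f}")
```

More opaque models (e.g. deep networks) require post-hoc explanation techniques instead, which is exactly why transparency is harder to guarantee for them.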
IV. Responsibility and accountability: As AI systems become increasingly self-governing, it
becomes more challenging to determine who is responsible and accountable for their
actions. If an AI system makes an error, is it the healthcare provider, the system
developer, or the AI system itself that should be held responsible? The use of AI in
healthcare requires clear specification of tasks and conditions for responsible use, and
human stakeholders must ensure that AI systems can perform those tasks under
appropriate conditions. Human warranty, involving upstream and downstream
supervision by patients, clinicians, and designers, can help ensure responsible use of AI
technologies (World Health Organization, 2021). When something goes wrong,
accountability mechanisms should be in place, including redress for individuals or
groups affected by algorithmically informed decisions (World Health Organization,
2021). Responsibility in complex systems should be attributed among numerous agents,
and a faultless responsibility model can encourage all actors to act with integrity and
minimise harm (World Health Organization, 2021).
V. Informed consent: Informed consent is a crucial and integral part of the patient's
experience in healthcare, providing protection from harm, respect for autonomy, privacy
protection, and property rights concerning data and/or tissue (European Parliament
Directorate-General for Parliamentary Research Services, 2022). Patients have the right
to make informed decisions about their healthcare, but AI systems may make decisions
based on complex algorithms that patients may not fully understand. Patients need to
be informed about how AI is being used by healthcare professionals and technology
developers in their care and given the opportunity to opt-out if they do not feel
comfortable (Murphy et al., 2021).
VI. Human interaction and empathy: AI systems can provide valuable insights and
recommendations, but they cannot replace the human touch and empathy that is
essential to healthcare. AI systems should be used to enhance, not replace, human
interaction. Abdullah et al. (2021) suggest that medical education and training
programs should integrate empathetic skills and knowledge further, and that AI can be
used to perform some tasks to give doctors more time to exercise empathy. Patients
prefer doctors to be more empathic than machines, but the use of machines can still
allow doctors to exercise empathy. AI algorithms use patients' data to give decision
outputs about their health, which must be calibrated ethically and empathetically
(Abdullah et al., 2021). AI can develop "artificial affection" to empathise with patients,
enhancing machine personhood, and serving two purposes: machines' ability to
empathise with patients and their liability for harm inflicted by their actions (Abdullah et
al., 2021).
The potential for AI to revolutionise healthcare is significant, but there are ethical concerns that
must be addressed to ensure that patients are not harmed and that AI is used ethically. These
ethical implications include privacy and security, bias and discrimination, transparency and
explainability, responsibility and accountability, informed consent, and human interaction and
empathy. It is important for patients to have confidence that their data is being used ethically,
and AI systems must not perpetuate existing biases and discrimination. To promote
accountability and redress, it is necessary for AI systems to be transparent and explainable.
Informed consent is critical, and AI systems should be used to enhance rather than replace
human interaction and empathy.
4. CONCLUSION
In conclusion, artificial intelligence (AI) is a rapidly growing field with a wide range of applications
in various industries such as business, healthcare, education, finance, transportation,
entertainment, and manufacturing. AI has the potential to revolutionise industries by reducing
the need for human intervention, increasing productivity through automation, and providing
personalised experiences. The investment in AI is increasing, with venture capital firms and
governments investing in AI research and development. Healthcare is an industry where AI has
a significant potential to enhance and optimise clinical care, patient-facing applications, health
administration, and public health. AI can assist in detecting, diagnosing, and treating diseases,
predicting disease diagnosis and prognosis, and evaluating disease severity and outcome. With
the benefits that AI can provide to various industries and the continuous investment in the field,
AI is poised to be a major force in shaping the future of the world.
The use of AI in healthcare has the capacity to transform the industry by enhancing patient
outcomes, reducing expenses, and improving effectiveness. Nevertheless, it is essential to take
into account the ethical consequences of AI implementation in healthcare to safeguard patient
privacy and security, prevent partiality and discrimination, encourage transparency and
comprehensibility, create responsibility and liability, and guarantee knowledgeable consent and
human connection and empathy. It is crucial to strike a balance between the advantages and
risks of AI in healthcare to guarantee its ethical use and protect the rights and welfare of
patients. Ongoing exploration and innovation in this field are necessary to ensure that AI is
utilised conscientiously and ethically in healthcare.
5. RECOMMENDATION
Based on the discussion above, there are several recommendations that can be made to ensure
the ethical use of artificial intelligence (AI) in healthcare for low-income countries. Firstly, it is
important for policymakers and healthcare providers to prioritise the development of AI
technologies that are specifically tailored to the unique healthcare needs and resource
limitations of low-income countries. This includes investing in AI technologies that can assist in
disease detection, diagnosis, and treatment, as well as improving health administration and
public health services.
Secondly, it is essential to ensure that AI technologies used in healthcare are designed and
implemented in a manner that is fair and unbiased. This includes avoiding the use of biased
algorithms that may result in discrimination against certain groups or individuals. To achieve
this, it is necessary to ensure that AI algorithms are developed using diverse and representative
datasets, and that they are continuously monitored and evaluated for fairness and impartiality.
Thirdly, there should be a focus on transparency and explainability in the development and
deployment of AI technologies in healthcare. This means that healthcare providers should be
able to explain how AI systems arrive at their recommendations or decisions, and patients
should be provided with clear information on how their data is being collected and used.
Fourthly, there is a need to establish clear guidelines and protocols for the responsible use of AI
in healthcare, including ensuring that there is accountability and liability for any potential harm
caused by AI systems. This includes developing robust cybersecurity measures to protect patient
data from breaches or cyberattacks. Finally, it is crucial to ensure that the use of AI in healthcare
does not detract from the importance of human interaction and empathy in patient care. While
AI can assist in improving efficiency and productivity in healthcare, it should not replace the role
of healthcare providers in providing personalised and compassionate care to patients. In
summary, the ethical use of AI in healthcare for low-income countries requires a concerted effort
from policymakers, healthcare providers, and AI developers. By prioritising the development of
AI technologies that are tailored to the unique healthcare needs of low-income countries,
ensuring fairness and transparency in their development and deployment, establishing clear
guidelines for responsible use, and maintaining the importance of human interaction and
empathy in patient care, AI can be used to enhance healthcare outcomes while safeguarding
patient rights and welfare.
6. FUTURE WORKS
As artificial intelligence (AI) continues to revolutionise healthcare worldwide, it is crucial to
consider the ethical implications of its use in low-income countries. While AI has the potential to
improve patient outcomes, reduce costs, and increase efficiency in these settings, it is
imperative to ensure that it is used ethically and responsibly to protect patients' rights and
interests. One of the primary ethical considerations for the use of AI in healthcare in low-income
countries is the potential for bias and discrimination, as training data will differ vastly from that
of developed countries. AI algorithms must be designed and implemented in a way that avoids
discrimination based on factors such as race, gender, and socio-economic status.
Patient privacy and security are also critical ethical considerations for AI in healthcare. Low-
income countries may lack the infrastructure and resources to ensure that patient data is
protected adequately. Thus, AI systems must be designed with robust security features to ensure
that patient data is not vulnerable to cyberattacks or other breaches. Moreover, it is essential
to obtain informed consent from patients before using their data for AI applications, and patients
in low-income countries must have the right and the means to access and control their data at
all times.
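Even where full security infrastructure is lacking, one low-cost safeguard is to pseudonymise patient identifiers before records reach an AI pipeline. The sketch below is a minimal illustration using a keyed hash (HMAC-SHA256) from the Python standard library; the key value and record fields are hypothetical placeholders, and real deployments would also need secure key management and governance around re-identification.

```python
# Illustrative sketch: pseudonymising patient identifiers before records are
# shared with an AI pipeline, so raw IDs never leave the hospital system.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-stored-key"  # hypothetical placeholder

def pseudonymise(patient_id: str) -> str:
    """Deterministic, non-reversible pseudonym for a patient identifier.
    The same input always maps to the same pseudonym, so records can still
    be linked without exposing the original ID."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "GH-2023-0042", "diagnosis_code": "E11.9"}
safe_record = {**record, "patient_id": pseudonymise(record["patient_id"])}
print(len(safe_record["patient_id"]))  # 64 hex characters; the raw ID is gone
```

Because the keyed hash cannot be reversed without the secret key, a breach of the AI system's data store exposes pseudonyms rather than identities, which partially compensates for weaker infrastructure.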
Human interaction and empathy are other important ethical considerations for AI in healthcare.
AI may increase efficiency in healthcare delivery, but it must not replace human care and
compassion. Low-income countries may face a shortage of healthcare professionals, but the
use of AI must not result in neglecting the patient's emotional and psychological needs. The
responsibility and accountability of healthcare providers, AI developers, and other stakeholders
must also be established to ensure ethical AI use in healthcare. Regulations and guidelines must
be put in place to guide the development, implementation, and evaluation of AI systems in low-
income countries. The responsibility for any harm caused by AI must be assigned, and
stakeholders must be held accountable for their actions.
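Assigning responsibility after the fact requires knowing which system produced which recommendation and which clinician acted on it. As one hedged illustration of what such accountability infrastructure might look like, the sketch below records an append-only audit entry per AI recommendation; all field names are hypothetical, not drawn from any specific regulation.

```python
# Illustrative sketch: an audit record for each AI recommendation, so that
# responsibility can later be traced to a model version and to the clinician
# who reviewed the output. Field names are hypothetical.
import json
from datetime import datetime, timezone

def audit_entry(model_version, case_ref, recommendation, clinician_id, accepted):
    """Build one audit-log entry for a single AI recommendation."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,    # which system produced the output
        "case_ref": case_ref,              # pseudonymised pointer to the case
        "recommendation": recommendation,
        "clinician_id": clinician_id,      # the accountable human reviewer
        "clinician_accepted": accepted,    # the AI advises; a human decides
    }

log = [audit_entry("triage-model-v1.3", "case-8f2a", "refer to specialist",
                   "clinician-117", accepted=True)]
print(json.dumps(log[-1]["model_version"]))
```

Keeping the human reviewer's decision alongside the model output preserves a clear chain of accountability: harm can be traced either to the system's recommendation or to how it was acted upon.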
Continued research and development in the ethical use of AI in healthcare in low-income
countries is necessary. Studies must focus on the impact of AI on patient outcomes and
healthcare delivery, as well as the ethical, legal, and social implications of its use, considering
the unique challenges faced by low-income countries. The use of AI in healthcare in low-income
countries presents a range of benefits but must be pursued ethically and responsibly. The ethical
considerations of bias and discrimination, patient privacy and security, human interaction and
empathy, and responsibility and accountability must be addressed. Continued research and
development in this area is essential to ensure that AI is used responsibly and ethically in
healthcare, protecting patients' rights and interests. The development of international
guidelines and frameworks can promote the ethical use of AI in healthcare worldwide.