ORIGINAL ARTICLE
Protecting Your Patients' Interests in the Era of Big Data, Artificial Intelligence, and Predictive Analytics
Patricia Balthazar, MD,a Peter Harri, MD,a Adam Prater, MD, MPH,a Nabile M. Safdar, MD, MPHa
Abstract
The Hippocratic oath and the Belmont report articulate foundational principles for how physicians interact with patients and research subjects. The increasing use of big data and artificial intelligence techniques demands a re-examination of these principles in light of the potential issues surrounding privacy, confidentiality, data ownership, informed consent, epistemology, and inequities. Patients have strong opinions about these issues. Radiologists have a fiduciary responsibility to protect the interest of their patients. As such, the community of radiology leaders, ethicists, and informaticists must have a conversation about the appropriate way to deal with these issues and help lead the way in developing capabilities in the most just, ethical manner possible.
Key Words: Artificial intelligence, machine learning, ethics, big data, informatics
J Am Coll Radiol 2018;15:580-586. Copyright © 2017 American College of Radiology
When Google DeepMind needed to test an app to provide alerts for patients at risk for worsening renal disease, it gathered the records of 1.6 million patients from the Royal Free Hospital. The Information Commissioner's Office, an independent authority in the United Kingdom "set up to uphold information rights in the public interest, promoting openness by public bodies and data privacy for individuals," disapproved, finding that the arrangement between the two entities broke the law and failed to uphold the data privacy rights of individuals [1,2]. Although disclosures of patient information for direct patient care are widely accepted, otherwise identical disclosures for research and development require informed consent. The distinction between patient care and research is widely recognized, yet its proper application in the setting of new techniques can elude even the most capable organizations.
Imaging is a robust source of phenotypic information suitable for the application of big data, artificial intelligence, and personalized medicine methods. Industry has taken notice of this relatively unexplored frontier and spent considerable resources surveying options to harness the power of imaging data [3,4], eagerly seeking partners in health care. Although some have forged ahead, others have reconsidered their initial forays into this space with industry partners [5,6]. Because the conversation often begins with imaging and the radiology department, it behooves any health care provider, department, or system to consider important questions regarding their big data and artificial intelligence efforts, whether undertaken internally or in partnership with external entities.
We have long subscribed to ethical and regulatory frameworks to guide our use of patient and research subject data. In many cases, we seem unsure of how to
Credits awarded for this enduring activity are designated "SA-CME" by the American Board of Radiology (ABR) and qualify toward fulfilling requirements for Maintenance of Certification (MOC) Part II: Lifelong Learning and Self-assessment. To access the SA-CME activity visit https://3s.acr.org/Presenters/CaseScript/CaseView?CDId=WcDgy/tmftU%3d.
a Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia.
Corresponding author and reprints: Nabile M. Safdar, MD, MPH, Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1364 Clifton Road NE, Room D125A, Atlanta, GA 30322; e-mail: nmsafda@emory.edu.
The authors have no conflicts of interest related to the material discussed in this article.
© 2017 American College of Radiology
580 | 1546-1440/17/$36.00 | https://doi.org/10.1016/j.jacr.2017.11.035
apply these conventions in the era of big data and artificial intelligence, with their seemingly insatiable appetite for more information. These methods have real consequences, with the potential to affect the lives of individuals and populations in ways that could benefit some while harming others. Here, we touch on the major principles of existing applicable frameworks in this setting, explore known issues when dealing with big data and machine learning in health care, explore perspectives from key stakeholders, and pose questions for imaging health care professionals to consider as they embark on their own big data and artificial intelligence ventures.
BRIEF DEFINITIONS
The expressions "big data," "artificial intelligence," "personalized medicine," "population health," and "predictive analytics" represent a family of concepts that are related but not synonymous. For those new to the field, a brief delineation of these terms follows.
Big Data
Still a vaguely defined term [7], "big data" consists of at least three increasingly accepted characteristics of data, the 3 Vs: volume, variety, and velocity [8]. These are especially suitable descriptors for radiology data, which include large volumes of images and reports, in a variety of imaging modalities, body parts, and formats (unstructured text and structured DICOM), that are rapidly generated and potentially analyzed in real time or near real time.
Artificial Intelligence
Artificial intelligence is a branch of computer science that encompasses the automation of intelligent behavior [9]. Machine learning, a subfield of artificial intelligence, is composed of data-driven techniques, such as deep learning, used to uncover patterns and predict behavior with minimal human intervention [10]. The machine "learns" by analyzing training data and then making predictions on a new data set [10]. This technology holds promise in radiology for preliminary lesion detection and differential diagnosis generation, potentially augmenting the sensitivity and accuracy of radiologists. Natural language processing, also a subfield of artificial intelligence, focuses on understanding the full meaning of written or spoken text by integrating concepts and methods pulled from various domains [11].
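The learn-then-predict workflow described above can be made concrete with a deliberately tiny sketch: a 1-nearest-neighbor rule stands in for the deep learning methods used in practice, and the feature values and labels are invented for illustration only.

```python
# Toy illustration of machine learning: the model "learns" from labeled
# training data, then makes a prediction on a new, unseen data point.
import math

def nearest_neighbor_predict(train, new_point):
    """Return the label of the training example closest to new_point."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    features, label = min(train, key=lambda ex: dist(ex[0], new_point))
    return label

# Hypothetical training set: (lesion diameter mm, mean intensity) -> label.
training_data = [
    ((4.0, 0.2), "benign"),
    ((5.5, 0.3), "benign"),
    ((18.0, 0.9), "suspicious"),
    ((22.0, 0.8), "suspicious"),
]

print(nearest_neighbor_predict(training_data, (20.0, 0.85)))  # suspicious
print(nearest_neighbor_predict(training_data, (5.0, 0.25)))   # benign
```

A clinical system would of course use far richer features and validated models; the point here is only the shape of the workflow: training data in, predictions on new data out.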
Precision or Personalized Medicine
Precision or personalized medicine involves prevention and treatment strategies that take individual variability into account [12], for example, scheduling earlier mammographic and MRI breast cancer screening for patients with BRCA gene mutations. Precision and personalized medicine are increasingly dependent on big data and artificial intelligence techniques.
Population Health
Population health and public health are often inter-
changeably used terms and refer to the health of a group
of individuals, rather than the individuals themselves,
organized into many different units of analysis, depend-
ing on the research or policy purpose [13].
Predictive Analytics
Predictive analytics is a broad term used to describe a variety of statistical techniques, such as modeling, machine learning, and data mining, that analyze current and historical data to predict future events or behaviors [14]. When artificial intelligence algorithms have access to big data, they may facilitate the advancement of predictive analytics in population health or personalized medicine.
EXISTING FRAMEWORKS
"I will respect the privacy of my patients, for their problems are not disclosed to me that the world may know."
Hippocratic oath [15]
Numerous legal precedents, ethical frameworks, and historical milestones have contributed to our current understanding of how to appropriately interact with human subjects, patients, and clients. Of these, two of the most well known in health care are the Hippocratic oath and the Belmont report, which articulate foundational principles for how physicians ethically treat patients and deal with research participants.
The Hippocratic oath is one of the earliest known calls to respect the privacy of patients and the confidentiality of the information with which they entrust their physicians. In this regard, the Hippocratic oath's call to respect privacy and confidentiality predates the US Constitution, the European Union Charter of Fundamental Rights, and HIPAA, all of which allude to the privacy of individuals, if not the confidentiality of their data.
The Belmont report [16] deals with the protection of human subjects in research. Its principles of respect for persons, beneficence, and justice manifest in our current practices of informed consent, assessment of risks and benefits, and just selection of subjects.
These cornerstones of our shared ethical principles were designed to apply to a wide variety of settings, present and future, and should not be deemed irrelevant, even in the era of big data and artificial intelligence. The imaging community should revisit the meaning and practical implications of these principles in light of new questions. If needed, additional standards can be derived from these principles to address specific issues that may arise in the era of big data.
Other reports and codes, such as those of the International Medical Informatics Association, the Association for Computing Machinery, the American Health Information Management Association, the Data Science Association, and the Institute of Electrical and Electronics Engineers, also evaluate issues of confidentiality, data ownership, epistemology, informed consent, and justice to varying degrees [17-19]. The imaging community and individual health care entities should survey these and other codes to identify the most relevant components in the process of developing our own code of ethics and conduct with respect to imaging-related health care data and their potential use in big data and artificial intelligence.
SOME FUNDAMENTAL CONSIDERATIONS
As the use of big data, artificial intelligence, and related techniques in health care expands, conversations about the appropriate, ethical use of these methods become increasingly relevant. Here, we explore common, fundamental issues relevant to those considering developing or using such techniques for imaging, including privacy, ownership, informed consent, epistemology, and potential inequities between various data constituents [20].
Privacy and Confidentiality
- How do we keep data-driven insights about sensitive health issues confidential?
- How do institutions prevent the reidentification of individuals from the joining of data sets?
- What is your obligation to notify a patient or subject of a health risk or propensity identified using big data or machine learning techniques?
Privacy and confidentiality are closely related issues. Data privacy refers to the rights of individuals to maintain control over their own health information. Confidentiality refers to the responsibility of those entrusted with those data to maintain privacy [21].
Privacy and confidentiality issues affect the scope, proper storage of, access to, and dissemination of data, especially data that are highly sensitive or personal. As the breadth of the collected data and their analyses continue to increase rapidly [20,22], these data are being housed electronically in perpetuity [23,24], which increases the risk for privacy violations. Additionally, anonymization of the data does not ensure against individuals' being identified subsequently through the joining of data sets and reidentification [25], manipulation of the data causing discrimination [26,27], or other improper uses [28]. Aggregated data about specific groups might also be created, which can cause stigmatization [29].
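The reidentification risk from joining data sets can be illustrated with a small sketch: an "anonymized" research extract is linked to a public roster on shared quasi-identifiers. All records, field names, and values here are invented for illustration.

```python
# De-identified study records: names removed, diagnoses retained.
study_records = [
    {"zip": "30322", "birth_year": 1956, "sex": "F", "diagnosis": "renal disease"},
    {"zip": "30307", "birth_year": 1980, "sex": "M", "diagnosis": "fracture"},
]

# Publicly available roster (e.g., a hypothetical voter file) with names.
public_roster = [
    {"name": "J. Doe", "zip": "30322", "birth_year": 1956, "sex": "F"},
    {"name": "A. Smith", "zip": "30312", "birth_year": 1971, "sex": "M"},
]

def reidentify(study, roster):
    """Link records whose quasi-identifiers (ZIP, birth year, sex) match uniquely."""
    matches = []
    for rec in study:
        hits = [p for p in roster
                if (p["zip"], p["birth_year"], p["sex"]) ==
                   (rec["zip"], rec["birth_year"], rec["sex"])]
        if len(hits) == 1:  # a unique match exposes the identity
            matches.append((hits[0]["name"], rec["diagnosis"]))
    return matches

print(reidentify(study_records, public_roster))  # [('J. Doe', 'renal disease')]
```

This is why de-identification standards focus on generalizing or suppressing quasi-identifiers, not merely on removing names.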
Ownership of Data and Subsequently Developed Products
- Can patient data be reused for developing and validating advanced analytic methods? Can they be shared or sold for this purpose?
- If an app is developed and validated using patient data, should the app be sold for profit?
In this context, "ownership" deals with who controls (possesses or allows access to) the data and who gains from intellectual property that is subsequently developed [20]. Medical information, the key ingredient for any health-related big data or artificial intelligence exercise, is not owned in the same sense that a physical object, or even intellectual property, is in other settings. Although in some cases property law may apply, in other cases an intellectual property framework may be more appropriate. Patients, health care providers, and hospital systems are all stakeholders that may have intersecting rights and responsibilities when it comes to individual medical records [30,31]. There is no single legal construction that regulates ownership of health care data but rather a combination of federal, state, and international laws and rules through which health information ownership is governed [32].
Informed Consent
- Does your institution have mechanisms for blanket or tiered consent for the development, validation, and use of big data analytics or artificial intelligence?
- What mechanisms are in place to exclude the data of individuals who do opt out?
Informed consent typically implies general consent to treat, consent for a specific procedure, or participation in a research study; however, big data often requires pooling and analyzing data in the future for purposes not anticipated at the time of consent [25]. Anticipating possible big data or artificial intelligence approaches may require seeking consent at multiple levels and creating mechanisms to deal with those patients who do not wish to be included in such exercises. Blanket consents preauthorize a wide range of secondary analytics [33], addressing the impracticability of obtaining consent for multiple future analyses, but may also reduce autonomy [34-38]. Tiered consent enables the exclusion of specific uses of the data [39] but usually cannot anticipate all possible purposes [38]. Your system, hospital, or practice should review and accordingly revise its initial intake and consent regime to consider possible big data and artificial intelligence uses of patient data. These discussions should include mechanisms for patients to opt out of such uses.
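One way to operationalize tiered consent and opt-out is a consent registry consulted at extraction time, so that a data pull for a given secondary purpose includes only patients whose consent covers it. The tier names, patient IDs, and registry structure below are hypothetical, for illustration only.

```python
# Hypothetical tiered-consent registry: each patient record carries the
# set of uses the patient agreed to at intake.
consent_registry = {
    "pt-001": {"clinical_care", "internal_qa", "ai_research"},
    "pt-002": {"clinical_care"},                      # opted out of research uses
    "pt-003": {"clinical_care", "ai_research"},
}

def extract_for_purpose(patient_ids, purpose):
    """Return only the patients whose consent tier permits this use."""
    return [pid for pid in patient_ids
            if purpose in consent_registry.get(pid, set())]

cohort = ["pt-001", "pt-002", "pt-003"]
print(extract_for_purpose(cohort, "ai_research"))  # ['pt-001', 'pt-003']
```

Defaulting unknown patients to the empty set means absence of a recorded consent excludes the record, which is the safer failure mode for secondary uses.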
The Black Box
- How do we know that the results of artificial intelligence algorithms are valid?
- Were the data sets with which they were developed representative?
- How would your institution defend the results of an algorithm directly affecting a patient's health care if no provider could completely comprehend how the algorithm reached its conclusion?

Epistemology is "the study or a theory of the nature and grounds of knowledge especially with reference to its limits and validity" [40]. In many cases, the analysis of big data may go beyond direct human intelligence, calling into question how we can ascertain the validity of the results [20]. With some techniques, the results of any analysis could represent statistically significant noise found by an algorithm without any clinical significance [24,41]. Human input is needed to drive analysis scientifically and provide context to the results [42,43].
Inequities
- Will predictive analytics lead to inadvertent harm to an individual or group, financially or otherwise?
- Could a group be stigmatized by the results of an artificial intelligence classification?

Inequities in the era of artificial intelligence may result in a power gradient between the subjects providing data and the organizations with the ability to use those same data [23,44], concentrating the ability to benefit from the data [24,45-48]. In this circumstance, patients may be divorced from the analysis their own data support [28] and from the ability to buy and sell those very data [48]. Additionally, aggregated data may be used to make decisions about larger groups of people or to create groupings that previously did not exist, potentially leading to discrimination, profiling, or surveillance [23,41,49]. If the data are collected, they may favor groups that are more willing to provide the data [50]. Even in diverse data sets, the data could still be used to preferentially benefit one group over another [26,27,51].
Health care disparities have often been overlooked by the emerging technology centered on big data [52]. Certainly, there is a risk that we perpetuate groups of "haves" and "have nots" on the basis of access to these technologies. Active engagement with small population data sets, addressing social determinants of health, and actively promoting data access to underprivileged populations are potential avenues for mitigating the effects of such a digital divide.
We believe that researchers and institutions entering the marketplace for artificial intelligence discovery and tool development should involve the local ethics review committee, institutional review board, or other such body and craft a policy regarding these issues. The issues discussed above, including privacy, confidentiality measures, ownership concerns, informed consent, epistemology, and inequities, can serve as a checklist with which the completeness of the developed policies can be evaluated.
PATIENT PERSPECTIVES
Many patients may not have considered the difference between privacy and confidentiality, the epistemology of predictions from machine learning methods, or the justice implications of population health analytics. However, when asked, patients have strong opinions about the appropriate use of their health information.
Studies have shown that, in general, many patients have positive attitudes toward precision medicine [53,54]. Although not directly related to imaging per se, Halverson et al [53] interviewed patients who had undergone genomic sequencing in oncology and rare-disease settings and found that participants had predominantly positive attitudes toward sequencing. Among the positive feelings were empowerment over their own health, altruistic
contribution to the progress of medicine, the legitimization of their suffering, and a sense of closure in having done everything they could [53]. However, in the era of predictive analytics, decisions can be driven by financial and administrative incentives [55] not necessarily aligned with the needs of patients. Patients have significant concerns about sharing their anonymized personal health records when these might be divulged or sold to other organizations to be used "for profit" [56]. Questions of data security, privacy, and confidentiality are also common among patients, especially when it comes to sensitive information (eg, drug abuse, mental health) that could have an impact on their personal employment or health insurance coverage [56].
When it comes to potential inequities, patients' opinions may vary with their backgrounds and sense of vulnerability. One study assessing attitudes of patients with breast cancer toward molecular testing for personalized therapy and research found that nonwhites were less willing to undergo testing even if the results would guide their own therapy [57]. This is problematic on many levels, including the fact that diverse representation is needed for the validity and reliability of artificial intelligence and personalized medicine techniques.
There is concern that, with the use of predictive analytics, a racial or religious population associated with an illness, poverty, or another factor could become further marginalized or stigmatized, whether by being underrepresented during the development of these methods, through spurious associations, or through analyses focusing on an outcome other than health. In a worst-case scenario, a data set or artificial intelligence algorithm can "inherit" the systematic biases of the data from which it originates or the implicit biases of those who curated it [26,58]. Indeed, the potential for this was realized in the popular imagination when Microsoft created an artificial intelligence Twitter chatbot that started posting racist and genocidal tweets within 24 hours of "learning" from human-generated text [59]. This issue is further complicated by the ability of machines to identify minorities in ways that transcend human perception, as exemplified by a recent project demonstrating that a machine learning algorithm using Facebook profiles was able to predict sexual orientation in men with 91% accuracy [60]. When applying machine learning algorithms to health care, we must be aware that they were derived from existing data, usually entered by humans, and therefore might propagate human bias.
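The way a model can "inherit" bias from its training data can be shown with a deliberately crude sketch: when one group is barely represented, the learned rule is dominated by the majority group and errs systematically on the minority. The groups, labels, and counts below are entirely synthetic.

```python
# A deliberately simple "model": learn the single most common label in
# the training data, and predict it for everyone.
from collections import Counter

def train_majority_label(examples):
    """Return the most frequent label among (group, label) training pairs."""
    return Counter(label for _, label in examples).most_common(1)[0][0]

# Imbalanced training set: group B contributes only one of ten examples.
training = [("A", "low_risk")] * 9 + [("B", "high_risk")] * 1
model_label = train_majority_label(training)

# The model predicts "low_risk" for everyone -- including group B,
# whose only training example was "high_risk".
print(model_label)                 # low_risk
print(model_label == "high_risk")  # False: group B is systematically misclassified
```

Real algorithms are far more expressive than a majority vote, but the underlying hazard is the same: whatever skew exists in the training data is reproduced, at scale, in the predictions.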
Given the era of increased information accumulation and larger data breaches [61,62], and the potential for patient data to be used for nefarious purposes, we advocate that the radiology community craft a "patient bill of rights" that addresses these issues specifically, in collaboration with other physicians, ethicists, and patients. If radiology is to maintain its position as a leading health care discipline in big data and artificial intelligence, we must also lead in the ethical use of these methods. Among other items, such a bill of rights should consider the following:
- Patients' imaging data are valuable and deserve the highest level of security reasonably available.
- Patients are entitled to know what their imaging data can and cannot be used for, including secondary uses.
- Patients' imaging data will not be used to harm them or a group to which they belong.
THE ROLE OF RADIOLOGISTS
Radiologists must be deeply involved in the development, validation, and implementation of big data analytics, artificial intelligence, and personalized medicine [63] in imaging. Clinical expertise is essential to asking the right questions, accurately interpreting results, and communicating with patients for optimal decision making. Furthermore, physicians have a fiduciary responsibility for the well-being of their patients, as affirmed in the Hippocratic oath, rendering them professionally responsible for securing the interests of their patients. Although corporations and health care systems may have legal obligations and policies, physicians have an individual, moral obligation to protect their patients' privacy and data.
As the appetite for data to develop artificial intelligence, precision medicine, and predictive analytics in imaging grows, more radiologists will have to consider how they wish to engage with their patients, their patients' data, and the third parties looking for clinical partners. The community of imaging scientists, ethicists, radiology leaders, and informaticists must have its own conversation about the appropriate way to deal with these issues and help lead the way in developing capabilities in the most just, ethical manner possible.
TAKE-HOME POINTS
- Data privacy refers to the rights of individuals to maintain control over their health data and information. Confidentiality refers to the responsibility of those entrusted with those data to maintain privacy.
- Anticipating possible big data or artificial intelligence approaches may require seeking consent at multiple levels and creating mechanisms to deal with those patients who do not wish to be included in such exercises.
- In many cases, the analysis of big data using complex computer algorithms may go beyond direct human intelligence.
- Physicians must be deeply involved in the development, validation, and implementation of big data analytics, artificial intelligence, and personalized medicine in health care.
REFERENCES
1. Information Commissioner's Office. Royal Free-Google DeepMind trial failed to comply with data protection law. Available at: https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2017/07/royal-free-google-deepmind-trial-failed-to-comply-with-data-protection-law/. Accessed August 20, 2017.
2. Hern A. Google DeepMind 1.6m patient record deal inappropriate.
The Guardian. Available at: http://www.theguardian.com/
technology/2017/may/16/google-deepmind-16m-patient-record-deal-
inappropriate-data-guardian-royal-free. Accessed August 20, 2017.
3. IBM Watson Health: Medical Imaging. Available at: https://www.ibm.com/watson/health/imaging/. Accessed August 10, 2017.
4. Google. Research at Google. Available at: https://research.google.com/
teams/brain/healthcare/. Accessed August 10, 2017.
5. Schmidt C. MD Anderson breaks with IBM Watson, raising questions about artificial intelligence in oncology. J Natl Cancer Inst 2017;109(5).
6. MD Anderson benches IBM Watson in setback for artificial intelligence in medicine. Available at: https://www.forbes.com/sites/matthewherper/2017/02/19/md-anderson-benches-ibm-watson-in-setback-for-artificial-intelligence-in-medicine/#4a096ac63774. Accessed August 20, 2017.
7. Fallik D. For big data, big questions remain. Health Aff Proj Hope
2014;33:1111-4.
8. Mooney SJ, Westreich DJ, El-Sayed AM. Commentary: epidemiology
in the era of big data. Epidemiol Camb Mass 2015;26:390-4.
9. Russell S, Norvig P. Artificial intelligence: a modern approach. Englewood Cliffs, New Jersey: Prentice Hall; 1995.
10. Lee J-G, Jun S, Cho Y-W, et al. Deep learning in medical imaging:
general overview. Kor J Radiol 2017;18:570-84.
11. Lacson R, Khorasani R. Natural language processing: the basics (part 1).
J Am Coll Radiol 2011;8:436-7.
12. Collins FS, Varmus H. A new initiative on precision medicine. N Engl
J Med 2015;372:793-5.
13. Kindig DA. Understanding population health terminology. Milbank
Q 2007;85:139-61.
14. Suresh S. Big data and predictive analytics: applications in the care of
children. Pediatr Clin North Am 2016;63:357-66.
15. Hippocratic oath. Available at: https://en.wikipedia.org/w/index.php?title=Hippocratic_Oath&oldid=795750050. Accessed December 6, 2017.
16. US Department of Health and Human Services. The Belmont report.
Available at: https://www.hhs.gov/ohrp/regulations-and-policy/
belmont-report/index.html. Accessed August 10, 2017.
17. Code of conduct. Available at: /code-of-conduct.html. Accessed
September 26, 2017.
18. Metcalf J. Ethics codes: history, context, and challenges. Available at:
http://bdes.datasociety.net/council-output/ethics-codes-history-context-
and-challenges/. Accessed September 2017.
19. American Health Information Management Association. AHIMA code of ethics. Available at: http://bok.ahima.org/doc?oid=105098. Accessed September 26, 2017.
20. Mittelstadt BD, Floridi L. The ethics of big data: current and foreseeable issues in biomedical contexts. Sci Eng Ethics 2016;22:303-41.
21. Centers for Disease Control and Prevention. Emergency preparedness for older adults; HIPAA, privacy and confidentiality. Available at: https://www.cdc.gov/aging/emergency/legal/privacy.htm. Accessed August 10, 2017.
22. Nunan D, Domenico MD. Market research and the ethics of big data.
Int J Mark Res 2013;55:505-20.
23. Andrejevic M. The big data divide. Int J Commun 2014;8:17.
24. Puschmann C, Burgess J. Metaphors of big data. Int J Commun
2014;8:20.
25. Choudhury S, Fishman JR, McGowan ML, Juengst ET. Big data,
open science and the brain: lessons learned from genomics. Front Hum
Neurosci 2014;8:239.
26. Zarsky T. Understanding discrimination in the scored society. Available at: https://papers.ssrn.com/abstract=2550248. Accessed August 23, 2017.
27. Crawford K. The hidden biases in big data. Harvard Business Review.
Available at: https://hbr.org/2013/04/the-hidden-biases-in-big-data.
Accessed August 23, 2017.
28. Tene O, Polonetsky J. Big data for all: privacy and user control in the
age of analytics. Nw J Tech Intell Prop 2012;11:xxvii.
29. Docherty A. Big data: ethical perspectives. Anaesthesia 2014;69:390-1.
30. Hall MA, Schulman KA. Ownership of medical information. JAMA
2009;301:1282-4.
31. Safdar N, Shet N, Bulas D, Knight N. Handoffs between radiolo-
gists and patients: threat or opportunity? J Am Coll Radiol 2011;8:
853-7.
32. Cartwright-Smith L, Gray E, Thorpe JH. Health information
ownership: legal theories and policy implications. Vand J Ent Tech L
2016;19:207.
33. Larson EB. Building trust in the power of "big data" research to serve the public good. JAMA 2013;309:2443-4.
34. Clayton EW. Informed consent and biobanks. J Law Med Ethics
2005;33:15-21.
35. Ioannidis JPA. Informed consent, big data, and the oxymoron of
research that is not research. Am J Bioethics 2013;13:40-2.
36. Currie J. "Big data" versus "big brother": on the appropriate use of large-scale data collections in pediatrics. Pediatrics 2013;131(suppl 2):S127-32.
37. Lomborg S, Bechmann A. Using APIs for data collection on social
media. Inf Soc 2014;30:256-65.
38. Master Z, Campo-Engelstein L, Caulfield T. Scientists' perspectives on consent in the context of biobanking research. Eur J Hum Genet 2015;23:569-74.
39. Majumder MA. Cyberbanks and other virtual research repositories.
J Law Med Ethics 2005;33:31-9.
40. Definition of epistemology. Available at: https://www.merriam-webster.com/dictionary/epistemology. Accessed August 20, 2017.
41. Markowetz A, Błaszkiewicz K, Montag C, Switala C, Schlaepfer TE.
Psycho-informatics: big data shaping modern psychometrics. Med
Hypotheses 2014;82:405-11.
42. Floridi L. The method of levels of abstraction. Minds Mach 2008;18:
303-29.
43. Floridi L. Big data and their epistemological challenge. Philos Technol
2012;25:435-7.
44. Crawford K, Gray ML, Miltner K. Critiquing big data: politics, ethics,
epistemology. Int J Commun 2014;8:10.
45. Berry DM. The computational turn: thinking about the digital hu-
manities. Available at: http://www.culturemachine.net/index.php/cm/
article/view/440. Accessed May 21, 2017.
46. Boyd D, Crawford K. Critical questions for big data. Inf Commun Soc
2012;15:662-79.
47. Fairfield J, Shtein H. Big data, big problems: emerging issues in the ethics of data science and journalism. J Mass Media Ethics 2014;29:38-51.
48. McNeely CL, Hahm J. The big (data) bang: policy, prospects, and
challenges. Rev Policy Res 2014;31:304-10.
49. Lupton D. The commodication of patient opinion: the digital patient
experience economy in the age of big data. Sociol Health Illn 2014;36:
856-69.
50. Lewis CM, Obregón-Tito A, Tito RY, Foster MW, Spicer PG. The
Human Microbiome Project: lessons from human genomics. Trends
Microbiol 2012;20:1-4.
51. Mathaiyan J, Chandrasekaran A, Davis S. Ethics of genomic research.
Perspect Clin Res 2013;4:100-4.
52. Zhang X, Pérez-Stable EJ, Bourne PE, et al. Big data science: oppor-
tunities and challenges to address minority health and health disparities
in the 21st century. Ethn Dis 2017;27:95-106.
53. Halverson CM, Clift KE, McCormick JB. Was it worth it? Patients' perspectives on the perceived value of genomic-based individualized medicine. J Community Genet 2016;7:145-52.
54. McCullough LB, Slashinski MJ, McGuire AL, et al. Is whole-exome
sequencing an ethically disruptive technology? Perspectives of pediat-
ric oncologists and parents of pediatric patients with solid tumors.
Pediatr Blood Cancer 2016;63:511-5.
55. Cohen IG, Amarasingham R, Shah A, Xie B, Lo B. The legal
and ethical concerns that arise from using complex
predictive analytics in health care. Health Aff Proj Hope
2014;33:1139-47.
56. NICE Citizens Council. What ethical and practical issues need to be considered in the use of anonymised information derived from personal care records as part of the evaluation of treatments and delivery of care? Available at: http://www.ncbi.nlm.nih.gov/books/NBK401705/. Accessed May 15, 2017.
57. Yusuf RA, Rogith D, Hovick SRA, et al. Attitudes toward molecular testing for personalized cancer therapy. Cancer 2015;121:243-50.
58. Caliskan A, Bryson JJ, Narayanan A. Semantics derived automatically
from language corpora contain human-like biases. Science 2017;356:
183-6.
59. How Twitter corrupted Microsoft's Tay: a crash course in the dangers
of AI in the real world. Available at: https://www.forbes.com/sites/
kalevleetaru/2016/03/24/how-twitter-corrupted-microsofts-tay-a-
crash-course-in-the-dangers-of-ai-in-the-real-world/#4f39fd926d28.
Accessed September 27, 2017.
60. Bhattasali N, Maiti E. Machine gaydar: using Facebook profiles to
predict sexual orientation. Mach Learn 2015.
61. Blumenthal R, McSweeny T. If you're not angry about Equifax, you should
be. Available at: http://www.cnn.com/2017/09/26/opinions/congress-
protect-americans-from-security-breaches-blumenthal-mcsweeny-opinion/
index.html. Accessed September 27, 2017.
62. Equifax CEO Richard Smith steps down amid hacking scandal. The
Washington Post. Available at: https://www.washingtonpost.com/
news/the-switch/wp/2017/09/26/equifax-ceo-retires-following-massive-
data-breach/?utm_term=.5de0e2bf4711. Accessed September 27,
2017.
63. McGuire AL, McCullough LB, Evans JP. The indispensable role
of professional judgment in genomic medicine. JAMA 2013;309:
1465-6.
Journal of the American College of Radiology, Volume 15, Number 3PB, March 2018
... For example, patients should provide informed consent beforehand, as they may refuse facial analysis. Furthermore, algorithms might be trained for particular demographics, further marginalizing already vulnerable groups [26,135,136]. Additionally, using AI/ML algorithms to detect pain through facial expressions has several limitations [26,135,136]. ...
... Furthermore, algorithms might be trained for particular demographics, further marginalizing already vulnerable groups [26,135,136]. Additionally, using AI/ML algorithms to detect pain through facial expressions has several limitations [26,135,136]. Their application may be limited by many factors, e.g., the patient's age, gender, behavior, as well as the head position and movements. Moreover, some medical conditions can affect facial shape and mobility, such as Parkinson's, stroke, facial injury, or deformity. ...
... Moreover, some medical conditions can affect facial shape and mobility, such as Parkinson's, stroke, facial injury, or deformity. All these factors can reduce the accuracy and sensitivity of the AI/ML to automatically detect pain experiences [26,135,136]. The WHO reiterates the importance of applying ethical principles and appropriate governance, as enumerated in the WHO guidance on the ethics and governance of AI for health, when designing, developing, and deploying AI for health. ...
Article
Full-text available
Pain is a significant health issue, and pain assessment is essential for proper diagnosis, follow-up, and effective management of pain. The conventional methods of pain assessment often suffer from subjectivity and variability. The main issue is to understand better how people experience pain. In recent years, artificial intelligence (AI) has been playing a growing role in improving clinical diagnosis and decision making. The application of AI offers promising opportunities to improve the accuracy and efficiency of pain assessment. This review article provides an overview of the current state of AI in pain assessment and explores its potential for improving accuracy, efficiency, and personalized care. By examining the existing literature, research gaps, and future directions, this article aims to guide further advancements in the field of pain management. An online database search was conducted via multiple websites to identify the relevant articles. The inclusion criteria were English articles
... Confidentiality: "the responsibility of those entrusted with those data to maintain privacy" [116]. ...
Article
Full-text available
The use of AI in healthcare has sparked much debate among philosophers, ethicists, regulators and policymakers who raised concerns about the implications of such technologies. The presented scoping review captures the progression of the ethical and legal debate and the proposed ethical frameworks available concerning the use of AI-based medical technologies, capturing key themes across a wide range of medical contexts. The ethical dimensions are synthesised in order to produce a coherent ethical framework for AI-based medical technologies, highlighting how transparency, accountability, confidentiality, autonomy, trust and fairness are the top six recurrent ethical issues. The literature also highlighted how it is essential to increase ethical awareness through interdisciplinary research, such that researchers, AI developers and regulators have the necessary education/competence or networks and tools to ensure proper consideration of ethical matters in the conception and design of new AI technologies and their norms. Interdisciplinarity throughout research, regulation and implementation will help ensure AI-based medical devices are ethical, clinically effective and safe. Achieving these goals will facilitate successful translation of AI into healthcare systems, which currently is lagging behind other sectors, to ensure timely achievement of health benefits to patients and the public.
... AI unfurls its wings, transcending statistical norms, diving into the mystical realms of nuance and complexity [5]. It's the alchemy of machine and mind, where algorithms transform into savants, learning, adapting, and honing their predicting abilities with each repetition [6]. In the crucible of predictive analytics, AI isn't just a facet; it's the keystone, an amalgam of methodologies and techniques, from the esoteric skills of machine learning to the ethereal regions of natural language processing [3]. ...
Article
Full-text available
Predictive analytics, driven by the tremendous force of artificial intelligence (AI), has caused an upsurge across multiple industries. It's a wonder, unearthing important insights and projections from the immense pools of data. But despite this digital transformation, a shadow hangs large: the ethics. Privacy, bias, openness, and accountability stand as towering pillars of worry. This paper begins on a journey, unravelling the subtle movement between AI and predictive analytics, while throwing light on the ethical challenges it generates. A historical voyage awaits, tracing the footsteps of AI's progress in predictive realms, passing across varied landscapes of theory and application. Privacy, that hallowed refuge, fractures beneath the eyes of predictive algorithms. Bias, an insidious phantom, creeps along the halls of predictive models. Transparency, a beacon of truth, flickers weakly in the face of complex decision-making. Strategies unfurl like maps in new territories: regulatory requirements stand guard against ethical breaches, bias detection techniques penetrate through the veil of prejudice, and transparency-enhancing measures cast a bright glow upon the murky waters of decision-making. Thus, as the pendulum swings, companies utilize the power of AI in predictive analytics with a firm hand, anchoring themselves to the shores of fairness, transparency, and responsibility.
... Ensuring the responsible storage, use and sharing of precision oncology data demands a robust data governance that concomitantly safeguards patient privacy and enables transparent data sharing necessary for clinical purposes and research, a 'sine qua non' for advancing precision oncology and improving patient outcomes (Balthazar et al., 2018). Nonetheless, it remains commonplace for patients to express concerns about potential misuse of their data (Sanderson et al., 2016), leading to potential exploitation and discrimination (Lowrance and Collins, 2007). ...
Article
Full-text available
The personalised oncology paradigm remains challenging to deliver despite technological advances in genomics-based identification of actionable variants combined with the increasing focus of drug development on these specific targets. To ensure we continue to build concerted momentum to improve outcomes across all cancer types, financial, technological and operational barriers need to be addressed. For example, complete integration and certification of the ‘molecular tumour board’ into ‘standard of care’ ensures a unified clinical decision pathway that both counteracts fragmentation and is the cornerstone of evidence-based delivery inside and outside of a research setting. Generally, integrated delivery has been restricted to specific (common) cancer types either within major cancer centres or small regional networks. Here, we focus on solutions in real-world integration of genomics, pathology, surgery, oncological treatments, data from clinical source systems and analysis of whole-body imaging as digital data that can facilitate cost-effectiveness analysis, clinical trial recruitment, and outcome assessment. This urgent imperative for cancer also extends across the early diagnosis and adjuvant treatment interventions, individualised cancer vaccines, immune cell therapies, personalised synthetic lethal therapeutics and cancer screening and prevention. Oncology care systems worldwide require proactive step-changes in solutions that include inter-operative digital working that can solve patient centred challenges to ensure inclusive, quality, sustainable, fair and cost-effective adoption and efficient delivery. Here we highlight workforce, technical, clinical, regulatory and economic challenges that prevent the implementation of precision oncology at scale, and offer a systematic roadmap of integrated solutions for standard of care based on minimal essential digital tools. 
These include unified decision support tools, quality control, data flows within an ethical and legal data framework, training and certification, monitoring and feedback. Bridging the technical, operational, regulatory and economic gaps demands the joint actions from public and industry stakeholders across national and global boundaries.
... Striking a balance between protecting patient privacy and promoting scientific advancements in cancer research is a critical challenge in cancer research and also in digital pathology. Data sharing and the use of AI algorithms in digital pathology have the potential to accelerate cancer research, but these advances must be weighed against the ethical obligation to safeguard patient privacy (Balthazar et al. 2018, Price & Cohen 2019). Recently, efforts have been made toward systematically analyzing the privacy risks associated with sharing WSIs and formulating guidelines for their release (Holub et al. 2023). ...
Article
Full-text available
Digital pathology, powered by whole-slide imaging technology, has the potential to transform the landscape of cancer research and diagnosis. By converting traditional histopathological specimens into high-resolution digital images, it paves the way for computer-aided analysis, uncovering a new horizon for the integration of artificial intelligence (AI) and machine learning (ML). The accuracy of AI- and ML-driven tools in distinguishing benign from malignant tumors and predicting patient outcomes has ushered in an era of unprecedented opportunities in cancer care. However, this promising field also presents substantial challenges, such as data security, ethical considerations, and the need for standardization. In this review, we delve into the needs that digital pathology addresses in cancer research, the opportunities it presents, its inherent potential, and the challenges it faces. The goal of this review is to stimulate a comprehensive discourse on harnessing digital pathology and AI in health care, with an emphasis on cancer diagnosis and research. Expected final online publication date for the Annual Review of Cancer Biology, Volume 8 is April 2024. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Article
The increasing integration of artificial intelligence (AI) in healthcare presents a host of ethical, legal, social, and political challenges involving various stakeholders. These challenges prompt various studies proposing frameworks and guidelines to tackle these issues, emphasizing distinct phases of AI development, deployment, and oversight. As a result, the notion of responsible AI has become widespread, incorporating ethical principles such as transparency, fairness, responsibility, and privacy. This paper explores the existing literature on AI use in healthcare to examine how it addresses, defines, and discusses the concept of responsibility. We conducted a scoping review of literature related to AI responsibility in healthcare, searching databases and reference lists between January 2017 and January 2022 for terms related to “responsibility” and “AI in healthcare”, and their derivatives. Following screening, 136 articles were included. Data were grouped into four thematic categories: (1) the variety of terminology used to describe and address responsibility; (2) principles and concepts associated with responsibility; (3) stakeholders’ responsibilities in AI clinical development, use, and deployment; and (4) recommendations for addressing responsibility concerns. The results show the lack of a clear definition of AI responsibility in healthcare and highlight the importance of ensuring responsible development and implementation of AI in healthcare. Further research is necessary to clarify this notion to contribute to developing frameworks regarding the type of responsibility (ethical/moral/professional, legal, and causal) of various stakeholders involved in the AI lifecycle.
Chapter
Artificial intelligence and machine learning are novel technologies that will change the way medicine is practiced. Ushering in this new tool in a conscientious way will require knowledge of the terminology and types of AI, as well as forward-thinking regarding the ethical and legal implications within the profession. At the outset, all health and biomedical research, whether AI-based or conventional methods, should adhere to the fundamental ethical principles: do good (beneficence), respect for persons (autonomy), do no harm (non-maleficence), and distributive justice, to assure protection of the safety, dignity, rights, and welfare of the community and the participants. These four fundamental principles have been broadened into ten general principles in the ICMR National Ethical Guidelines, 2017. These general ethical principles address most of the ethical aspects of any biomedical and health research. Nevertheless, AI for healthcare, to a large extent, depends on data obtained from human participants, which invokes additional concerns related to potential biases, data handling, interpretation, autonomy, risk minimization, professional competence, data sharing, and confidentiality. It is, therefore, imperative to have an ethical framework that addresses issues specific to AI for healthcare. Developers as well as end-users will need to consider the ethical and legal components alongside the functional creation of algorithms in order to foster acceptance and adoption, and most importantly, to prevent patient harm.
Article
A large number of high-quality and repeated digital images in clinical applications of ophthalmology have allowed the development of artificial intelligence studies in ophthalmology at a global level. Artificial intelligence algorithms can be used to diagnose diseases, monitor progression, analyze images, and evaluate treatment effectiveness by using digital data led by direct photography, fundus photography and optical coherence tomography. These programs can be used to make quick and accurate decisions in clinical applications in all areas of ophthalmology, especially diabetic retinopathy, glaucoma and age-related macular degeneration. This review, it is aimed to reveal the current status of artificial intelligence in clinical applications of ophthalmology, its prevalence and potential difficulties in clinical practice.
Article
Full-text available
The artificial neural network (ANN)–a machine learning technique inspired by the human neuronal synapse system–was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and healthcare, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging.
Article
Full-text available
Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicate a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as towards insects or flowers, problematic as towards race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology.
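As a rough illustration of the association measure this abstract describes (a WEAT-style test over word embeddings), the sketch below computes the differential association of two target word sets with two attribute sets using cosine similarity. All words, vectors, and values here are invented for illustration; a real analysis would use embeddings trained on a large corpus, as in the cited study.

```python
# Minimal WEAT-style sketch with tiny hand-made "embeddings" (illustrative only).
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two vectors
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def association(w, A, B):
    # s(w, A, B): mean similarity of word w to attribute set A minus to set B
    return (sum(cosine(w, a) for a in A) / len(A)
            - sum(cosine(w, b) for b in B) / len(B))

def weat_score(X, Y, A, B):
    # Differential association of target sets X and Y with attributes A and B
    return (sum(association(x, A, B) for x in X)
            - sum(association(y, A, B) for y in Y))

# Toy 2-d vectors: "flower" targets lie near pleasant attributes,
# "insect" targets near unpleasant ones
vectors = {
    "daisy": (0.9, 0.1), "tulip": (0.8, 0.2),    # targets X
    "wasp": (0.1, 0.9), "gnat": (0.2, 0.8),      # targets Y
    "lovely": (1.0, 0.0), "peace": (0.9, 0.05),  # attributes A (pleasant)
    "ugly": (0.0, 1.0), "filth": (0.05, 0.9),    # attributes B (unpleasant)
}
X = [vectors[w] for w in ("daisy", "tulip")]
Y = [vectors[w] for w in ("wasp", "gnat")]
A = [vectors[w] for w in ("lovely", "peace")]
B = [vectors[w] for w in ("ugly", "filth")]

score = weat_score(X, Y, A, B)
print(round(score, 3))  # positive: flowers associate with pleasant words here
```

With real embeddings, a permutation test over reshuffled target sets would then assess whether the observed score is larger than chance.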
Article
Full-text available
Addressing minority health and health disparities has been a missing piece of the puzzle in Big Data science. This article focuses on three priority opportunities that Big Data science may offer to the reduction of health and health care disparities. One opportunity is to incorporate standardized information on demographic and social determinants in electronic health records in order to target ways to improve quality of care for the most disadvantaged populations over time. A second opportunity is to enhance public health surveillance by linking geographical variables and social determinants of health for geographically defined populations to clinical data and health outcomes. Third and most importantly, Big Data science may lead to a better understanding of the etiology of health disparities and understanding of minority health in order to guide intervention development. However, the promise of Big Data needs to be considered in light of significant challenges that threaten to widen health disparities. Care must be taken to incorporate diverse populations to realize the potential benefits. Specific recommendations include investing in data collection on small sample populations, building a diverse workforce pipeline for data science, actively seeking to reduce digital divides, developing novel ways to assure digital data privacy for small populations, and promoting widespread data sharing to benefit under-resourced minority-serving institutions and minority researchers. With deliberate efforts, Big Data presents a dramatic opportunity for reducing health disparities but without active engagement, it risks further widening them. Ethn Dis 2017;27(2):95-106; doi:10.18865/ed.27.2.95.
Article
Full-text available
The value of genomic sequencing is often understood in terms of its ability to affect diagnosis or treatment. In these terms, successes occur only in a minority of cases. This paper presents views from patients who had exome sequencing done clinically to explore how they perceive the utility of genomic medicine. The authors used semi-structured, qualitative interviews in order to study patients' attitudes toward genomic sequencing in oncology and rare-disease settings. Participants from 37 cases were interviewed. In terms of the testing's key values, regardless of having received what clinicians described as meaningful results, participants expressed four qualities that are separate from traditional views of clinical utility: Participants felt they had been empowered over their own health. They felt they had contributed altruistically to the progress of genomic technology in medicine. They felt their suffering had been legitimated. They also felt a sense of closure, having done everything they could. Patients expressed overwhelmingly positive attitudes toward sequencing. Their rationale was not solely based on the results' clinical utility. It is important for clinicians to understand this non-medical reasoning as it pertains to patient decision-making and informed consent.
Article
Why now? This is the first question we might ask of the big data phenomenon. Why has it gained such remarkable purchase in a range of industries and across academia, at this point in the 21st century? Big data as a term has spread like kudzu in a few short years, ranging across a vast terrain that spans health care, astronomy, policing, city planning, and advertising. From the RNA bacteriophages in our bodies to the Kepler Space Telescope, searching for terrorists or predicting cereal preferences, big data is deployed as the term of art to encompass all the techniques used to analyze data at scale. But why has the concept gained such traction now?
Article
Background: It has been anticipated that physician and parents will be ill prepared or unprepared for the clinical introduction of genome sequencing, making it ethically disruptive. Procedure: As a part of the Baylor Advancing Sequencing in Childhood Cancer Care study, we conducted semistructured interviews with 16 pediatric oncologists and 40 parents of pediatric patients with cancer prior to the return of sequencing results. We elicited expectations and attitudes concerning the impact of sequencing on clinical decision making, clinical utility, and treatment expectations from both groups. Using accepted methods of qualitative research to analyze interview transcripts, we completed a thematic analysis to provide inductive insights into their views of sequencing. Results: Our major findings reveal that neither pediatric oncologists nor parents anticipate sequencing to be an ethically disruptive technology, because they expect to be prepared to integrate sequencing results into their existing approaches to learning and using new clinical information for care. Pediatric oncologists do not expect sequencing results to be more complex than other diagnostic information and plan simply to incorporate these data into their evidence-based approach to clinical practice, although they were concerned about impact on parents. For parents, there is an urgency to protect their child's health and in this context they expect genomic information to better prepare them to participate in decisions about their child's care. Conclusions: Our data do not support the concern that introducing genome sequencing into childhood cancer care will be ethically disruptive, that is, leave physicians or parents ill prepared or unprepared to make responsible decisions about patient care.
Article
As big data techniques become widespread in journalism, both as the subject of reporting and as newsgathering tools, the ethics of data science must inform and be informed by media ethics. This article explores emerging problems in ethical research using big data techniques. It does so using the duty-based framework advanced by W.D. Ross, who has significantly influenced both research science and media ethics. A successful framework must provide stability and flexibility. Without stability, ethical precommitments will vanish as technology rapidly shifts costs. Without flexibility, traditional approaches will rapidly become obsolete in the face of technological change. The article concludes that Ross's duty-based approach both provides stability in the face of rapid technological change and flexibility to innovate to achieve the original purpose of basic ethical principles.