ORIGINAL ARTICLE
Protecting Your Patients' Interests in the
Era of Big Data, Artificial Intelligence, and
Predictive Analytics
Patricia Balthazar, MD,a Peter Harri, MD,a Adam Prater, MD, MPH,a Nabile M. Safdar, MD, MPHa
Abstract
The Hippocratic oath and the Belmont report articulate foundational principles for how physicians interact with patients and research
subjects. The increasing use of big data and artificial intelligence techniques demands a re-examination of these principles in light of the
potential issues surrounding privacy, confidentiality, data ownership, informed consent, epistemology, and inequities. Patients have
strong opinions about these issues. Radiologists have a fiduciary responsibility to protect the interests of their patients. As such, the
community of radiology leaders, ethicists, and informaticists must have a conversation about the appropriate way to deal with these
issues and help lead the way in developing capabilities in the most just, ethical manner possible.
Key Words: Artificial intelligence, machine learning, ethics, big data, informatics
J Am Coll Radiol 2018;15:580-586. Copyright 2017 American College of Radiology
When Google DeepMind needed to test an app to pro-
vide alerts for patients at risk for worsening renal disease,
it gathered the records of 1.6 million patients from the
Royal Free Hospital. The Information Commissioner’s
Office, an “independent authority set up to uphold in-
formation rights in the public interest, promoting open-
ness by public bodies and data privacy for individuals" in
the United Kingdom, disapproved, finding that the
arrangement between the two entities broke the law and
failed to uphold the data privacy rights of individuals
[1,2]. Although disclosures of patient information for
direct patient care are widely accepted, otherwise
identical disclosures for research and development
require informed consent. The distinction between
patient care and research is widely recognized, yet its
proper application in the setting of new techniques can
elude even the most capable organizations.
Imaging is a robust source of phenotypic information
suitable for the application of big data, artificial intelli-
gence, and personalized medicine methods. Industry has
taken notice of this relatively unexplored frontier and
spent considerable resources surveying options to harness
the power of imaging data [3,4], eagerly seeking partners
in health care. Although some have forged ahead, others
have reconsidered their initial forays into this space with
industry partners [5,6]. Because the conversation often
begins with imaging and the radiology department, it
behooves any health care provider, department, or
system to consider important questions regarding their
big data and artificial intelligence efforts, whether
internally or in partnership with external partners.
Credits awarded for this enduring activity are designated "SA-CME" by the American Board of Radiology (ABR) and qualify toward fulfilling requirements for Maintenance of Certification (MOC) Part II: Lifelong Learning and Self-assessment. To access the SA-CME activity visit https://3s.acr.org/Presenters/CaseScript/CaseView?CDId=WcDgy/tmftU%3d.

aDepartment of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia.
Corresponding author and reprints: Nabile M. Safdar, MD, MPH, Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1364 Clifton Road NE, Room D125A, Atlanta, GA 30322; e-mail: nmsafda@emory.edu.
The authors have no conflicts of interest related to the material discussed in this article.
© 2017 American College of Radiology
1546-1440/17/$36.00 https://doi.org/10.1016/j.jacr.2017.11.035

We have long subscribed to ethical and regulatory frameworks to guide our use of patient and research subject data. In many cases, we seem unsure of how to apply these conventions in the era of big data and artificial intelligence, with their seemingly insatiable appetite
for more information. These methods have real conse-
quences, with the potential to affect the lives of in-
dividuals and populations in ways that could benefit some
while harming others. Here, we touch on the major
principles of existing applicable frameworks in this
setting, explore known issues when dealing with big data
and machine learning in health care, explore perspectives
from key stakeholders, and pose questions for discussion
for imaging health care professionals to consider as they
embark on their own big data and artificial intelligence
ventures.
BRIEF DEFINITIONS
The expressions big data, artificial intelligence, personal-
ized medicine, population health, and predictive analytics
represent a family of concepts that are related but not
synonymous. For those being first introduced to the field,
a brief delineation of these terms follows.
Big Data
Still a vaguely defined term [7], "big data" is characterized by at
least three increasingly accepted properties of data,
the "3 Vs": volume, variety, and velocity [8]. These are
especially suitable for radiology data, which include
large volumes of images and reports, in a variety of
imaging modalities, body parts, and formats
(unstructured text and structured DICOM), that are
rapidly generated and potentially analyzed in real time
or near real time.
Artificial Intelligence
Artificial intelligence is a branch of computer science that
encompasses the automation of intelligent behavior [9].
Machine learning, a subfield of artificial intelligence,
comprises data-driven techniques, such as deep
learning, that uncover patterns and make predictions
with minimal human intervention [10].
The machine "learns" by analyzing training data and
then making predictions on a new data set [10]. This
technology holds promise in radiology for preliminary
lesion detection and differential diagnosis generation,
potentially augmenting the sensitivity and accuracy of
radiologists. Natural language processing, also a subfield
of artificial intelligence, focuses on understanding the
full meaning of written or spoken text by integrating
concepts and methods pulled from various domains [11].
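As a minimal illustration of the train-then-predict pattern described above, the following sketch uses purely synthetic data and the open-source scikit-learn library; it is not a clinical model and does not represent any system discussed in this article:

```python
# Illustrative sketch only: a model "learns" from training data and then
# makes predictions on data it has not seen. All data here are synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "imaging features" and binary labels (hypothetical example).
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)          # the machine "learns" from training data
predictions = model.predict(X_test)  # predictions on a new data set
print(model.score(X_test, y_test))   # accuracy on held-out data
```

The held-out test set is what distinguishes genuine generalization from memorization of the training data, a distinction that matters for the validity questions raised later in this article.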
Precision or Personalized Medicine
Precision or personalized medicine involves prevention
and treatment strategies that take individual variability
into account [12], for example, scheduling earlier
mammographic and MRI breast cancer screening for
patients with BRCA gene mutations. Precision and
personalized medicine is increasingly dependent on big
data and artificial intelligence techniques.
Population Health
Population health and public health are often inter-
changeably used terms and refer to the health of a group
of individuals, rather than the individuals themselves,
organized into many different units of analysis, depend-
ing on the research or policy purpose [13].
Predictive Analytics
Predictive analytics is a broad term used to describe a
variety of statistical techniques, such as modeling, ma-
chine learning, and data mining, that analyze current and
historical data to predict future events or behaviors [14].
When artificial intelligence algorithms have access to big
data, they may facilitate the advancement of predictive
analytics in population health or personalized medicine.
EXISTING FRAMEWORKS
I will respect the privacy of my patients, for their
problems are not disclosed to me that the world may
know.
—Hippocratic oath [15]
Numerous legal precedents, ethical frameworks, and
historical milestones have contributed to our current
understanding of how to appropriately interact with hu-
man subjects, patients, and clients. From these, two of
the most well known in health care are the Hippocratic
oath and the Belmont report, which articulate founda-
tional principles for how physicians ethically treat patients
and deal with research participants.
The Hippocratic oath is one of the earliest known
calls to respect the privacy of patients and respect the
confidentiality of the information with which they
entrust their physicians. In this regard, the Hippocratic
oath’s call to respect privacy and confidentiality predates
the US Constitution, the European Union Charter of
Fundamental Rights, and HIPAA, all of which allude
to privacy of individuals, if not the confidentiality of
their data.
The Belmont report [16] deals with the protection of
human subjects in research. The principles of “respect for
persons, beneficence, and justice”manifest in our current
practices of informed consent, assessment of risks and
benefits, and just selection of subjects.
These cornerstones of our shared ethical principles
have been designed to apply to a wide variety of settings,
present and future, and should not be deemed irrelevant,
even in the era of big data and artificial intelligence. The
imaging community should revisit the meaning and
practical implications of these principles in light of new
questions. If needed, additional standards can be derived
from these principles to address specific issues that may
arise in the era of big data.
Other reports and codes, such as those of the Inter-
national Medical Informatics Association, the Association
of Computing Machinery, the American Health Infor-
mation Management Association, the Data Science As-
sociation, and the Institute of Electrical and Electronics
Engineers, also evaluate issues of confidentiality, data
ownership, epistemology, informed consent, and justice
to varying degrees [17-19]. The imaging community and
individual health care entities should survey these and
other codes to identify the most relevant components
in the process of developing our own code of ethics and
conduct with respect to imaging-related health care data
and their potential use in big data and artificial
intelligence.
SOME FUNDAMENTAL CONSIDERATIONS
As the use of big data, artificial intelligence, and related
techniques in health care expand, conversations about
the appropriate, ethical use of these methods become
increasingly relevant. Here, we explore common,
fundamental issues relevant to those considering devel-
oping or using such techniques for imaging, including
privacy, ownership, informed consent, epistemology,
and potential inequities between various data constitu-
ents [20].
Privacy and Confidentiality
- How do we keep data-driven insights about sensitive health issues confidential?
- How do institutions prevent the reidentification of individuals from the joining of data sets?
- What is your obligation to notify a patient or subject of a health risk or propensity identified using big data or machine learning techniques?
Privacy and confidentiality are closely related issues.
Data privacy refers to the rights of individuals to maintain
control over their own health information. Confidenti-
ality refers to the responsibility of those entrusted with
those data to maintain privacy [21].
Privacy and confidentiality issues affect the scope,
proper storage of, access to, and dissemination of data,
especially data that are highly sensitive or personal. As the
breadth of the collected data and their analyses continue
to increase rapidly [20,22], these data are being housed
electronically in perpetuity [23,24], which increases the
risk for privacy violations. Additionally, anonymization
of the data does not ensure against individuals’being
identified subsequently through the joining of data sets
and reidentification [25], manipulation of the data
causing discrimination [26,27], or other improper uses
[28]. Aggregated data about specific groups might also
be created, which can cause stigmatization [29].
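The joining-of-data-sets risk noted above can be sketched with a toy example; every name, ZIP code, date, and diagnosis below is invented, and the quasi-identifiers (ZIP code, birth date) stand in for any fields shared between an "anonymized" release and a public record:

```python
# Toy illustration (invented data): an "anonymized" data set that retains
# quasi-identifiers can be re-identified by joining it with a public data
# set that links those same fields to names.
import pandas as pd

anonymized = pd.DataFrame({
    "zip": ["30322", "30307"],
    "birth_date": ["1980-01-02", "1975-06-30"],
    "diagnosis": ["condition A", "condition B"],
})
public_records = pd.DataFrame({
    "name": ["Jane Doe", "John Roe"],
    "zip": ["30322", "30307"],
    "birth_date": ["1980-01-02", "1975-06-30"],
})

# The join restores the link between name and diagnosis.
reidentified = anonymized.merge(public_records, on=["zip", "birth_date"])
print(reidentified[["name", "diagnosis"]])
```

Removing direct identifiers is therefore not sufficient; protecting against reidentification requires attention to which quasi-identifiers survive in a released data set.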
Ownership of Data and Subsequently Developed
Products
- Can patient data be reused for developing and validating advanced analytic methods? Can they be shared or sold for this purpose?
- If an app is developed and validated using patient data, should the app be sold for profit?
In this context, “ownership”deals with who controls
(possesses or allows access to the data) and gains from
intellectual property that is subsequently developed [20].
Medical information, the key ingredient for any health-
related big data or artificial intelligence exercise, is not
owned in the same sense that a physical object, or even
intellectual property, is in other settings. Although in
some cases property law may apply, in other cases an
intellectual property framework may be more appro-
priate. Patients, health care providers, and hospital sys-
tems are all stakeholders that may have intersecting rights
and responsibilities when it comes to individual medical
records [30,31]. There is no single legal construction that
regulates ownership of health care data but rather a
combination of federal, state, and international laws
and rules through which health information ownership
is governed [32].
Informed Consent
- Does your institution have mechanisms for blanket or tiered consent for the development, validation, and use of big data analytics or artificial intelligence?
- What mechanisms are in place to exclude the data of individuals who do opt out?
Informed consent typically implies general consent to
treat, consent for a specific procedure, or participation in
a research study; however, big data often requires pooling
and analyzing data in the future for purposes not antic-
ipated at the time of consent [25]. Anticipating possible
big data or artificial intelligence approaches may require
seeking consent at multiple levels and creating
mechanisms to deal with those patients who do not
wish to be included in such exercises. Blanket consents
preauthorize a wide range of secondary analytics [33],
addressing the impracticability of obtaining consent for
multiple future analyses but may also reduce autonomy
[34-38]. Tiered consent enables the exclusion of specific
uses of the data [39] but usually cannot anticipate all
possible purposes [38]. Your system, hospital, or
practice should review and accordingly revise its initial
intake and consent regime to consider possible big data
and artificial intelligence uses of patient data. These
discussions should include mechanisms for patients to
opt out of such uses.
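One such opt-out mechanism, sketched here with invented patient IDs and a hypothetical consent flag, is simply to filter every record against a consent registry before any secondary analysis begins:

```python
# Hypothetical sketch: exclude opted-out patients before secondary analysis.
# Patient IDs, the consent flag name, and the data layout are all invented.
import pandas as pd

records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "feature": [0.1, 0.4, 0.3, 0.9],
})
consent = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "secondary_use_opt_out": [False, True, False, False],
})

# Keep only records from patients who have not opted out of secondary use.
opted_in = consent.loc[~consent["secondary_use_opt_out"], "patient_id"]
analyzable = records[records["patient_id"].isin(opted_in)]
print(len(analyzable))
```

Applying the filter at intake, before data reach any analytic pipeline, makes the exclusion auditable and keeps opted-out records from being pooled in the first place.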
The “Black Box”
- How do we know that the results of artificial intelligence algorithms are valid?
- Were the data sets with which they were developed representative?
- How would your institution defend the results of an algorithm directly affecting a patient's health care, if no provider could completely comprehend how the algorithm reached its conclusion?
Epistemology is “the study or a theory of the nature
and grounds of knowledge especially with reference to its
limits and validity”[40]. In many cases, the analysis of
big data may go beyond direct human intelligence,
calling into question how we can ascertain the validity
of the results [20]. With some techniques, the results of
any analysis could represent statistically significant noise
found by an algorithm without any clinical significance
[24,41]. Human input is needed to drive analysis
scientifically and provide context to the results [42,43].
Inequities
- Will predictive analytics lead to inadvertent harm of an individual or group, financially or otherwise?
- Could a group be stigmatized by the results of an artificial intelligence classification?
Inequities in the era of artificial intelligence may take the
form of a power gradient between the subjects providing data
and the organizations able to use those data
[23,44], concentrating the ability to benefit from the data
[24,45-48]. In this circumstance, patients may be
divorced from the analyses their own data support [28]
and from the ability to buy and sell those very data [48].
Additionally, aggregated data may be used to make
decisions about larger groups of people or create
groupings that previously did not exist, potentially
leading to discrimination, profiling, or surveillance
[23,41,49]. If the data are collected, they may favor
groups that are more willing to provide the data [50].
Even in diverse data sets, the data could still be used to
preferentially benefit one group over another [26,27,51].
Health care disparities have often been overlooked by
the emerging technology centered on big data [52].
Certainly, there is a risk that we perpetuate groups of
"haves and have nots" on the basis of access to these
technologies. Active engagement with small population
data sets, addressing social determinants of health, and
actively promoting data access to underprivileged
populations are potential avenues for mitigating the
effects of such a digital divide.
We believe that researchers and institutions entering
the marketplace for artificial intelligence discovery and
tool development should involve the local ethics review
committee, institutional review board, or other body and
craft a policy regarding these issues. The issues discussed
above, including privacy, confidentiality measures,
ownership concerns, informed consent, epistemology,
and inequities, can serve as a checklist with which the
completeness of the developed policies can be evaluated.
PATIENT PERSPECTIVES
Many patients may not have considered the differences
between privacy and confidentiality, the epistemology of
predictions from machine learning methods, or the justice
implications of population health analytics. However,
when asked, patients have strong opinions about the
appropriate use of their health information.
Studies have shown that in general, many patients
have positive attitudes toward precision medicine [53,54].
Although not directly related to imaging per se, Halverson
et al [53] interviewed patients who had undergone genomic
sequencing in oncology and rare-disease settings and
found that participants had predominantly positive
attitudes toward sequencing. Some of the positive feelings
were empowerment over their own health, altruistic
contribution to the progress of medicine, the legitimization
of their suffering, and a sense of closure in having done
everything they could [53]. However, in the era of
predictive analytics, decisions can be driven by financial
and administrative incentives [55] not necessarily aligned
with the needs of patients. Patients have significant
concerns about sharing their anonymized personal health
records when they might be divulged or sold to other
organizations to be used for profit [56]. Questions of
data security, privacy, and confidentiality are also
common among patients, especially when it comes to
sensitive information (eg, drug abuse, mental health) that
could have an impact on their personal employment or
health insurance coverage [56].
When it comes to potential inequities, patients’
opinions may vary with their backgrounds and sense of
vulnerability. One study assessing attitudes of patients
with breast cancer toward molecular testing for personal-
ized therapy and research found that nonwhites were
less willing to undergo testing even if the results would
guide their own therapy [57]. This is problematic on many
levels, including the fact that diverse representation is
needed for validity and reliability of artificial intelligence
and personalized medicine techniques.
There is concern that with the use of predictive
analytics, a racial or religious population associated with an
illness, poverty, or another factor could become further
marginalized or stigmatized, either by being underrepre-
sented during the development of these methods, through
spurious associations, or through analyses focusing on an
outcome other than health. In a worst-case scenario, a data
set or artificial intelligence algorithm can “inherit”the
systematic biases from which they originate or the implicit
biases of those that curated them [26,58]. Indeed, the
potential for this was realized in the popular imagination
when Microsoft created an artificial intelligence Twitter
chatbot that started posting racist and genocidal tweets
within 24 hours of "learning" from human-generated text
[59]. This issue is further complicated by the ability of
machines to identify minorities in ways that transcend
human perception, as exemplified by a recent project
demonstrating that a machine-learning algorithm using
Facebook profiles was able to predict sexual orientation in
men with 91% accuracy [60]. When applying machine
learning algorithms to health care, we must be aware that
they were derived from existing data, usually entered by
humans, and therefore, might propagate human bias.
Given that we are in an era of increasing information
accumulation and larger data breaches [61,62], and that
patient data can be used for nefarious purposes, we
advocate that the radiology community should craft a
“patient bill of rights”that addresses these issues
specifically in collaboration with other physicians,
ethicists, and patients. If radiology is to maintain its
position as a leading health care discipline in big data
and artificial intelligence, we must also lead in the
ethical use of these methods. Among other items, such
a bill of rights should consider the following:
- Patients' imaging data are valuable and deserve the highest level of security reasonably available.
- Patients are entitled to know what their imaging data can and cannot be used for, including secondary uses.
- Patients' imaging data will not be used to harm them or a group to which they belong.
THE ROLE OF RADIOLOGISTS
Radiologists must be deeply involved in the development,
validation, and implementation of big data analytics,
artificial intelligence, and personalized medicine [63] in
imaging. Clinical expertise is essential to asking the
right questions, accurately interpreting results, and
communicating with patients for optimal decision
making. Furthermore, physicians have a fiduciary
responsibility for the well-being of their patients, as
affirmed in the Hippocratic oath, rendering them pro-
fessionally responsible for securing the interest of their
patients. Although corporations and health care systems
may have legal obligations and policies, physicians have
an individual, moral obligation to protect their patients'
privacy and data.
As the appetite for data to develop artificial intelli-
gence, precision medicine, and predictive analytics in
imaging grows, more radiologists will have to consider
how they wish to engage with their patients, their pa-
tients’data, and the third parties looking for clinical
partners. The community of imaging scientists, ethicists,
radiology leaders, and informaticists must have their own
conversation about the appropriate way to deal with these
issues and help lead the way in developing capabilities in
the most just, ethical manner possible.
TAKE-HOME POINTS
-Data privacy refers to the rights of individuals to
maintain control over their health data and infor-
mation. Confidentiality refers to the responsibility
of those entrusted with those data to maintain
privacy.
-Anticipating possible big data or artificial intelli-
gence approaches may require seeking consent at
multiple levels and creating mechanisms to deal
with those patients who do not wish to be included
in such exercises.
-In many cases, the analysis of big data using com-
plex computer algorithms may go beyond direct
human intelligence.
-Physicians must be deeply involved in the devel-
opment, validation, and implementation of big data
analytics, artificial intelligence, and personalized
medicine in health care.
REFERENCES
1. Information Commissioner's Office. Royal Free—Google DeepMind
trial failed to comply with data protection law. Available
at: https://ico.org.uk/about-the-ico/news-and-events/news-and-
blogs/2017/07/royal-free-google-deepmind-trial-failed-to-comply-
with-data-protection-law/. Accessed August 20, 2017.
2. Hern A. Google DeepMind 1.6m patient record deal “inappropriate.”
The Guardian. Available at: http://www.theguardian.com/
technology/2017/may/16/google-deepmind-16m-patient-record-deal-
inappropriate-data-guardian-royal-free. Accessed August 20, 2017.
3. IBM Watson Health—Medical Imaging. Available at: https://www.
ibm.com/watson/health/imaging/. Accessed August 10, 2017.
4. Google. Research at Google. Available at: https://research.google.com/
teams/brain/healthcare/. Accessed August 10, 2017.
5. Schmidt C. MD Anderson breaks with IBM Watson, raising questions
about artificial intelligence in oncology. J Natl Cancer Inst
2017;109(5).
6. MD Anderson benches IBM Watson in setback for artificial intelligence
in medicine. Available at: https://www.forbes.com/sites/matthewherper/
2017/02/19/md-anderson-benches-ibm-watson-in-setback-for-artificial-
intelligence-in-medicine/#4a096ac63774. Accessed August 20, 2017.
7. Fallik D. For big data, big questions remain. Health Aff Proj Hope
2014;33:1111-4.
8. Mooney SJ, Westreich DJ, El-Sayed AM. Commentary: epidemiology
in the era of big data. Epidemiol Camb Mass 2015;26:390-4.
9. Russell S, Norvig P. Artificial intelligence: a modern approach.
Englewood Cliffs, New Jersey: Prentice Hall; 1995.
10. Lee J-G, Jun S, Cho Y-W, et al. Deep learning in medical imaging:
general overview. Kor J Radiol 2017;18:570-84.
11. Lacson R, Khorasani R. Natural language processing: the basics (part 1).
J Am Coll Radiol 2011;8:436-7.
12. Collins FS, Varmus H. A new initiative on precision medicine. N Engl
J Med 2015;372:793-5.
13. Kindig DA. Understanding population health terminology. Milbank
Q 2007;85:139-61.
14. Suresh S. Big data and predictive analytics: applications in the care of
children. Pediatr Clin North Am 2016;63:357-66.
15. Hippocratic oath. Available at: https://en.wikipedia.org/w/index.php?title=Hippocratic_Oath&oldid=795750050. Accessed December 6, 2017.
16. US Department of Health and Human Services. The Belmont report.
Available at: https://www.hhs.gov/ohrp/regulations-and-policy/
belmont-report/index.html. Accessed August 10, 2017.
17. Code of conduct. Available at: /code-of-conduct.html. Accessed
September 26, 2017.
18. Metcalf J. Ethics codes: history, context, and challenges. Available at:
http://bdes.datasociety.net/council-output/ethics-codes-history-context-
and-challenges/. Accessed September 2017.
19. American Health Information Management Association. AHIMA code of ethics. Available at: http://bok.ahima.org/doc?oid=105098. Accessed September 26, 2017.
20. Mittelstadt BD, Floridi L. The ethics of big data: current and fore-
seeable issues in biomedical contexts. Sci Eng Ethics 2016;22:303-41.
21. Centers for Disease Control and Prevention. Emergency preparedness
for older adults; HIPAA, privacy and confidentiality. Available at:
https://www.cdc.gov/aging/emergency/legal/privacy.htm. Accessed
August 10, 2017.
22. Nunan D, Domenico MD. Market research and the ethics of big data.
Int J Mark Res 2013;55:505-20.
23. Andrejevic M. The big data divide. Int J Commun 2014;8:17.
24. Puschmann C, Burgess J. Metaphors of big data. Int J Commun
2014;8:20.
25. Choudhury S, Fishman JR, McGowan ML, Juengst ET. Big data,
open science and the brain: lessons learned from genomics. Front Hum
Neurosci 2014;8:239.
26. Zarsky T. Understanding discrimination in the scored society. Available at: https://papers.ssrn.com/abstract=2550248. Accessed August 23, 2017.
27. Crawford K. The hidden biases in big data. Harvard Business Review.
Available at: https://hbr.org/2013/04/the-hidden-biases-in-big-data.
Accessed August 23, 2017.
28. Tene O, Polonetsky J. Big data for all: privacy and user control in the
age of analytics. Nw J Tech Intell Prop 2012;11:xxvii.
29. Docherty A. Big data—ethical perspectives. Anaesthesia 2014;69:390-1.
30. Hall MA, Schulman KA. Ownership of medical information. JAMA
2009;301:1282-4.
31. Safdar N, Shet N, Bulas D, Knight N. Handoffs between radiolo-
gists and patients: threat or opportunity? J Am Coll Radiol 2011;8:
853-7.
32. Cartwright-Smith L, Gray E, Thorpe JH. Health information
ownership: legal theories and policy implications. Vand J Ent Tech L
2016;19:207.
33. Larson EB. Building trust in the power of "big data" research to serve the public good. JAMA 2013;309:2443-4.
34. Clayton EW. Informed consent and biobanks. J Law Med Ethics
2005;33:15-21.
35. Ioannidis JPA. Informed consent, big data, and the oxymoron of
research that is not research. Am J Bioethics 2013;13:40-2.
36. Currie J. "Big data" versus "big brother": on the appropriate use of large-scale data collections in pediatrics. Pediatrics 2013;131(suppl 2):S127-32.
37. Lomborg S, Bechmann A. Using APIs for data collection on social
media. Inf Soc 2014;30:256-65.
38. Master Z, Campo-Engelstein L, Caulfield T. Scientists' perspectives on consent in the context of biobanking research. Eur J Hum Genet 2015;23:569-74.
39. Majumder MA. Cyberbanks and other virtual research repositories.
J Law Med Ethics 2005;33:31-9.
40. Definition of epistemology. Available at: https://www.merriam-
webster.com/dictionary/epistemology. Accessed August 20, 2017.
41. Markowetz A, Błaszkiewicz K, Montag C, Switala C, Schlaepfer TE.
Psycho-informatics: big data shaping modern psychometrics. Med
Hypotheses 2014;82:405-11.
42. Floridi L. The method of levels of abstraction. Minds Mach 2008;18:
303-29.
43. Floridi L. Big data and their epistemological challenge. Philos Technol
2012;25:435-7.
44. Crawford K, Gray ML, Miltner K. Critiquing big data: politics, ethics,
epistemology. Int J Commun 2014;8:10.
45. Berry DM. The computational turn: thinking about the digital hu-
manities. Available at: http://www.culturemachine.net/index.php/cm/
article/view/440. Accessed May 21, 2017.
46. Boyd D, Crawford K. Critical questions for big data. Inf Commun Soc
2012;15:662-79.
47. Fairfield J, Shtein H. Big data, big problems: emerging issues in the ethics
of data science and journalism. J Mass Media Ethics 2014;29:38-51.
48. McNeely CL, Hahm J. The big (data) bang: policy, prospects, and
challenges. Rev Policy Res 2014;31:304-10.
49. Lupton D. The commodification of patient opinion: the digital patient
experience economy in the age of big data. Sociol Health Illn 2014;36:
856-69.
50. Lewis CM, Obregón-Tito A, Tito RY, Foster MW, Spicer PG. The
Human Microbiome Project: lessons from human genomics. Trends
Microbiol 2012;20:1-4.
51. Mathaiyan J, Chandrasekaran A, Davis S. Ethics of genomic research.
Perspect Clin Res 2013;4:100-4.
52. Zhang X, Pérez-Stable EJ, Bourne PE, et al. Big data science: oppor-
tunities and challenges to address minority health and health disparities
in the 21st century. Ethn Dis 2017;27:95-106.
53. Halverson CM, Clift KE, McCormick JB. Was it worth it? Patients’
perspectives on the perceived value of genomic-based individualized
medicine. J Community Genet 2016;7:145-52.
54. McCullough LB, Slashinski MJ, McGuire AL, et al. Is whole-exome
sequencing an ethically disruptive technology? Perspectives of pediat-
ric oncologists and parents of pediatric patients with solid tumors.
Pediatr Blood Cancer 2016;63:511-5.
55. Cohen IG, Amarasingham R, Shah A, Xie B, Lo B. The legal
and ethical concerns that arise from using complex
predictive analytics in health care. Health Aff Proj Hope
2014;33:1139-47.
56. NICE Citizens Council. What ethical and practical issues need to be
considered in the use of anonymised information derived from per-
sonal care records as part of the evaluation of treatments and delivery of
care? Available at: http://www.ncbi.nlm.nih.gov/books/NBK401705/.
Accessed May 15, 2017.
57. Yusuf RA, Rogith D, Hovick SRA, et al. Attitudes toward mo-
lecular testing for personalized cancer therapy. Cancer 2015;121:
243-50.
58. Caliskan A, Bryson JJ, Narayanan A. Semantics derived automatically
from language corpora contain human-like biases. Science 2017;356:
183-6.
59. How Twitter corrupted Microsoft’s Tay: a crash course in the dangers
of AI in the real world. Available at: https://www.forbes.com/sites/
kalevleetaru/2016/03/24/how-twitter-corrupted-microsofts-tay-a-
crash-course-in-the-dangers-of-ai-in-the-real-world/#4f39fd926d28.
Accessed September 27, 2017.
60. Bhattasali N, Maiti E. Machine “gaydar”: using Facebook profiles to
predict sexual orientation. Mach Learn 2015.
61. Blumenthal R, McSweeny T. If you're not angry about Equifax, you should be. Available at: http://www.cnn.com/2017/09/26/opinions/congress-protect-americans-from-security-breaches-blumenthal-mcsweeny-opinion/index.html. Accessed September 27, 2017.
62. Equifax CEO Richard Smith steps down amid hacking scandal. The Washington Post. Available at: https://www.washingtonpost.com/news/the-switch/wp/2017/09/26/equifax-ceo-retires-following-massive-data-breach/?utm_term=.5de0e2bf4711. Accessed September 27, 2017.
63. McGuire AL, McCullough LB, Evans JP. The indispensable role
of professional judgment in genomic medicine. JAMA 2013;309:
1465-6.