INVITED PAPER

Trustworthy artificial intelligence

Scott Thiebes¹ · Sebastian Lins¹ · Ali Sunyaev¹

Received: 13 May 2020 / Accepted: 9 September 2020
© The Author(s) 2020
Abstract
Artificial intelligence (AI) brings forth many opportunities to contribute to the wellbeing of individuals and the advancement of economies and societies, but also a variety of novel ethical, legal, social, and technological challenges. Trustworthy AI (TAI) is based on the idea that trust builds the foundation of societies, economies, and sustainable development, and that individuals, organizations, and societies will therefore only ever be able to realize the full potential of AI if trust can be established in its development, deployment, and use. With this article, we aim to introduce the concept of TAI and its five foundational principles: (1) beneficence, (2) non-maleficence, (3) autonomy, (4) justice, and (5) explicability. We further draw on these five principles to develop a data-driven research framework for TAI and demonstrate its utility by delineating fruitful avenues for future research, particularly with regard to the distributed ledger technology-based realization of TAI.
Keywords Trustworthy artificial intelligence · Artificial intelligence · Trust · Framework · Distributed ledger technology · Blockchain

JEL classification M15 · O30 · A13 · C80
Introduction

Artificial intelligence (AI) enables computers to execute tasks that are easy for people to perform but difficult to describe formally (Pandl et al. 2020). It is one of the most-discussed technology trends in research and practice today and is estimated to deliver an additional global economic output of around USD 13 trillion by the year 2030 (Bughin et al. 2018). Although AI has been around and researched for decades, it is especially the recent advances in the subfields of machine learning and deep learning that result not only in manifold opportunities to contribute to the wellbeing of individuals as well as the prosperity and advancement of organizations and societies, but also in a variety of novel ethical, legal, and social challenges that may severely impede AI's value contributions if not handled appropriately (Floridi 2019; Floridi et al. 2018). Examples of issues associated with the rapid development and proliferation of AI are manifold. They range from risks of infringing individuals' privacy (e.g., swapping people's faces in images or videos via DeepFakes (Turton and Martin 2020) or involuntarily tracking individuals over the Internet via Clearview AI (Hill 2020)) and the presence of racial bias in widely used AI-based systems (Obermeyer et al. 2019), to the rapid and uncontrolled creation of economic losses via autonomous trading agents (e.g., the loss of millions of dollars through erroneous algorithms in high-frequency trading (Harford 2012)).
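The group-level bias mentioned above (Obermeyer et al. 2019) is typically surfaced by comparing a model's error rates across demographic groups. The following minimal sketch is not from the paper; it only illustrates one common audit step, and all function names and data in it are hypothetical.

```python
# Illustrative sketch (hypothetical data): compare false negative rates across
# demographic groups to flag a potential group-level bias in a binary classifier.
from collections import defaultdict

def false_negative_rate_by_group(y_true, y_pred, groups):
    """Return the false negative rate per group for binary labels (1 = positive)."""
    positives = defaultdict(int)  # actual positives per group
    misses = defaultdict(int)     # positives the model failed to flag, per group
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1:
            positives[group] += 1
            if pred == 0:
                misses[group] += 1
    return {g: misses[g] / positives[g] for g in positives if positives[g] > 0}

# Hypothetical audit data: ground truth, model predictions, protected attribute.
y_true = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 0, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(false_negative_rate_by_group(y_true, y_pred, groups))
# A large gap between the groups' rates would indicate a potential fairness issue.
```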
To maximize the benefits of AI while at the same time mitigating or even preventing its risks and dangers, the concept of trustworthy AI (TAI) promotes the idea that individuals, organizations, and societies will only ever be able to achieve the full potential of AI if trust can be established in its development, deployment, and use (Independent High-Level Expert Group on Artificial Intelligence 2019). If, for
Responsible Editor: Rainer Alt

* Ali Sunyaev
  sunyaev@kit.edu

Scott Thiebes
scott.thiebes@kit.edu

Sebastian Lins
sebastian.lins@kit.edu

¹ Department of Economics and Management, Karlsruhe Institute of Technology, Institute AIFB - Building 05.20, KIT-Campus South, 76128 Karlsruhe, Germany

Electronic Markets
https://doi.org/10.1007/s12525-020-00441-4