
Trustworthiness of Artificial Intelligence

Authors:
Sonali Jain, Manan Luthra, Shagun Sharma, Mehtab Fatima
Department of Electrical and Electronics Engineering, Amity University, Uttar Pradesh, India
jainsonali10dec@gmail.com, manan894@gmail.com, shagun131a@gmail.com, mehtabfatima@gmail.com
Abstract: This paper discusses the need for trustworthy AI, along with the ethics required to keep that trust intact. AI has many benefits for societal, individual, and cultural development, but any mistake in either the development or the operation of an AI system can be disastrous, especially when human lives are involved. The main goal of this paper is to understand what really makes an Artificial Intelligence system trustworthy.
Keywords: Artificial Intelligence, ethical, lawful, robust, trustworthy, fundamental rights, democracy.
I. INTRODUCTION
Artificial Intelligence is among the most important innovations for society, with the potential to improve the quality of life of humankind as a whole. It can be utilized in nearly every aspect of people's lives, such as healthcare services, the public sector, education, electronics, and banking. AI's greatest contribution will be in facing and resolving the global challenges set out in the UN's Sustainable Development Goals (SDGs), such as providing quality education, providing clean water and sanitation, ending poverty, and achieving zero hunger. For this purpose, innovation in current AI systems is of paramount importance if they are to encompass a humane perspective and function in society to support and expand human welfare. For complete trust between society and AI systems, both the internal architecture of the AIs and the applications utilizing their human-interface properties need to be improved.
II. FRAMEWORK AND FOUNDATION OF A TRUSTWORTHY AI
For the successful development of a framework for a reliable AI system, three criteria should be met in its development and operation, as depicted in Fig. 1:
1. Lawful: the AI system should comply with applicable rules and laws.
2. Ethical: it should adhere to moral values and principles.
3. Robust: it should be sturdy in both the social and the technical sense.
The ethics of AI is a field of applied ethics: it focuses on the various socio-technical discrepancies and issues generated by the construction and operation of AI. This field has significant value, as it deals with problems such as the safety of individuals, the privacy of society, and even unemployment caused by AI. It also explores AI's possible influence on society's basic values, such as those reflected in the UN Sustainable Development Goals. The main objective for developers will be to integrate these systems into everyday life without disrupting existing social boundaries, so as to maintain a sustainable order in society.
III. RIGHTS AS A FOUNDATION FOR TRUSTED AI
A. Respect for human dignity and integrity
An AI system should respect and protect humans' moral code, self-identity, and personal sense of worth, and should take no unethical action in opposition to their ethics.
Fig. 1. Framework of Trustworthy AI

B. Freedom of the individual
Freedom of the individual means full autonomous control over one's rights, such as the right to education, the right to privacy, and the right to free expression. An AI system should respect the freedom of individuals by not using any form of coercion, manipulation, or deception against them.
C. Respect for democracy, justice and the rule of law
An AI system should not alter any current democratic process, the freedom to vote, or the laws of any country. It should also be aware enough not to take any action detrimental to the principles on which those laws are founded.
D. Equality, non-discrimination and solidarity
An AI system should not function in any manner that supports discrimination on racial, religious, gender, or any other such unfair grounds. The system should be respectful to all, irrespective of their gender, religion, or race.
E. Citizens’ rights
AI systems should increase the ability of governments to enhance the innovation and efficacy of both the public and the private sector, improving the lives of citizens.
IV. ETHICAL PRINCIPLES IN THE CONTEXT OF AI SYSTEMS
Just as ethics play an important role in our daily lives, AI systems need ethics in order to make quick, transparent, and responsible decisions. The ethical principles for AI depicted in Fig. 2 can serve a variety of functions in support of users. The principles below are necessary for AI to achieve better outcomes, reduce the risk of negative impact, and uphold the highest standards of ethical business and good governance.
A. The principle of respect for human-centred values
AI systems must not in any case dominate, force, deceive, or manipulate human beings. Rather, they must be designed to support, augment, and complement humans' social and cultural skills as well as their cognitive abilities. AI systems must follow design principles that support a human-centric approach, and humans should always retain the upper hand over their functionality. AI systems may also change the working environment, aiming to establish meaningful work within the limits set by humans.
B. The principle of prevention of harm
An AI system must not intend or cause harm to a human being. This involves the mental as well as the physical protection of human beings, while preserving their dignity. The safety and security of the environment in which AI systems operate must also be kept in mind, to ensure that they are not used maliciously. AI systems should benefit individuals, society, and the environment.
C. The principle of fairness
The motive behind using an AI system should be fair, and the system must not make biased decisions. The underlying aim of this principle is to mitigate results obtained from a discriminatory use of data in artificial intelligence.
D. The principle of explicability
Explicability comes from the word explicable, meaning "capable of being explained". Explicability is an important factor in building and maintaining users' trust in AI systems. The process through which an AI system works needs to be transparent, and the purpose of the system, as well as the decisions it makes, must be well understood by those affected, directly or indirectly. The extent to which an AI system is explicable depends strongly on the context in which it operates.
E. Principle of privacy protection and security
An AI system should respect and uphold privacy rights and data protection, and ensure the security of data. This includes proper data governance and management for all data used and generated by the system.
V. TRUSTWORTHY AI REQUIREMENTS (Fig. 3)
A. Human agency and oversight
Fundamental Rights: AI systems have the capacity equally to support or to hamper fundamental rights. For instance, they can be a boon in the field of education, supporting someone's right to education; yet the same AI system can negatively affect someone else's fundamental rights.
Fig. 2. Ethical Purposes of AI
In such situations, a proper fundamental-rights impact assessment must be performed, and it must be done before the development of the AI system.
Human Agency: There should be a flexible interface between the user and the AI system. The user should have the knowledge and tools necessary to comprehend the AI system and adapt it to their own needs and goals, though this control must be limited to a reasonable degree.
Human Oversight: Human oversight helps ensure that an AI system does not undermine human autonomy; it can be exercised through governance mechanisms that keep a human in the loop, on the loop, or in command.
B. Technical robustness and safety
Resilience to Attack: Like any software, AI systems are vulnerable to attack by adversaries (e.g., hacking). If an AI system is attacked, it may respond differently and produce unwanted output, or even shut down. To mitigate this, the AI system's security must be taken into account while designing and developing it.
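As a concrete illustration of this requirement, the following minimal sketch (not part of the original paper) probes a classifier with the Fast Gradient Sign Method, a standard technique for generating adversarial inputs; the tiny model and random data are illustrative placeholders.

```python
# Minimal sketch: probe a classifier's resilience with FGSM adversarial
# examples. Model and data are stand-ins for the system under assessment.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 20, requires_grad=True)  # placeholder inputs
y = torch.randint(0, 2, (8,))               # placeholder labels

# Gradient of the loss with respect to the inputs.
loss_fn(model(x), y).backward()

# Step each input slightly in the direction that increases the loss.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

# A large gap between clean and adversarial accuracy signals fragility.
clean_acc = (model(x).argmax(1) == y).float().mean()
adv_acc = (model(x_adv).argmax(1) == y).float().mean()
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

A large drop from clean to adversarial accuracy indicates that hardening, for example adversarial training, is needed before deployment.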
Fall-Back Plan: Every AI system must have a fall-back plan in case a problem occurs. It must be ensured that the AI acts according to the proposed regulations and towards its goal without harming any human being or the environment. The fall-back may involve moving from a statistical approach to a rule-based approach; the system may even ask permission from the human operator before performing further tasks.
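A minimal sketch of such a fall-back, assuming a hypothetical confidence threshold, a hand-written safety rule, and illustrative action names, could look as follows.

```python
# Minimal sketch: fall back from a statistical model to a rule-based
# decision, or defer to a human operator, when confidence is low.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    source: str  # "model", "rules", or "human"

CONFIDENCE_THRESHOLD = 0.85  # assumed value, tuned per application

def rule_based_decision(features: dict) -> str:
    # Conservative hand-written rule used as a safe default.
    return "stop" if features.get("obstacle_distance_m", 0.0) < 5.0 else "proceed"

def decide(features: dict, model_action: str, model_confidence: float) -> Decision:
    if model_confidence >= CONFIDENCE_THRESHOLD:
        return Decision(model_action, "model")
    fallback = rule_based_decision(features)
    if fallback == "stop":
        return Decision(fallback, "rules")
    # Still ambiguous: ask the human operator before performing further tasks.
    return Decision("await_operator_approval", "human")

print(decide({"obstacle_distance_m": 3.0}, "proceed", 0.60))  # -> rules: stop
```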
Accuracy: An AI system must be accurate enough to make correct judgements. This is crucial in times and situations where human lives are at risk: inaccurate predictions may lead to damage to property and loss of human life.
Reliability and Reproducibility: An AI system must behave dependably across a wide variety of inputs; hence it must be reliable. In addition, an AI task must produce the same output when repeatedly performed under the same conditions; that is, it must be reproducible.
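A minimal sketch of a reproducibility check, assuming a toy model and fully seeded training, could look as follows: the same task, repeated under the same conditions, must yield the same output.

```python
# Minimal sketch: train twice under identical, fully seeded conditions
# and verify the outputs match exactly. Model and data are illustrative.
import random
import numpy as np
import torch
import torch.nn as nn

def train_once(seed: int) -> torch.Tensor:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    model = nn.Linear(4, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(32, 4), torch.randn(32, 1)
    for _ in range(10):
        opt.zero_grad()
        ((model(x) - y) ** 2).mean().backward()
        opt.step()
    return model(torch.ones(1, 4)).detach()

assert torch.equal(train_once(seed=0), train_once(seed=0))
print("run is reproducible under fixed seeds")
```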
C. Privacy and data governance
Privacy and Data Protection: The information provided by the user, and the user's personal information, must be kept safe by the AI system at all times. The AI system must not misuse it for any reason whatsoever.
Quality and Integrity of Data: Whenever data is gathered by an AI system, there is a chance that the data is full of errors and mistakes, and feeding such data to the system may change its behaviour. The system must also reject any malicious data.
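A minimal sketch of such a data-quality gate, with illustrative field names and plausibility bounds, could look as follows.

```python
# Minimal sketch: reject records that fail schema, range, or consent
# checks before they reach the AI system. Fields and bounds are assumed.
def validate_record(record: dict) -> list[str]:
    errors = []
    if not isinstance(record.get("age"), (int, float)):
        errors.append("age missing or non-numeric")
    elif not 0 <= record["age"] <= 120:
        errors.append("age out of plausible range")
    if record.get("consent") is not True:
        errors.append("no recorded user consent")
    return errors

records = [
    {"age": 34, "consent": True},
    {"age": -5, "consent": True},   # corrupt value: rejected
    {"age": 40, "consent": False},  # no consent: rejected
]
clean = [r for r in records if not validate_record(r)]
print(f"accepted {len(clean)} of {len(records)} records")
```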
Access to Data: Not everyone should have access to the data collected by an AI system. Rules and regulations must be maintained regarding who has access to this data and under what circumstances it can be extracted.
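A minimal sketch of such access rules, with illustrative roles and data categories, could look as follows.

```python
# Minimal sketch: each role may access only specific data categories.
# Roles and categories are illustrative assumptions.
ACCESS_POLICY = {
    "data_protection_officer": {"raw_personal", "anonymised", "aggregated"},
    "ml_engineer": {"anonymised", "aggregated"},
    "external_auditor": {"aggregated"},
}

def can_access(role: str, category: str) -> bool:
    return category in ACCESS_POLICY.get(role, set())

assert can_access("ml_engineer", "aggregated")
assert not can_access("ml_engineer", "raw_personal")
print("access checks behave as expected")
```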
D. Transparency
Explainability: There must always be an explanation of why an AI system made a particular decision, as there are situations in which analysing a particular decision made by the AI system is necessary.
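One widely used explanation technique is permutation feature importance, which estimates how much each input feature contributes to a model's decisions; the sketch below applies it to an illustrative model and dataset, not to any system from the paper.

```python
# Minimal sketch: permutation feature importance as a simple,
# model-agnostic explanation. Dataset and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```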
Communication: Every user has the right to know that they are interacting or communicating with an AI system. A user may knowingly choose a human-based interaction instead of the AI system, subject to certain conditions; and this must not violate any fundamental rights under any condition.
E. Diversity, non-discrimination and fairness
Avoidance of Unfair Bias: The data that goes through an AI system (whether it is used to interact with the user or used while developing the system) may reflect historical events associated with past biases. Such data may continue to create cultural, racial, or sexual bias and prejudice in the future as well. To alleviate this problem, people from diverse backgrounds may be hired while developing the AI system.
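A minimal sketch of one simple bias audit, comparing positive-outcome rates across groups (the demographic-parity gap) on illustrative data, could look as follows.

```python
# Minimal sketch: audit model decisions for demographic parity.
# Group labels and decisions are illustrative.
from collections import defaultdict

predictions = [  # (protected_group, model_decision)
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in predictions:
    totals[group] += 1
    positives[group] += decision

rates = {g: positives[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(f"positive rates: {rates}, parity gap: {gap:.2f}")
# A large gap is a signal to re-examine the training data and the model.
```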
Accessibility and Universal Design: Every AI system must have a fit-for-all design. This means that it must be designed in such a way that it can be used by everyone, regardless of age, gender, or mental or physical disability.

Fig. 3. Trustworthy AI requirements
F. Societal and environmental wellbeing
Sustainable and Environment-Friendly AI: An AI system's design, development, and usage must be carried out in an environmentally friendly way; e.g., energy consumption during the AI's operation must be tracked and kept under certain limits.
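A minimal sketch of such energy budgeting, with made-up per-inference figures (real tracking would read hardware counters or a metering library), could look as follows.

```python
# Minimal sketch: track energy use during operation and enforce a budget.
# Both figures below are assumed placeholders, not measurements.
ENERGY_BUDGET_WH = 50.0          # assumed budget for the workload
ENERGY_PER_INFERENCE_WH = 0.02   # assumed average per inference

used_wh = 0.0
for i in range(10_000):
    if used_wh + ENERGY_PER_INFERENCE_WH > ENERGY_BUDGET_WH:
        print(f"energy budget reached after {i} inferences; throttling")
        break
    used_wh += ENERGY_PER_INFERENCE_WH  # a real run_inference() would go here
```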
Social Impact: AI systems have the ability to alter our social lives, whether in entertainment, work life, or social relations. They can not only make our social lives better but can also deteriorate them, and the negative impacts include both physical and mental effects. To mitigate this, AI systems must be kept under observation and monitored regularly.
Society and Democracy: Apart from improving individuals' lives, AI systems must be used to benefit society at large, e.g., by analysing the flaws of a democracy and suggesting decisions to improve its structure.
VI. REALIZATION OF A TRUSTWORTHY AI
A. Technical methods
Technical methods ensure that trustworthiness is built into the design and development of an AI system and maintained through all phases of its life cycle. One such architecture involves a three-step cycle for AI to be trustworthy (a minimal sketch follows the list):
1. The sense step involves the recognition of all environmental factors necessary to follow the requirements.
2. The plan step admits only plans that adhere to all the requirements.
3. The act step allows only actions limited to behaviours that realize all the requirements.
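The following minimal sketch of this cycle, with illustrative observations, plans, and compliance checks, shows how each step filters out anything that violates the stated requirements.

```python
# Minimal sketch: a sense-plan-act cycle in which every step enforces
# the requirements. Requirements and plan names are illustrative.
def sense(raw_observations: list[dict]) -> set[str]:
    # Recognise the requirements implied by the environment.
    return {o["requirement"] for o in raw_observations if "requirement" in o}

def plan(required: set[str], candidate_plans: list[str]) -> list[str]:
    # Admit only plans that adhere to every recognised requirement.
    compliant = {"slow_route": {"safety", "privacy"}, "fast_route": {"safety"}}
    return [p for p in candidate_plans if required <= compliant.get(p, set())]

def act(plans: list[str]) -> str:
    # Execute only behaviour realizing all requirements; otherwise halt.
    return plans[0] if plans else "halt: no compliant plan"

observations = [{"requirement": "safety"}, {"requirement": "privacy"}]
print(act(plan(sense(observations), ["fast_route", "slow_route"])))  # slow_route
```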
a) Ethics and the rule of law by design
Designing for ethics and the rule of law provides accurate and explicit links between the abstract principles the system should obey and the specific implementation decisions that realize them. These norms should be followed when implementing a trustworthy AI system; for example, the design should provide a safe shutdown in case of failure and allow operation to resume after a forced shutdown.
b) Explanation methods
The behaviour of the system must be analysed before its results are interpreted, in order to achieve a trustworthy AI system.
c) Testing and validating
Testing and validation of the system must be provided, as this ensures the system behaves as desired throughout its life cycle. It must include all components of an AI system, including data, pre-trained models, environments, and the behaviour of the system as a whole. The outputs must be consistent with the final results of the preceding processes and must be compared with previously defined policies to ensure that none are violated.
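A minimal sketch of such behaviour-level validation, checking an illustrative stand-in model's outputs against a previously defined policy, could look as follows.

```python
# Minimal sketch: validate system behaviour against a defined policy.
# The policy, stand-in model, and test cases are illustrative.
def policy_high_risk_needs_review(decision: dict) -> bool:
    # Policy: every high-risk decision must be flagged for human review.
    return decision["risk"] != "high" or decision["flagged_for_review"]

def stand_in_model(case: dict) -> dict:
    risk = "high" if case["amount"] > 10_000 else "low"
    return {"risk": risk, "flagged_for_review": risk == "high"}

test_cases = [{"amount": 500}, {"amount": 50_000}, {"amount": 9_999}]
violations = [c for c in test_cases
              if not policy_high_risk_needs_review(stand_in_model(c))]
assert not violations, f"policy violated on: {violations}"
print("all behavioural policy checks passed")
```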
B. Non-technical methods
This section elaborates different non-technical methods which play an important role in maintaining and securing trustworthy AI.
a) Standardisation
Standardisation of designs, business processes, and manufacturing services acts as a quality-management system for AI, providing users, organisations, research institutions, consumers, and governments with the ability to identify and encourage ethical conduct through their purchasing decisions.
b) Certification
Certification applies the standardised designs and manufacturing services developed for different application domains and aligns them appropriately with different industrial and societal standards. Certification cannot replace responsibility, so it should be complemented by disclaimers as well as by review, accountability frameworks, and redress mechanisms.
c) Codes of conduct
An organisation should document its purpose and intentions when working with AI systems, and should support them with standards for expected values such as transparency, fundamental rights, and protection from harm.
d) Accountability via governance frameworks
Governance frameworks should be established, both internally and externally, by organisations to account for the ethical decisions related to the development, deployment, and usage of AI systems. Communication channels should also exist to discuss dilemmas and report emerging issues that raise ethical concerns.
e) Education pact with awareness to foster an
ethical mind-set
Trustworthy AI encourages informed and collaborative participation by all stakeholders. Communication, education, and training are important for ensuring that the potential impact of AI systems is known, and for making
people aware that they have a vital part in shaping a society that includes AI systems.
f) Stakeholder participation and social dialogue
AI systems offer huge benefits, so it should be guaranteed that they are available to all. This requires discussion and dialogue between the various social partners and stakeholders, and must also include the general public and their views.
g) Diversity and inclusive design teams
The teams developing, designing, testing, deploying, maintaining, and procuring AI systems should reflect the diversity of users and of society in general. This ensures objectivity and the contribution of various perspectives and needs. Team diversity concerns not only gender, age, social group, and culture, but also skill sets, professional experience, and background.
VII. ASSESSING TRUSTWORTHY AI
The development of evaluation criteria and administrative steps is to be done closely with the interested parties of the organisation, such as stakeholders and government. Various small-scale projects are to be performed first to obtain relevant feedback on the limitations of the current AI system.

Hard rules and limits on an AI system's functions are to be outlined by referencing several factors, such as safety, the state of AI advancement, and social acceptance.
A. Climate action and sustainable infrastructure
AI's influence on improving, or at least mitigating, climate change can have a great impact on society. AI systems can reduce wasteful consumption of resources by accurately monitoring and managing data on society's energy needs, resulting in the development of efficient infrastructure and intelligent logistics.

Indirectly, by enabling positive action on climate change, AI systems can also reduce the net number of fatalities in the world. Additionally, the use of AI in the medical sector will also support a decrease in fatalities.
B. Health and well-being
AI systems can have both direct and indirect effects on the medical and health sector. Directly, they can be used with various measuring instruments and life-support devices to provide a high level of accuracy and control in aid of doctors. Doctors' trust in the measurements of AI devices, and in their lack of bias, can dramatically improve present conditions of treatment.

Indirectly, through the measurements recorded by the AI, doctors will be able to detect potential diseases or problems in patients so that appropriate preventive measures can be taken.
C. Quality education and digital transformation
AI systems can estimate or predict upcoming trends in job availability, the replacement of jobs by better technologies, and the level of unemployment. All these factors can be used by such systems to provide solutions, such as identifying the skills needed for new jobs, changes to present educational material, and business suggestions for reducing unemployment.

AI will also be a great tool for restructuring the educational system to be more job-oriented and adaptable to the individual strengths of students. Furthermore, using the power of the internet, it can be used to provide quality education for children irrespective of their backgrounds.
D. Identifying and tracking individuals with AI
With the rise of security measures such as face ID and touch ID for sensitive tasks like banking or email signing, the privacy of individuals has become somewhat stable. With AI systems capable of face recognition and fingerprint scanning, however, these measures may become obsolete. For organisations and governments, a breach of privacy would have dire consequences. Additionally, any digital entity carries a risk of hacking, and AI systems share this disadvantage.
E. Lethal autonomous weapon systems (LAWS)
Research on autonomous weapons for war and defence is being carried out by nearly every country in the world. This technology involves certain risk factors which can have dire consequences if not addressed properly:
1. Hacking of these systems becomes plausible, which risks the safety of citizens all around the world.
2. Malfunction of the AI systems is also a grave concern. If the various ethical, legal, and humanitarian values and issues are not clearly expressed to these AI systems, they may form an autonomous interpretation which can be lethal for society, and their actions can risk the safety of people.
VIII. CONCLUSION
AI systems have numerous positive impacts in the present, which will increase in the future, both on a personal and
a professional level, in sectors such as medicine, education, and defence. But these systems also carry equally large risks and negative impacts on society. Therefore, the development of a framework through which these systems can be regarded as trustworthy is of paramount importance before their assimilation into the daily lives of people and organisations. As discussed in this paper, a trustworthy AI should be 1) compliant with applicable rules and laws, 2) adherent to moral values and principles, and 3) sturdy in both the social and the technical sense. To achieve this, a list of requirements is to be established, based on sources such as fundamental rights and various laws, so that AI systems work with a human-centric approach.