Trustworthiness of Artificial Intelligence

Sonali Jain
Department of Electrical and Electronics Engineering
Amity University, Uttar Pradesh, India
jainsonali10dec@gmail.com

Manan Luthra
Department of Electrical and Electronics Engineering
Amity University, Uttar Pradesh, India
manan894@gmail.com

Shagun Sharma
Department of Electrical and Electronics Engineering
Amity University, Uttar Pradesh, India
shagun131a@gmail.com

Mehtab Fatima
Department of Electrical and Electronics Engineering
Amity University, Uttar Pradesh, India
mehtabfatima@gmail.com
Abstract— This paper discusses the need for trustworthy AI, along with the ethics required to keep that trust intact. AI offers many benefits for societal, individual and cultural development, but a mistake in either the development or the operational phase of an AI system can be disastrous, especially when human lives are involved. The main goal of this paper is to understand what really makes an Artificial Intelligence system trustworthy.
Keywords— Artificial Intelligence, ethical, lawful, robust,
trustworthy, fundamental rights, democracy.
I. INTRODUCTION
Artificial Intelligence is one of the most important innovations for society, with the potential to improve the quality of life of humankind as a whole. It can be utilized in nearly every aspect of people's lives, such as healthcare services, the public sector, education, electronics and banking. AI's greatest contribution may be in facing and resolving the global challenges set out in the UN's Sustainable Development Goals (SDGs), such as providing quality education, ensuring clean water and sanitation, ending poverty and achieving zero hunger. For this purpose, innovation in current AI systems is of paramount importance, so that they encompass a humane perspective and function in society to support and expand human welfare. For complete trust between society and AI systems, both the internal architecture of the AI systems and the applications utilizing their human-interface properties need to be improved.
II. FRAMEWORK AND FOUNDATION OF A TRUSTWORTHY AI
For the successful development of a framework for a reliable AI system, three criteria should be met in its development and operation, as depicted in Fig. 1:
1. Lawful: the AI system should comply with all applicable rules and laws.
2. Ethical: it should adhere to moral values and principles.
3. Robust: it should be sturdy in both the social and the technical sense.
The ethics of AI is a field of applied moral values: it focuses on the various socio-technical discrepancies and issues generated by the construction and operation of AI. The field has significant value because it deals with problems such as the safety of individuals, the privacy of society and even unemployment caused by AI. It also explores AI's possible influence on society's basic values, such as those in the UN Sustainable Development Goals. The main objective for developers is to integrate these systems into everyday life without disrupting existing social boundaries, so as to maintain a sustainable order in society.
III. RIGHTS AS A FOUNDATION FOR TRUSTED AI
A. Respect for human dignity and integrity
An AI system should respect and protect people's moral code, self-identity and personal sense of worth by never taking an unethical action in opposition to their ethics.
B. Freedom of the individual
Fig. 1. Framework of Trustworthy AI
2020 6th International Conference on Advanced Computing & Communication Systems (ICACCS), 978-1-7281-5197-7/20/$31.00 ©2020 IEEE
Freedom of the individual means full autonomous control over one's rights, such as the right to education, the right to privacy and the right to free expression. An AI system must respect the freedom of individuals by never using any form of coercion, manipulation or deception on them.
C. Respect for democracy, justice and the rule of law
An AI system should not alter any current democratic process, the freedom to vote or the laws of any country. It should also be aware enough not to take any action detrimental to the principles on which those laws are founded.
D. Equality, non-discrimination and solidarity
An AI system should not function in any manner that supports racial, religious or gender discrimination, or any other such unfair criteria. The system should be respectful to all, irrespective of gender, religion and race.
E. Citizens’ rights
AI systems should increase the ability of governments to enhance the innovation and efficacy of the public as well as the private sector, improving the lives of citizens.
IV. ETHICAL PRINCIPLES IN THE CONTEXT OF AI SYSTEMS
Just as ethics play an important role in our daily lives, it is necessary for AI systems to have ethics, in order to enable them to make quick, transparent and responsible decisions. Ethical principles for AI, as depicted in Fig. 2, can serve a variety of functions in support of users. The principles below are necessary for AI to achieve better outcomes, reduce the risk of negative impact and practise the highest standards of ethical business and good governance.
A. The principle of respect for human centred values
AI systems must not, in any case, dominate, force, deceive or manipulate human beings. Rather, they must be designed to support, augment and accompany humans' social and cultural skills as well as their cognitive abilities. AI systems must follow design principles that support a human-centric approach, and humans should always retain the upper hand over their functionality. AI systems may also change the working environment, aiming to establish meaningful work within the limits set by humans.
B. The principle of prevention of harm
An AI system must not intend or cause harm to a human being. This involves the mental as well as the physical protection of human beings, while preserving their dignity. The safety and security of the environment in which AI systems work must also be kept in mind, to ensure that they are not used maliciously. AI systems should benefit individuals, society and the environment.
C. The principle of fairness
The motive behind using an AI system should be fair and must not include biased decisions. The underlying aim of this principle is to mitigate the results of a discriminatory use of data in artificial intelligence.
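One common way to make the fairness principle concrete is to measure the gap in positive-decision rates between groups (demographic parity). The following sketch is illustrative only; the decisions and group labels are hypothetical, and other fairness criteria exist.

```python
# Illustrative fairness check: demographic parity difference.
# Under this (simplified) criterion, a system is fairer when the rate of
# positive decisions is similar across groups.

def positive_rate(decisions, groups, group):
    """Fraction of positive (1) decisions given to members of `group`."""
    members = [d for d, g in zip(decisions, groups) if g == group]
    return sum(members) / len(members) if members else 0.0

def demographic_parity_difference(decisions, groups):
    """Largest gap in positive-decision rates between any two groups."""
    rates = [positive_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical loan decisions (1 = approved) for two groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(decisions, groups)
# Group A is approved at 3/4, group B at 1/4: a large gap flags possible bias.
```

A deployed system would compute such a metric over real decision logs and investigate any group whose rate diverges beyond an agreed tolerance.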
D. The principle of explicability
Explicability comes from the word explicable, meaning "capable of being explained". Explicability is an important factor in building and maintaining users' trust in AI systems. The process through which an AI works needs to be transparent, and the purpose of the AI system, as well as the decisions made by it, must be well understood by those affected, directly or indirectly. The extent to which an AI system can be made explicable depends strongly on the context in which it operates.
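For a simple model class, explicability can be achieved directly. The sketch below, with hypothetical weights and features, shows how a linear scoring model can explain a decision by reporting each feature's contribution to the final score.

```python
# Minimal explicability sketch for a hypothetical linear scoring model:
# each feature's contribution (weight * value) explains the final score.

def explain(weights, features):
    """Return per-feature contributions and the resulting score."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return contributions, score

# Illustrative weights and one applicant's features (all names hypothetical).
weights  = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
features = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}
contributions, score = explain(weights, features)
# The breakdown shows *why* the score is what it is:
# income contributes +2.0, debt -1.6, years_employed +1.5.
```

More complex models (deep networks, ensembles) need dedicated explanation methods, but the goal is the same: a decomposition of the decision that affected people can inspect.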
E. The principle of privacy protection and security
An AI system should respect and uphold privacy rights and data protection, and ensure the security of data. This includes ensuring proper data governance and management for all data used and generated by the AI system.
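One standard privacy-protection measure this principle implies is pseudonymisation: replacing raw identifiers with keyed hashes before the system stores or learns from them. The sketch below uses Python's standard `hmac` and `hashlib` modules; the in-memory key is illustrative, as a real deployment would manage it in a secrets store.

```python
import hashlib
import hmac
import os

# Pseudonymisation sketch: raw user identifiers are replaced by keyed
# hashes so records remain linkable without exposing the identifier.
SECRET_KEY = os.urandom(32)  # per-deployment secret (illustrative handling)

def pseudonymise(user_id: str) -> str:
    """Replace an identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

token_a = pseudonymise("alice@example.com")
token_b = pseudonymise("alice@example.com")
# Same input -> same token (records still link up), but the raw identifier
# cannot be recovered from the token without the secret key.
```

Pseudonymisation is only one layer; it complements, rather than replaces, access control and data-governance rules.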
V. TRUSTWORTHY AI REQUIREMENTS (Fig. 3)
A. Human agency and oversight
Fundamental Rights: AI systems have the capacity equally to support or to hamper fundamental rights. For instance, they can expand opportunities in the field of education, thus supporting someone's right to education; however, the same AI system could negatively affect someone's fundamental rights.
Fig. 2. Ethical Purposes of AI
In such situations, a proper assessment of fundamental-rights violations must be performed, and it must be done before the development of the AI system.
Human Agency: There should be a flexible relationship between the user and the AI system. The user should have the necessary knowledge and tools to comprehend the AI system and to change it according to their needs and goals, though this must be limited to a certain degree.
Human Oversight: A human should be able to monitor the system's operation and to intervene in, or override, its decisions whenever necessary.
B. Technical robustness and safety
Resilience to attack: Like any software, AI systems are vulnerable to attack by adversaries (e.g. hacking). If an AI system is attacked, it may respond differently and produce unwanted output, or even shut down. To mitigate this, the AI's security must be taken into account while designing and developing the system.
Fall-Back Plan: Every AI system must have a fall-back plan in case a problem occurs, ensuring that the AI continues to act according to the proposed regulations and towards its goal without harming any human being or the environment. The fall-back may involve moving from a statistical approach to a rule-based approach, or the system may ask permission from a human operator before performing further tasks.
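The fall-back idea described above can be sketched in a few lines: a (hypothetical) statistical classifier defers to a deterministic rule, or to a human, whenever its confidence drops below a threshold. All names and the threshold value are illustrative.

```python
# Fall-back sketch: move from a statistical approach to a rule-based one
# (or to a human operator) when the model's confidence is low.

CONFIDENCE_THRESHOLD = 0.8  # illustrative cut-off

def statistical_model(x):
    """Stand-in for a learned model: returns (label, confidence)."""
    return ("approve", 0.95) if x > 10 else ("approve", 0.55)

def rule_based_fallback(x):
    """Deterministic, auditable rule used when the model is unsure."""
    return "approve" if x > 5 else "refer_to_human"

def decide(x):
    """High-confidence inputs take the statistical path; others fall back."""
    label, confidence = statistical_model(x)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, "statistical"
    return rule_based_fallback(x), "rule-based"
```

The key design choice is that the fall-back path is simpler and auditable, so the system's behaviour under failure remains predictable.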
Accuracy: An AI system must be accurate enough to make correct judgements. This is crucial in situations where human lives are at risk: inaccurate predictions may lead to damage to property and loss of human life.
Reliability and Reproducibility: An AI system must behave correctly across a wide variety of inputs, hence it must be reliable. In addition, an AI task must produce the same output when repeatedly performed under the same conditions.
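Reproducibility in practice usually comes down to controlling sources of randomness. A minimal sketch, using an explicitly seeded random generator to stand in for any stochastic pipeline:

```python
import random

# Reproducibility sketch: the same stochastic routine, run under the same
# seed (i.e. the same conditions), must yield the same output.

def noisy_pipeline(seed):
    """Stand-in for any stochastic AI task; uses an isolated, seeded RNG."""
    rng = random.Random(seed)
    return [rng.randint(0, 100) for _ in range(5)]

run1 = noisy_pipeline(seed=42)
run2 = noisy_pipeline(seed=42)
assert run1 == run2  # identical conditions -> identical output
# A different seed models different conditions and generally changes the output.
```

Real systems must also pin library versions, data snapshots and hardware-dependent behaviour, but explicit seeding is the usual first step.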
C. Privacy and data governance
Privacy, Data protection: The information provided by the
user and the personal information of the user must be kept
safe by the AI system at all times. The AI system must not
misuse it for any reason whatsoever.
Quality and Integrity of Data: Whenever data is gathered by an AI system, there is a chance that it is full of errors and mistakes, and feeding such data to the system may change its behaviour. The system must also reject any malicious data.
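A data-quality gate of the kind described above can be sketched as a set of checks applied before any record reaches the system. The field names and the crude malicious-content check below are purely illustrative.

```python
# Data-quality gate sketch: records with missing fields, out-of-range values
# or suspicious content are rejected before they can alter system behaviour.

def validate(record):
    """Return True only for records that pass all integrity checks."""
    if not {"age", "name"} <= record.keys():
        return False  # incomplete record
    if not isinstance(record["age"], int) or not 0 <= record["age"] <= 130:
        return False  # wrong type or out-of-range value
    lowered = str(record["name"]).lower()
    if any(token in lowered for token in ("<script", "drop table")):
        return False  # crude check for injected malicious content
    return True

raw = [{"age": 34, "name": "Asha"},
       {"age": -5, "name": "Bob"},
       {"name": "<script>alert(1)</script>"}]
clean = [r for r in raw if validate(r)]  # only the first record survives
```

In production such checks would be schema-driven rather than hand-written, but the principle is the same: reject, quarantine or correct bad data at the boundary.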
Access to Data: Not everyone should have access to the data collected by an AI system. Rules and regulations must define who will have access to this data and under what circumstances it can be extracted.
D. Transparency
Explainability: There must always be an explanation of
why an AI system made a particular decision. There are
some situations in which analysing a particular decision
made by the AI system is necessary.
Communication: Every user has the right to know that they are interacting or communicating with an AI system. A user may knowingly choose to have a human-based interaction instead, though only under certain conditions. Also, this must not violate any fundamental rights under any condition.
E. Diversity, non-discrimination and fairness
Avoidance of unfair bias: The information that goes through an AI system (whether the data is used to interact with the user or used while developing the system) may reflect historical events associated with past biases. Such information may continue to create cultural, racial or sexual bias and prejudice in the future as well. To alleviate this problem, people from diverse backgrounds may be hired to develop the AI system.
Accessibility and Universal Design: Every AI system must have a fit-for-all design, meaning it must be designed so that it can be used by everyone, regardless of age, gender, or mental or physical disabilities.
Fig. 3. Trustworthy AI requirements
F. Societal and environmental wellbeing
Sustainable and environment-friendly AI: An AI system's design, development and usage processes must be performed in an environmentally friendly way. For example, energy consumption during the AI's operation must be tracked and kept within certain limits.
Social impact: AI systems have the ability to alter our social lives, whether in entertainment, work or relationships. They can not only make our social lives better, but can deteriorate them too. AI's negative impacts on social life include both physical and mental effects; to mitigate this, AI systems must be kept under observation and monitored regularly.
Society and Democracy: Apart from improving individuals' lives, AI systems must also be used to benefit society at large, for example by analysing the flaws of a democracy and suggesting decisions to improve its structure.
VI. REALIZATION OF A TRUSTWORTHY AI
A. Technical methods
Technical methods ensure trustworthiness in the design, development and use of an AI system, and can be employed in all of its phases. The architecture involves a three-step cycle for the AI to be trustworthy:
● The sense step involves recognizing all environmental factors necessary to follow the requirements.
● The plan step admits only those plans that adhere to all the requirements.
● The act step allows only those actions that are limited to behaviours realizing all the requirements.
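The three-step cycle above can be sketched as a pipeline in which each stage filters against the requirements, so only compliant actions ever execute. The requirement predicate and all data below are hypothetical.

```python
# Toy sketch of the sense-plan-act cycle: each stage filters its candidates
# against a list of requirement predicates (all names illustrative).

def sense(environment, requirements):
    """Keep only the observations that every requirement admits."""
    return {k: v for k, v in environment.items()
            if all(req("sense", k) for req in requirements)}

def plan(candidate_plans, requirements):
    """Keep only the plans that every requirement admits."""
    return [p for p in candidate_plans
            if all(req("plan", p) for req in requirements)]

def act(plans, requirements):
    """Execute (here: return) only requirement-compliant actions."""
    return [p for p in plans if all(req("act", p) for req in requirements)]

# One hypothetical requirement: never sense, plan with, or act on private data.
no_private_data = lambda stage, item: "private" not in str(item)

env = {"speed": 40, "private_location": (3, 4)}
obs = sense(env, [no_private_data])               # drops private_location
plans = plan(["slow_down", "sell_private_data"], [no_private_data])
actions = act(plans, [no_private_data])           # only "slow_down" survives
```

Structuring the cycle this way means a requirement violation is blocked at the earliest stage where it appears, rather than detected after an action has been taken.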
a) Ethics and the rule of law by design
Law by design provides accurate and explicit links between the abstract principles the system should obey and the specific decisions it implements. These norms must be obeyed for a trustworthy AI system, which should also provide a safe shutdown in case of failure and resume operation after a forced shutdown.
b) Explanation methods
The behaviour of the system must be analysed before interpreting its results, in order to achieve a trustworthy AI system.
c) Testing and validating
Testing and validation of the system must be performed to ensure that it behaves as desired throughout its life cycle. This must include all components of the AI system, including data, pre-trained models, environments and the behaviour of the system as a whole. The outputs must be consistent with the final results of the preceding processes, and must be compared with previously defined policies to ensure that none of them is violated.
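Checking outputs against previously defined policies, as described above, can be sketched as a small validation harness. The policies here are hypothetical predicates over the system's output.

```python
# Policy-based output validation sketch: outputs are checked against
# previously defined policies before release (policy names illustrative).

def within_speed_limit(output):
    return output.get("speed", 0) <= 60

def no_personal_data(output):
    return "ssn" not in output

POLICIES = [within_speed_limit, no_personal_data]

def validate_output(output):
    """Return the names of violated policies; an empty list means compliant."""
    return [p.__name__ for p in POLICIES if not p(output)]

# A compliant output passes cleanly; a bad one names every violated policy,
# which gives the life-cycle tests something concrete to assert on.
ok  = validate_output({"speed": 50})
bad = validate_output({"speed": 90, "ssn": "123"})
```

Because each policy is a named, testable function, the same list can be reused across the data, model and whole-system stages of validation.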
B. Non-technical methods
This section elaborates different non-technical methods which play an important role in maintaining and securing trust in AI.
a) Standardisation
Standardisation of designs, business processes and manufacturing services acts as a quality-management system for AI, providing users, organisations, research institutions, consumers and governments with the ability to identify and encourage an ethical code of conduct through their purchasing decisions.
b) Certification
Certifications apply the standardised designs and manufacturing services developed for different application domains, and align them appropriately with different industrial and societal standards. Certification cannot replace responsibility, so it should be complemented by disclaimers as well as review and accountability frameworks and redress mechanisms.
c) Codes of conduct
An organisation should document its purpose and intentions when working with AI systems, and should support standards of expected values such as transparency, fundamental rights and protection from harm.
d) Accountability via governance frameworks
Governance frameworks should be established, both internally and externally, by organisations to account for the ethical decisions related to the development, deployment and usage of AI systems. Communication channels should also be available to discuss dilemmas and report emerging issues involving ethical concerns.
e) Education and awareness to foster an ethical mind-set
Trustworthy AI encourages collaborative and informed participation by all stakeholders. Communication, education and training are important for ensuring that the potential impact of AI systems is known, and for making people aware that they have a vital part in shaping a society with AI systems.
f) Stakeholder participation and social
dialogue
AI systems offer huge benefits, so it should be guaranteed that they are available to all. This requires discussion and dialogue among the various social partners and stakeholders, and must also include the general public and their views.
g) Diversity and inclusive design teams
The teams developing, designing, testing, deploying, maintaining and procuring AI systems should take into consideration the diversity of users and of society in general. This ensures objectivity and the contribution of various perspectives and needs. Team diversity covers not only gender, age, social group and culture, but also skill sets, professional experience and background.
VII. ASSESSING TRUSTWORTHY AI
The development of evaluation criteria and their administration should be done in close cooperation with the interested parties of the organisation, such as stakeholders and government. Various small-scale projects should be performed first, to obtain relevant feedback on the limitations of the current AI system.
Hard rules and limitations on the AI's functions should then be outlined by referencing factors such as safety, the advancement of AI and social acceptance by the people.
A. Climate action and sustainable infrastructure
AI's influence on improving, or at least mitigating, climate change can have a great impact on society. AI systems can reduce unnecessary resource use by accurately monitoring and managing data on the relevant energy needs of society. This will result in the development of efficient infrastructure and intelligent logistics.
Indirectly, by taking positive action on climate change, AI systems can also reduce the net number of fatalities in the world. Additionally, the use of AI in the medical sector will support a further decrease in fatalities.
B. Health and well-being
AI systems can have both direct and indirect effects on the medical and health sector. Directly, they can be used with various measuring instruments and life-support devices to provide a high level of accuracy and control for aiding doctors. Doctors' trust in the measurements of AI devices, and in their lack of bias, can improve present treatment conditions exponentially.
As an indirect influence, through the measurements recorded by the AI, doctors will be able to detect potential diseases or problems in patients, and appropriate preventive measures can be taken.
C. Quality education and digital transformation
AI systems can estimate or predict upcoming trends regarding job availability, the replacement of jobs by better technologies and the level of unemployment. All these factors can be used by such systems to provide solutions, such as the skills needed for new jobs, changes to present educational material and business suggestions for reducing unemployment.
AI will also be a great tool for restructuring the educational system to be more job-oriented and adaptable to the individual strengths of students. Furthermore, using the power of the internet, it can be used to provide quality education for children irrespective of their backgrounds.
D. Identifying and tracking individuals with AI
With the rise of security measures such as face ID and touch ID for sensitive tasks like banking or email sign-in, the privacy of individuals has become somewhat stable. With AI systems, however, these measures may become obsolete because of the systems' abilities in face recognition and fingerprint scanning. For organisations and governments, a breach of privacy would have dire consequences. Additionally, when dealing with digital entities there will certainly be a risk of hacking, and AI systems will share this disadvantage.
E. Lethal autonomous weapon systems (LAWS)
Research on autonomous weapons for war and defence is being done by nearly every country in the world. Certain risk factors are involved in this technology, which can have dire consequences if not addressed properly:
1. Hacking of these systems becomes plausible, risking the safety of citizens all around the world.
2. Malfunction of the AI systems is also a grave concern. If the various ethical, legal and humanitarian values and issues are not clearly expressed to these AI systems, they may form an autonomous interpretation that could be lethal for society, and their actions could risk the safety of people.
VIII. CONCLUSION
AI systems have numerous positive impacts in the present, and these will increase in the future, both on a personal and a professional level, in sectors such as medicine, education and defence. But these systems also carry equally large risks and negative impacts on society. Therefore, developing the framework through which these systems can be regarded as trustworthy is of paramount importance before their acclimatisation into the daily lives of people and organisations. As discussed in this paper, a trustworthy AI should be: 1) compliant with various rules and laws; 2) ethical, adhering to moral values and principles; and 3) sturdy in both the social and the technical sense. To achieve this, a list of requirements should be established, based on sources such as fundamental rights and various laws, for the AI system to follow a human-centric approach.