
The Nature, Importance, and Difficulty of Machine Ethics


Abstract

Machine ethics has a broad range of possible implementations in computer technology--from maintaining detailed records in hospital databases to overseeing emergency team movements after a disaster. From a machine ethics perspective, you can look at machines as ethical-impact agents, implicit ethical agents, explicit ethical agents, or full ethical agents. A current research challenge is to develop machines that are explicit ethical agents. This research is important, but accomplishing this goal will be extremely difficult without a better understanding of ethics and of machine learning and cognition. This article is part of a special issue on Machine Ethics.
James H. Moor, Dartmouth College
The question of whether machine ethics exists or might exist in the future is difficult to answer if we can’t agree on what counts as machine ethics. Some might argue that machine ethics obviously exists because humans are machines and humans have ethics. Others could argue that machine ethics obviously doesn’t exist because ethics is simply emotional expression and machines can’t have emotions.

A wide range of positions on machine ethics are possible, and a discussion of the issue could rapidly propel us into deep and unsettled philosophical issues. Perhaps understandably, few in the scientific arena pursue the issue of machine ethics. You’re unlikely to find easily testable hypotheses in the murky waters of philosophy. But we can’t—and shouldn’t—avoid consideration of machine ethics in today’s technological world.
As we expand computers’ decision-making roles in practical matters, such as computers driving cars, ethical considerations are inevitable. Computer scientists and engineers must examine the possibilities for machine ethics because, knowingly or not, they’ve already engaged—or will soon engage—in some form of it. Before we can discuss possible implementations of machine ethics, however, we need to be clear about what we’re asserting or denying.
Varieties of machine ethics
When people speak of technology and values, they’re often thinking of ethical values. But not all values are ethical. For example, practical, economic, and aesthetic values don’t necessarily draw on ethical considerations. A product of technology, such as a new sailboat, might be practically durable, economically expensive, and aesthetically pleasing, absent consideration of any ethical values. We routinely evaluate technology from these nonethical normative viewpoints. Tool makers and users regularly evaluate how well tools accomplish the purposes for which they were designed. With technology, all of us—ethicists and engineers included—are involved in evaluation processes requiring the selection and application of standards. In none of our professional activities can we retreat to a world of pure facts, devoid of subjective normative assessment.

By its nature, computing technology is normative. We expect programs, when executed, to proceed toward some objective—for example, to correctly compute our income taxes or keep an airplane on course. Their intended purpose serves as a norm for evaluation—that is, we assess how well the computer program calculates the tax or guides the airplane. Viewing computers as technological agents is reasonable because they do jobs on our behalf. They’re normative agents in the limited sense that we can assess their performance in terms of how well they do their assigned jobs.
After we’ve worked with a technology for a while, the norms become second nature. But even after they’ve become widely accepted as the way of doing the activity properly, we can have moments of realization and see a need to establish different kinds of norms. For instance, in the early days of computing, using double digits to designate years was the standard and worked well. But, when the year 2000 approached, programmers realized that this norm needed reassessment. Or consider a distinction involving AI. In a November 1999 correspondence between Herbert Simon and Jacques Berleur [1], Berleur was asking Simon for his reflections on the 1956 Dartmouth Summer Research Project on Artificial Intelligence, which Simon attended.
Simon expressed some puzzlement as to why Trenchard More, a conference attendee, had so strongly emphasized modal logics in his thesis. Simon thought about it and then wrote back to Berleur:
My reply to you last evening left my mind nagged by the question of why Trench Moore [sic], in his thesis, placed so much emphasis on modal logics. The answer, which I thought might interest you, came to me when I awoke this morning. Viewed from a computing standpoint (that is, discovery of proofs rather than verification), a standard logic is an indeterminate algorithm: it tells you what you MAY legally do, but not what you OUGHT to do to find a proof. Moore [sic] viewed his task as building a modal logic of “oughts”—a strategy for search—on top of the standard logic of verification.
Simon was articulating what he already knew as one of the designers of the Logic Theorist, an early AI program. A theorem prover must not only generate a list of well-formed formulas but must also find a sequence of well-formed formulas constituting a proof. So, we need a procedure for doing this. Modal logic distinguishes between what’s permitted and what’s required. Of course, both are norms for the subject matter. But norms can have different levels of obligation, as Simon stresses through capitalization. Moreover, the norms he’s suggesting aren’t ethical norms. A typical theorem prover is a normative agent but not an ethical one.
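To make Simon’s MAY/OUGHT distinction concrete, here is a minimal sketch (mine, not from the article or the Logic Theorist) of a toy forward-chaining prover. The inference rule defines what the prover may legally derive; a separate scoring heuristic (a stand-in for a “logic of oughts”) decides which legal derivation it ought to try first. The representation and the heuristic are illustrative assumptions.

# Toy forward-chaining prover illustrating Simon's MAY/OUGHT distinction.
# Formulas are atoms (strings) or implications ('->', antecedent, consequent).
# legal_steps() captures what modus ponens MAY derive; the usefulness heuristic
# inside prove() encodes what the search OUGHT to try first.

def legal_steps(known):
    """Return every formula modus ponens may derive in one step from `known`."""
    derivable = set()
    for f in known:
        if isinstance(f, tuple) and f[0] == '->' and f[1] in known:
            derivable.add(f[2])
    return derivable - known

def prove(axioms, goal, max_steps=100):
    """Search for the goal, preferring derivations that feed implications ending at it."""
    known = set(axioms)
    for _ in range(max_steps):
        if goal in known:
            return True
        candidates = legal_steps(known)
        if not candidates:
            return False

        def usefulness(c):
            # The 'ought': prefer a candidate that is the antecedent of an
            # implication whose consequent is the goal.
            return 1 if any(isinstance(f, tuple) and f[1] == c and f[2] == goal
                            for f in known) else 0

        known.add(max(candidates, key=usefulness))
    return False

if __name__ == "__main__":
    axioms = {"p", ("->", "p", "q"), ("->", "q", "r")}
    print(prove(axioms, "r"))  # True: p gives q, q gives r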
Ethical-impact agents
You can evaluate computing technology in terms of not only design norms (that is, whether it’s doing its job appropriately) but also ethical norms.

For example, Wired magazine reported an interesting example of applied computer technology [2]. Qatar is an oil-rich country in the Persian Gulf that’s friendly to and influenced by the West while remaining steeped in Islamic tradition. In Qatar, these cultural traditions sometimes mix without incident—for example, women may wear Western clothing or a full veil. And sometimes the cultures conflict, as illustrated by camel racing, a pastime of the region’s rich for centuries. Camel jockeys must be light—the lighter the jockey, the faster the camel. Camel owners enslave very young boys from poorer countries to ride the camels. Owners have historically mistreated the young slaves, including limiting their food to keep them lightweight. The United Nations and the US State Department have objected to this human trafficking, leaving Qatar vulnerable to economic sanctions.
The machine solution has been to develop robotic camel jockeys. The robotic jockeys are about two feet high and weigh 35 pounds. The robotic jockey’s right hand handles the whip, and its left handles the reins. It runs Linux, communicates at 2.4 GHz, and has a GPS-enabled camel-heart-rate monitor. As Wired explained it, “Every robot camel jockey bopping along on its improbable mount means one Sudanese boy freed from slavery and sent home.” Although this eliminates the camel jockey slave problem in Qatar, it doesn’t improve the economic and social conditions in places such as Sudan.
Computing technology often has important ethical impact. The young boys replaced by robotic camel jockeys are freed from slavery. Computing frees many of us from monotonous, boring jobs. It can make our lives better but can also make them worse. For example, we can conduct business online easily, but we’re more vulnerable to identity theft. Machine ethics in this broad sense is close to what we’ve traditionally called computer ethics. In one sense of machine ethics, computers do our bidding as surrogate agents and impact ethical issues such as privacy, property, and power. However, the term is often used more restrictively. Frequently, what sparks debate is whether you can put ethics into a machine. Can a computer operate ethically because it’s internally ethical in some way?
Implicit ethical agents
If you wish to put ethics into a machine, how would you do it? One way is to constrain the machine’s actions to avoid unethical outcomes. You might satisfy machine ethics in this sense by creating software that implicitly supports ethical behavior, rather than by writing code containing explicit ethical maxims. The machine acts ethically because its internal functions implicitly promote ethical behavior—or at least avoid unethical behavior. Ethical behavior is the machine’s nature. It has, to a limited extent, virtues.

Computers are implicit ethical agents when the machine’s construction addresses safety or critical reliability concerns. For example, automated teller machines and Web banking software are agents for banks and can perform many of the tasks of human tellers and sometimes more. Transactions involving money are ethically important. Machines must be carefully constructed to give out or transfer the correct amount of money every time a banking transaction occurs. A line of code telling the computer to be honest won’t accomplish this.
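As a small illustration of this point (a sketch of mine, not anything from the article or real banking software), the “implicit” ethics of such a system lives in constraints built into its construction, such as exact integer arithmetic, precondition checks, and conservation invariants, rather than in any line of explicitly moral code. The account structure and function names below are hypothetical.

# Sketch of implicit machine ethics: correctness constraints built into a
# banking transfer, with no explicit ethical vocabulary anywhere in the code.

from dataclasses import dataclass

@dataclass
class Account:
    owner: str
    balance_cents: int  # integer cents avoid rounding errors that would shortchange someone

def transfer(src: Account, dst: Account, amount_cents: int) -> None:
    """Move money between accounts, or fail loudly; never silently lose a cent."""
    if amount_cents <= 0:
        raise ValueError("transfer amount must be positive")
    if src.balance_cents < amount_cents:
        raise ValueError("insufficient funds")
    total_before = src.balance_cents + dst.balance_cents
    src.balance_cents -= amount_cents
    dst.balance_cents += amount_cents
    # Conservation invariant: the ethically relevant property (no money created
    # or destroyed) is checked mechanically rather than asserted morally.
    assert src.balance_cents + dst.balance_cents == total_before

if __name__ == "__main__":
    a, b = Account("Ann", 10_000), Account("Ben", 2_500)
    transfer(a, b, 1_250)
    print(a.balance_cents, b.balance_cents)  # 8750 3750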
Aristotle suggested that humans could obtain virtue by developing habits. But with machines, we can build in the behavior without the need for a learning curve. Of course, such machine virtues are task specific and rather limited. Computers don’t have the practical wisdom that Aristotle thought we use when applying our virtues.
Another example of a machine that’s an implicit ethical agent is an airplane’s automatic pilot. If an airline promises the plane’s passengers a destination, the plane must arrive at that destination on time and safely. These are ethical outcomes that engineers design into the automatic pilot. Other built-in devices warn humans or machines if an object is too close or the fuel supply is low. Or, consider pharmacy software that checks for and reports on drug interactions. Doctor and pharmacist duties of care (legal and ethical obligations) require that the drugs prescribed do more good than harm. Software with elaborate medication databases helps them perform those duties responsibly.
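A minimal sketch of the kind of check such pharmacy software performs appears below; it is my illustration, and the interaction table is invented for the example rather than clinical data.

# Toy drug-interaction screen of the kind described above.

INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "major: increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "major: myopathy risk",
    frozenset({"ibuprofen", "lisinopril"}): "moderate: reduced antihypertensive effect",
}

def screen(prescriptions):
    """Return warnings for every known interacting pair in a prescription list."""
    warnings = []
    meds = [m.lower() for m in prescriptions]
    for i, a in enumerate(meds):
        for b in meds[i + 1:]:
            note = INTERACTIONS.get(frozenset({a, b}))
            if note:
                warnings.append(f"{a} + {b}: {note}")
    return warnings

if __name__ == "__main__":
    print(screen(["Warfarin", "Aspirin", "Metformin"]))
    # ['warfarin + aspirin: major: increased bleeding risk']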
Machines’ capability to be implicit ethical agents doesn’t demonstrate their ability to be full-fledged ethical agents. Nevertheless, it illustrates an important sense of machine ethics. Indeed, some would argue that software engineers must routinely consider machine ethics in at least this implicit sense during software development.
Explicit ethical agents
Can ethics exist explicitly in a machine? [3] Can a machine represent ethical categories and perform analysis in the sense that a computer can represent and analyze inventory or tax information?
Can a machine “do” ethics like a computer can play chess? Chess programs typically provide representations of the current board position, know which moves are legal, and can calculate a good next move. Can a machine represent ethics explicitly and then operate effectively on the basis of this knowledge? (For simplicity, I’m imagining the development of ethics in terms of traditional symbolic AI. However, I don’t want to exclude the possibility that the machine’s architecture is connectionist, with an explicit understanding of the ethics emerging from that. Compare Wendell Wallach, Colin Allen, and Iva Smit’s different senses of “bottom up” and “top down” [4].)
Although clear examples of machines acting as explicit ethical agents are elusive, some current developments suggest interesting movements in that direction. Jeroen van den Hoven and Gert-Jan Lokhorst blended three kinds of advanced logic to serve as a bridge between ethics and a machine:

• deontic logic for statements of permission and obligation,
• epistemic logic for statements of beliefs and knowledge, and
• action logic for statements about actions [5].
Together, these logics suggest that a formal apparatus exists that could describe ethical situations with sufficient precision to make ethical judgments by machine. For example, you could use a combination of these logics to state explicitly what action is allowed and what is forbidden in transferring personal information to protect privacy [6]. In a hospital, for example, you’d program a computer to let some personnel access some information and to calculate which actions a given person should take and who should be informed about those actions.
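The following sketch (my simplification, not van den Hoven and Lokhorst’s formalism) shows how a deontic-style policy for hospital records might be encoded and queried; the roles, actions, and rules are hypothetical.

# Sketch of a deontic-style access policy for hospital records, in the spirit
# of the logic-based approach described above.

from enum import Enum

class Verdict(Enum):
    OBLIGATORY = "obligatory"
    PERMITTED = "permitted"
    FORBIDDEN = "forbidden"

# (role, action, data category) -> deontic status
POLICY = {
    ("physician", "read", "diagnosis"): Verdict.PERMITTED,
    ("physician", "notify_patient", "diagnosis"): Verdict.OBLIGATORY,
    ("billing_clerk", "read", "diagnosis"): Verdict.FORBIDDEN,
    ("billing_clerk", "read", "billing_code"): Verdict.PERMITTED,
}

def evaluate(role: str, action: str, category: str) -> Verdict:
    """Default-deny: anything the policy doesn't explicitly allow is forbidden."""
    return POLICY.get((role, action, category), Verdict.FORBIDDEN)

if __name__ == "__main__":
    print(evaluate("billing_clerk", "read", "diagnosis"))       # Verdict.FORBIDDEN
    print(evaluate("physician", "notify_patient", "diagnosis"))  # Verdict.OBLIGATORY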
Michael Anderson, Susan Anderson, and Chris Armen implement two ethical theories [7]. Their first model of an explicit ethical agent—Jeremy (named for Jeremy Bentham)—implements Hedonistic Act Utilitarianism. Jeremy estimates the likelihood of pleasure or displeasure for persons affected by a particular act. The second model is W.D. (named for William D. Ross). Ross’s theory emphasizes prima facie duties as opposed to absolute duties. Ross considers no duty as absolute and gives no clear ranking of his various prima facie duties. So, it’s unclear how to make ethical decisions under Ross’s theory. Anderson, Anderson, and Armen’s computer model overcomes this uncertainty. It uses a learning algorithm to adjust judgments of duty by taking into account both prima facie duties and past intuitions about similar or dissimilar cases involving those duties.
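As a rough sketch of the act-utilitarian calculation behind a Jeremy-style agent (a paraphrase of the idea, not the authors’ published system), one can weight each affected person’s pleasure or displeasure by its likelihood and pick the act with the greatest expected net pleasure. The scenario, intensity scale, and numbers below are invented.

# Rough sketch of a hedonistic act-utilitarian choice: for each alternative,
# sum probability-weighted pleasure/displeasure over everyone affected and
# pick the act with the greatest expected net pleasure.

def expected_net_pleasure(act):
    """act['effects'] maps each affected person to (probability, intensity),
    where intensity is positive for pleasure and negative for displeasure."""
    return sum(prob * intensity for prob, intensity in act["effects"].values())

def choose(acts):
    return max(acts, key=expected_net_pleasure)

if __name__ == "__main__":
    acts = [
        {"name": "keep the promise",
         "effects": {"patient": (0.9, 2), "nurse": (0.5, -1)}},
        {"name": "break the promise",
         "effects": {"patient": (0.9, -2), "nurse": (0.8, 1)}},
    ]
    best = choose(acts)
    print(best["name"], round(expected_net_pleasure(best), 2))  # keep the promise 1.3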
These examples are a good start toward creating explicit ethical agents, but more research is needed before a robust explicit ethical agent can exist in a machine. What would such an agent be like? Presumably, it would be able to make plausible ethical judgments and justify them. An explicit ethical agent that was autonomous in that it could handle real-life situations involving an unpredictable sequence of events would be most impressive.
James Gips suggested that the development of an ethical robot be a computing Grand Challenge [8]. Perhaps DARPA could establish an explicit-ethical-agent project analogous to its autonomous-vehicle project (www.darpa.mil/grandchallenge/index.asp). As military and civilian robots become increasingly autonomous, they’ll probably need ethical capabilities. Given this likely increase in robots’ autonomy, the development of a machine that’s an explicit ethical agent seems a fitting subject for a Grand Challenge.
Machines that are explicit ethical agents might be the best ethical agents to have in situations such as disaster relief. In a major disaster, such as Hurricane Katrina in New Orleans, humans often have difficulty tracking and processing information about who needs the most help and where they might find effective relief. Confronted with a complex problem requiring fast decisions, computers might be more competent than humans. (At least the question of a computer decision maker’s competence is an empirical issue that might be decided in favor of the computer.) These decisions could be ethical in that they would determine who would live and who would die. Some might say that only humans should make such decisions, but if (and of course this is a big assumption) computer decision making could routinely save more lives in such situations than human decision making, we might have a good ethical basis for letting computers make the decisions [9].
Full ethical agents
A full ethical agent can make explicit ethical judgments and generally is competent to reasonably justify them. An average adult human is a full ethical agent. We typically regard humans as having consciousness, intentionality, and free will. Can a machine be a full ethical agent? It’s here that the debate about machine ethics becomes most heated. Many believe a bright line exists between the senses of machine ethics discussed so far and a full ethical agent. For them, a machine can’t cross this line. The bright line marks a crucial ontological difference between humans and whatever machines might be in the future.
The bright-line argument can take one or both of two forms. The first is to argue that only full ethical agents can be ethical agents. To argue this is to regard the other senses of machine ethics as not really ethics involving agents. However, although these other senses are weaker, they can be useful in identifying more limited ethical agents. To ignore the ethical component of ethical-impact agents, implicit ethical agents, and explicit ethical agents is to ignore an important aspect of machines. What might bother some is that the ethics of the lesser ethical agents is derived from their human developers. However, this doesn’t mean that you can’t evaluate machines as ethical agents. Chess programs receive their chess knowledge and abilities from humans. Still, we regard them as chess players. The fact that lesser ethical agents lack humans’ consciousness, intentionality, and free will is a basis for arguing that they shouldn’t have broad ethical responsibility. But it doesn’t establish that they aren’t ethical in ways that are assessable or that they shouldn’t have limited roles in functions for which they’re appropriate.
The other form of bright-line argument is to argue that no machine can become a full ethical agent—that is, no machine can have consciousness, intentionality, and free will. This is metaphysically contentious, but the simple rebuttal is that we can’t say with certainty that future machines will lack these features.
Even John Searle, a major critic of strong AI, doesn’t argue that machines can’t possess these features [10]. He only denies that computers, in their capacity as purely syntactic devices, can possess understanding. He doesn’t claim that machines can’t have understanding, presumably including an understanding of ethics. Indeed, for Searle, a materialist, humans are a kind of machine, just not a purely syntactic computer.
Thus, both forms of the bright-line argument leave the possibility of machine ethics open. How much can be accomplished in machine ethics remains an empirical question.
We won’t resolve the question of whether machines can become full ethical agents by philosophical argument or empirical research in the near future. We should therefore focus on developing limited explicit ethical agents. Although they would fall short of being full ethical agents, they could help prevent unethical outcomes.
I can offer at least three reasons why it’s important to work on machine ethics in the sense of developing explicit ethical agents:

• Ethics is important. We want machines to treat us well.
• Because machines are becoming more sophisticated and make our lives more enjoyable, future machines will likely have increased control and autonomy to do this. More powerful machines need more powerful machine ethics.
• Programming or teaching a machine to act ethically will help us better understand ethics.
The importance of machine ethics is clear. But, realistically, how possible is it? I also offer three reasons why we can’t be too optimistic about our ability to develop machines to be explicit ethical agents.

First, we have a limited understanding of what a proper ethical theory is. Not only do people disagree on the subject, but individuals can also have conflicting ethical intuitions and beliefs. Programming a computer to be ethical is much more difficult than programming a computer to play world-champion chess—an accomplishment that took 40 years. Chess is a simple domain with well-defined legal moves. Ethics operates in a complex domain with some ill-defined legal moves.
Second, we need to understand learning better than we do now. We’ve had significant successes in machine learning, but we’re still far from having the child machine that Turing envisioned.

Third, inadequately understood ethical theory and learning algorithms might be easier problems to solve than computers’ absence of common sense and world knowledge. The deepest problems in developing machine ethics will likely be epistemological as much as ethical. For example, you might program a machine with the classical imperative of physicians and Asimovian robots: First, do no harm. But this wouldn’t be helpful unless the machine could understand what constitutes harm in the real world. This isn’t to suggest that we shouldn’t vigorously pursue machine ethics. On the contrary, given its nature, importance, and difficulty, we should dedicate much more effort to making progress in this domain.
Acknowledgments
I’m indebted to many for helpful comments, particularly to Keith Miller, Vincent Wiegel, and this magazine’s anonymous referees and editors.
References
1. H. Simon, “Re: Dartmouth Seminar 1956” (email to J. Berleur), Herbert A. Simon Collection, Carnegie Mellon Univ. Archives, 20 Nov. 1999.
2. J. Lewis, “Robots of Arabia,” Wired, vol. 13, no. 11, Nov. 2005, pp. 188–195; www.wired.com/wired/archive/13.11/camel.html?pg=1&topic=camel&topic_set=.
3. J.H. Moor, “Is Ethics Computable?” Metaphilosophy, vol. 26, nos. 1–2, 1995, pp. 1–21.
4. W. Wallach, C. Allen, and I. Smit, “Machine Morality: Bottom-Up and Top-Down Approaches for Modeling Human Moral Faculties,” Machine Ethics, M. Anderson, S.L. Anderson, and C. Armen, eds., AAAI Press, 2005, pp. 94–102.
5. J. van den Hoven and G.-J. Lokhorst, “Deontic Logic and Computer-Supported Computer Ethics,” Cyberphilosophy: The Intersection of Computing and Philosophy, J.H. Moor and T.W. Bynum, eds., Blackwell, 2002, pp. 280–289.
6. V. Wiegel, J. van den Hoven, and G.-J. Lokhorst, “Privacy, Deontic Epistemic Action Logic and Software Agents,” Ethics of New Information Technology, Proc. 6th Int’l Conf. Computer Ethics: Philosophical Enquiry (CEPE 05), Center for Telematics and Information Technology, Univ. of Twente, 2005, pp. 419–434.
7. M. Anderson, S.L. Anderson, and C. Armen, “Towards Machine Ethics: Implementing Two Action-Based Ethical Theories,” Machine Ethics, M. Anderson, S.L. Anderson, and C. Armen, eds., AAAI Press, 2005, pp. 1–7.
8. J. Gips, “Creating Ethical Robots: A Grand Challenge,” presented at the AAAI Fall 2005 Symposium on Machine Ethics; www.cs.bc.edu/~gips/EthicalRobotsGrandChallenge.pdf.
9. J.H. Moor, “Are There Decisions Computers Should Never Make?” Nature and System, vol. 1, no. 4, 1979, pp. 217–229.
10. J.R. Searle, “Minds, Brains, and Programs,” Behavioral and Brain Sciences, vol. 3, no. 3, 1980, pp. 417–457.
The Author
James H. Moor is a professor in Dartmouth College’s Department of Philosophy. His research interests include computing and ethics and the philosophy of artificial intelligence. He received his PhD in history and the philosophy of science from Indiana University. He is editor in chief of Minds and Machines: Journal of Artificial Intelligence, Philosophy, and Cognitive Science and president of the International Society for Ethics and Information Technology. Contact him at the Dept. of Philosophy, Dartmouth College, Hanover, NH 03755; james.moor@dartmouth.edu.