Criminological Risks and Legal Aspects of Artificial
Intelligence Implementation
Igor Bikeev
Department of criminal law and
procedure
Kazan Innovative University named
after V.G. Timiryasov
Kazan, Russian Federation
bikeev@ieml.ru
Pavel Kabanov
Director of the anti-corruption center
Kazan Innovative University named
after V.G. Timiryasov
Kazan, Russian Federation
kabanov@chl.ieml.ru
Ildar Begishev
Research Department
Kazan Innovative University named
after V.G. Timiryasov
Kazan, Russian Federation
begishev@mail.ru
Zarina Khisamova
Research Department
Krasnodar University of the Ministry of Internal Affairs of the Russian Federation
Krasnodar, Russian Federation
alise89@inbox.ru
ABSTRACT
The use of AI inevitably leads to the problem of ethical choice and raises legal issues that require prompt intervention. The article presents the results of a detailed study of the opinions of leading scientists involved in the study of the social aspects of AI. The key characteristics of AI that carry criminological risks are identified, the types of criminological risks of using AI are distinguished, and the authors' classification of these risks is proposed. The results of a detailed analysis of the legal regulation of the legal personality of AI are presented. Options are formulated for bringing to justice those responsible for the actions of self-learning AI that has decided to commit acts or omissions qualifying as crimes. The authors argue the need for a clear, rigorous and effective definition of ethical frameworks in the development, design, production, use and modification of AI. Arguments are made for the need to recognize AI as a source of increased danger. The paper analyzes the content of the resolution of the European Parliament on the possibility of endowing AI with "legal status". Special attention is paid to the question of giving AI a personality. It is proposed to use legal fiction as a technique in which the specific legal personality of AI can be perceived as a non-standard legal position, different from reality. It is assumed that such a decision can remove a
number of legal restrictions that exist today and prevent the
active involvement of AI in the legal space.
CCS CONCEPTS
• Social and professional topics → Computing / technology policy → Government technology policy → Governmental regulations
• Social and professional topics → Computing / technology policy → Computer crime
• Security and privacy → Human and societal aspects of security and privacy
KEYWORDS
Artificial intelligence, Intelligent technology, Robot, Machine
learning, Criminological risks, Criminological features, The risks
of the use of artificial intelligence, Threat of use of artificial
intelligence, The criminal capacity of artificial intelligence,
Crimes with the use of artificial intelligence, Technological
singularity, Legal personality, Criminal liability, Legal fiction,
European parliament resolution, The source of high risk,
Liability of artificial intelligence
1 Introduction
One of the oldest challenges facing humanity is the pressing need to create devices that can simplify life. After heavy physical labor was handed over to mechanized and automated machines, society turned to creating machines capable of performing intellectual (mental) work, which for a long time was a purely human prerogative. Turing argued that machines, like people, can use available information, as well as reason, to solve problems and make decisions; in addition, he described a test (later named after him) that makes it possible to determine when machines become comparable to the human mind [1].
Six years later, a rationale for the possibility of creating AI was presented at a conference organized by McCarthy and Minsky at Dartmouth College. At this conference, the term "artificial intelligence" (AI) was coined by J. McCarthy. The Dartmouth conference was the starting point for AI research, which has seen ups and downs for nearly 70 years [2]. Nowadays, AI is being actively applied in all subject areas, and its capabilities are continuously expanding. The pace of AI development is gaining unprecedented momentum every day. According to Markets and Markets Research [3], by 2020 the AI market will grow to $5 billion through the use of machine learning technologies and intelligent language recognition. And by 2030, global GDP will grow by 14%, or 15.7 trillion US dollars, due to the active use of AI; 72% of the largest corporations in the world consider AI to be the business advantage of the future [4]. The key areas of vigorous AI implementation over the next 10 years will be: health care; transport (by 2025, the volume of supply of AI-based systems for autonomous vehicles will exceed $150 million); and financial services and technology (hedge funds that use AI demonstrate much better results than those driven by people [5], and the assets under management of robo-advisors worldwide will increase eightfold between 2018 and 2020 [6]). The changes will also affect logistics, retail, industry and the global speech recognition market [7].
However, the average inhabitant of the planet uses AI much more often than he or she realizes: only 33% of respondents believe that they use AI-enabled technology, while 77% actually use an AI-enabled service or device [8].
The undoubted advantages of introducing AI, which can relieve mankind of routine work and enable the transition to creative activities that machines are not capable of, are recognized not only by large corporations but also by ordinary people: 61% of the 6,000 people surveyed believe that AI will make the world a better place [9].
However, the continuing growth of investment in AI and the expansion of its areas of application have made humanity aware of the "reverse side" of the coin called "artificial intelligence". The scientific and expert community has long pondered questions that remain unsolved for humanity. Do we, as a modern society, understand what AI is, and what risks do its creation and circulation entail? Will we be able to extract the benefits of using AI while avoiding negative consequences? Are there regulatory mechanisms for controlling AI? Is global legislation ready to regulate situations in which AI participates in infringements of legally protected relations? And will the title of the book "Our Final Invention: Artificial Intelligence and the End of the Human Era" [10] prove prophetic? The present study attempts to find answers to these ambiguous and, at the same time, topical questions.
2 Material and Method
In 2007, when asked in an interview "What is Artificial Intelligence?", J. McCarthy replied that it is the science and development of intelligent machines and systems, especially intelligent computer programs, aimed at understanding human intelligence; the methods used are not necessarily biologically plausible [2].
In this study, we have understood AI as a collective term for
intelligent computer software (AI systems, AI technologies) that
can analyze the environment, think, learn, and respond to what
it "feels".
In the broad sense of the word, AI is a kind of intelligent system capable of making decisions on its own. It represents a direction in the development of computer functions related to human intelligence, such as reasoning, learning, and problem solving. In other words, AI is the transfer of the human capacity for mental activity to the plane of computer and information technologies, but without inherent human vices [11]. Scientists dealing with this issue are intensively studying the prospects for recognizing an "electronic person" and the place of the human being in such a world [12].
We agree with Ponkin and Redkina that AI is an artificial complex cybernetic computer-software-hardware system with a cognitive-functional architecture and its own or available (attached) computing power of the required capacity and speed [13].
A similar conclusion was reached by Morkhat, who considers AI to be a fully or partially autonomous, self-organizing computer-hardware-software virtual or cyber-physical system (unit), including a bio-cybernetic one, endowed with the corresponding abilities and capabilities. The author notes that the active use of AI units entails the emergence of multiple uncertainties, difficulties and problems in the legal space. We share this position [14].
Modern scientific and technical literature offers many different classifications of AI systems and their applications. However, given the methodology of this study, we adhered to a classification of AI that takes into account the applied aspects of modern information and telecommunications technologies [15].
In the PwC study, all types of AI are divided into two groups depending on whether they interact with humans in their activities. AI that interacts with people usually includes special stable systems, such as assisted (auxiliary) intelligence, which helps people accomplish tasks faster and better; stable systems are not able to learn from their interactions. This group also includes adaptive (augmented) intelligence. The second group, which does not interact with humans, includes automated intelligence, designed to automate mechanical/cognitive and routine tasks; its activity is not related to performing new tasks but lies in the automation of existing ones. This group also includes autonomous intelligence [4]. Such AI is able to adapt to different situations and act independently, without human intervention, committing, for example, identity theft [16].
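To make the taxonomy concrete, the following is a minimal Python sketch (our own illustration; the class and function names are hypothetical, not from the PwC study) that models the four types along the two distinguishing axes: human interaction and the ability to learn.

```python
# Illustrative model of the PwC four-type taxonomy described above.
# The names AIType, TAXONOMY and classify are our own, hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class AIType:
    name: str
    interacts_with_humans: bool
    adapts: bool  # able to learn from its interactions

TAXONOMY = [
    AIType("assisted (auxiliary) intelligence", True, False),
    AIType("augmented (adaptive) intelligence", True, True),
    AIType("automated intelligence", False, False),
    AIType("autonomous intelligence", False, True),
]

def classify(interacts: bool, adapts: bool) -> AIType:
    """Look up the taxonomy cell for a given pair of properties."""
    return next(t for t in TAXONOMY
                if t.interacts_with_humans == interacts and t.adapts == adapts)

# Example: a system that learns and acts without human intervention.
print(classify(interacts=False, adapts=True).name)  # autonomous intelligence
```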
3 Theoretical Background
The greatest minds of our time, such as Hawking [17], Gates and Musk, bring to the fore the problem of the technological singularity: the moment when computers in all their manifestations become smarter than people. According to Kurzweil, when this
happens, computers will be able to improve exponentially upon themselves and reproduce themselves, and their intelligence will be billions of times faster than humans' [18]. According to Bostrom, within 60 years AI will become a serious threat to humanity: by 2022, the similarity between the thinking processes of robots and humans will be about 10%; by 2040, 50%; and by 2075 the thinking processes of robots will no longer be distinguishable from human ones, with the similarity reaching 95%. However, the pace of technological development suggests that the process may unfold much faster [19].
It should be noted that it is not only the greatest minds of mankind who are concerned about the threats carried by the development of AI. According to an independent global survey conducted by the innovative companies Northstar and ARM (2018), just over a third of the people surveyed think that AI is already having a significant impact on their daily lives. Although more than half of respondents expect a better future for society thanks to AI, a fifth expect the worst.
There are also regional differences. Residents of the North
American continent and Europeans have expressed concern
about the reliability of machines with AI, while in Asia there is a
genuine fear that machines with AI are becoming more
intelligent than people. In general, the vast majority of
respondents (85%) are concerned with ensuring the security of
AI technology [20].
4 The Problem of Moral and Ethical Choice of
AI
Professor R. Arkin, who develops robots for military needs, notes that his research carries significant ethical risks, possible if his developments are used for criminal purposes. The use of AI in wartime can save thousands of lives, but intelligent, autonomous weapons pose a threat even to their creators. Meanwhile, within 3-5 years, military complexes equipped with artificial intelligence may be adopted by the strongest armies of the world. Thus, the US Air Force announced that it is launching the Skyborg program, which is designed to determine whether drones with artificial intelligence (AI) can help pilots better perform combat missions; the US Air Force plans to launch the first test drones in 2023. By 2023, the Russian Navy plans to put into service a complex of self-learning minefields, equipped with elements of artificial intelligence and able to decide on their own when and what target to blow up [21]. Foreseeing the potential risks of his developments, R. Arkin created a set of algorithms (an "ethical guide") designed to help robots act on the battlefield: in what situations to cease fire, and in what situations to seek to minimize the number of victims [22].
However, while the potential risks of using AI in the military sphere are obvious and scientists are already working to minimize them, the situation is quite different in the "peaceful" areas of AI application.
The use of AI will inevitably lead to the problem of ethical choice. For example, the AI used in an unmanned vehicle must, under force majeure conditions, make a choice: which road users' lives to save.
At the beginning of 2016, a large-scale study, "Moral Machine" ("Ethics for a car"), was launched at the Massachusetts Institute of Technology. Within it, a special website was created where users simulated driving a pilot car through scenarios in which an accident was already imminent and chose, for example, whose lives to sacrifice first. Analysis of 2.3 million responses showed that respondents most often prefer to save people rather than animals, and the young rather than the elderly. In addition, the study showed that the gender of the potential victim and the religious preferences of respondents played a significant role in the choice: men are less likely to spare women, and religious people more often prefer to save a human rather than an animal [23]. Representatives of the German automobile company Mercedes-Benz, in turn, noted that their cars would give priority to passengers. The German Ministry of Transport immediately replied that making such a choice on the basis of a set of criteria would be illegal and that, in any case, the manufacturer would bear responsibility [24].
Thus, it is essential to emphasize the need for a clear,
rigorous and effective ethical framework in the design,
construction, production, use and modification of AI.
The sphere of health care is no exception: the introduction of AI in the treatment and diagnosis of oncological diseases has had mixed consequences. Internal documents of IBM, one of the world's largest manufacturers and suppliers of hardware and software, leaked in the summer of 2018, indicate that Watson Health, the medical AI developed by the company and used in 230 hospitals around the world to treat 13 types of cancer in 84,000 patients, makes medical errors: Watson Health offers incorrect treatments that can lead to the death of the patient [25].
5 Key Characteristics of AI that Pose a Threat
to Cybersecurity
The problem of ensuring the security of confidential information is one of the key problems for all subjects of the digital economy, including the problem of cybersecurity involving AI [26].
According to InfoWatch, the largest Russian manufacturer of solutions protecting organizations from internal and external threats, as well as from information attacks, more than 920 incidents related to leaks of confidential information from organizations of various forms of ownership were registered in the first half of 2017. The incident data covers all leaks in all foreign countries reported in the media, the blogosphere, social networks and other network resources.
To address this problem, companies conduct ongoing research in this area. For example, the American corporation Google has developed a program called Federated Learning, in which a lightweight version of the TensorFlow software allows AI to be used and trained directly on a smartphone. Instead of collecting and storing information in one place, on Google's servers, for further work with new algorithms, the learning process takes place directly on each user's mobile device; in effect, the phone's processor is used as an auxiliary means of training AI. The advantage of this approach is that confidential information never leaves the user's device, while application updates are applied in real time. Later, the information is collected anonymously by Google and used to make general adjustments to the application [27].
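The mechanism described above can be illustrated with a minimal federated-averaging sketch (our own simplified illustration, not Google's actual Federated Learning code; all function names are hypothetical): each simulated device trains a local copy of a model on data that never leaves it, and only the resulting weights are averaged centrally.

```python
# Simplified sketch of federated averaging on a toy linear model.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training (full-batch gradient descent)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Average locally trained weights; raw client data never leaves the 'device'."""
    return np.mean([local_update(global_w, X, y) for X, y in clients], axis=0)

# Three "devices", each holding private data that stays local.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # approaches [2.0, -1.0] without pooling any raw data
```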
The world community is concerned about the use of AI for criminal purposes. Thus, in early 2017, the FBI held a major conference on the use of AI by law enforcement agencies and by criminals. It was noted at the conference that data from Interpol, Europol, the FBI and law enforcement agencies of other countries, as well as the results of studies by leading universities, indicate a lack of activity by criminal structures in creating their own developments in the field of AI. According to Ovchinsky [28], despite the lack of information about cybercriminals' developments in the field of AI, the potential for such a phenomenon exists. Cybercriminals have plenty to choose from to create their own powerful AI platforms. Almost all open-source AI developments are containers. A container is a platform where, with the help of an API, any third-party programs, services, databases, etc. can be mounted. Whereas earlier, when creating a program or service, everyone had to develop algorithms from beginning to end and then translate them into code using one or another programming language, today it is possible to create products and services in the same way that builders build a house from standard parts delivered to the construction site.
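As a minimal illustration of this "container" composition pattern (our own sketch; the Platform class and component names are hypothetical), third-party components are mounted through a small API and combined without reimplementing their internals:

```python
# Toy "container" platform: components are mounted via an API and composed,
# rather than implemented from scratch. All names here are hypothetical.
class Platform:
    def __init__(self):
        self._components = {}

    def mount(self, name, component):
        """Attach a third-party component (program, service, model) under a name."""
        self._components[name] = component

    def run(self, name, *args, **kwargs):
        """Invoke a mounted component through the platform's API."""
        return self._components[name](*args, **kwargs)

platform = Platform()
# A stand-in for somebody else's ready-made component:
platform.mount("sentiment", lambda text: "positive" if "good" in text else "negative")
print(platform.run("sentiment", "a good result"))  # -> positive
```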
Thus, the use of AI for criminal purposes carries an increased public danger [29]. However, the use of open-source communications for crime assessment seems a promising idea, including in the era of big data [30].
According to studies, the most active areas of AI implementation are medical applications (disease-diagnosis programs), digital service assistants, and autonomous vehicles [9]. At the same time, an error by an AI disease-diagnosis program that makes an incorrect diagnosis can lead to incorrect treatment of the patient and, as a consequence, possible harm to his or her health [31].
The analysis of trends in the creation and use of AI allowed us to identify two types of criminological risk of using AI: direct and indirect.

The direct criminological risk of using AI is the risk associated with the direct effect on a person and citizen of a danger caused by the use of AI. These risks include:

1. AI with the ability of self-learning makes a decision about actions/inactions that constitute a crime. The criminal act implies the deliberate commission by the AI system of a socially dangerous attack on: human life and health; the freedom, honor and dignity of the individual; the constitutional rights and freedoms of man and citizen; public security; or the peace and security of mankind, which has caused socially dangerous consequences.

2. Intentional actions involving the software of the AI system that caused socially dangerous consequences. The criminal act implies illegal access to the system, resulting in damage to or modification of its functions, as a result of which a crime was committed.

3. AI created by criminals to commit crimes.

The indirect criminological risk of using AI is the risk associated with unintended hazards arising in the context of the use of AI. These risks include:

1. Random errors in the software of the AI system (errors made by the developer of the AI system) that led to the commission of a crime.

2. Errors made by the AI system during its operation.
J. Kingston [32] described a hypothetical situation in 2023 in which an AI-driven car moving through city streets knocks down a pedestrian, and raised the question of criminal responsibility. Unfortunately, the situation became real at the beginning of 2018, when an unmanned vehicle of an American international company hit a woman in the US state of Arizona due to the features of its program [33]. According to Leenes and Lucivero [34], responsibility for the actions of, and harm caused by, AI is borne by the person who programmed it or the person responsible for its operation, within the limits established by law.
6 The Problem of Criminal Prosecution for
Illegal Actions of Artificial Intelligence
The issue of bringing AI to criminal responsibility for committing a crime is inextricably linked with granting legal personality to AI. Note that an active discussion on this issue is currently under way in legal science. Its catalyst was the granting of citizenship of the Kingdom of Saudi Arabia to the humanoid robot Sophia, developed by D. Hanson, a specialist at the Hong Kong company Hanson Robotics. Thus, N. Nevejans concludes that the idea of creating an independent legal personality for AI is untenable, since in this case human rights are extrapolated to the actions of AI. Here N. Nevejans cites the example of a proposal to grant legal personality to animals "having consciousness and capable of having feelings", for example, dolphins [35].
On the contrary, Uzhov concludes that AI endowed with the ability to analyze and compile a behavioral algorithm regardless of program presets needs legal regulation, and emphasizes the need to begin this activity with the introduction of a new subject of law [36].
It should be noted that in the European Union, after a number of tragic incidents involving unmanned vehicles, the possibility of granting robots legal status, and consequently the possibility of bringing an electronic person (electronic entity) to justice, began to be widely discussed. Thus, the Resolution of the European Parliament of February 16, 2017, "Civil Law Rules on Robotics", together with the recommendations of the Commission on Civil Law Regulation in the field of robotics, is aimed at regulating the legal status of robots in human society through the following actions: the creation of a special European Agency for robotics and AI; the development of a regulatory definition of a "reasonable autonomous robot"; the development of a registration system for all versions of robots, together with their classification system; the development of requirements for developers to provide guarantees to prevent risks; and the development of a new reporting structure for companies using or needing robots, which will include information on the impact of robotics and AI on the company's economic performance [37].
It is noteworthy that the text of the above-mentioned Resolution of the European Parliament notes the constant evolution of robots, which has predetermined not only the ability of robots to perform certain activities, but also their ability to self-learn and make autonomous, "independent" decisions.
Scholars note the difficulty, predetermined by the polymorphic nature of such systems, of determining the subject responsible for failures in the operation of AI [38]. One must agree with Morkhat's opinion on the inexpediency and incorrectness of imposing responsibility on the designers and developers of AI, which is a complex of separately developed equipment and devices, where the final decision of the AI depends largely on the situation of its application and the tasks assigned to it [14].
Foreseeing the possible risks and negative consequences of AI's independent decisions, the draft resolution proposes to conduct an in-depth analysis of such consequences and, on this basis, to develop effective mechanisms for insuring against damage caused by AI (for example, unmanned vehicles), to create compensation insurance funds to cover damages, and to require mandatory registration of AI put into operation [39]. It is also possible to integrate into the mechanism a safety switch, as well as software for the immediate shutdown of all processes in emergency situations [40].
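A minimal sketch of such a "safety switch" (our own illustration of the general pattern, not an implementation prescribed by the resolution; all names are hypothetical): a shared flag that, once triggered, causes every cooperating worker loop to terminate immediately.

```python
# Toy emergency-stop ("kill switch") pattern.
import threading
import time

class SafetySwitch:
    """Shared guard: once triggered, every cooperating loop must exit."""
    def __init__(self):
        self._halted = threading.Event()

    def trigger(self):
        """Emergency stop: signal all workers to shut down immediately."""
        self._halted.set()

    def active(self):
        return not self._halted.is_set()

def worker(switch, log):
    while switch.active():          # check the switch before every step
        log.append("unit of work")  # the system's normal operation
        time.sleep(0.01)

switch, log = SafetySwitch(), []
t = threading.Thread(target=worker, args=(switch, log))
t.start()
time.sleep(0.05)
switch.trigger()  # immediate shutdown of all cooperating loops
t.join()
print(f"performed {len(log)} steps before the emergency stop")
```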
In this regard, the forecast described in the European Parliament Resolution concerning the possibility of granting robots "legal status" is of particular interest. It should be noted, however, that the "legal status" of an electronic person would differ significantly from that of an individual. Among legal scholars dealing with the legal personality of AI, there are three key approaches:
– conferring on AI a legal personality corresponding to that of a human;
– conferring on AI a legal personality similar to the legal status of a legal entity;
– endowing AI with a limited legal personality [41].
We agree with the opinion of N. Nevejans that the legal personality of AI cannot be equated with that of a human or with the legal status of a legal entity. A person with legal status acts on the basis of mental processes, guided by subjective beliefs. Behind the actions of a legal entity there are also individuals, without whom the activity of the legal entity is impossible. AI, in turn, acts independently, possessing neither consciousness nor feelings [35]. At the same time, some authors note that AI can be endowed with separate rights different from the rights of a real individual [27]. In this case, it is appropriate to speak of legal fiction: a technique by which the specific legal personality of AI can be perceived as a non-standard legal position, different from reality.
Hallevy notes that the mandatory elements of a crime are criminal behavior (actus reus) and an internal, mental element (mens rea). With regard to AI, it is impossible to establish the mental element, and therefore impossible to recognize the commission of a crime by AI [42]. However, as rightly noted by Hallevy and Kingston, the Anglo-Saxon system of law contains strict liability offenses for which the establishment of mens rea is not mandatory, and it is argued that in these circumstances AI may be recognized as the subject of an infringement.

However, in the Russian criminal law doctrine such an approach is impossible, because all four elements of corpus delicti, including the subjective side, must be present for all crimes without exception.
Hallevy also raised the question of the applicability of criminal liability measures to the perpetrator of a crime and, as a consequence, of the achievability of the purposes of criminal punishment [42]. There are well-founded doubts about the effectiveness of applying traditional forms of criminal punishment to AI, such as a fine or deprivation of liberty, for the purpose of re-educating the "criminal AI".
However, some Western legal scholars hold the opinion that robots and AI should be brought to criminal responsibility. According to Ying Hu of Yale University, special types of punishment should be provided for AI, such as deactivation, reprogramming, or conferral of the status of "criminal", which would serve as a warning to all participants [43]. Uzhov holds a similar opinion, according to which the "rehabilitation" of AI can be realized only through its complete reprogramming, which can in a sense be compared to a lobotomy performed on a person: an absolute and probably irreversible change in the properties of the AI. The second way is the disposal of the machine [36].
We agree with Ponkin and Redkina that legal support of AI should be developed consistently (though intensively), taking into account a preliminary study of all the risks that can be assumed at the present stage of technological development, as well as the specifics of the use of artificial intelligence in various spheres of life. At the same time, it is essential to ensure a balance between the interests of society and of individuals, including security and the need to develop innovations in the interests of society [13].
It is known that the existence of legal fictions is driven by the need to overcome legal gaps and to eliminate uncertainty in social relations. We believe that such a decision can remove a number of legal restrictions that exist today and prevent the active involvement of AI in the legal space.
7 Summary
The foregoing is evidence of the high criminological risk of the AI embedded in intelligent technologies, and of the weak theoretical readiness of the science of criminology to study the problem under consideration.
In the near future (approximately 10-15 years), the pace of development of systems and devices with AI will lead to the need for a total revision of all branches of law. In particular, the institutions of intellectual property, the tax regime, etc., will require deep reworking, which will ultimately lead to the need to resolve the conceptual problem of endowing autonomous AI with certain "rights" and "duties".
In our opinion, the best way is to endow AI with a specific "limited" legal personality (through the application of legal fiction), in terms of imposing on autonomous AI the obligation to bear responsibility for harm and negative consequences.
This approach will undoubtedly require a rethinking of the key postulates and principles of criminal law, in particular the institutions of the subject and the subjective side of the crime. At the same time, in our opinion, AI systems will require the creation of an independent institution of criminal law, unique in its essence and content and distinct from the traditional anthropocentric approach. Such a legal institution requires a completely new approach. Within its framework, it seems appropriate to provide for an understanding of the subject different from the traditional one, based on a symbiosis of the technical and other characteristics of AI, as well as alternative types of responsibility, such as deactivation, reprogramming, or conferral of the status of "criminal", which would serve as a warning to all participants in legal relations. We believe that such a solution can, in the future, minimize the criminological risks of using AI.
REFERENCES
[1] Turing A (1950). Computing Machinery and Intelligence. Mind, New Series,
59(236), 433-460.
[2] McCarthy J, Minsky M L, Rochester N and Shannon C E (2006). A Proposal for
the Dartmouth Summer Research Project on Artificial Intelligence. August 31,
1955. AI Magazine, 27(4), 12-14.
[3] Markets and Markets Research Private Ltd (2018). AI in Fintech Market by
Component (Solution, Service), Application Area (Virtual Assistant, Business
Analytics & Reporting, Customer Behavioral Analytics), Deployment Mode
(Cloud, On-Premises), and Region. Global forecast to 2022. Retrieved from:
https://www.marketsandmarkets.com/Market-Reports/ai-in-fintech-market-
34074774.html.
[4] PwC (2017). Artificial intelligence: do not miss the benefit. Retrieved
from: https://www.pwc.ru/ru/press-releases/2017/artificial-intelligence-
enlargement.html.
[5] Reiff N (2017). Artificial Intelligence Hedge Funds Outperforming Humans.
Investopedia. Retrieved from: https://www.investopedia.com/news/artificial-
intelligence-hedge-funds-outperforming-humans/#ixzz4YszizhII.
[6] Jesse McWatters R (2018). The New Physics of Financial Services. Part of the
Future of Financial Services series: understanding how artificial intelligence is
transforming the financial ecosystem. Deloitte, World Economic Forum, 167.
Retrieved from: http://www3.weforum.org/docs/WEF_New_Physics_of_Financial_Services.pdf.
[7] Statista (2017). Artificial Intelligence. Report. Retrieved from:
https://www.statista.com/study/50485/artificial-intelligence/.
[8] What Consumers Really Think About AI. Pega. Retrieved from:
https://www1.pega.com/system/files/resources/2017-11/what-consumers-really-think-of-ai-infographic.pdf.
[9] Arm Limited (2017). Global Artificial Intelligence Survey. Retrieved from:
http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/.
[10] Barrat J (2013). Our Final Invention: Artificial Intelligence and the End of the Human Era. New York: Thomas Dunne Books, St. Martin's Press.
[11] Afanasyev A (2018). Artificial intelligence or intelligence of subjects of
detection, disclosure and investigation of crimes: what will win? Library CSL.
Scientific journal, No. 3(38), 28-34.
[12] Carriço G (2018). The EU and artificial intelligence: a human-centred
perspective. European View, 17(1), 29-36.
[13] Ponkin I V and Redkina A I (2018). Artificial intelligence from the point of
view of law. Bulletin of the Russian University of friendship of peoples. Series:
Legal Sciences, 1(22), 91-109.
[14] Morkhat P M (2017). Artificial intelligence: legal view. M.: BukiVedi, 257.
[15] Amores J (2013). Multiple instance classification: review, taxonomy and
comparative study. Artificial Intelligence, 201, 81-105.
[16] Marron D (2018). Alter Reality: Governing the Risk of Identity Theft. The
British Journal of Criminology, 48(1), 20-38.
[17] Hawking S (2018). Brief Answers to the Big Questions. London: Random
House LLC, 2018, 256.
[18] Kurzweil R (2006). The singularity is near: when humans transcend biology.
Penguin Books, 672.
[19] Bostrom N (2016). Strategic Implications of Openness in AI Development.
Technical Report, 1. Retrieved from: https://www.fhi.ox.ac.uk/reports/2016-
1.pdf.
[20] ARM, Northstar survey (2018). AI today. AI tomorrow. Awareness, acceptance
and anticipation of AI: a global consumer perspective. Retrieved from:
https://pages.arm.com/rs/312-SAX-488/images/arm-ai-survey-report.pdf.
[21] Ramm A and Kozachenko A (2019, March 5). A good face on a naval game, the
Navy will have the ammunition with artificial intelligence. Retrieved from:
https://iz.ru/841783/aleksei-ramm-aleksei-kozachenko/khoroshaia-mina-pri-
morskoi-igre-flot-poluchit-boepripasy-s-iskusstvennym-intellektom.
[22] Rutkin A (2014, September 13). The robot's dilemma. New Scientist, issue 2986.
[23] Casmi E (2018, October, 26). Opinion of millions of people: autonomous cars
have to push the elderly and the young to save. The Network edition “Сnews”.
Retrieved from: http://www.cnews.ru/news/top/2018-10-26_mnenie_millionov_chelovek_bespilotnye_avto_dolzhny.
[24] Karlyuk M V (2018). Investments in the future: artificial intelligence. Non-
profit partnership “Russian Council for international Affairs”. Retrieved from:
http://russiancouncil.ru/analytics-and-comments/analytics/eticheskie-i-
pravovye-voprosy-iskusstvennogo-intellekta/.
[25] Kolenov S (2018, July 27). AI-oncologist IBM Watson was convicted of medical
errors. Network edition “Hightech.plus” Retrieved from:
https://hightech.plus/2018/07/27/ii-onkologa-ibm-watson-ulichili-vo-
vrachebnih-oshibkah.
[26] Wilner A S (2018). Cybersecurity and its discontents: artificial intelligence, the
Internet of things, and digital misinformation. International Journal, 73(2),
308-316.
[27] Vincent J (2017). Google is testing a new way of training its AI algorithms
directly on your phone. The Verge. Retrieved from:
https://www.theverge.com/2017/4/10/15241492/google-ai-user-data-federated-
learning.
[28] Ovchinsky V S (2018). Criminology of the digital world. M.: Norm: INFRAM,
352.
[29] Van der Wagen W and Pieters W (2015). From cybercrime to cyborg crime:
botnets as hybrid criminal actor-networks. The British Journal of Criminology,
55(3), 578-595.
[30] Williams M L, Burnap P and Sloan L (2017). Crime Sensing with Big Data: The Affordances and Limitations of Using Open-Source Communications to Estimate Crime Patterns. The British Journal of Criminology, 57(2), 320-340.
[31] Momi E De and Ferrigno G (2010). Robotic and artificial intelligence for
keyhole neurosurgery: the ROBOCAST project, a multi-modal autonomous
path planner. Proceedings of the Institution of Mechanical Engineers, part H:
Journal of Engineering in Medicine, 224(5), 715-727.
[32] Kingston J K (2016). Artificial intelligence and legal liability. Research and
Development in Intelligent Systems XXXIII: Incorporating Applications and
Innovations in Intelligent Systems XXIV: Conference Paper, 269-279.
[33] Bergen M (2018, March 19). Uber halts autonomous car tests after fatal crash in Arizona. Bloomberg. Retrieved from:
https://www.bloomberg.com/news/articles/2018-03-19/uber-autonomous-car-
involved-in-fatal-crash-in-arizona.
[34] Leenes R and Lucivero F (2014). Laws on Robots, Laws by Robots, Laws in
Robots: Regulating Robot Behavior by Design. Law, Innovation and
Technology, 6(2), 194-222.
[35] Nevejans N (2016). European civil law rules in robotics: study. Policy Department C: "Citizens' Rights and Constitutional Affairs", European Parliament's Committee on Legal Affairs. PE 571.379, 15. Retrieved from:
http://www.europarl.europa.eu/RegData/etudes/STUD/2016/571379/IPOL_ST
U%282016%29571379_EN.pdf.
[36] Uzhov F W (2017). Artificial intelligence as a subject of law. Gaps in Russian Legislation, 3, 357-360.
[37] Delvaux M (2016). Draft Report with recommendations to the Commission on
Civil Law Rules on Robotics (2015/2103(INL)). Committee on Legal Affairs,
European Parliament, PE582.443v01-00, 22 p. Retrieved from:
http://www.europarl.europa.eu/sides/getDoc.do?pubRef=%2F%2FEP%2F%2FN
ONSGML%20COMPARL%20PE-
582.443%2001%20DOC%20PDF%20V0%2F%2FEN.
[38] Prakken H (2016). How AI & law can help autonomous systems obey the law:
a position paper. In Proceedings of the 22nd European Conference on
Artificial Intelligence Workshop on Artificial Intelligence for Justice. Hague:
VU University Amsterdam, 42-46. Retrieved from:
http://www.ai.rug.nl/~verheij/AI4J/papers/AI4J_paper_12_prakken.pdf.
[39] Del Castillo A P (2017). A law on robotics and artificial intelligence in the EU?
The Foresight Brief. Brussels: European Trade Union Institute, 2, 12 p.
Retrieved from:
https://www.etui.org/content/download/32583/302557/file/Foresight_Brief_02
_EN.pdf.
[40] Radutniy O E (2017). Criminal liability of the artificial intelligence. Problems
of legality, 138, 132-141.
[41] Robertson J (2014). Human rights vs. robot rights: forecasts from Japan.
Critical Asian Studies, 46(4), 571-598.
[42] Hallevy G (2010). The criminal liability of artificial intelligence entities – from science fiction to legal social control. Akron Intellectual Property Journal, 4(2), 171-201.
[43] Kopfstein J (2017). Should Robots Be Punished for Committing Crimes?
Vocativ Website. Retrieved from: https://www.vocativ.com/417732/robots-
punished-committing-crimes/.
... Multiple technological tools are currently being applied in crime prevention, where artificial intelligence leads one of the most frequent applications, as well as its branches such as Data Mining, Machine Learning, BigData, and Robotics; in addition to Operations Research and Satellites [9]. Table 1 shows the actors in criminal justice that apply artificial intelligence; they are the judicial system (31) that makes the greatest use of this technological tool: Justice operators (judicial system) are basing their decisions on artificial intelligence systems, this implies a key challenge due to the high complexity of the model, as well as the potential implications on interests, rights, and human lives [41]; One of the most controversial points is the problem of ethical choice, which the application of AI can bring with it, which entails criminological risks and which makes it necessary to identify the types of criminological risks of the use of AI in such a way that they are open the option of endowing AI with "legal status" within the European Parliament [42]. ...
... The use of AI for risk assessment is a reality in police action, both to prevent and to investigate crimes, including in judicial determination and prison treatment [71]; According to "InfoWatch", in the first half of 2017, more than 920 incidents related to leaks of confidential information from organizations of various forms of ownership were registered [42]; however, in its use to assess the risk of recidivism; there is ample evidence that racial minorities tend to face a higher risk of arrest, especially for crimes targeted by proactive surveillance, such as drug and trafficking offenses; therefore, individuals with the same probability of recidivism may have different probabilities of recidivism [15]; in this sense, the principles of both criminal and procedural law must be respected. ...
Conference Paper
Full-text available
El objetivo del estudio es dar a conocer el estado actual de la aplicación de la Inteligencia Artificial en la Justicia Penal en ley comparativa. Describir qué actores y operadores de justicia aplican y explicar la motivación para su uso en casos específicos, a través de una revisión sistemática de la literatura científica. Para esto, 66 artículos obtenidos de las bases de datos Scopus, Scielo y Dialnet fueron revisados. Entre sus resultados se encontró que la mayor número de autores investigaron la aplicación de Artificial Inteligencia en el Sistema Judicial; Los casos en que artificiales inteligencia se aplican más es para la prevención/predicción de delitos y toma de decisiones judiciales. Se concluyó que Aunque los riesgos del uso de la inteligencia artificial son inevitables. También es ineludible que su uso tenga importantes beneficios. Por lo tanto, es necesario establecer puntos de control antes, durante y después su implementación.
... for body corporates to follow when managing such information, such as getting consent prior to collecting and establishing clear privacy policies. (Authority & Delhi, 2020)(Baranov et al., 2020) It is critical to identify directions for the development of a legislative framework for the regulation of current new technologies and digital processes.(Bikeev et al., 2019) ...
Article
Artificial intelligence has grown to be an essential element of many companies, including the legal sector. It is critical to guarantee that AI is used safely and responsibly throughout the country, and the actions of the Indian government and business organisations are an excellent place to start. As AI becomes more integrated into the legal system and different sectors, various legislative frameworks controlling its application and use in India has been emerging. It becomes essential to understand India's legal framework for AI governance and monitoring. The study focuses on the numerous legal and regulatory frameworks in India that govern the development and application of AI. It also covers several national laws, guidelines, and regulations emphasising responsible and ethical AI implementation along with identification of countries that are encouraging regulators and law makers to implement AI Regulations.
... The idea of recognizing AI as a subject of law contradicts such ideas about the subject of law as socio-legal value, dignity, autonomous legal will, and also conflicts with the composition of the legal relationship, the composition of the offense and is insignificant within the framework of the institution of representation. At the same time, AI does not have the necessary and sufficient characteristics of the subject of law, in particular, it does not have the potential to independently acquire and exercise subjective rights and legal obligations, to bear legal responsibility, to independently make legal decisions, it does not have its own legal interests and aspirations and so forth (Bikeev et al., 2019). At the same time, some authors note that AI can be endowed with separate rights different from the rights of a real individual (Asaro, 2007, p.3). ...
Article
Full-text available
The active use of artificial intelligence leads to the need to resolve a number of ethical and legal problems. The ethical framework for the application and use of data today is highly blurred, which poses great risks in ensuring data confidentiality. In the article, the authors analyzed in detail the main problems in the field of cybersecurity in connection with the active use of AI. The study identified the main types of criminological risks associated with the active implementation of AI. By a separate question, the authors investigated the issues of bringing to responsibility and compensation for damage caused by AI. The authors argue the position about the need to recognize AI as a source of increased danger. It is proposed to use the legal fictitious as a method in which a particular legal personality of AI can be perceived as a non-standard legal position, different from reality.
... When the public is asked questions concerning the adoption of autonomous agents (or robots), one of the first issues that comes up is who is going to be held responsible in the face of a bad decision. Think about specific contexts such as autonomous driving vehicles [88], [89], [90], AI medicine [91], [92], and/or criminal justice [93], [94], [95]. This is investigated at length in -for instance- [58] and in [9]. ...
Article
Full-text available
Automated Guided Vehicles (AGVs) have become a vital part of the automation sector and a key component of a new industrial revolution that promises to: i. automate the entire manufacturing process, ii. increase productivity rates, iii. develop safer workplaces, while iv. maximising profits and reducing running costs for businesses. However, several concerns arise in the face of this very promising revolution. A major issue is how to ensure that AGVs function effectively and safely during interactions with humans. Another one concerns the ethical desirability of pervasive, continuous, and multidimensional couplings (or interactions) between humans and robots. Generally speaking, automated systems, in virtue of their vast sensing capabilities, may pose privacy challenges to their users. This is because such systems can seamlessly gather information about people' behaviors, without people's consent or awareness. To tackle the important issues abovementioned, we performed a systematic literature review [SLR] on AGVs with mounted serial manipulators. We used as an input 282 papers published in the relevant scientific literature. We analysed these papers and selected 50 papers based on certain criteria to find out trends, algorithms, performance metrics used, as well as potential ethical concerns raised by the deployment of AGVs in the industry. Our findings suggest that corporations can effectively rely on AGVs with mounted manipulators as an efficient and safe solution to production challenges.
Chapter
With the characteristics of AI technology, the criminal imputation for the negligence crime involving AI has challenged the traditional imputation theory, which creates a gap in criminal imputation that results from the inability to attribute criminal liability to the appropriate subject of the criminal liability. In order to cope with such challenges, the theory has been working in two main directions to fill the imputation gap: first, the legislative direction of adding AI crimes to the criminal law and the countermeasure direction of establishing the AI criminal law system. Although these two theoretical efforts have various theoretical and practical shortcomings in filling the criminal imputation gap, they provide possibility for discussion on the reasonable criminal imputation. Therefore, it is a sound choice to discuss these theoretical endeavors in detail before arguing for the theoretical premises of imputation for the negligence crime involving AI.
Chapter
Academic research on the crime involving AI initially only focused on such crime that has already occurred and the strategies to deal with it. However, with the increasing penetration of AI applications in our daily life and the growing severity of AI-related crimes, academic research has focused much more on the criminal imputation for the crime involving AI. In this process, the presuppositional question facing scholars is what constitutes the crime involving AI?
Chapter
Since the creation of the internet, technology has evolved and continues to mass produce systems that humans can rely on and make their lives more productive. Artificial intelligence (AI) is one of those software as it is a tool that mimics human intelligence to perform tasks that we sometimes find repetitive and time-consuming. Another important use of this technology is the introduction of cybersecurity, which ensures the protection of a user's sensitive data online to prevent unauthorized use. When cybersecurity and AI interlay, an increment of protection shows a tangible result; it assures greater protection against cybercriminals. First, this chapter discusses and introduces terms related to cybersecurity and AI; then it goes on to widely explore multiple methods of how AI tools are being integrated within the cybersecurity space, as well as what characteristics of AI pose a threat to the protection that cybersecurity offers. The main idea explored in this chapter is how AI tools are being used to improve and revolutionize the way cybersecurity works worldwide.
Conference Paper
Full-text available
Yapay Zeka ve Toplum: Yapay Zeka Sosyolojisiyle Eleştirel Bir Bakış Ulaş Başar Gezgin Özet Yapay zeka ile toplum arasında ilişki hangi formlar almaktadır? Yapay zekanın toplum üzerinde ve toplumun yapay zeka üzerinde etkileri nelerdir? Bu çalışmada, yapay zeka sosyolojisi kapsamına giren çalışmalar taranarak, bu ve benzeri sorulara yanıt olarak bir bireşime ulaşmak hedeflendi. Bir kere, yapay zekanın değişik kullanım alanları var; bunların toplumsal etkileri farklı farklı. Toplum da bir bütün olarak algılanabileceği gibi, değişik kesimlerden oluşan bir karışım olarak da değerlendirilebilir. “Yapay zekanın toplumsal etkileri hangi alanlarda öne çıkıyor?” diye sorarsak, akla ilk olarak, yapay hukuk, tıpta yapay zeka kullanımı, eğitim amaçlı yapay zeka uygulamaları, sürücüsüz araçlar, yapay zekalı silahlar, ‘akıllı’ kent tartışmaları vb. gelecektir. Sosyolojik bir bakışla baktığımızda, emeğini satmak zorunda olan emekçi sınıflar da bir dönüşüm geçirecekler. Kapitalizmin refah toplumu anlayışıyla harmanlandığı ülkelerde, çalışma saatleri ve/ya da günleri azalacak; böylelikle, bireyler, eşe dosta, sanata, spora, belki de bilime daha çok zaman ayırabilecekler. Kapitalizmin daha geri olduğu toplumlarda ise, ‘yapay zekalanma’ süreci daha fazla sömürü ve baskı getirecek. Daha fazla sömürü, çünkü kârlar artarken, ücretlerin düşmesi olası. Daha fazla baskı, çünkü çalışanları gözetleme teknolojileri, hepgöz kameralardan elektronik prangalara kadar evrimleşerek baskıyı arttıracak. Teknolojik ilerlemenin iyimserleri ve kötümserleri var. İyimserler sayıca daha fazla olsa da – son çıkan bir teknolojinin yapabildiklerinden kim etkilenmez ki -, kötümserlerin eleştirilerine kulak vermemiz gerekiyor. İlk soru, teknolojik ilerlemenin toplumun hangi kesimlerine yarar sağlayacağı... İkinci soru, teknolojik ilerlemenin insan hak ve özgürlüklerini ne ölçüde destekleyeceği ve bunlara ne ölçüde ket vuracağı... Eleştirel bir bakış bir kez takınıldı mı, birçok yeni soru ortaya çıkacaktır. Bilim ve teknolojinin insanlık ya da kamu yararına kullanımı da olanaklı, kötüye kullanımı da. Otoriter devletler elinde bilim ve teknoloji, iç tehdit sayılan yurttaşları daha çok baskı altında tutmak ve dış tehdit sayılanlara karşı daha çok askeri harcama yapmak üzere kullanılıyor. Bu kötüye kullanımlara büyük şirketlerin kâr mantığı eşlik ediyor. Geçtiğimiz yıllarda Afganistan’da sivil hedefleri (sehven!) vuran yapay zekalı silahlar ve Çin’de veri ve görüntü işleme bağlamında gözetim teknolojileri, haklar ve özgürlükler yerine iç-dış güvenlik söylemli kötüye kullanımlara örnek olarak verilebilir. Bu çalışmada eleştirel teknoloji çalışmalarının kapısı aralanıyor. Elbette bir metin kısalığında her konuya girilemeyecektir. Ancak yine de, kimi görüşler ortaya atılmış olacaktır. Anahtar Sözcükler: Yapay zeka, yapay zeka sosyolojisi, teknoloji sosyolojisi, eleştirel teknoloji çalışmaları, ve gözetim teknolojileri. Artificial Intelligence and Society: A Critical Look Through Sociology of Artificial Intelligence Ulaş Başar Gezgin Abstract What forms does the relationship between artificial intelligence and society take? What are the effects of artificial intelligence on society and society on artificial intelligence? In this study, it was aimed to reach a synthesis in response to these and similar questions by scanning the studies within the scope of sociology of artificial intelligence. For one thing, artificial intelligence has different uses; their social effects are different. 
The society can be perceived as a whole or as a mixture of different segments. “In which areas do the social effects of artificial intelligence stand out?” If we ask this question, artificial law, the use of artificial intelligence in medicine, artificial intelligence applications for educational purposes, driverless vehicles, artificial intelligence weapons, 'smart' city discussions, etc. will come to mind. When we look at it from a sociological perspective, the working classes that have to sell their labor will also undergo a transformation. In countries where capitalism is blended with the notion of welfare society, working hours and/or days will decrease; In this way, individuals will be able to devote more time to friends, art, sports, and perhaps science. In societies where capitalism is more backward, the process of 'artificial intelligence' will bring more exploitation and oppression. More exploitation, because while profits rise, wages are likely to fall. More pressure, because employee surveillance technologies will evolve from panopticon cameras to electronic shackles, increasing the pressure. Technological progress has its optimists and pessimists. While the optimists outnumber the pessimists - who wouldn't be impressed by what the latest technology can do - we need to listen to the criticisms of the pessimists. The first question is to which segments of society technological progress will benefit. The second question is to what extent technological progress will support and hinder human rights and freedoms. Once a critical view is taken, many new questions will arise. It is possible to use science and technology for the benefit of humanity or the public, and its abuse is also possible. In the hands of authoritarian states, science and technology are used to put more pressure on citizens who are considered internal threats and to spend more on military expenditures against those considered external threats. These abuses are accompanied by the profit logic of large corporations. Artificial intelligence weapons that (inadvertently!) hit civilian targets in Afghanistan in the past years and surveillance technologies in the context of data and image processing in China can be given as examples of abuses with internal-external security discourse instead of rights and freedoms. In this study, the door to critical technology studies is opened. Of course, it is not possible to cover every subject in a short text. However, some opinions will be raised. Keywords: Artificial intelligence, sociology of artificial intelligence, sociology of technology, critical technology studies, and surveillance technologies.
Book
Full-text available
ISTC_PhD Proceedings Book: The International Symposium on Communication and Technology with its Philosophical Dimensions (ISTC_PhD)
Article
Full-text available
The article deals with the influence of robotics units on the life of modern mankind; the possibility of creating an artificial intelligence equal to human intelligence or exceeding its level; the possibility and validity of recognizing artificial intelligence physically embodied in a robotics unit as an object and/or subject of criminal legal relations; and the relationship between information security and artificial intelligence research and its results. Keywords: artificial intelligence; robotics unit; criminal liability of artificial intelligence; criminal liability of a robotics unit; electronic entity; criminal-law measures towards electronic entities. Radutny A. E., Candidate of Legal Sciences, Associate Professor, Department of Criminal Law No. 1, Yaroslav Mudryi National Law University, member of the All-Ukrainian NGO "Criminal Law Association", Kharkiv, Ukraine. Criminal Liability of Artificial Intelligence.
Article
Full-text available
This article analyses the potential benefits and drawbacks of artificial intelligence (AI). It argues that the EU should become a leading force in AI development. As a goal that captures the public imagination and mobilises a variety of actors, the EU should develop mission-based innovations that focus on using this technological leadership to solve the most pressing societal problems of our time whilst avoiding potential dangers and risks. This leadership could be achieved either by adapting the EU’s available instruments to focus on AI development or by designing new ones. Be it seeking a visionary future for AI or addressing concerns about it, progress should always be driven with the human-centred perspective in mind, that is, one that seeks to augment human intelligence and capacity, and not to supersede it.
Article
Full-text available
A recent issue of a popular computing journal asked which laws would apply if a self-driving car killed a pedestrian. This paper considers the question of legal liability for artificially intelligent computer systems. It discusses whether criminal liability could ever apply; to whom it might apply; and, under civil law, whether an AI program is a product that is subject to product design legislation or a service to which the tort of negligence applies. The issue of sales warranties is also considered. A discussion of some of the practical limitations that AI systems are subject to is also included.
Article
Full-text available
This paper attempts a preliminary analysis of the global desirability of different forms of openness in AI development (including openness about source code, science, data, safety techniques, capabilities, and goals). Short-term impacts of increased openness appear mostly socially beneficial in expectation. The strategic implications of medium and long-term impacts are complex. The evaluation of long-term impacts, in particular, may depend on whether the objective is to benefit the present generation or to promote a time-neutral aggregate of well-being of future generations. Some forms of openness are plausibly positive on both counts (openness about safety measures, openness about goals). Others (openness about source code, science, and possibly capability) could lead to a tightening of the competitive situation around the time of the introduction of advanced AI, increasing the probability that winning the AI race is incompatible with using any safety method that incurs a delay or limits performance. We identify several key factors that must be taken into account by any well-founded opinion on the matter.
Article
Full-text available
This paper critically examines the affordances and limitations of big data for the study of crime and disorder. We hypothesize that disorder-related posts on Twitter are associated with police-recorded crime rates. Our results provide evidence that naturally occurring social media data may provide an alternative information source on the crime problem. This paper adds to the emerging field of computational criminology and big data in four ways: (1) it estimates the utility of social media data to explain variance in offline crime patterns; (2) it provides the first evidence of estimating offline crime patterns using a measure of broken windows found in the textual content of social media communications; (3) it tests whether the bias present in offline perceptions of disorder is also present in online communications; and (4) it takes the results of experiments to critically engage with debates on big data and crime prediction.
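To make the kind of analysis described in that abstract concrete, the following Python sketch relates a crude "broken windows" signal extracted from tweet text to recorded crime counts per area. It is an illustration only, not the study's actual pipeline: the disorder lexicon, the per-area data, and the helper functions are hypothetical stand-ins.

    # Minimal sketch (hypothetical data): relating a "broken windows"
    # signal in tweet text to per-area recorded crime counts, in the
    # spirit of the computational-criminology study summarised above.
    import re
    from collections import Counter

    # Hypothetical lexicon of disorder-related terms (illustrative only).
    DISORDER_TERMS = {"graffiti", "litter", "vandalism", "abandoned", "broken"}

    def disorder_score(tweets):
        """Count occurrences of disorder-related terms across tweets."""
        tokens = Counter()
        for text in tweets:
            tokens.update(re.findall(r"[a-z']+", text.lower()))
        return sum(tokens[t] for t in DISORDER_TERMS)

    def pearson_r(xs, ys):
        """Plain Pearson correlation, to avoid external dependencies."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx ** 0.5 * vy ** 0.5)

    # Hypothetical per-area inputs: (tweets, recorded crime count).
    areas = {
        "A": (["broken windows and graffiti everywhere", "nice park"], 42),
        "B": (["lovely cafe", "sunny day"], 7),
        "C": (["litter and vandalism again", "abandoned house nearby"], 31),
    }
    scores = [disorder_score(tweets) for tweets, _ in areas.values()]
    crimes = [count for _, count in areas.values()]
    print(pearson_r(scores, crimes))  # crude association, not causation

A real replication would, among other things, geocode tweets to census areas, control for population and tweet volume, and use a proper regression model rather than a raw correlation; the sketch only shows the shape of the measurement step.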
Article
The future of cybersecurity is in flux. Artificial intelligence challenges existing notions of security, human rights, and governance. Digital misinformation campaigns leverage fabrications and mistruths for political and geostrategic gain. And the Internet of Things—a digital landscape in which billions of wireless objects from smart fridges to smart cars are tethered together—provides new means to distribute and conduct cyberattacks. As technological developments alter the way we think about cybersecurity, they will likewise broaden the way governments and societies will have to learn to respond. This policy brief discusses the emerging landscape of cybersecurity in Canada and abroad, with the intent of informing public debate and discourse on emerging cyber challenges and opportunities.