Criminological Risks and Legal Aspects of Artificial
Intelligence Implementation
Igor Bikeev
Department of criminal law and
procedure
Kazan Innovative University named
after V.G. Timiryasov
Kazan, Russian Federation
bikeev@ieml.ru
Pavel Kabanov
Director of the anti-corruption center
Kazan Innovative University named
after V.G. Timiryasov
Kazan, Russian Federation
kabanov@chl.ieml.ru
Ildar Begishev
Research Department
Kazan Innovative University named
after V.G. Timiryasov
Kazan, Russian Federation
begishev@mail.ru
Zarina Khisamova
Research Department
Krasnodar University of the Ministry of Internal Affairs of the Russian Federation
Krasnodar, Russian Federation
alise89@inbox.ru
ABSTRACT
The use of AI inevitably leads to the problem of ethical choice and raises legal issues that require prompt intervention. The article presents the results of a detailed study of the opinions of leading scientists involved in studying the social aspects of AI. The key characteristics of AI that carry criminological risks are identified, the types of criminological risks of using AI are determined, and the authors' classification of these risks is proposed. The results of a detailed analysis of the legal regulation of the legal personality of AI are presented, and options are formulated for bringing to justice those responsible for the actions of a self-learning AI that has decided to commit an act or omission qualifying as a crime. The authors argue the need for a clear, rigorous and effective definition of ethical frameworks in the development, design, production, use and modification of AI, and make the case for recognizing AI as a source of increased danger. The paper analyzes the content of the European Parliament resolution on the possibility of endowing AI with "legal status". Special attention is paid to the question of giving AI a legal personality. It is proposed to use legal fiction as a technique whereby the specific legal personality of AI can be perceived as a non-standard legal position, different from reality. It is assumed that such a decision can remove a
number of legal restrictions that exist today and prevent the
active involvement of AI in the legal space.
CCS CONCEPTS
• Social and professional topics → Computing / technology policy → Government technology policy → Governmental regulations
• Social and professional topics → Computing / technology policy → Computer crime
• Security and privacy → Human and societal aspects of security and privacy
KEYWORDS
Artificial intelligence, Intelligent technology, Robot, Machine
learning, Criminological risks, Criminological features, The risks
of the use of artificial intelligence, Threat of use of artificial
intelligence, The criminal capacity of artificial intelligence,
Crimes with the use of artificial intelligence, Technological
singularity, Legal personality, Criminal liability, Legal fiction,
European parliament resolution, The source of high risk,
Liability of artificial intelligence
1 Introduction
One of the oldest challenges facing humanity is the pressing need to create devices that can simplify life. After heavy physical labor was handed over to mechanized and automated machines, society began to think about creating machines capable of performing intellectual (mental) work, which for a long time had been a purely human prerogative. Turing argued that machines, like people, can use available information and reason in order to solve problems and make decisions; in addition, he described the test (later named after him) that makes it possible to determine when machines become comparable to the human mind [1].
Six years later, a proof of the possibility of creating AI was presented at a conference organized by McCarthy and Minsky at Dartmouth College, where the term "artificial intelligence" (AI) was coined by J. McCarthy. The Dartmouth conference was the starting point for AI research, which has gone through ups and downs for more than six decades [2]. Nowadays AI is actively applied in all subject areas and its capabilities are continuously expanding; the pace of AI development gains unprecedented momentum every day. According to Markets and Markets Research [3], by 2020 the AI market will grow to $5 billion through the use of machine learning technologies and intelligent language recognition, and by 2030 global GDP will grow by 14%, or 15.7 trillion US dollars, due to the active use of AI. 72% of the largest corporations in the world consider AI to be the business advantage of the future [4]. The key areas of vigorous AI implementation over the next 10 years will be: health care; transport (by 2025 the supply of AI-based systems for autonomous vehicles will exceed $150 million); and financial services and technology (hedge funds that use AI demonstrate much better results than those driven by people [5], and the assets under management of robo-advisors worldwide are expected to increase eightfold between 2018 and 2020 [6]). The changes will also affect logistics, retail, industry and the global speech recognition market [7].
However, the average inhabitant of the planet uses AI much more often than he or she realizes: only 33% of respondents believe that they use AI-enabled technology, while 77% actually use an AI-enabled service or device [8].
The undoubted advantages of introducing AI, which can relieve mankind of grunt work and enable the transition to creative activities that machines are not capable of, are seen not only by large corporations but also by ordinary people: 61% of the 6,000 people surveyed said they believe that AI will make the world a better place [9].
However, the ongoing growth of investment and the expanding areas of AI implementation have led humanity to become aware of the "reverse side" of the coin called "artificial intelligence". The scientific and expert community has long pondered questions that humanity still cannot resolve. Do we, as a modern society, understand what AI is, and what risks does its creation and turnover entail? Will we be able to extract the benefits of using AI while avoiding negative consequences? Are there regulatory mechanisms for controlling AI? Is global legislation ready to regulate situations in which AI participates in the infringement of legally protected relationships? And will the title of the book "Our Final Invention: Artificial Intelligence and the End of the Human Era" [10] prove prophetic? The present study attempts to find answers to these ambiguous and at the same time topical issues.
2 Material and Method
In 2007, when asked in an interview "What is Artificial Intelligence?", J. McCarthy replied that it is the science and engineering of making intelligent machines and systems, especially intelligent computer programs, designed to understand human intelligence; the methods used are not necessarily biologically plausible [2].
In this study, we understand AI as a collective term for intelligent computer software (AI systems, AI technologies) that can analyze the environment, think, learn, and respond to what it "feels".
In the broad sense of the word, AI is a kind of intelligent system capable of making decisions on its own. Such a system represents the development of computer functions related to human intelligence, such as reasoning, learning and problem solving. In other words, AI is the transfer of the human capabilities of mental activity to the plane of computer and information technologies, but without inherent human vices [11]. Scientists dealing with this issue are intensively studying the prospects for recognizing an electronic person and the place of the human being in such a world [12].
We agree with Ponkin and Redkina that AI is an artificial complex cybernetic computer-software-hardware system with a cognitive-functional architecture and its own or available (attached) computing power of the required capacity and speed [13].
A similar conclusion was reached by Morkhat, who considers AI to be a fully or partially autonomous, self-organizing computer-hardware-software virtual or cyber-physical (including bio-cybernetic) system (unit) endowed with or possessing the corresponding abilities and capabilities. At the same time, the author notes that the active use of AI units entails the emergence of multiple uncertainties, difficulties and problems in the legal space. We share this position [14].
Modern scientific and technical literature offers many different classifications of AI systems and their applications. However, taking into account the methodology of this study, we adhere to a classification of AI that reflects the applied aspects of modern information and telecommunications technology [15].
In the PwC study, all types of AI are divided into two groups depending on whether they interact with people in their activities. The first group, AI that interacts with people, includes stable systems such as assisted (auxiliary) intelligence, which helps people accomplish tasks faster and better; stable systems are not able to learn from their interactions. This group also includes adaptive (augmented) intelligence. The second group, which does not interact with humans, includes automated intelligence, designed to automate mechanical/cognitive and routine tasks; its activity is not related to the implementation of new tasks but lies in the field of automating existing ones. This group also includes "autonomous intelligence" [4]. Such AI is able to adapt to different situations and act independently without human intervention, committing, for example, identity theft [16].
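To make the grouping tangible, the PwC taxonomy just described can be modeled as a small data structure. The sketch below is only an illustration: the class and attribute names are ours, and the two boolean features reflect our reading of the two-group division (human interaction, ability to learn from interaction), not PwC's own notation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIType:
    """One category from the PwC grouping described above."""
    name: str
    interacts_with_humans: bool       # group 1 vs. group 2
    learns_from_interaction: bool     # "stable" systems cannot

PWC_TAXONOMY = [
    AIType("assisted intelligence",   interacts_with_humans=True,  learns_from_interaction=False),
    AIType("augmented intelligence",  interacts_with_humans=True,  learns_from_interaction=True),
    AIType("automated intelligence",  interacts_with_humans=False, learns_from_interaction=False),
    AIType("autonomous intelligence", interacts_with_humans=False, learns_from_interaction=True),
]

# Example: list the types that act without human intervention.
print([t.name for t in PWC_TAXONOMY if not t.interacts_with_humans])
# ['automated intelligence', 'autonomous intelligence']
```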
3 Theoretical Background
The greatest minds of our time, such as Hawking [17], Gates and Musk, bring to the fore the problem of the technological singularity: the moment when computers in all their manifestations become smarter than people. According to Kurzweil, when this
happens, computers will be able to improve themselves exponentially and reproduce themselves, and their intelligence will be billions of times faster than that of humans [18]. According to Bostrom, within 60 years AI will become a serious threat to humanity: by 2022 the similarity between the thinking processes of robots and humans will be about 10%, by 2040 about 50%, and by 2075 robots' thinking processes will no longer be distinguishable from human ones, the similarity reaching 95%. However, the pace of technological development suggests that the process may be much faster [19].
It should be noted that it is not only the greatest minds of mankind who are concerned about the threats carried by the development of AI. According to an independent global survey conducted by the innovative companies Northstar and Arm (2018), just over a third of the people surveyed think that AI is already having a significant impact on their daily lives. Although more than half of respondents expect a better future for society thanks to AI, a fifth expect the worst. There are also regional differences: residents of the North American continent and Europeans expressed concern about the reliability of machines with AI, while in Asia there is a genuine fear that machines with AI are becoming more intelligent than people. Overall, the vast majority of respondents (85%) are concerned with ensuring the security of AI technology [20].
4 The Problem of Moral and Ethical Choice of
AI
Professor R. Arkin, who is engaged in the development of robots for military needs, notes that his research carries significant ethical risks, since his developments could be used for criminal purposes. The use of AI in wartime can save thousands of lives, but weapons that are intelligent and autonomous pose a threat even to their creators. Meanwhile, within 3-5 years military complexes equipped with artificial intelligence may be adopted by the strongest armies of the world. Thus, the US Air Force announced that it is launching the Skyborg program, which is designed to determine whether drones with artificial intelligence (AI) can help pilots better perform combat missions; the first test drones are planned for 2023. By 2023 the Russian Navy also plans to put into service a complex of self-learning minefields, equipped with elements of artificial intelligence and able to decide on their own when and what target to blow up [21]. Foreseeing the potential risks of his developments, R. Arkin created a set of algorithms (an "ethical guide") designed to help robots act on the battlefield: in what situations to cease fire, and in what situations to seek to minimize the number of victims [22].
However, while the potential risks of using AI in the military sphere are obvious and scientists are already working to minimize them, the situation is quite different in the "peaceful" areas of AI application.
The use of AI will inevitably lead to the problem of ethical choice. For example, the AI used in an unmanned vehicle must, under force majeure conditions, make a choice: which road users' lives to save.
At the beginning of 2016, a large-scale study called "Moral Machine" ("Ethics for a car") was launched at the Massachusetts Institute of Technology. A special website was created on which users simulated situations involving a self-driving car under different scenarios and were given the opportunity to choose, in an emergency on the road, whose lives to sacrifice first in an accident whose tragedy is already imminent. Analysis of 2.3 million responses showed that respondents most often prefer to save people rather than animals, and the young rather than the elderly. In addition, the study showed that the gender of the potential victim and the religious preferences of respondents play a significant role in the choice: men are less likely to spare women, and religious people most frequently give preference to saving a human rather than an animal [23]. Representatives of the German automobile company Mercedes-Benz in turn noted that their cars would give priority to passengers. The German Ministry of Transport immediately replied that making such a choice on the basis of a set of criteria would be illegal and that, in any case, the manufacturer would be held responsible [24].
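The forced choice described above can be made concrete with a toy decision rule. The sketch below is purely illustrative: the preference weights are invented for this example and only loosely mirror the aggregate tendencies reported in [23] (people over animals, the young over the elderly); no real vehicle is claimed to work this way.

```python
# Toy illustration of a forced "whom to spare" choice in an unavoidable
# accident. The weights are invented for this example and loosely mirror
# the aggregate Moral Machine preferences; they are NOT a real policy.

def group_score(group):
    """Score a group of road users; the higher-scoring group is spared."""
    weights = {"human_child": 3.0, "human_adult": 2.0,
               "human_elderly": 1.5, "animal": 0.5}
    return sum(weights[member] for member in group)

def choose_to_spare(group_a, group_b):
    return "A" if group_score(group_a) >= group_score(group_b) else "B"

# Example scenario: pedestrians on one trajectory vs. a dog on the other.
print(choose_to_spare(["human_adult", "human_child"], ["animal"]))  # 'A'
```

The point of the sketch is that any such rule necessarily hard-codes a moral hierarchy, which is precisely the kind of choice the German Ministry of Transport declared it illegal to make on the basis of predefined criteria.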
Thus, it is essential to emphasize the need for a clear,
rigorous and effective ethical framework in the design,
construction, production, use and modification of AI.
The sphere of health care is no exception: here the introduction of AI into the treatment and diagnosis of oncological diseases has had mixed consequences. Internal documents of IBM, one of the world's largest manufacturers and suppliers of hardware and software, leaked in the summer of 2018, indicate that Watson Health, the medical AI it developed, which is used in 230 hospitals around the world to treat 13 types of cancer in 84,000 patients, makes medical errors: Watson Health offers incorrect treatments that can lead to the death of the patient [25].
5 Key Characteristics of AI that Pose a Threat
to Cybersecurity
The problem of ensuring the security of confidential information is one of the key issues for all subjects of the digital economy, and it includes the problem of cybersecurity involving AI [26].
According to InfoWatch, the largest Russian manufacturer of solutions for protecting organizations from internal and external threats as well as from information attacks, more than 920 incidents related to leaks of confidential information from organizations of various forms of ownership were registered in the first half of 2017 alone. The incident data cover all leaks in all foreign
countries, information about which is published in the media,
blogosphere, social networks and other network resources.
To overcome this problem, companies conduct ongoing research in this area. For example, the American corporation Google has developed a program called Federated Learning, in which a lightweight version of the TensorFlow software allows AI to be used and trained directly on a smartphone. Instead of collecting and storing information in one place, on Google's servers, for further work with new algorithms, the learning process takes place directly on the mobile device of each user; in effect, the phone's processor is used as an auxiliary means of teaching the AI. The advantage of this approach is that confidential information never leaves the user's device, while application updates are applied in real time. The information is later collected anonymously by Google and used to make general adjustments to the application [27].
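The mechanism just described can be illustrated with a minimal federated-averaging sketch. This is a toy model of the idea, not Google's actual Federated Learning implementation; the linear model, the simulated data and all names below are ours.

```python
import numpy as np

# Minimal sketch of the federated learning idea described above: each
# device improves the model locally on its own private data and sends
# back only the model update, never the raw data.

def local_update(global_weights, X, y, lr=0.01, epochs=5):
    """Train on one user's device; only the weight delta leaves it."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)    # least-squares gradient
        w -= lr * grad
    return w - global_weights                # the update, not the data

def federated_round(global_weights, devices):
    """The server averages the anonymized updates from all devices."""
    updates = [local_update(global_weights, X, y) for X, y in devices]
    return global_weights + np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(10):                          # 10 simulated phones
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

w = np.zeros(2)
for _ in range(200):                         # 200 communication rounds
    w = federated_round(w, devices)
print(w)  # approaches [2.0, -1.0] while raw data never leaves a "device"
```

The design point illustrated here matches the text: the raw data stays on each simulated device, and the server only ever sees averaged model updates.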
The world community is concerned about the use of AI for criminal purposes. In early 2017, the FBI held a major conference on the use of AI by law enforcement agencies and criminals. It was noted at the conference that the data of Interpol, Europol, the FBI and law enforcement agencies of other countries, as well as the results of studies by leading universities, indicate a lack of activity by criminal structures in creating their own developments in the field of AI. According to Ovchinsky [28], despite the lack of information about cybercriminals' developments in the field of AI, the potential for such a phenomenon exists. Cybercriminals have plenty to choose from to create their own powerful AI platforms. Almost all open-source AI developments are containers: a container is a platform into which, with the help of an API, any third-party programs, services, databases and the like can be mounted. Whereas previously, when creating a program or service, everyone had to develop algorithms from beginning to end and then translate them into code using one or another programming language, today products and services can be created in the same way as builders build a house: from standard parts delivered to the construction site (see the sketch below).
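A minimal sketch of this "standard parts" style of assembly is given below: ready-made components are mounted into a container-like host through one common interface instead of being written from scratch. All component names are invented for the example.

```python
# Illustrative sketch of the "standard parts" assembly described above:
# a product built by mounting ready-made components behind one common
# interface rather than coding every algorithm from scratch. All the
# component names below are invented for this example.

from typing import Callable, List

class Pipeline:
    """A container-like host: third-party parts plug in through one API."""
    def __init__(self):
        self.stages: List[Callable[[dict], dict]] = []

    def mount(self, component: Callable[[dict], dict]):
        self.stages.append(component)
        return self

    def run(self, payload: dict) -> dict:
        for stage in self.stages:
            payload = stage(payload)
        return payload

# "Off-the-shelf" parts: in practice these would be external services.
def speech_to_text(p): p["text"] = "(transcribed) " + p["audio"]; return p
def translate(p): p["text"] = p["text"].upper(); return p  # stand-in
def store(p): p["stored"] = True; return p

app = Pipeline().mount(speech_to_text).mount(translate).mount(store)
print(app.run({"audio": "hello"}))
```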
Thus, the use of AI for criminal purposes increases public danger [29]. At the same time, the use of open-source communications for crime assessment seems to be a promising idea, including in the era of big data [30].
According to studies, the most active areas of AI implementation are medical applications (disease diagnosis programs), digital assistant services, and autonomous vehicles [9]. At the same time, an error by an AI diagnosis program that makes an incorrect diagnosis can lead to incorrect treatment of the patient and, as a consequence, possible harm to his or her health [31].
The analysis of trends in the creation and use of AI allowed us to identify two types of criminological risk of using AI: direct and indirect.
The direct criminological risk of using AI is the risk associated with the direct effect on a person or citizen of a danger caused by the use of AI.
These risks include:
1. An AI with the ability to self-learn decides to commit an act or omission that constitutes a crime. The criminal act implies the deliberate commission by the AI system of a socially dangerous attack on: human life and health; the freedom, honor and dignity of the individual; the constitutional rights and freedoms of man and citizen; public security; or the peace and security of mankind, causing socially dangerous consequences.
2. Intentional actions involving the software of the AI system cause socially dangerous consequences. The criminal act implies illegal access to the system, resulting in damage to or modification of its functions, as a result of which a crime was committed.
3. An AI is created by criminals to commit crimes.
The indirect criminological risk of using AI is the risk associated with unintended hazards in the context of the use of AI.
These risks include:
1. Random errors in the software of the AI system (errors made by the developer of the AI system) that led to the commission of a crime.
2. Errors made by the AI system during its operation.
Kingston [32] described a hypothetical situation in 2023 in which an AI-driven car moving through city streets knocks down a pedestrian, and asked who would bear criminal responsibility. Unfortunately, the situation became real at the beginning of 2018, when an unmanned vehicle of an American international company hit a woman in the US state of Arizona due to the program's features [33]. According to Leenes and Lucivero [34], responsibility for the actions of, and harm caused by, the AI is borne by the person who programmed it or the person responsible for its operation, within the limits established by law.
6 The Problem of Criminal Prosecution for
Illegal Actions of Artificial Intelligence
The issue of bringing AI to criminal responsibility for committing a crime is inextricably linked with granting legal personality to AI. In legal science there is currently an active discussion on this issue, catalyzed by the granting of citizenship of the Kingdom of Saudi Arabia to the humanoid robot Sophia, developed by D. Hanson, a specialist of the Hong Kong company Hanson Robotics. Nevejans concludes that the idea of creating an independent legal personality for AI is untenable, since in this case human rights are extrapolated to the actions of AI; she draws an analogy with proposals to grant legal personality to animals "having consciousness and capable of having feelings", for example dolphins [35].
On the contrary, Uzhov concludes that AI endowed with the ability to analyze and compile a behavioral algorithm regardless of program presets needs legal regulation, and emphasizes that this activity should begin with the introduction of a new subject of law [36].
It should be noted that in the European Union, after a number of tragic incidents involving unmanned vehicles, the possibility of granting robots legal status began to be widely discussed, and,
as a consequence, the possibility of bringing an electronic person (electronic entity) to justice. Thus, the European Parliament Resolution of February 16, 2017, "Civil Law Rules on Robotics", together with the recommendations of the Commission on civil law regulation in the field of robotics, is aimed at regulating the legal status of robots in human society through the following actions: the creation of a special European agency for robotics and AI; the development of a regulatory definition of a "reasonable autonomous robot"; the development of a registration system for all versions of robots, together with a classification system; the development of requirements for developers to provide guarantees to prevent risks; and the development of a new reporting structure for companies using or needing robots, which will include information on the impact of robotics and AI on the company's economic performance [37].
It is noteworthy that the text of the above-mentioned European Parliament Resolution notes the continuous evolution of robots, which has given them not only the ability to perform certain activities but also the capacity for self-learning and for taking autonomous, "independent" decisions.
Scientists point to the difficulty, predetermined by the polymorphic nature of such systems, of determining the subject responsible for failures in the operation of AI [38]. One must agree with Morkhat's opinion on the inexpediency and incorrectness of imposing responsibility on the designers and developers of AI, which is a complex of separately developed equipment and devices whose final decisions depend largely on the situation of its application and the tasks assigned to it [14].
Foreseeing the possible risks and negative consequences of AI's independent decisions, the draft resolution proposes an in-depth analysis of such consequences, on the basis of which effective mechanisms should be developed for insuring damage caused by AI (for example, unmanned vehicles), creating compensation insurance funds to cover damages, and mandatory registration of AI put into operation [39]. It is also possible to integrate into the mechanism a safety switch, as well as software for the immediate shutdown of all processes in emergency situations [40].
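The "safety switch" idea lends itself to a simple watchdog pattern, sketched below. This is schematic only: real emergency-stop mechanisms for autonomous systems are engineered to far stricter standards, and every name here is ours.

```python
import threading
import time

# Schematic sketch of the "safety switch" idea: a shared switch that halts
# all of the system's processes the moment an emergency is declared. Real
# emergency-stop designs are far stricter; this only shows the pattern.

class SafetySwitch:
    def __init__(self):
        self._tripped = threading.Event()

    def trip(self, reason: str):
        print(f"EMERGENCY STOP: {reason}")
        self._tripped.set()          # every worker sees this immediately

    def active(self) -> bool:
        return not self._tripped.is_set()

def worker(name: str, switch: SafetySwitch):
    while switch.active():           # check the switch on every cycle
        time.sleep(0.1)              # ... do one unit of the AI's work ...
    print(f"{name} halted")

switch = SafetySwitch()
threads = [threading.Thread(target=worker, args=(f"process-{i}", switch))
           for i in range(3)]
for t in threads:
    t.start()

time.sleep(0.5)
switch.trip("operator pressed the kill switch")
for t in threads:
    t.join()
```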
In this regard, the forecast described in the European Parliament Resolution on the possibility of granting robots "legal status" is of particular interest. It should be noted, however, that the "legal status" of an electronic person would differ significantly from that of an individual. Among legal scholars dealing with the legal personality of AI there are three key approaches:
– conferring on AI a legal personality corresponding to that of a human;
– conferring on AI a legal personality similar to the status of a legal entity;
– endowing AI with a limited legal personality [41].
We agree with Nevejans that the legal personality of AI cannot be equated with that of a human or the legal status of a legal entity. A person with legal status acts on the basis of mental processes, guided by subjective beliefs; behind the actions of a legal entity there are also individuals, without whom the activity of the legal entity is impossible. AI, in turn, acts independently, without possessing consciousness or feelings [35].
At the same time, some authors note that AI can be endowed with separate rights, different from the rights of a real individual [27]. In this case it is appropriate to speak of legal fiction: a technique in which the specific legal personality of AI can be perceived as a non-standard legal position, different from reality. It is known that the existence of legal fictions is caused by the need to overcome legal gaps and eliminate uncertainty in public relations. We believe that such a solution can remove a number of legal restrictions that exist today and prevent the active involvement of AI in the legal space.
Hallevy notes that the mandatory elements of a crime are criminal behavior (actus reus) and an internal, mental element (mens rea). With regard to AI, the mental element cannot be established, and so AI cannot be recognized as having committed a crime [42]. However, as Hallevy and Kingston rightly note, the Anglo-Saxon system of law recognizes certain offenses ("serious assaults") for which establishing mens rea is not mandatory, and it is argued that in these circumstances AI may be recognized as the subject of an infringement.
In the Russian criminal law doctrine, however, such an approach is impossible, owing to the mandatory presence of all four elements of corpus delicti, including the subjective side, for all crimes without exception.
Hallevy also questioned the applicability of criminal liability measures to such a perpetrator and, as a consequence, the achievability of the purposes of criminal punishment [42]. There are well-founded doubts about the effectiveness of applying traditional forms of criminal punishment, such as a fine or deprivation of liberty, to AI for the purpose of re-educating the "criminal AI".
Nevertheless, some Western legal scholars hold the opinion that robots and AI should be brought to criminal responsibility. According to Ying Hu of Yale University, special types of punishment should be provided for AI, such as deactivation, reprogramming, or conferring the status of a "criminal", which would serve as a warning to all participants [43].
Uzhov holds a similar opinion, according to which the "rehabilitation" of AI can be realized only through its complete reprogramming, which in a sense is comparable to a lobotomy performed on a person: an absolute and probably irreversible change in the properties of the AI. The second way is the disposal of the machine [36].
We agree with Ponkin and Redkina that legal support of AI should be developed consistently (though intensively), taking into account a preliminary study of all the risks that can be assumed at the present stage of technology development and the specifics of the use of artificial intelligence in various spheres of life. At the same time, it is essential to ensure a balance between the interests of society and of individuals, including security and the need to develop innovations in the interests of society [13].
7 Summary
The foregoing information testifies to the high criminological risk of the use of AI embedded in intelligent technologies, and to the weak theoretical readiness of the science of criminology to study the problem under consideration.
In the near future (approximately 10-15 years), the pace of development of systems and devices with AI will necessitate a total revision of all branches of law. In particular, the institutions of intellectual property, the tax regime and others will require profound reworking, which will ultimately lead to the need to resolve the conceptual problem of endowing an autonomous AI with certain "rights" and "duties".
In our opinion, the best way is to endow AI with a specific "limited" legal personality (through the application of legal fiction), in the sense of obliging an autonomous AI to bear responsibility for the harm and negative consequences it causes.
This approach will undoubtedly require a rethinking of the key postulates and principles of criminal law, in particular the institutions of the subject and the subjective side of the crime. At the same time, in our opinion, AI systems will require the creation of an independent institution of criminal law, unique in its essence and content and different from the traditional anthropocentric approach. Within the framework of this institution, it seems appropriate to provide for an understanding of the subject different from the traditional one, based on a symbiosis of the technical and other characteristics of AI, as well as alternative types of responsibility, such as deactivation, reprogramming or conferral of the status of "criminal", which would serve as a warning to all participants in legal relations. We believe that such a solution can minimize the criminological risks of using AI in the future.
REFERENCES
[1] Turing A (1950). Computing Machinery and Intelligence. Mind, New Series,
59(236), 433-460.
[2] McCarthy J, Minsky M L, Rochester N and Shannon C E (2006). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. August 31, 1955. AI Magazine, 27(4), 12-14.
[3] Markets and Markets Research Private Ltd (2018). AI in Fintech Market by
Component (Solution, Service), Application Area (Virtual Assistant, Business
Analytics & Reporting, Customer Behavioral Analytics), Deployment Mode
(Cloud, On-Premises), and Region. Global forecast to 2022. Retrieved from:
https://www.marketsandmarkets.com/Market-Reports/ai-in-fintech-market-
34074774.html.
[4] PwC (2017). Artificial intelligence: do not miss the benefit. Retrieved
from: https://www.pwc.ru/ru/press-releases/2017/artificial-intelligence-
enlargement.html.
[5] Reiff N (2017). Artificial Intelligence Hedge Funds Outperforming Humans.
Investopedia. Retrieved from: https://www.investopedia.com/news/artificial-
intelligence-hedge-funds-outperforming-humans/#ixzz4YszizhII.
[6] McWatters R J (2018). The New Physics of Financial Services. Part of the Future of Financial Services series: understanding how artificial intelligence is transforming the financial ecosystem. Deloitte, World Economic Forum, 167. Retrieved from: http://www3.weforum.org/docs/WEF_New_Physics_of_Financial_Services.pdf.
[7] Statista (2017). Artificial Intelligence. Report. Retrieved from:
https://www.statista.com/study/50485/artificial-intelligence/.
[8] Pega (2017). What Consumers Really Think About AI. Retrieved from: https://www1.pega.com/system/files/resources/2017-11/what-consumers-really-think-of-ai-infographic.pdf.
[9] Arm Limited (2017). Global Artificial Intelligence Survey. Retrieved from:
http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/.
[10] Barrat J (2013). Our Final Invention: Artificial Intelligence and the End of the Human Era. New York: Thomas Dunne Books, St. Martin's Press.
[11] Afanasyev A (2018). Artificial intelligence or intelligence of subjects of
detection, disclosure and investigation of crimes: what will win? Library CSL.
Scientific journal, No. 3(38), 28-34.
[12] Carriço G (2018). The EU and artificial intelligence: a human-centred
perspective. European View, 17(1), 29-36.
[13] Ponkin I V and Redkina A I (2018). Artificial intelligence from the point of
view of law. Bulletin of the Russian University of friendship of peoples. Series:
Legal Sciences, 1(22), 91-109.
[14] Morkhat P M (2017). Artificial intelligence: legal view. M.: BukiVedi, 257.
[15] Amores J (2013). Multiple instance classification: review, taxonomy and
comparative study. Artificial Intelligence, 201, 81-105.
[16] Marron D (2018). Alter Reality: Governing the Risk of Identity Theft. The
British Journal of Criminology, 48(1), 20-38.
[17] Hawking S (2018). Brief Answers to the Big Questions. London: Random House LLC, 256.
[18] Kurzweil R (2006). The singularity is near: when humans transcend biology.
Penguin Books, 672.
[19] Bostrom N (2016). Strategic Implications of Openness in AI Development.
Technical Report, 1. Retrieved from: https://www.fhi.ox.ac.uk/reports/2016-
1.pdf.
[20] ARM, Northstar survey (2018). AI today. AI tomorrow. Awareness, acceptance
and anticipation of AI: a global consumer perspective. Retrieved from:
https://pages.arm.com/rs/312-SAX-488/images/arm-ai-survey-report.pdf.
[21] Ramm A and Kozachenko A (2019, March 5). A good face on a naval game, the
Navy will have the ammunition with artificial intelligence. Retrieved from:
https://iz.ru/841783/aleksei-ramm-aleksei-kozachenko/khoroshaia-mina-pri-
morskoi-igre-flot-poluchit-boepripasy-s-iskusstvennym-intellektom.
[22] Rutkin A (2014, September 13). The robot's dilemma. New Scientist, issue 2986.
[23] Casmi E (2018, October 26). Opinion of millions of people: autonomous cars must run over the elderly and save the young. The network edition "Cnews". Retrieved from: http://www.cnews.ru/news/top/2018-10-26_mnenie_millionov_chelovek_bespilotnye_avto_dolzhny.
[24] Karlyuk M V (2018). Investments in the future: artificial intelligence. Non-
profit partnership “Russian Council for international Affairs”. Retrieved from:
http://russiancouncil.ru/analytics-and-comments/analytics/eticheskie-i-
pravovye-voprosy-iskusstvennogo-intellekta/.
[25] Kolenov S (2018, July 27). AI-oncologist IBM Watson was convicted of medical
errors. Network edition “Hightech.plus” Retrieved from:
https://hightech.plus/2018/07/27/ii-onkologa-ibm-watson-ulichili-vo-
vrachebnih-oshibkah.
[26] Wilner A S (2018). Cybersecurity and its discontents: artificial intelligence, the
Internet of things, and digital misinformation. International Journal, 73(2),
308-316.
[27] Vincent J (2017). Google is testing a new way of training its AI algorithms
directly on your phone. The Verge. Retrieved from:
https://www.theverge.com/2017/4/10/15241492/google-ai-user-data-federated-
learning.
[28] Ovchinsky V S (2018). Criminology of the digital world. M.: Norm: INFRA–M,
352.
[29] Van der Wagen W and Pieters W (2015). From cybercrime to cyborg crime:
botnets as hybrid criminal actor-networks. The British Journal of Criminology,
55(3), 578-595.
[30] Williams M L, Burnap P and Sloan L (2017). Crime Sensing with Big Data: The Affordances and Limitations of Using Open-Source Communications to Estimate Crime Patterns. The British Journal of Criminology, 57(2), 320-340.
[31] Momi E De and Ferrigno G (2010). Robotic and artificial intelligence for
keyhole neurosurgery: the ROBOCAST project, a multi-modal autonomous
path planner. Proceedings of the Institution of Mechanical Engineers, part H:
Journal of Engineering in Medicine, 224(5), 715-727.
[32] Kingston J K (2016). Artificial intelligence and legal liability. Research and
Development in Intelligent Systems XXXIII: Incorporating Applications and
Innovations in Intelligent Systems XXIV: Conference Paper, 269-279.
[33] Bergen M (2018, March 19). Uber halts autonomous car tests after fatal crash in Arizona. Bloomberg. Retrieved from: https://www.bloomberg.com/news/articles/2018-03-19/uber-autonomous-car-involved-in-fatal-crash-in-arizona.
[34] Leenes R and Lucivero F (2014). Laws on Robots, Laws by Robots, Laws in
Robots: Regulating Robot Behavior by Design. Law, Innovation and
Technology, 6(2), 194-222.
[35] Nevejans N (2016). European civil law rules in robotics: study. Policy Department C: "Citizens' Rights and Constitutional Affairs", European Parliament's Committee on Legal Affairs. PE 571.379, 15. Retrieved from: http://www.europarl.europa.eu/RegData/etudes/STUD/2016/571379/IPOL_STU%282016%29571379_EN.pdf.
[36] Uzhov F W (2017). Artificial intelligence as subject rights. Gaps in Russian
legislation, 3, 357-360.
[37] Delvaux M (2016). Draft Report with recommendations to the Commission on
Civil Law Rules on Robotics (2015/2103(INL)). Committee on Legal Affairs,
European Parliament, PE582.443v01-00, 22 p. Retrieved from:
http://www.europarl.europa.eu/sides/getDoc.do?pubRef=%2F%2FEP%2F%2FN
ONSGML%20COMPARL%20PE-
582.443%2001%20DOC%20PDF%20V0%2F%2FEN.
[38] Prakken H (2016). How AI & law can help autonomous systems obey the law:
a position paper. In Proceedings of the 22nd European Conference on
Artificial Intelligence Workshop on Artificial Intelligence for Justice. Hague:
VU University Amsterdam, 42-46. Retrieved from:
http://www.ai.rug.nl/~verheij/AI4J/papers/AI4J_paper_12_prakken.pdf.
[39] Del Castillo A P (2017). A law on robotics and artificial intelligence in the EU? The Foresight Brief. Brussels: European Trade Union Institute, 2, 12 p. Retrieved from: https://www.etui.org/content/download/32583/302557/file/Foresight_Brief_02_EN.pdf.
[40] Radutniy O E (2017). Criminal liability of the artificial intelligence. Problems
of legality, 138, 132-141.
[41] Robertson J (2014). Human rights vs. robot rights: forecasts from Japan.
Critical Asian Studies, 46(4), 571-598.
[42] Hallevy G (2010). The criminal liability of artificial intelligence entities–from
science fiction to legal social control. Akron Intellectual Property Journal, 4(2),
171-201.
[43] Kopfstein J (2017). Should Robots Be Punished for Committing Crimes?
Vocativ Website. Retrieved from: https://www.vocativ.com/417732/robots-
punished-committing-crimes/.