School of Political Science “Cesare Alfieri”
Research Paper
ANALYSIS AND
CONCEPTUALIZATION OF
DEEPFAKE TECHNOLOGY AS
CYBER THREAT
A new emerging Cyber threat
Written by Lorenzo DAMI
lorenzo.dami@stud.unifi.it
Course
ICT POLICIES AND CYBERSECURITY
Taught by Prof. Luigi MARTINO
luigi.martino3@unibo.it
2021-2022
©All Rights Reserved.
Contents
1 Introduction
1.1 What is a deepfake content
1.2 How does a deepfake work?
1.2.1 Computer-generated imagery (CGI) vs Deepfake
1.3 Evolution of deepfakes
1.3.1 Evolution of the technology
1.3.2 Evolution of the usage
2 Methodology and Data
2.1 Literature review
3 Analysis and Findings
3.1 Benefits of deepfakes
3.2 Risks of deepfake
3.3 Deepfake as cyberattack
3.3.1 Deepfake pornography as cyberattack
3.4 Notable Deepfakes
3.5 Regulatory landscape
3.5.1 Measures taken against deepfake
3.5.2 Solutions to deepfakes and further research
4 Conclusions
5 Bibliography
Keywords: Deepfakes, Cyber threat, Cyber propaganda, Cyber operation, Digital ethics, Deception, Misrepresentation, Deepfake pornography, Social identity, Face manipulation, Face swapping, Artificial intelligence, Deep learning.
Abstract
The emergence of a new generation of digitally manipulated media called “deepfakes”, capable of generating highly realistic and very convincing videos, has raised substantial concerns about possible misuse. In this digital era it has become increasingly difficult to judge whether a piece of multimedia content is authentic; deepfake videos are a special case: thanks to recent advances in artificial intelligence, we are nearing a future in which humans will be unable to differentiate between a computer-generated video and an authentic camera-captured one (Chapters 1.1, 1.2 and 1.2.1). Deepfake technology presents significant ethical challenges. The literature addressing its ethical implications shows that deepfakes are developing rapidly and becoming cheaper and more accessible day by day; the ability to produce realistic-looking and realistic-sounding video or audio files of people doing or saying things they did not do or say brings with it unprecedented opportunities for deception (Chapters 1.3, 1.3.1 and 1.3.2). The findings of the research are based on a review of scientific and grey literature, relevant policies, and personal research (Chapter 2.1). The risks associated with deepfakes can be psychological, financial, and societal in nature, and their impacts can range from the individual to the societal level; deepfakes can readily gain momentum in the race to influence or intentionally deceive people, with broader implications for trust and accountability (Chapters 3.2, 3.3 and 3.4), and with possible consequences also on a sexual level, through the use of pornography to damage a person’s image (Chapter 3.3.1). Politics, citizens, institutions, and businesses can no longer ignore the need for a set of stringent rules as a barrier against this, rules that protect the most ethically sensitive aspects, together with broader solutions that aim at spreading awareness of and information about the problem (Chapter 3.5.1). A combination of measures will likely be necessary to limit the risks of deepfakes and their spread through the internet, while harnessing their potential (Chapter 3.5.2). The overall purpose of the study was to carefully analyse the current state of the art of deepfake technology and then try to answer the following research questions:
What is the difference between deepfake technology and other technologies? (Chapter 1.2.1)
What are the advantages and disadvantages of deepfake technology, and how may it come to play a central role over the years? (Chapters 3.1 and 3.2)
Can deepfakes be used as a cyber threat? What kind of consequences could their malicious use lead to? Have such attacks already been conducted in the past? (Chapters 3.3 and 3.4)
What will happen when anyone, as already happens today with fake news, is able to create deepfakes with extreme ease and spread them on the internet? (Chapter 3.3)
Why would deepfakes represent a possible means of cyber threat for almost anyone, and not just for public figures? (Chapter 3.3.1)
How will digital platforms maintain authority and credibility in the age of disinformation and counterfeiting of news and, now, deepfakes? (Chapters 3.5 and 3.5.1)
How could the criteria for establishing the reliability of online content be defined? (Chapter 3.5.2)
1 Introduction
Deepfakes are synthetic media created by machine-learning algorithms, named after the deep-learning methods used in their creation and the fake events they depict (Chapter 1.1). The process of creating a deepfake is technically complex and generally requires a vast amount of data, which is fed to a machine-learning algorithm to train it and generate the synthetic video (Chapters 1.2 and 1.2.1). We will then discuss how deepfakes were born and how widespread they currently are on the internet (Chapters 1.3, 1.3.1 and 1.3.2).
1.1 What is a deepfake content
Figure 1: On the left: The Goebbels family portrait photo taken between 1940
and 1942. On the right: Example of a video altered with the deepfake technology
in 2018.
We are used to spotting fake images and videos with our eyes. It is easy to look at the Joseph Goebbels family portrait in Figure 1 and say, “there’s something strange about that guy in the back”. The Goebbels family portrait is indeed considered one of the first examples of photograph manipulation in history: Harald Quandt (Magda Goebbels’ eldest son) was away on military duties when the photo was taken, and was later inserted and retouched into the picture for propaganda purposes[10][2]. Nowadays, the techniques of data manipulation and alteration have become cleverer and more difficult to spot. Deepfakes (an amalgamation of “deep learning” and “fake”) are synthetic media created through the artificial production, manipulation, and modification of data and media by means of artificial intelligence and machine-learning algorithms, in which a person in an existing image or video is replaced, both visually and vocally, with someone else’s likeness, for instance to change the original meaning or mislead people, as illustrated in Figure 1[17][8][11].
1.2 How does a deepfake work?
Figure 2: The two main technique for generating a deepfake.
Without going into too much detail, there are mainly two kinds of deepfake, illustrated in Figure 2:
Face swapping;
Face manipulation.
Like a student in a classroom, the Artificial Intelligence (AI)¹ engaged in the creation of a deepfake video has to learn how to perform its intended task. It does this through a process of brute-force trial and error, usually referred to as Machine Learning (ML)² or Deep Learning (DL)³. An AI designed to win a game of chess will play the game over and over again until it figures out the best way to win. The person designing the AI needs to provide some data to get things started, along with a few rules for when things go wrong along the way. To work well, the algorithm must have a large amount of content: the more it has, the better it manages to falsify the image. Aside from that, all the work is done by the AI, and all it needs to have is[4]:
¹ Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.
² Machine learning is a subfield of AI, which is broadly defined as the capability of a machine to imitate intelligent human behaviour. AI systems are used to perform complex tasks in a way that is similar to how humans solve problems.
³ Deep learning is a subset of machine learning, which is essentially a neural network with three or more layers. These neural networks attempt to simulate the behaviour of the human brain, albeit far from matching its ability, allowing it to “learn” from large amounts of data.
1. Destination content: deepfakes work best with clear, clean destination video, which is why some of the most convincing deepfakes are of politicians: they tend to stand still at a podium under consistent lighting. Even so, deepfakes can adapt to many kinds of conditions.
2. Face datasets: for mouth and head movements to look accurate, we need a dataset of subject A’s face and a dataset of subject B’s face. After that, we let the AI do its job: it will try to create the deepfake over and over again, learning from its mistakes along the way and finally returning the finished work.
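As a purely illustrative sketch of the trial-and-error loop just described, the snippet below mimics the structure commonly attributed to the original face-swap approach: a single shared encoder plus one decoder per subject, each trained to reconstruct its own subject's faces, so that the "swap" amounts to encoding a face of subject A and decoding it with subject B's decoder. Everything here is an assumption for demonstration only: faces are stand-in random vectors and the networks are plain linear maps; no real deepfake tool works at this toy scale.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT, LR, STEPS = 64, 8, 0.01, 2000

# Stand-in "face datasets" for subject A and subject B (random vectors,
# playing the role of the two collections of face images).
faces_a = rng.normal(size=(100, DIM))
faces_b = rng.normal(size=(100, DIM))

enc = rng.normal(scale=0.1, size=(DIM, LATENT))    # shared encoder
dec_a = rng.normal(scale=0.1, size=(LATENT, DIM))  # decoder for subject A
dec_b = rng.normal(scale=0.1, size=(LATENT, DIM))  # decoder for subject B

def grad_step(x, dec):
    """Gradients of the mean squared reconstruction error ||x@enc@dec - x||^2."""
    z = x @ enc                          # encode
    err = z @ dec - x                    # reconstruction error
    g_dec = z.T @ err / len(x)
    g_enc = x.T @ (err @ dec.T) / len(x)
    return g_enc, g_dec, float((err ** 2).mean())

losses = []
for _ in range(STEPS):
    # train on subject A: shared encoder + A's decoder
    g_enc, g_dec, loss = grad_step(faces_a, dec_a)
    enc -= LR * g_enc
    dec_a -= LR * g_dec
    losses.append(loss)
    # train on subject B: shared encoder + B's decoder
    g_enc, g_dec, _ = grad_step(faces_b, dec_b)
    enc -= LR * g_enc
    dec_b -= LR * g_dec

# The "deepfake" step: encode a face of A, decode it with B's decoder.
swapped = faces_a[0] @ enc @ dec_b
print(losses[0], losses[-1])  # reconstruction error drops as the AI "learns"
```

Real systems replace the linear maps with deep convolutional networks and train for far longer, but the loop structure (encode, decode, measure the error, adjust) is the same.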
1.2.1 Computer-generated imagery (CGI) vs Deepfake
At this point, given that movies like “Westworld” (1973), “Star Wars” (1977), “Tron” (1982), “Golgo 13: The Professional” (1983), “The Last Starfighter” (1984), “Young Sherlock Holmes” (1985) and “Flight of the Navigator” (1986) used computer-based techniques to alter the faces of the actors in some scenes, it comes naturally to wonder whether deepfake technology is the same as CGI and, if not, what the differences between the two approaches are. Which of the two technologies allows one to alter a subject’s face at the lowest cost?
Figure 3: A comparison between the creation of the character Princess Leia
in Rogue One (2016) using CGI technology (made by Disney) and deepfake
technology (made by a Star Wars fan).
Let’s analyse the differences between the two technologies[10]:
Computer-generated imagery (CGI):
mainly used to recreate scenes: scenes that on many occasions would be much more expensive to create in real life, or would not be obtainable otherwise; it can, however, also be employed to create faces.
mainly performed through computer-generated images: it needs a human operator to create the scene (2D or 3D).
an expensive and tough technique: it requires a skilled human operator and powerful hardware to properly master.
convincing: the final work is well made.
Deepfake:
mainly used to replace faces: or to rejuvenate or age them in an ultra-realistic way.
mainly performed by artificial intelligence: even though the AI’s output may require a slight fix afterwards, to adjust the subject’s face from all possible angles and lighting conditions.
cheap and easy: all the work is handled and done by the AI; no special skills or particularly powerful technology are needed.
convincing: as convincing as a CGI work.
As examples of what was said above, here are some comparisons between the two techniques:
Rogue One (2016) Princess Leia, CGI vs deepfake replacement, as illustrated in Figure 3 (CGI is better).
The computer-generated graphics for the popular series The Mandalorian (CGI is better).
The Mandalorian Luke Skywalker deepfake (deepfake is better).
In conclusion, although the CGI technique is generally capable of creating work of excellent quality, deepfake technology is able to achieve similarly convincing results, sometimes even better than CGI⁴, but with decidedly fewer requirements in terms of human skill and resources and, in addition, at lower cost.
1.3 Evolution of deepfakes
Let’s now illustrate the evolution of deepfake technology over time, from both the technological and the usage point of view. Since deepfakes are still a relatively new phenomenon, few reliable datasets exist regarding the spread of deepfakes on the internet. The most reliable source at the moment appears to be “Sensity”[18], on which some of the findings discussed in this research paper are based.
⁴ www.youtu.be/wrHXA2cSpNU?t=31: compare this movie scene (originally made with CGI technology) with a fan-made remake (using deepfake technology). The deepfake turns out to be clearer, sharper and more natural than the CGI.
1.3.1 Evolution of the technology
The term deepfake was first coined around the end of 2017.
2017: Commonly viewed by the public as ultra-realistic fake videos, deepfake technology first appeared on Reddit.com, a popular social media platform consisting of smaller communities used for content sharing and discussion. In November 2017, a user known only as u/deepfakes created such a community and shared the first rendition of the deepfake algorithm. The first generation of deepfake videos was characterized by: (i) low-quality synthesized faces; (ii) a different colour contrast between the synthesized fake mask and the skin of the original face; (iii) visible boundaries of the fake mask; (iv) visible facial elements from the original video; (v) low pose variation; (vi) strange artefacts across sequential frames[23].
2018: Further research improved on the technique by using a motion
extractor that learns the extraction of key points along with their local
affine transformations. Afterwards, a generator network models occlusions
in the target motions and combines the appearance extracted from the
source image and the motion derived from the driving video. The outcome
is the animation of a static photo without requiring any prior information
or knowledge of facial landmarks[24].
2019: Further advancements were made by researchers at Samsung’s AI lab, who created a method to train a model using a single photo and various facial landmark features (e.g., shape of the face, eyes, mouth). While glitches were still clear and obvious, the technique elevated the risks of misinformation, deception, and fraud to new levels[20][10].
2020: Technological improvements enabled other forms of deepfake to evolve, in particular voice-cloning technologies that can generate, building on the concept of Text-to-Speech (TTS) technology, synthetic speech that closely resembles a targeted human voice[24].
2021: Further advancements were made in order to simplify the transfor-
mation process by using artificial intelligence to automatically learn the
characteristics of anyone’s face and voice from hundreds or even thousands
of examples[15].
2022: As of today, advances in artificial intelligence and deep learning have increased the quality of parametric TTS-based voice cloning and made deepfake videos robust to different acquisition scenarios (e.g., indoors and outdoors), light conditions (e.g., day and night), distances from the person to the camera, and pose variations, leading to the creation of easily accessible and reusable tools such as Tacotron[19], WaveNet[16], Deep Voice[1], and Voice Loop[22][10].
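The 2018 advancement described above (a motion extractor learning keypoints with local affine transformations, feeding a generator) can be illustrated at toy scale. The sketch below is a hypothetical simplification, not the model from the cited paper, and implements only the dense-motion step: each keypoint k defines a mapping T_k(z) = p_src,k + A_k (z - p_drv,k) from driving-frame to source-frame coordinates, the per-keypoint mappings are blended with soft weights centred on the driving keypoints, and the source appearance is warped along the resulting flow.

```python
import numpy as np

H = W = 32
# Stand-in "source appearance": a deterministic grey-scale image.
src = np.arange(H * W, dtype=float).reshape(H, W)

# One keypoint, seen at (16, 16) in the source frame and at (20, 16) in
# the driving frame (the driving video moved it 4 pixels down), with an
# identity local affine transformation.
kp_src = np.array([[16.0, 16.0]])  # (K, 2) as (row, col)
kp_drv = np.array([[20.0, 16.0]])
A = np.eye(2)[None]                # (K, 2, 2) local affine per keypoint

# Driving-frame pixel grid, shape (H, W, 2).
ys, xs = np.mgrid[0:H, 0:W].astype(float)
grid = np.stack([ys, xs], axis=-1)

# T_k(z) = p_src_k + A_k (z - p_drv_k): where each driving pixel should
# sample from in the source frame, according to keypoint k.
diffs = grid[None] - kp_drv[:, None, None]                 # (K, H, W, 2)
coords = kp_src[:, None, None] + np.einsum('kij,khwj->khwi', A, diffs)

# Blend the per-keypoint mappings with soft Gaussian weights centred on
# the driving keypoints (trivial here, since K = 1).
w = np.exp(-(diffs ** 2).sum(-1) / 10.0)                   # (K, H, W)
w = w / w.sum(axis=0, keepdims=True)
flow = (w[..., None] * coords).sum(axis=0)                 # (H, W, 2)

# Nearest-neighbour backward warp of the source appearance; a real model
# feeds this motion (plus occlusion maps) to a generator network instead.
sy = np.clip(np.rint(flow[..., 0]).astype(int), 0, H - 1)
sx = np.clip(np.rint(flow[..., 1]).astype(int), 0, W - 1)
out = src[sy, sx]
```

With a single keypoint and an identity affine, the flow reduces to a uniform four-pixel shift, so away from the borders out[y, x] equals src[y - 4, x]; with several keypoints, each part of the face follows its own local transformation, which is what lets a driving video animate a static photo.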
The proliferation of deepfakes through society and their improvement in quality over the years are driven by three factors[26]:
Technological advances: increased computing power, 5G connectivity,
3D sensors, high-quality algorithms and pre-trained models.
Publicly available databases: more and more data and resources are available on the internet. This gives the AI the possibility to train and learn how to efficiently create a deepfake of a subject.
Societal context: the changing media landscape (a shift from the tra-
ditional model of centralized information distribution to user-generated
content platforms) and the reliance on visual communication.
1.3.2 Evolution of the usage
Figure 4: The state of deepfakes over the years[18].
The first collection of deepfake videos, in 2017-2018, contained pornographic content in which the faces of the original actresses were replaced with those of well-known celebrities. The technology has since found other applications in a wide range of domains, from internet memes to art and the film industry. However, the applications which have caught the most attention are those related to deceptive usage, such as those involving political content (Chapter 3.4). The volume of deepfake videos shows staggering growth, with reputation attacks topping the list; in fact, the number of deepfake videos has been doubling every six months since observations started in December 2020[18]. Ever since, deepfake videos have also (and unfortunately mostly) garnered widespread attention for their use in creating child sexual abuse material, deepfake pornography, revenge porn, fake news, hoaxes, bullying, and financial fraud (Chapters 3.2 and 3.3). This has elicited responses from both industry and government to detect and limit their use[17][2][8].
2 Methodology and Data
The findings reported in this document are the result of research based on literature review, personal research, expert interviews and expert review; this chapter describes how these methods were applied (Chapter 2.1).
2.1 Literature review
This report is based primarily on a review of primary and secondary scientific literature. Searches were conducted in several literature databases, including ResearchGate, Google Scholar and IEEE. Additional literature was collected by hand-searching the references of the identified articles, taking special note of frequently cited ones. Search keywords included “disinformation”, “image synthesis”, “image manipulation”, “cyberattacks in cyberspace”, “deepfakes”, “cyber threat”, “cyber propaganda”, “cyber operation”, “digital ethics”, “deception in cyberspace”, “misrepresentation in cyberspace”, “deepfake pornography”, “social identity in cyberspace”, “voice cloning”, “face manipulation”, “face swapping”, “artificial intelligence” and “deep learning”. To get a sense of the practical use of, and current trends in, deepfake content and technology, online communities were also explored. During the course of the study, I conducted several searches on YouTube for “deepfake”, and I searched online forums such as Reddit, Twitter, and Facebook.
3 Analysis and Findings
Deepfake technology has evolved considerably over the last years (Chapters 1.3.2 and 2). As with any kind of tool or discovery, deepfake technology can be used for a wide variety of purposes, with both positive and negative impacts (Chapters 3.1 and 3.2). We will critically analyse the risks associated with deepfakes and how they can be used as a cyberattack vector or cyber threat (Chapters 3.3 and 3.3.1), with some examples of notable attacks or demonstrations of power that have already taken place (Chapter 3.4). The analysis and findings presented here are based on literature study, personal research and expert interviews. We conclude with a critical analysis of the current legislation on deepfake usage in various states of the world and of the approach of Over-the-Top (OTT) platforms (Chapters 3.5 and 3.5.1), and of the solutions proposed to contain the possible harmful events that could occur due to the usage of deepfakes as a cyber threat (Chapter 3.5.2).
3.1 Benefits of deepfakes
Figure 5: Guy Henry as Governor Tarkin on the ‘Rogue One’ set and layers of the CGI recreation of Peter Cushing (left), compared to the process generally carried out entirely by the AI for deepfake generation (right).
Anyone who has used a modern smartphone for photography has probably experienced some benefits of basic deepfake technologies. Camera apps such as FaceApp and Instagram are often equipped with beauty filters that automatically modify images. More advanced deepfakes, in which entire faces are exchanged or speech is modified, can also be lawfully created to provide, for example, critical commentary, satire and parody, or simply to entertain an audience. Other obvious possibilities for the beneficial use of deepfakes are described below[14][10]:
1. Audio-graphic productions: with the use of special effects in movies, as shown in Figure 5.
2. Human-machine interactions: deepfake technology can be used to improve 3D virtual assistants, robots and digital assistants, enabling more natural, human-like interactions, as they can mimic facial expressions down to the most delicate detail.
3. Satire and personal creative expression.
4. Voice synthesis for medical and healthcare purposes: treatment of patients who have lost a loved one, or of patients who have Alzheimer’s or other conditions.
5. Culture: animation of art to create virtual museums.
6. Interactive educational lessons.
3.2 Risks of deepfake
Deepfake technologies may also have a malicious, misleading and even destructive potential at the individual, organizational, and societal level. Since deepfakes target individual persons, there are:
1. Direct psychological consequences for the target.
2. A wide range of financial harms, since it is clear that deepfakes are also created and distributed with the intent to cause them.
3. Grave concerns about the overarching societal consequences of the technology.
That said, deepfakes can be used as cyberattacks; an overview of the identified risks is presented and categorized below[10][11]:
1. (S)extortion: inflicting hard-to-repair damage on the reputation of prominent individuals (e.g., threatening a person with a deepfake video).
2. Defamation: threats to and damage of a subject’s reputation (e.g., making a person falsely confess to a crime they did not commit).
3. [!] Deepfake revenge porn: a deepfake video of somebody’s face superimposed on a pornographic video (Chapter 3.3.1).
4. Intimidation and bullying: e.g., creating a humiliating deepfake of a schoolmate.
5. Undermining trust: in people, or through news media manipulation (e.g., a deepfake of a politician spread on social media).
6. Identity theft.
7. Fraud: e.g., insurance or payment fraud.
8. Reputational damage: for a company (brand damage) or an individual.
9. Damage to economic stability: e.g., stock-price manipulation.
10. Damage to the justice system: e.g., a manipulated sound clip used as evidence.
11. [!] Damage to democracy: e.g., false statements to influence politics or manipulate elections.
12. [!] Damage to national security and international relations: e.g., a fake declaration of war, diplomacy, or peace.
The huge and devastating effects that deepfakes may have on our society once they spread should now be clearer[11].
3.3 Deepfake as cyberattack
Figure 6: Document 210310-001 of the Federal Bureau of Investigation (FBI)
of the United States.
“Synthetic content may also be used in a newly defined cyberattack vector referred to as Business Identity Compromise (BIC)⁵.” (FBI, 2021)
First, we can affirm that in 2021, according to the Federal Bureau of Investigation (FBI)⁶ of the United States, deepfakes were officially classified as a possible means of cyberattack, as written in the document shown in Figure 6[3].
“Deepfake technology is being weaponized against women by inserting
their faces into porn. It is terrifying, embarrassing, demeaning, and
silencing. Deepfake sex videos say to individuals that their bodies are
not their own and can make it difficult to stay online, get or keep
a job, and feel safe.” (Danielle Citron, Professor of Law at Boston
University and author of Hate Crimes in Cyberspace, 2019)
It is likely that many subjects are drawn along these trajectories of manipulation by free apps that literally democratize these skills and allow anyone to carry out various types of deepfakes.
“Defamation has always existed, but before the network, this was
considered a “localized” phenomenon, instead, now once something
is on the network the phenomenon isn’t “localized” any more but
⁵ Source: www.ic3.gov/Media/News/2021/210310-2.pdf.
⁶ The Federal Bureau of Investigation (FBI) is the domestic intelligence and security service of the United States and its principal federal law enforcement agency. Operating under the jurisdiction of the United States Department of Justice, the FBI is also a member of the U.S. Intelligence Community and reports to both the Attorney General and the Director of National Intelligence.
always present; with deepfakes there will be no more respite, and the results will constantly affect the subject.” (Nunzia Ciardi, Chief of the Postal and Communications Police, 2019)
The public may find it increasingly difficult to differentiate between what is real and what has been maliciously generated. Even if a video is proven to be fake and removed from the internet after a while, the individual and societal harm is often already done and hardly repairable. Take into account also that, even if content can be taken down under the Digital Millennium Copyright Act (DMCA)⁷, once somebody has downloaded it to their device they may be able to upload it again, even without the legitimate owner’s consent (violating the law). Thus, once something is shared on the internet, you technically expose yourself to lasting risks.
“One can doubt the veracity of a news item, or not trust its source, but we are not used to contesting what affects the senses, sight and hearing, which represent our main sources of knowledge (it is no coincidence that in criminal cases the word of eyewitnesses, of those who have seen, is worth much more than that of those who report having learned of a fact or circumstance) [...] The early philosophers proclaimed the superiority of autopsy (in Greek, seeing with one’s own eyes) over all other means of acquiring knowledge. However, because of deepfakes we will no longer be able to trust even our own eyes, so seeing is believing is no longer enough [...] Clearly false content could sooner or later be identified as such by people; plausible content instead, i.e., content that contains factual elements that are not verifiable by everyone, could be difficult to spot.” (Francesco Posteraro, Commissioner, AGCOM, 2019)
The effects of deepfakes will be the spread of disinformation, an increase in doubt, and the ability to deny accountability (by claiming genuine material is fake).
“Deepfake is a new and very fearful threat that in the near future risks transforming the digital ecosystem into a world in which distinguishing what is true from what is false will be increasingly difficult. By computer manipulation, starting from simple photographs, videos, faces, voices, and people’s movements, one can create very realistic visual narratives in which the words adhere totally to the lips; the possible consequences of an illegitimate use of deepfake technology are distressing if we think about the already-experimented possibility of creating fake videos in which politicians or public figures make statements capable of causing consequences of a certain weight [...] The risks increase exponentially if this is applied to a communication tool that is normally considered an authoritative proof of thoughts and intentions; it means, in fact, that not even images and movements will any longer be able to represent an absolute guarantee of truthfulness, in a digital ecosystem already polluted
⁷ The Digital Millennium Copyright Act (DMCA) is a controversial United States digital rights management (DRM) law enacted on October 28, 1998 by then-President Bill Clinton. The intent behind the DMCA was to create an updated version of copyright law to deal with the special challenges of regulating digital material: www.dmca.com.
by post-truths and the proliferation of fake news. The creation of a false but perfectly credible video and its viral propagation through the internet risk leading us straight to the collapse of reality, an aberrant perspective that undermines the very empirical foundation on which we rely, and one that not even the great writers of the twentieth century like Orwell or Bradbury were able to imagine in the disturbing dystopian scenarios described in their novels [...] In general, its criminogenic potential not only jeopardises the reputational identities of people, organisations and companies, but undermines the very hold of digital information, with possible serious consequences for democratic systems.” (Maria Elisabetta Alberti Casellati, President of the Senate of the Italian Republic, 2019)
Deepfakes present new challenges for establishing the veracity of online content.
Figure 7: Users on the dark web (first figure: a website on the dark web; second figure: a popular hacking community board on the dark web) where one can hire hackers to make custom deepfake content, purchasable only with cryptocurrency.
Following research carried out personally on the dark web, there are hackers able to provide deepfake services, as shown in Figure 7, at quite a cheap price compared to the other services offered.
During the writing of this research paper, I came across some quotes and statements that I would like to mention, since I truly consider them relevant for this work and agree with what they say. These statements come from politicians and government organizations; each is followed by the name of its author and their current occupation.
“We are experiencing a historical phase in which technological innovations and digital tools increasingly pervade our daily lives, creating extraordinary opportunities from many viewpoints but at the same time deeply affecting behaviour, lifestyles, orientations, tastes, consumption and opinions.” (Francesco Verducci, Member of the Senate of the Italian Republic, 2019)
The broad societal impact of deepfakes is almost never limited to a single type of risk, but rather consists of a combination of cascading impacts at different levels. Perpetrators often act anonymously, making it harder to hold them accountable (it seems that platforms could play a pivotal role in helping the victim identify the perpetrator).
“The rapid improvement of deepfake technologies has severe con-
sequences for the trustworthiness of all audio graphic material. It
gives rise to a wide range of potential societal and financial harms,
including manipulation of democratic processes, the economy, justice
and scientific systems [...] It is therefore very likely that it will be
impossible for a human being to identify a deepfake video without de-
tection tools. And detection tools will always by definition only work
for a limited period of time, until the production technologies re-adjust [...] However, we can be sure that visual manipulation is here to
stay. There are no quick fixes. Mitigating the risks of deepfakes thus
requires continuous reflection and permanent learning[10].” (Tack-
ling deepfakes in European policy, 2021)
The rapid improvement in deepfake performance has fed a great concern within the AI ethics community: we may soon be incapable of distinguishing machine-generated content from real content, leaving us vulnerable to sophisticated disinformation campaigns aimed at manipulating elections.
“We’ve already seen deepfakes circulated online with the attempt to
distort political discourse or de-legitimize politicians, and even se-
lective editing has been used to falsely represent how a politically-
charged event occurred. High-quality AI-manipulated video ups the
stakes considerably. It is only a matter of time before deepfakes are
used in an attempt to manipulate elections.” (Paul Scharre Director
of Technology & National Security CNAS, 2020)
Deepfakes are providing cybercriminals with new, sophisticated capabilities
to enhance social engineering and fraud.
“Falsehood on Social Media diffused significantly farther, faster, deeper,
and more broadly than the truth in all categories of information, and
the effects were more pronounced for false political news than for
false news about terrorism, natural disasters, science, urban legends,
or financial information. We found that false news was more novel
than true news, which suggests that people were more likely to share
novel information[25].” (The spread of true and false news online,
2018)
“In a pre-registered behavioural experiment (N = 210), we show that:
- (i) people cannot reliably detect deepfakes;
- (ii) neither raising awareness nor introducing financial incentives
improves their detection accuracy;
- (iii) people are biased toward mistaking deepfakes as authentic
videos (rather than vice versa);
- (iv) overestimate their own detection abilities;
[...] these results suggest that people adopt a “seeing-is-believing”
heuristic for deepfake detection while being overconfident in their
(low) detection abilities. The combination renders people particu-
larly susceptible to be influenced by deepfake content[14].” (Fooled
twice: People cannot detect deepfakes but think they can, 2022)
In the current social context, a large proportion of people are already quite inclined to share text-based fake news (such as newspaper or gossip articles) on social networks and through the internet more frequently than real news[18], because they believe it to be true. Given that, according to the quote above from the “Fooled twice: People cannot detect deepfakes but think they can” research paper, deepfakes are even more difficult to spot than text-based fake news, the probability that people will believe a deepfake video to be real (or vice versa) will be much greater. Deepfakes, in short, will be, and already are, more difficult for an untrained person to spot.
“The global risk of massive digital misinformation sits at the cen-
tre of a constellation of technological and geopolitical risks ranging
from terrorism to cyberattacks and the failure of global governance.”
(World Economic Forum, 2013)
3.3.1 Deepfake pornography as cyberattack
As if the kind of cyberattacks previously discussed were not enough cruel and
damaging (Chapter 3.2), there is also a more crude phenomenon of the deepfake,
that’s truly disgusting, i.e., the deepfake attack through the usage of pornogra-
phy.
Figure 8: The home page of one of the most popular websites that allow users to create fake porn content (I have obscured the private parts and the face of the subject in the image because they were clearly visible).
In the first years the deepfakes subreddit was active, many users of the now-banned forum (Chapter 1.3.1) asked how to create deepfakes of ex-girlfriends, crushes, friends, and classmates in order to deliberately damage their reputation, purely for their own entertainment. In 2020, more than 100,000 deepfake images depicting fake nudes were generated by an AI ecosystem that enables users to literally ‘strip’ the clothes off images of women so that they appear naked[18][10]. Most of the original images appeared to have been taken from social media pages or directly from private communications, with the individuals likely unaware that they had been targeted. While the majority of these targets were private individuals, I additionally identified a significant number of social media influencers, gaming streamers, and high-profile celebrities in the entertainment industry. A limited number of images also appeared to feature underage targets, suggesting that some users were primarily using the AI to generate and share paedophile content[26].
These deepfakes have been shared across various social media platforms for the purpose of public shaming, revenge, or extortion. All users of the service have to do is upload a photo, and a manipulated image is returned within minutes, as Figure 8 shows; it is easy to see how these features can be abused[18].
The website works like this:
1. Upload any photo of a subject (even fully clothed);
2. The AI, using the deepfake construction techniques described in Chapter 1.2, removes the subject's clothes from the image;
3. It substitutes (in a very credible, almost irrefutable way) the sexual parts of the subject with others that are obviously not the subject's own, but in practice anyone who sees the photo will assume it really is them[10].
“See anyone Nude: The most powerful image deepfake AI ever cre-
ated. See any girl clothless with the click of a button.” (Website,
2022)
I think the header image and the text of the website are enough to convey the huge, tremendous negative impact this can have on people's reputations. There is literally no way to protect yourself: a single photo (collected from social media or taken directly in person) is sufficient to execute this kind of cyberattack against a person, and it leaves permanent material that is hard to disprove, even though the person depicted in the deepfake pornography is actually not you.
3.4 Notable Deepfakes
Figure 9: Deepfake videos of Ukrainian President Volodymyr Zelenskyy and Russian President Vladimir Putin, online before being reported. Both videos received more than 300,000 views within a few hours.
Over the past few years, multiple examples of deepfake videos have appeared on YouTube, a popular video-sharing and social media platform. Most deepfake videos were created solely for entertainment purposes; however, a few key examples illustrate the potential power of such videos as tools to spread misinformation. The most convincing (and potentially harmful) deepfakes are all-out impersonations[11]:
1. President of the United States Barack Obama (2018): The video was
created using FakeApp and involved taking an original video of Barack
Obama and pasting Jordan Peele’s mouth into it. The video, which took
roughly 56 hours to develop, was created to raise awareness regarding the
potential misuse of deepfake videos.
2. President of the United States Donald Trump (2018): an example of a politically motivated deepfake video encouraging Belgium to withdraw from the Paris climate agreement. The deepfake video was published by a Belgian social democratic party with the purpose of starting a public debate on the necessity of addressing climate change. The video was eventually debunked by Lead Stories.
3. [!] President of Gabon Ali Bongo Ondimba (2019): a video of Ali Bongo Ondimba was published online. For months beforehand he had not been seen in public, and it had become a popular belief that he was in poor health or even dead. The video led to a national crisis: a story that the video was a deepfake gained momentum, as it seemed to support the theory that the government was trying to hide the condition of the President. Ultimately, this story led to an unsuccessful coup by the Gabonese military[10].
4. CEO of Meta Mark Zuckerberg (2019): a video appeared on Instagram falsely portraying Facebook's Chief Executive Officer, Mark Zuckerberg, crediting a secretive organization for the success of Facebook. The video was created by Canny AI, an Israeli company, and was constructed using deepfake technology, an actor's voice and 2017 footage of Mark Zuckerberg.
5. A CEO of a UK-based energy firm (2019): the evolution of deepfake technology was further demonstrated by criminals using AI-based software to impersonate the voice of a chief executive officer (CEO) and demand a fraudulent transfer of $243,000. The target, the CEO of a UK-based energy firm, was convinced the caller was the CEO of the firm's German parent company and therefore approved the request. The criminals relied on the principles of authority and urgency to ensure the transfer took place within an hour[10]8.
6. A bank manager in Hong Kong (2020): a bank manager was defrauded into transferring US$35 million by a voice he recognized as that of the director of a company based in the United Arab Emirates. The deepfake call was supplemented with emails to the bank manager from the purported director confirming the transfer.
7. Belgian Prime Minister Sophie Wilmès (2020): the activist group “Extinction Rebellion” released a fake video of the Belgian Prime Minister, Sophie Wilmès, suggesting a possible link between deforestation and Covid-19. The video got 100,000 views in 24 hours, and many who watched it considered it to be genuine.
8. Actor Tom Cruise (2021): viral deepfake videos of a faux Tom Cruise doing coin flips and covering Dave Matthews Band songs showed how convincingly real deepfakes can appear.
9. Russian opposition politician Leonid Volkov (2021): various news outlets reported that several European Parliament members were targeted by deepfake video calls imitating the Russian opposition figure; the calls were created by Russian pranksters.
10. [!] President of Ukraine Volodymyr Zelenskyy (2022): the deepfake appeared on the hacked website of the Ukrainian TV network “Ukrayina 24” and showed Ukraine's president talking of surrendering to Russia: Volodymyr Zelenskyy appears behind a podium, telling Ukrainians to put down their weapons9, as shown in Figure 9.
8Source: www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-
cybercrime-case-11567157402.
9Source: www.twitter.com/MikaelThalen/status/1504123674516885507.
11. [!] President of Russia Vladimir Putin (2022): a deepfake video shared on Twitter appeared to show Russian President Vladimir Putin declaring peace10, as shown in Figure 9.
3.5 Regulatory landscape
We have seen that there are different types of risks associated with deepfake technology, and that these risks manifest themselves at different levels (Chapter 3.2). We now assess the current legal basis for protecting individuals, as well as for mitigating the broader societal impact of deepfakes, for example through policies and measures against disinformation.
3.5.1 Measures taken against deepfakes
We now present the approaches taken by states and by over-the-top (OTT) platforms in regard to deepfakes, and briefly summarize the current legislation in Italy and other countries.
States:
United States: Virginia, California and Texas were the first states to pass laws, in 2019[13]. In 2020, DARPA's Media Forensics (MediFor) program was building algorithms that detect manipulated images or videos and then produce a quantitative measure of integrity, enabling filtering and prioritization of media at scale[5].
India: in 2020, the Indraprastha Institute of Information Technology Delhi, a state university established by the Delhi Government, introduced an algorithm that follows the principles of anomaly detection to help identify fake videos[12].
China: the Chinese government considers deepfakes a “threat that can subvert public safety and social order”[7], which is why it passed a law in 2020 stipulating that all deepfake video or audio content, or content created using deep learning algorithms or virtual reality (VR)11 technologies, must be labelled accordingly by app providers.
Italy and other states with similar legislation: there is currently no specific legislative provision, so the law offers no protection against deepfakes as such; legislation should therefore be adapted to technological progress. Currently, in Italy a deepfake may well give rise to a plurality of criminal offences, which at the moment of publication of this document are[21]:
Art. 494 of the penal code: impersonation;
Art. 595 of the penal code, paragraph 3: defamation;
10Source: www.twitter.com/sternenko/status/1504090918994993160.
11Virtual reality (VR) is a simulated experience that can be similar to or completely different from the real world; applications of virtual reality include entertainment (particularly video games), education (such as medical or military training) and business (such as virtual meetings).
Art. 167 of the privacy code: illicit data processing1213.
However, it will be necessary to examine, from time to time and depending on the case, how the conduct was actually carried out.
Over the top (OTT)[10]:
Meta (formerly Facebook): removes manipulated media, except for parodies.
YouTube: its policy on ‘deceptive practices’ prohibits doctored content that is misleading or may pose serious risks.
TikTok: removes “digital forgeries”, including inaccurate health information, that are misleading and cause harm.
Reddit: removes media that impersonates individuals or entities in a misleading or deceptive manner, but has carved out an exemption for satire and parody.
With the volume and sophistication of deepfakes skyrocketing, it is unclear, however, how social networks will be able to keep up with enforcing these policies[10][11].
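The detection approaches mentioned above, such as the anomaly-detection algorithm attributed to IIIT Delhi, share one statistical intuition: genuine footage is internally consistent, while manipulated frames stand out as outliers against the rest of the video. The sketch below is a toy illustration of that principle only, not the actual algorithm from [12]; the function name and the use of per-frame brightness as the measured signal are assumptions made purely for the example.

```python
from statistics import mean, stdev

def flag_anomalous_frames(brightness, z_threshold=3.0):
    """Flag frames whose mean brightness deviates strongly from the
    video-wide distribution -- a toy stand-in for the statistical
    inconsistencies (lighting shifts, blending seams) that real
    detectors look for."""
    mu = mean(brightness)
    sigma = stdev(brightness)
    if sigma == 0:  # perfectly uniform video: nothing to flag
        return []
    return [i for i, b in enumerate(brightness)
            if abs(b - mu) / sigma > z_threshold]

# 50 nearly uniform frames with one spliced-in outlier at index 25
frames = [0.50 + 0.001 * (i % 5) for i in range(50)]
frames[25] = 0.95
print(flag_anomalous_frames(frames))  # -> [25]
```

Real detectors operate on far richer features (blending boundaries, physiological signals, codec artifacts) and on learned models rather than a single z-score, but the underlying flag-what-deviates logic is the same; it is also why the cat-and-mouse dynamic discussed in the next section arises, since generators can be tuned to suppress exactly the statistics a detector measures.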
3.5.2 Solutions to deepfakes and further research
Strong solutions and means of defence against this hateful manipulation technique include[10][27]:
Technology[9]:
Blockchain: blockchain systems use a decentralized, immutable ledger to record information in a way that is continuously verified and re-verified by every entity that uses it, making it nearly impossible to change information after it has been created. In this way we could certify original and unaltered material, treating as authentic only content validated through the blockchain.
Policy options within the AI framework: clarifying which AI
practices should be prohibited under the AI framework.
Detection technology: detection is crucial in stopping the circulation of malicious deepfakes. However, if deepfake technology providers are aware of the detection technologies, they can adjust their production technologies and circumvent detection. This leads to a cat-and-mouse game between deepfake production technology and deepfake detection technology.
Combination of multiple technologies: in the future, we may use a combination of tools, such as AI, algorithms, machine learning, blockchain and other technologies, to fight deepfakes. Theoretically, AI could scan videos to look for deepfake “fingerprints,” and
12Special thanks to Prof. Stefano Pietropaoli (professor of Computer Forensics at the School of Law of the University of Florence) for confirming these offences and punishments to me.
13Personal note: according to the current legal system in Italy, the punishment for one or more of these crimes does not exceed (in the worst case) three or four years of imprisonment, making the current punishment for crimes such as deepfakes, which can cause almost irreparable harm to persons or entities, laughably insufficient.
blockchain tech installed across operating systems could flag users or files that have touched deepfake software. Eventually, though, we may reach a point where deepfakes are impossible to detect even for machines, and we will have a lot more to worry about than fake celebrity porn and silly deepfake videos posted on TikTok.
Law to repress it[6]:
Detect and sanction: detect and sanction the deepfake quickly and with a sufficient degree of certainty, even if it is not certain that the timing of judicial proceedings can be compatible with its spread through the internet.
Platform awareness: the platforms should be subject to editorial responsibility, given the great influence they have and the advantage they derive from the traffic generated on the network, and consequently must be held accountable. If the author or the platform fails to notify, they should be punishable and held responsible.
Diplomatic actions, economic sanctions and international agreements to refrain from the use of deepfakes: the use of disinformation and deepfakes by foreign states, or by actors associated with foreign state institutions, contributes to increasing geopolitical tensions.
Digital education (which must be provided in advance)[14]:
Invest in education and raise awareness among AI professionals.
Ban certain applications or usages (e.g., political advertising or communication).
Require an ID to sign up to social networks: as in China, users of online platforms would need to register with their identity (ID) before being able to enter the platforms. The discussion as to what level of anonymity is acceptable and desirable online is a sensitive one. On the one hand, platform user anonymity provides cover for malicious users; on the other, anonymity serves as a useful protection for activists and whistle-blowers.
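The blockchain-based certification idea listed above can be reduced to a minimal sketch: hash each piece of media at creation time and append the hash to a tamper-evident chain, so that only registered, unaltered content validates. This is purely illustrative; the ProvenanceLedger class and its method names are hypothetical, and a real deployment would use an actual distributed ledger with many verifying parties rather than an in-memory list.

```python
import hashlib

class ProvenanceLedger:
    """Toy append-only ledger: each record chains the previous record's
    hash, so tampering with any earlier entry invalidates the chain."""

    def __init__(self):
        self.records = []  # list of (media_hash, chain_hash) tuples

    def register(self, media_bytes: bytes) -> str:
        """Record the SHA-256 of a media file at creation time."""
        media_hash = hashlib.sha256(media_bytes).hexdigest()
        prev = self.records[-1][1] if self.records else ""
        chain_hash = hashlib.sha256((prev + media_hash).encode()).hexdigest()
        self.records.append((media_hash, chain_hash))
        return media_hash

    def is_authentic(self, media_bytes: bytes) -> bool:
        """Content counts as validated only if its hash was registered."""
        h = hashlib.sha256(media_bytes).hexdigest()
        return any(h == m for m, _ in self.records)

ledger = ProvenanceLedger()
original = b"original camera footage"
ledger.register(original)

assert ledger.is_authentic(original)          # validated content
assert not ledger.is_authentic(b"deepfaked")  # unregistered -> suspect
```

The chaining of each record to its predecessor is what makes the ledger tamper-evident: rewriting any earlier entry changes every subsequent chain hash, which is the property the decentralized verification described above relies on.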
4 Conclusions
There is no doubt that deepfake technology can be used as a cyber threat with the malicious purpose of damaging democracies, straining international relations between states, undermining people's trust, enabling fraud, threatening economic stability, and so on. It is well known that the control of information and its manipulation, especially now that the Internet makes everything omnipresent and accessible to all, can lead to disorder and power-changing policies. If this phenomenon is not managed, regulated and made known to the population through awareness campaigns, it could easily lead to effects of alarming severity for the life of a single citizen, for groups of citizens, or for an entire state.
Fortunately, as is already happening in some countries, the legislation on deepfakes will be revised. It will certainly have to oblige those who propagate deepfakes to specify that the videos are inauthentic, and legislative tools will serve to call on the companies that own social networks to act to protect freedom of speech, certainly not to limit it, by monitoring content that damages rights or democracy or puts the safety of citizens at risk. Greater attention to the problem on the part of institutions must guarantee a more effective and more timely application of existing laws, starting with those that guarantee human rights and privacy, and must encourage international cooperation to monitor and tackle the problem on a global scale, especially as regards the possible interference of fake news and deepfakes in political elections, international relationships and decisions of public relevance.
However, if the average web user is no longer able to distinguish real videos from fake ones, or to recognize a lie conveyed as well-founded news, the solution to the problem cannot lie solely in legislative countermeasures; it also requires parallel interventions on the information and training front. It is therefore necessary to work on computer literacy, on the choice, comparison and authoritativeness of sources, and to promote a responsible and conscious use of the web, especially among children. At the same time, it is necessary to strengthen the role of information professionals, restoring value to the function of cultural mediation that belongs to journalists who, leveraging their professionalism and skills, can today, even more than in the past, act as sentinels of the truth about the facts and give voters the certainty of being offered real, credible, impartial and verified content. The ultimate goal should be to build a new digital culture, making the internet a source of knowledge, of objective and proven truths, of authentic understanding.
5 Bibliography
References
[1] Sercan Ö. Arık et al. “Deep voice: Real-time neural text-to-speech”. In: International Conference on Machine Learning. PMLR. 2017, pp. 195–204.
[2] Sven Charleer. “Family fun with deepfakes. Or how I got my wife onto
the Tonight Show”. In: Towards Data Science (2018).
[3] Vincenzo Ciancaglini et al. “Malicious uses and abuses of artificial intelli-
gence”. In: Trend Micro Research (2020).
[4] Hao Dang et al. “On the detection of digital face manipulation”. In: Pro-
ceedings of the IEEE/CVF Conference on Computer Vision and Pattern
recognition. 2020, pp. 5781–5790.
[5] Jonathan G Fiscus et al. “NIST Media Forensic Challenge (MFC) Evalu-
ation 2020-4th Year DARPA MediFor PI meeting”. In: (2020).
[6] Asher Flynn, Jonathan Clough, and Talani Cooke. “Disrupting and pre-
venting deepfake abuse: Exploring criminal law responses to AI-facilitated
abuse”. In: The palgrave handbook of gendered violence and technology.
Springer, 2021, pp. 583–603.
[7] Kacper Gradoń. “Crime in the time of the plague: Fake news pandemic and the challenges to law-enforcement and intelligence community”. In: Society Register 4.2 (2020), pp. 133–148.
[8] Del Harvey. Help us shape our approach to synthetic and manipulated
media. Twitter Blog. 2020.
[9] Haya R Hasan and Khaled Salah. “Combating deepfake videos using
blockchain and smart contracts”. In: Ieee Access 7 (2019), pp. 41596–
41606.
[10] Mariëtte van Huijstee, Pieter van Boheemen, and Djurre Das. “Tackling deepfakes in European policy”. In: (2021).
[11] Tim Hwang et al. “Deepfakes–Primer and Forecast”. In: (2020).
[12] Simran Jain and Piyush Jha. “Deepfakes in India: regulation and privacy”.
In: South Asia@ LSE (2020).
[13] Tyrone Kirchengast. “Deepfakes and image manipulation: criminalisation
and control”. In: Information & Communications Technology Law 29.3
(2020), pp. 308–323.
[14] Nils Köbis, Barbora Doležalová, and Ivan Soraperra. “Fooled twice: People cannot detect deepfakes but think they can”. In: iScience 24.11 (2021), p. 103364.
[15] Medikonda Neelima and I Santiprabha. “Mimicry voice detection using
convolutional neural networks”. In: 2020 International Conference on Smart
Electronics and Communication (ICOSEC). IEEE. 2020, pp. 314–318.
[16] Aaron van den Oord et al. “Wavenet: A generative model for raw audio”.
In: arXiv preprint arXiv:1609.03499 (2016).
[17] Oscar Schwartz. “You thought fake news was bad? Deep fakes are where
truth goes to die”. In: The Guardian 12 (2018), p. 2018.
[18] Sensity. “The State of Deepfakes 2020: Updates on Statistics and Trends”.
In: The state of deepfakes: landscape, threats and impact. 2020, pp. 3–27.
[19] Jonathan Shen et al. “Natural tts synthesis by conditioning wavenet on
mel spectrogram predictions”. In: 2018 IEEE international conference on
acoustics, speech and signal processing (ICASSP). IEEE. 2018, pp. 4779–
4783.
[20] Joan Solsman. Samsung deepfake AI could fabricate a video of you from
a single profile pic. 2019.
[21] Claudia Spinato. “Arte e intelligenza artificiale nell’era dei deepfake”. In:
(2021).
[22] Yaniv Taigman et al. “Voiceloop: Voice fitting and synthesis via a phono-
logical loop”. In: arXiv preprint arXiv:1707.06588 (2017).
[23] Ruben Tolosana et al. “An Introduction to Digital Face Manipulation”.
In: Handbook of Digital Face Manipulation and Detection. Springer, Cham,
2022, pp. 3–26.
[24] Namosha Veerasamy and Heloise Pieterse. “Rising Above Misinformation
and Deepfakes”. In: Proceedings of the 17th International Conference on
Information Warfare and Security. 2022, p. 340.
[25] Soroush Vosoughi, Deb Roy, and Sinan Aral. “The spread of true and false
news online”. In: Science 359.6380 (2018), pp. 1146–1151.
[26] Mika Westerlund. “The emergence of deepfake technology: A review”. In:
Technology Innovation Management Review 9.11 (2019).
[27] Peipeng Yu et al. “A survey on deepfake video detection”. In: IET Bio-
metrics 10.6 (2021), pp. 607–624.
... As technology continues to evolve, the proactive adoption of these advanced techniques is imperative to stay ahead of malicious actors seeking to exploit vulnerabilities in the digital realm. This paper delves into the methodologies and strategies employed in this endeavor, with the overarching goal of creating a resilient cybersecurity framework capable of withstanding the ever-changing landscape of cyber threats [3]. ...
Research
Full-text available
In the rapidly evolving landscape of cyberspace, defending against sophisticated threats like deepfakes and malware requires cutting-edge strategies. This paper explores advanced tactics utilizing machine learning to safeguard digital frontiers. Deepfakes, manipulated media often indistinguishable from authentic content, pose significant risks to various sectors, including politics, business, and security. Traditional detection methods struggle to keep pace with the rapid proliferation of deepfake technology, highlighting the urgent need for innovative solutions. Leveraging machine learning algorithms, such as neural networks and deep learning architectures, offers a promising approach to identify and mitigate these threats. By analyzing patterns, anomalies, and subtle cues within multimedia content, machine learning models can effectively distinguish between genuine and manipulated media, enhancing detection accuracy and efficiency. Furthermore, in the realm of cybersecurity, the proliferation of sophisticated malware strains presents formidable challenges. Through the application of advanced machine learning techniques, such as anomaly detection and behavioral analysis, security professionals can strengthen defense mechanisms against evolving malware threats. This paper elucidates the potential of integrating machine learning into cybersecurity frameworks to fortify defenses and mitigate the risks posed by deepfakes and malware in cyberspace. Introduction:
... This erosion of trust can have profound societal implications, including increased skepticism, polarization, and a breakdown of consensus on shared realities. B. Spread of Disinformation and Manipulation (Lorenzo, 2022): Deepfakes enable the creation and dissemination of highly realistic false narratives. Malicious actors can exploit this technology to propagate disinformation, manipulate public opinion, and influence societal discourse. ...
Conference Paper
Full-text available
Deepfake technology, which allows the manipulation and fabrication of audio, video, and images, has gained significant attention due to its potential to deceive and manipulate. As deepfakes proliferate on social media platforms, understanding their impact becomes crucial. This research investigates the detection, misinformation, and societal implications of deepfake technology on social media. Through a comprehensive literature review, the study examines the development and capabilities of deepfakes, existing detection techniques, and challenges in identifying them. The role of deepfakes in spreading misinformation and disinformation is explored, highlighting their potential consequences on public trust and social cohesion. The societal implications and ethical considerations surrounding deepfakes are examined, along with legal and policy responses. Mitigation strategies, including technological advancements and platform policies, are discussed. By shedding light on these critical aspects, this research aims to contribute to a better understanding of the impact of deepfake technology on social media and to inform future efforts in detection, prevention, and policy development.
Chapter
Full-text available
Digital manipulation has become a thriving topic in the last few years, especially after the popularity of the term DeepFakes. This chapter introduces the prominent digital manipulations with special emphasis on the facial content due to their large number of possible applications. Specifically, we cover the principles of six types of digital face manipulations: (i) entire face synthesis, (ii) identity swap, (iii) face morphing, (iv) attribute manipulation, (v) expression swap (a.k.a. face reen-actment or talking faces), and (vi) audio-and text-to-video. These six main types of face manipulation are well established by the research community, having received the most attention in the last few years. In addition, we highlight in this chapter publicly available databases and code for the generation of digital fake content.
Conference Paper
Full-text available
Misinformation can be rapidly spread in cyberspace. It thrives in the social media landscape as well as news platforms. Misinformation can readily gain momentum in the race to influence people or intentionally deceive. With the use of bots, misinformation can be easily shared, especially in environments like Twitter and Facebook. While, some measures are taken to stop the spread of misinformation, threats like Deepfakes are posing a higher challenge. Deepfakes provide a means to generate fake digital content in order to impersonate a person. With the use of audio, images and videos, artificial intelligence is used to depict the speech and actions of people. Deepfakes are typically made of presidents or influential businessmen such as Donald Trump and Mark Zuckerberg. Deep Fakes can be very realistic and convincing as this form of synthetic media is raising concerns about its possible misuse. The effects of Deepfakes are to spread disinformation, confuse users or create influence. This can lead to further effects like political factions, blackmail, harassment and extortion. Deepfakes could lead to a distrust in digital content as many may feel that anything we see is actually just a manipulation. Deepfakes has arisen as a new generation of misinformation through the manipulation of digital media in order to create realistic videos. This paper looks at the governing, communal and technical issues relating to Deepfakes. At the technical level, the use of audio and text analysis used to create Deepfake videos is advancing at a rapid pace which has also made its affordability and accessibility easier. An evaluation of the threats stemming from Deepfakes reveals that there are various mental, monetary and group dynamics involved. In this paper, the various types of threats emanating from Deepfakes is discussed. This paper also looks at five factors in the field of Deepfakes that should be taken into consideration (Technical Source Dissemination Victim Viewers). 
The paper discussed these five factors in order to help identify measures to help curb the spread of Deepfakes. A combination of these measures can help limit the spread of Deepfakes and support mitigation of the threat. Due to prominence and power that digital media has, it is imperative that this threat not be overlooked. The paper provides a holistic approach to understanding the risk and impact of Deepfakes, as well measures to help mitigate abuse thereof.
Article
Full-text available
Hyper-realistic manipulation of audio-visual content, i.e., deepfakes, presents a new challenge for establishing the veracity of online content. Research on the human impact of deepfakes remains sparse. In a pre-registered behavioral experiment (N=210), we show that (a) people cannot reliably detect deepfakes, and (b) neither raising awareness nor introducing financial incentives improves their detection accuracy. Zeroing in on the underlying cognitive processes, we find that (c) people are biased towards mistaking deepfakes as authentic videos (rather than vice versa) and (d) overestimate their own detection abilities. Together, these results suggest that people adopt a “seeing-is-believing” heuristic for deepfake detection while being overconfident in their (low) detection abilities. The combination renders people particularly susceptible to be influenced by deepfake content.
Article
Full-text available
The emergence of a new generation of digitally manipulated media – also known as deepfakes – has generated substantial concerns about possible misuse. In response to these concerns, this report assesses the technical, societal and regulatory aspects of deepfakes. The rapid development and spread of deepfakes is taking place within the wider context of a changing media system. An assessment of the risks associated with deepfakes shows that they can be psychological, financial and societal in nature, and their impacts can range from the individual to the societal level. The report identifies five dimensions of the deepfake lifecycle that policy-makers could take into account to prevent and address the adverse impacts of deepfakes. The report includes policy options under each of the five dimensions, which could be incorporated into the AI legislative framework, the digital service act package and beyond. A combination of measures will likely be necessary to limit the risks of deepfakes, while harnessing their potential.
Article
Recently, deepfake videos, generated by deep learning algorithms, have attracted widespread attention. Deepfake technology can be used to perform face manipulation with high realism. A large number of deepfake videos are already circulating on the Internet, most of which target celebrities or politicians. These videos are often used to damage the reputation of celebrities and steer public opinion, greatly threatening social stability. Although the deepfake algorithm itself is neither good nor evil, the technology has been widely used for negative purposes. To prevent it from threatening human society, a series of research efforts has been launched, including developing detection methods and building large-scale benchmarks. This review demonstrates the current state of research on deepfake video detection, in particular the generation process, several detection methods and existing benchmarks. It has been revealed that current detection methods are still insufficient for application in real-world scenes, and further research should pay more attention to generalization and robustness.
Article
The Paper explores the problem of fake news and disinformation campaigns in the turmoil era of the COVID-19 coronavirus pandemic. The Author addresses the problem from the perspective of Crime Science, identifying the actual and potential impact of fake news propagation on both the social fabric and the work of the law-enforcement and security services. The Author covers various vectors of disinformation campaigns and offers the overview of challenges associated with the use of deep fakes and the abuse of Artificial Intelligence, Machine-, Deep- and Reinforcement-Learning technologies. The Paper provides the outline of preventive strategies that might be used to mitigate the consequences of fake news proliferation, including the introduction of counter-narratives and the use of AI as countermeasure available to the law-enforcement and public safety agencies. The Author also highlights other threats and forms of crime leveraging the pandemic crisis. As the Paper deals with the current and rapidly evolving phenomenon, it is based on qualitative research and uses the most up-to-date, reliable open-source information, including the Web-based material. KEYWORDS: Crime; terrorism; cybercrime; cyber-enabled crime; nation-state influence; information warfare; cyber warfare; covid-19; coronavirus; Wuhan virus; disinformation; misinformation; malinformation; fake news; pandemic; epidemic; artificial intelligence; machine learning; deep learning; reinforcement learning
Article
Deepfakes are a form of human image synthesis where an existing picture or image is superimposed onto a video to change the identity of those depicted in the video. The technology relies on machine learning or artificial intelligence to map an existing image, usually a photo of a person's face, onto an existing video. The technology emerged in the latter part of 2017, and has since given rise to apps and other programmes that allow users to create their own deepfakes. We already use filters and emojis to alter images by consent; however, deepfakes are particularly problematic because they allow for the production of highly convincing videos that are taken to be real footage of the person depicted. Deepfakes provide for the manipulation of all manner of video, but particular risks include videos produced to incite political deception, voter manipulation, commercial fraud, and ‘revenge porn’. The production of deepfake ‘revenge porn’ is especially insidious given the ability to transfer the face of any person onto an already existing pornographic video. The harm is exacerbated when that video is then disseminated, via the internet or by social media.