SN Soc Sci (2021) 1:86
https://doi.org/10.1007/s43545-021-00065-1
ORIGINAL PAPER
The Momo Challenge: measuring theextent towhich
YouTube portrays harmful andhelpful depictions
ofasuicide game
LaraKobilke1 · AntoniaMarkiewitz2
Received: 20 July 2020 / Accepted: 16 January 2021 / Published online: 15 February 2021
© The Author(s) 2021, corrected publication 2022
Abstract
Suicide is the second leading cause of death among adolescents (15 to 29 years), who are in a life stage of exceptional vulnerability and susceptibility to depictions of non-suicidal self-injury and suicide. Allegedly, the suicide game Momo Challenge exploited this vulnerability to demand that its players perform self-harming dares and, ultimately, commit suicide. This study gives insight into the content, engagement rates and community formation of Momo Challenge videos on YouTube. We combine a network analysis (n = 209) with a manual content analysis of the videos (n = 105; 50%). Results show that more than two thirds of the videos include some form of harmful depiction. In addition, videos with a higher extent of harmful depictions are more likely to be engaged with, e.g., through likes (ρ = 0.332, p < 0.001). We discuss how YouTube has responded to the challenge and which implications arise for practice and theory.
Keywords Momo Challenge · Suicide game · NSSI · YouTube · Social network analysis
* Lara Kobilke
l.kobilke@ikmz.uzh.ch
1 Department of Communication and Media Research (IKMZ), University of Zurich (UZH), Andreasstrasse 15, 8050 Zurich, Switzerland
2 Department of Media and Communication (IfKW), University of Munich (LMU), Munich, Germany
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
Introduction
‘Carve a whale on your hand with a razor, send a photo to curator’, ‘Do something painful to yourself, make yourself sick’, ‘Jump off a high building. Take your life.’
These examples illustrate what vulnerable adolescents are challenged with when playing the suicide-inducing Blue Whale Game (Mukhra et al. 2017). Media reports state that the Blue Whale Challenge, programmed in 2013 by the Russian psychologist Philipp Budeikin, has caused the deaths of more than one hundred adolescents
worldwide (Evon 2017; Mukhra et al. 2017). Via mobile applications, the game connects players with anonymous curators, who mentor the vulnerable youngsters over a period of 50 days. During this time, the curators dare their mentees to perform escalating tasks and, ultimately, push them to commit suicide (Lupariello et al. 2019). While the Blue Whale Game most likely originated as a sensational hoax by a Russian newspaper (Evon 2017), cyberbullying researchers believe that copycats have since made use of its contagious effect (Adeane 2019; Timm-Garcia and Hartung 2017). Evidence is provided by Sumner et al. (2019), who show that social media content in support of the Blue Whale Game spread rapidly to 127 countries from 2013 to 2017. In 2018, Balhara, Bhargava, Pakhre, and Bhati presented the first medical report showing self-inflicted injuries of a young boy who followed instructions presented by a mobile app resembling the Blue Whale Game. In 2019, five medical reports from Italian and Romanian girls were released that confirm a “‘contagious’ quality of the Internet and the importance of epidemiological, psychological, psychiatric, social, and cultural risk factors” for participating in the game (Lupariello et al. 2019, p. 641).
In addition to the Blue Whale Challenge, youngsters turned to another suicide
game in 2018/2019. Reportedly, the Momo Challenge shared many similarities with
its predecessor. To this day, media reports have tried to link the deaths of five teen-
agers to the Momo Challenge (batimes.com.ar 2018; Davidson 2018; Ferber 2018;
Kitching 2018; Schneider 2018). Again, it remains unclear which incidents were hoaxes or wrongly attributed to the game. Also, little is known about the challenge from a scientific point of view. As part of the challenge, adolescents communicate with a WhatsApp account called Momo, which shows a picture of a grotesque sculpture as an avatar. When written to, the Momo account shares horrifying pictures and escalating assignments instigating self-harm and suicide. Cases have been reported of players refusing to perform these tasks, resulting in the account threatening to harm the participants’ families (e.g., Webb 2018b) or to leak private information (e.g., newindianexpress.com 2018). Therefore, law enforcement agencies warned that the Momo Challenge might be linked to data theft (e.g., Chanda 2018; Webb 2018a).
Contrary to the Blue Whale Game, Momo accounts use snowball effects to gain reach. Journalists have therefore criticized the large number of YouTube videos in which YouTubers have promoted the challenge by passing on the phone numbers to their viewers (theweek.co.uk 2018). Momo even became an internet meme: the horror figure made appearances in Let’s Play videos of the game Fortnite, in storylines of popular Minecraft series, and in screen recordings of the preschool television show Peppa Pig (Cooper 2019). This prompted celebrities to express their concern about the impact on young children and call on YouTube to take action (Needham 2019; Waterson 2019). Finally, YouTube published an official statement: “We want to clear something up regarding the Momo Challenge: We’ve seen no recent evidence of videos promoting the Momo Challenge on YouTube. Videos encouraging harmful and dangerous challenges are against our policies.” (YouTube 2019)
However, videos promoting the challenge did in fact exist, even though YouTube has demonetized, flagged and deleted most of these videos since March 2019 (Alexander 2019), shortly after the data collection for this study was completed (turn of 2018/2019). The present research raises questions about the extent to which
these videos have portrayed harmful and helpful depictions of the suicide game.
Additionally, we seek to understand what consequences these depictions have for
the engagement rates of the videos and for community formation and what vari-
ables drive this relationship. We proceed in three steps. First, we derive an ana-
lytical scheme for assessing the extent to which these videos feature harmful and
helpful depictions. Second, we combine these depictions to assess the individual
danger level of each video, using this index in multivariate regression analyses
to predict engagement rates, i.e., views, comments, likes, and dislikes. Third, we
use methods from network community analysis to understand what media diet
and content preferences specific audiences expose themselves to. Our findings
suggest that more than two thirds of the videos include at least some form of
harmful depiction and that the danger level is a predictor of the engagement rates
that a video will receive, except for views. In accordance with this finding, the
most dominant community on YouTube is also the one that prefers to engage with
the most harmful depictions of the challenge. This study expands the literature
on suicide games by providing an analytical scheme to assess media depictions
and by drawing assumptions about the consequences of these depictions based on
the literature on Werther and Papageno effects. Most importantly, it advances our
understanding of the spreading mechanism of suicide games in social media by
investigating the relationship between harmful depictions of suicide games and
engagement rates.
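The second step, relating a video's danger level to its engagement counts, can be sketched as follows. This is a minimal illustration with invented numbers, not the study's data; the study itself also runs multivariate regressions, whereas this shows only a first bivariate rank correlation.

```python
import pandas as pd

# Hypothetical coded sample: one row per video, with the danger-level
# index from the content analysis and engagement counts from YouTube.
# All numbers are invented for illustration only.
videos = pd.DataFrame({
    "danger_level": [0, 1, 1, 2, 3, 3, 4, 5],
    "likes":        [3, 8, 10, 25, 35, 40, 90, 120],
    "comments":     [1, 4, 5, 12, 20, 18, 55, 80],
})

# Engagement counts are heavily skewed, so a rank-based (Spearman)
# correlation is a sensible first bivariate check.
rho = videos["danger_level"].corr(videos["likes"], method="spearman")
print(f"Spearman's rho (danger level vs. likes): {rho:.3f}")
```

In the toy data the association is deliberately strong; the study reports a more modest ρ = 0.332 for likes.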
But why should we even care about YouTube videos that endorse or oppose the
Momo Challenge? As we will argue more extensively in the following theory sec-
tion, media play a decisive role in suicide and non-suicidal self-injury prevention for
adolescents, especially social media like YouTube. Drawing from the guidelines on
responsible reporting on suicide (World Health Organization [W.H.O.] 2008), we
argue that some characteristics of the Momo Challenge videos will be more likely to
elicit Werther effects (i.e., suicide and non-suicidal self-injury imitation), while oth-
ers will be more likely to elicit Papageno effects (i.e., suicide prevention). Of course,
the Momo Challenge is neither the first suicide game nor is it likely to be the last.
Therefore, “it is urgent to monitor social media posts related to... self-harm challenges (e.g., the Momo Challenge)” (Khasawneh et al. 2019, p. 888).
In our view, the viral spread of Momo-related content shows that suicide games
will continue to be a difficult-to-handle cyber threat. While the challenge itself was
probably nothing more than a hoax, institutions issued warnings that related media
content, such as YouTube videos and news reports, was nonetheless dangerous to
young children (Waterson 2019). New challenges, such as the Skull Breaker Chal-
lenge and the Jonathan Galindo Challenge, have already been launched as successors
to the Momo Challenge. This material confronts youngsters with the concept of self-harm, sometimes for the first time in their lives. Therefore, “posting about the challenges in unsafe manners can contribute to contagion regardless of the challenges’ true nature” (Khasawneh et al. 2019, p. 888). On that note, the Momo Challenge evolves into a case study for the Thomas theorem: if adolescents and parents are made to believe that the Momo Challenge is real, it has real consequences.
Theoretical background
Adolescents, YouTube use, and the role of YouTube for the Momo Challenge
“Adolescence is a period of unique vulnerability” (Miller and Prinstein 2019, p. 425), a period of biological restructuring. Acute stress is processed less effectively during adolescence, often causing emotional overreactions. This makes adolescents very susceptible to harmful media depictions in social media: “Because of their limited capacity for self-regulation and susceptibility to peer pressure, children and adolescents are at some risk as they navigate and experiment with social media” (O’Keeffe and Clarke-Pearson 2011, p. 800). Nevertheless, adolescents tend to spend more time on SNS than adults (Yoon et al. 2019), and video-sharing has become “one of the fastest growing online activities among youth” (Lewis et al. 2011, p. e553). In 2018, during the peak of the Momo Challenge (July 2018 until March 2019), YouTube was the most important SNS for teenagers aged 13 to 17 (Anderson and Jiang 2018; Farokhmanesh 2018; Smith et al. 2018). This position is now increasingly contested by competitors like TikTok (Qustodio 2020).
Moreover, YouTube was the platform that came under fire during the peak of the Momo Challenge (July 2018 until March 2019), since journalists and celebrities hinted that YouTubers not only informed about the challenge, but also distributed alleged Momo phone numbers and dares (Needham 2019; Tarlton 2019; theweek.co.uk 2018; Waterson 2019). Finally, in February 2019, pictures of Momo, together with suicide and self-harm advice, were spliced into children’s cartoons on YouTube Kids. This was not the first time that YouTube Kids had to act against violent and disturbing content on its platform (Maheshwari 2017), but the Momo Challenge offered a new vehicle. YouTube also took on outstanding significance for the Momo Challenge because other video-sharing platforms such as TikTok were still new on the market and did not enjoy similar popularity back then. TikTok, for example, did not become a competitor until August 2018 (when TikTok merged with musical.ly), and it took until May 2020 for TikTok to catch up with the average minutes children aged between 4 and 15 spend each day on YouTube (Qustodio 2020). Even Snapchat reached approximately 15% fewer teenagers than YouTube during the peak of the Momo Challenge (Anderson and Jiang 2018) and also accounts for less time spent per day, namely 65 min compared to 97 min (Qustodio 2020). This, combined with studies indicating that YouTube videos have described tasks of the Blue Whale Challenge in the past (Khasawneh et al. 2019) and promoted NSSI videos (Bastone 2019), sparked our interest and the need to investigate the depiction of the Momo Challenge on YouTube.
Possible consequences of adolescents’ engagement with the Momo Challenge on YouTube
Among adolescents aged between 15 and 29, suicide is the second leading cause of death (W.H.O. 2017). Suicide poses a global health problem with youths as a particularly vulnerable population group at its center (Wasserman and Wasserman 2009). The media play a key role both in suicide promotion and in suicide prevention (Mann et al. 2005; Niederkrotenthaler et al. 2010; Phillips 1974), mainly because “the characters’ [displayed in the media] experience suggests ways for the viewer to deal with their own problems” (Hoffner and Buchanan 2005, p. 329), either positively (Papageno effect) or negatively (Werther effect). The occurrence of both effects depends on how the media display such sensitive content (Mann et al. 2005).
Specifically, the Werther effect describes the phenomenon of an increased suicide rate after a suicide has been depicted in the media (Phillips 1974). Strikingly, even details such as the method or place of the suicide may be imitated (Stack 2000). To date, the Werther effect is widely considered a confirmed media effect in research (Frei et al. 2003; John et al. 2017; Schäfer and Quiring 2015). Another self-harm inducing factor, albeit with non-fatal intentions, is non-suicidal self-injury (NSSI; Arendt 2019). NSSI is common among adolescents, with an estimated 5.6 to 6.7% of adolescents engaging in self-harming, non-fatal behavior (Buelens et al. 2019). In the context of suicide and suicidal behavior, NSSI is seen as a critical factor as it may mark “the transition from suicidal ideation to actions” (Ernst et al. 2019, p. 2).
Following Bandura’s social-cognitive theory of learning, the occurrence of both NSSI and suicide can be explained by a weakening of social norms and a reduction of inhibitions (Scherr 2016). Behavior depicted in the media can serve as a dry practice for the viewer (Gould et al. 2003; Valkenburg and Peter 2013). Importantly, it has been acknowledged that these principles apply not only to traditional media, but also to the internet and social media (Luxton et al. 2012; Térrasse et al. 2019). For youngsters who are on the lookout for role models worth imitating, social media may become a platform of involvement and identification, two factors well known for increasing the chances of (imitation) suicides (Arendt et al. 2016; Hoffner and Buchanan 2005; John et al. 2017; Niederkrotenthaler et al. 2015; Till et al. 2010). Accordingly, Twenge et al. (2018) were able to show that the use of social media by U.S. adolescents is associated with suicide-related outcomes.
However, depictions of suicide and NSSI may also hold potential for positive outcomes, i.e., prevention. Media depictions can help vulnerable recipients to seek support by depicting favorable coping strategies for suicidal ideation or behavior, stories of individuals overcoming a crisis, or by providing information on counseling services (Niederkrotenthaler et al. 2010). Just as with the Werther effect, Papageno effects are not limited to traditional media. In social media, the process of involvement and identification with role models may not only increase Werther effects, but may also enhance positive outcomes, e.g., help-seeking (Niederkrotenthaler et al. 2010). Overall, whether social media have positive or negative effects on their recipients depends on how the sensitive content is presented and framed (Markiewitz et al. 2020b; Schäfer and Quiring 2015). This may be the reason why research on the relationship between social media use and self-harming behavior is ambiguous and has produced mixed findings in the past (Markiewitz et al. 2020a).
Thus, when adolescents watch YouTube videos dealing with suicide games like the Momo Challenge, they expose themselves to the chance of encountering these effects, even if they do not participate in the challenge themselves (Uvais 2019). On the one hand, watching YouTubers play it cool with suicide games or even promote the Momo Challenge could lead to a weakening of inhibitions and social norms, thus normalizing self-harming behavior (Jonas and Brömer 2002). On the other hand, trigger warnings and references to counseling could relieve adolescents from acute stress and encourage help-seeking behavior. Given that these effects are likely to occur, this study examines whether and to what extent Momo Challenge videos feature depictions that are suspected of triggering these effects:
RQ1 To what extent do Momo Challenge videos on YouTube portray harmful and
helpful depictions of the suicide game?
The relationship betweenharmful orhelpful depictions oftheMomo Challenge
andengagement rates
Even if YouTube videos show harmful and helpful depictions of the suicide game, these depictions will have no effect on adolescents if no one actually engages with them because they are unpopular. To measure the popularity of a YouTube video, engagement rates are used, such as views, comments, likes, and dislikes. The higher the popularity of a video, the more likely the YouTube recommendation system is to recommend it to new viewers. This recommendation in turn increases the popularity of the video, resulting in a spiral process called the rich-get-richer effect (Borghol et al. 2013; Welbourne and Grant 2016). This effect would be particularly problematic if it led the recommendation system to recommend videos with harmful depictions more frequently to new viewers. For this spiral process to start, however, viewers would first have to show a preference for harmful depictions.
Do we have any reason to believe that viewers show a preference for harmful depictions? Research on the Uses and Gratifications (U&G) framework shows that entertainment motives are the strongest predictors for video viewing, liking, and disliking, and that social interaction motives best explain commenting behavior on YouTube (Khan 2017). Ladhari et al. (2020) investigated the role of emotions as a driver of popularity on YouTube and found a significant positive relationship between the viewers’ emotional attachment to a YouTuber and the YouTuber’s popularity. In addition, Alhabash et al. (2015) used an experimental design to show that highly arousing video content increases the chances of behavioral reactions, such as engagement. This finding is not only consistent with other studies on arousal levels of videos and viewing/sharing behavior (Hagerstrom et al. 2014; Tellis et al. 2019), but also with arousal and news value theory. Unpleasant stimuli and negativity, especially in the form of damage, violence, and death, create high arousal and are therefore strong predictors of media engagement (Bednarek 2016; Greer 2007; Schimmack and Derryberry 2005; Takeuchi et al. 2005).
The extent to which this arousal is used in YouTube videos, i.e., the extent of
harmful depictions, might vary depending on the video’s genre. For example, some
genres may increase the chance of seeing interactions with alleged Momo accounts,
while others may increase the chance of encountering information on counseling. It
seems reasonable to assume that videos of YouTubers performing live demonstrations of the dare will portray more harmful depictions than, for example, information videos, which serve a fact-based, informative purpose. Since information videos on YouTube often consist of snippets from news broadcasts or talk shows, chances are higher that the news anchors already adhere to guidelines on responsible reporting on suicide, compared to YouTubers who perform live demonstrations of the dare. At the same time, the genre of a video could also be a predictor of its engagement rates, as certain genres could attract more viewers and encourage more users to leave a comment. It is plausible, for example, that emotionally charged art productions inspire more comments than fact-based, unemotional information videos. In addition, live demonstrations could promise a higher thrill than information videos and therefore convince more viewers to click and view. Because of these plausible associations, we need to control for video genre when analyzing the relationship between the extent of harmful or helpful depictions and engagement rates.
Following these arguments, we assume that if YouTube videos contain harmful depictions of the Momo Challenge, these videos will not only be watched, but watched even more frequently than videos with helpful depictions. In fact, a higher extent of harmful depictions should promote engagement rates, such as views and comments, even when controlling for other content characteristics such as genre. Supporting this assumption, Lewis et al. (2011) found that NSSI videos attract more views and likes when they depict more NSSI images and multiple NSSI methods. We assume a similar trend for Momo Challenge videos and therefore hypothesize:
H1 More harmful depictions of the Momo Challenge are positively related to
engagement rates.
The Momo Challenge andcommunity formation
Engagement rates not only show how popular a video is, but also enable community detection on YouTube. Since engagement rates can almost directly be monetized in the age of social media, YouTube’s recommendation system and autoplay feature will do their best to keep users engaged. The lion’s share of YouTube users (81%) state that they at least occasionally watch suggestions presented by the recommendation system (Smith et al. 2018). Therefore, researchers can use engagement rates as a proxy indicating which two videos are likely to be recommended to each other’s audience. For example, based on previously commented videos, YouTube recommends videos that have been commented on by viewers with similar tastes. The aggregation of shared commenters not only shows how similar the audience preferences of two videos are, but also how likely they are to be recommended to and viewed by each other’s audience. Mapping these connections leads to the identification of communities. Each community represents a group of intertwined videos that share audience preferences and similar content and are more likely to be recommended to each other’s respective viewers.
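The mapping just described can be sketched in a few lines: link two videos whenever they share a commenter, then cluster the resulting graph. The video IDs and commenter sets below are invented, and networkx's greedy modularity routine stands in for whichever community detection algorithm one prefers.

```python
from itertools import combinations
from networkx.algorithms.community import greedy_modularity_communities
import networkx as nx

# Hypothetical comment data: which users commented on which video.
commenters = {
    "video_A": {"u1", "u2", "u3"},
    "video_B": {"u2", "u3"},
    "video_C": {"u3", "u4"},
    "video_D": {"u8", "u9"},
    "video_E": {"u9"},
}

# An edge links two videos whenever at least one user commented on both;
# the number of shared commenters becomes the edge weight.
G = nx.Graph()
G.add_nodes_from(commenters)
for a, b in combinations(commenters, 2):
    shared = commenters[a] & commenters[b]
    if shared:
        G.add_edge(a, b, weight=len(shared))

# Communities are groups of videos with strongly overlapping audiences.
communities = [set(c) for c in greedy_modularity_communities(G, weight="weight")]
print(communities)
```

On this toy network the procedure separates the tightly knit trio A–B–C from the pair D–E, mirroring how audience overlap partitions Momo videos into communities.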
It is very likely that such distinct communities have also developed around Momo
Challenge videos. As this community formation is driven by content preferences,
some communities might show an affinity for content that shows a greater extent of
harmful depictions, while others might show a preference for more helpful depic-
tions. The advantages of identifying these communities are twofold: First, it gives
deeper insight into the spreading mechanism of suicide games. Second, it is a start-
ing point for educators, parents, and adolescents to learn how to identify and avoid
harmful communities and to instead look out for helpful content. Therefore, we pro-
pose a second research question:
RQ2 What communities have formed around the Momo Challenge videos and what
content preferences do they show?
Toward ananalytical scheme toassess theextent towhich social media portray
harmful andhelpful depictions ofasuicide game
In order to answer the research questions—and, thus, capture the extent to which videos on YouTube portray harmful and helpful depictions of the Momo Challenge—this section combines the above-discussed strands of theory and draws from practical experience with guidelines on responsible reporting on suicide (RRS) to provide an analytical scheme to capture the extent of harmful and helpful depictions. This analytical scheme is later applied to YouTube videos, but could also be used to analyze various social media content.
RRS guidelines were originally developed for non-fictional, journalistic media content in order to improve NSSI and suicide depictions in favor of prevention, i.e., to counteract negative effects and promote positive effects. In fact, these guidelines have largely been found to achieve the desired beneficial effects (Beam et al. 2018; Pirkis et al. 2006; Schäfer et al. 2006; W.H.O. 2008; Yaqub et al. 2017). Similarly, guidelines for suicide-related content on the internet have been published by the Suicide Prevention Resource Center (SPRC 2014). In an initial, two-page presentation, Khasawneh et al. (2019) analyzed whether the portrayal of the suicide game Blue Whale Challenge on Twitter and YouTube adheres to these guidelines. The results show that 87% of the videos adhere to half or fewer of the guidelines. Following this fruitful approach, we adapted the media guidelines on RRS and combined them with the media effect theory of Werther and Papageno effects to measure both harmful and helpful depictions.
First, for media depictions to elicit helpful media effects, guidelines suggest including information on help and assistance (e.g., telephone counseling). Research on Papageno effects shows that indicating places to go or call when in need can have positive effects in terms of suicide prevention because this information makes it easier to overcome fears and seek help (Arendt et al. 2016; Arendt and Scherr 2017). Therefore, Momo Challenge videos should provide information on help and assistance. Second, guidelines ask for sensitivity when dealing with the subject of suicide/suicidality (Mueller 2017); e.g., YouTube videos should provide trigger warnings and should avoid portraying the issue in a highly emotionalized manner (Frei et al. 2003; Niederkrotenthaler et al. 2010). Similarly, guidelines suggest that emotionalization and evoking fear should be avoided (e.g., when YouTubers verbalize fear or depict Momo as a real entity that could pose actual harm), since this might motivate adolescents to test their courage (Johnston and Warkentin 2010). Finally, YouTubers should avoid presenting any depictions that could serve as a dry practice or tutorial for viewers on how to perform self-harm. This may include providing contact details of Momo accounts or showing how to establish contact with an alleged Momo account. Similarly, YouTubers can cause harm by showing live interactions with alleged Momo accounts (e.g., text messaging, calling, or even the completion of tasks related to the challenge).
This analytical scheme is our basis for assessing the extent to which Momo Challenge videos on YouTube portray harmful and helpful depictions of the suicide game. Although we cannot assess the actual effects, this analytical scheme at least allows us to investigate how much content circulates on the platform that is suspected of triggering these effects. In the method section, we will further explicate how we put this scheme into practice by using it to build content analytical categories and combining the results of this content analysis with network analysis.
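The scheme can be expressed as a simple scoring function over coded categories. The category names and the additive weighting below are our illustration of the idea, not the study's exact operationalization.

```python
# Categories derived from the RRS-based scheme above. Names and the
# simple additive weighting are illustrative assumptions.
HARMFUL = ("shows_momo_contact_details", "shows_live_interaction",
           "emotionalizes_or_evokes_fear")
HELPFUL = ("provides_help_information", "provides_trigger_warning")

def danger_level(coding: dict) -> int:
    """Harmful depictions minus helpful ones; higher = more dangerous."""
    return (sum(coding[c] for c in HARMFUL)
            - sum(coding[c] for c in HELPFUL))

# Example: a video showing a live chat with an alleged Momo account,
# evoking fear, but also displaying a counseling hotline.
coding = {
    "shows_momo_contact_details": False,
    "shows_live_interaction": True,
    "emotionalizes_or_evokes_fear": True,
    "provides_help_information": True,
    "provides_trigger_warning": False,
}
print(danger_level(coding))  # 2 harmful - 1 helpful = 1
```

Scoring each coded video this way yields the per-video index that the engagement analyses can then draw on.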
Method
A study that tries to assess the extent of harmful and helpful depictions of social media content requires a large data set. To further analyze a video’s content, its engagement rates, and community formation, it is also necessary to trace the relationships between users and media content. For this purpose, we used common snowball sampling with multiple starting points, crawling from the anchor points to new network members by following ties iteratively (Hansen et al. 2011; Paolillo 2008). Following this sampling technique, we collected a data set of interrelated YouTube videos dealing with the Momo Challenge, which we then subjected to content and network analyses.
Sample anddata evaluation
We performed our data crawl with NodeXL, an add-in for Microsoft Excel that uses the YouTube API to collect videos automatically. NodeXL can handle medium-sized data sets of a few thousand videos and fewer than 200,000 edges before overstraining (Smith et al. 2010). Using the keywords ‘Momo Challenge English’, we ran a search of 2000 videos that contain these keywords in their titles, descriptions, or tags. The YouTube API was able to identify 487 videos that met our specifications, making NodeXL’s data size limitations irrelevant. We extracted the descriptive statistics of the videos, i.e., the videos’ creators, views, comments, likes, and dislikes. For each video, we downloaded 1000 top-level user comments as well as the first 100 replies to these top-level comments. Then, NodeXL created an edge between each pair of YouTube videos that were commented on by the same user. Overall, we identified 137,955 edges.
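For readers without NodeXL, an equivalent keyword crawl can be run directly against the YouTube Data API v3. The sketch below only builds the request URL (a real API key and quota are required to actually fetch results); the function name is ours.

```python
from urllib.parse import urlencode

API_KEY = "YOUR_API_KEY"  # placeholder; a real Data API v3 key is required
BASE = "https://www.googleapis.com/youtube/v3/search"

def search_url(query, page_token=""):
    """Build a YouTube Data API v3 search request URL for given keywords."""
    params = {
        "part": "snippet",
        "q": query,            # e.g., the study's 'Momo Challenge English'
        "type": "video",
        "order": "relevance",  # the default ranking users see
        "maxResults": 50,      # the API maximum per page
        "key": API_KEY,
    }
    if page_token:             # page through results via nextPageToken
        params["pageToken"] = page_token
    return f"{BASE}?{urlencode(params)}"

print(search_url("Momo Challenge English"))
```

Fetching each page's `nextPageToken` and feeding it back in iterates through the result list, analogous to NodeXL's crawl.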
In order to enhance data quality, we conducted an editing process in which
we controlled for English language (including subtitles) and thematic proxim-
ity (e.g., videos presenting pictures, songs, or movies about Momo, showing
WhatsApp interactions with Momo accounts, or YouTubers talking about sui-
cide games). In a time-consuming but thorough process, we watched every sin-
gle video, checked its content and deleted all videos (as well as their related
ties) that did not fulfill the above-mentioned criteria (e.g., videos about an eat-
ing contest of Indian noodles called ‘Momo’). This process led to the deletion of
376 videos, leaving only 111 videos in the data set.
Although it has been speculated that samples drawn from the YouTube API
should be a rather accurate representation of the YouTube search results (Rieder
etal. 2018), APIs never reproduce the exact same selection and order of vid-
eos found in a user’s search results. For the Twitter API, González-Bailón etal.
(2012) showed that the API misses 2.5% of all tweets, 1% of the authors, and
1.3% of the hashtags found by the search query. We therefore assumed a similar
limitation for the YouTube API. To address this limitation, we complemented
the automated data crawl with a manual search by using the keyword ‘Momo
Challenge’ and setting the order parameter to ‘relevance’, which is the default
option for users. We then scrolled down the search list until it showed “No more
results” and added every single video (and all related ties to videos in our exist-
ing data set) that the NodeXL crawl had missed. Of course, these manual addi-
tions are also dependent on the YouTube search algorithm and we cannot be
sure that the search really captures all videos related to the Momo Challenge
that YouTube has to offer. However, a manual complement to the automated
data crawl is the most extensive data acquisition method to date. The procedure
proved to be worthwhile because we could identify 98 videos that the crawl had
originally missed and added them and their edges to the existing data. Just as
suggested by González-Bailón et al. (2012), the videos detected by the search
were more densely connected and central to the network than those of the API
crawl, thus adding more edges to the network than were deleted beforehand. In
the end, the revised data set included 209 videos and 169,519 ties.
Although it is highly likely that the collected videos were mainly watched by
adolescents, as a substantial proportion of the related YouTube channels pro-
duce content for children, we needed a more reliable indicator. We therefore
randomly sampled 1000 YouTube comments, either top-level or reply. To find
out if it is really adolescents who engaged with the video content, we measured
an age proxy: we recorded the date on which the authors of these comments
joined YouTube. We assumed that more recent dates of joining, e.g., after 2014,
are more likely to be associated with a younger audience, while earlier join
dates point to an older audience. Since the average commentator did not join
YouTube until 2015 (M = 2015, SD = 2.6), we feel reassured that we are actually
investigating a young audience.
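This join-date proxy boils down to simple descriptive statistics. The join years below are hypothetical, for illustration only, and are not the study's data:

```python
from statistics import mean, stdev

# Hypothetical join years for a handful of sampled commenters
# (illustration only; not the study's sample of 1000 comments).
join_years = [2016, 2017, 2014, 2018, 2015, 2013, 2017, 2016, 2012, 2015]

m = mean(join_years)
sd = stdev(join_years)
recent_share = sum(year > 2014 for year in join_years) / len(join_years)
print(f"M = {m:.1f}, SD = {sd:.1f}, joined after 2014: {recent_share:.0%}")
# M = 2015.3, SD = 1.9, joined after 2014: 70%
```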
Data analysis
Content analysis
After data collection, we randomly sampled 50% of the videos for coding
(n = 105). One coder conducted all of the manual coding by following our code-
book. The codebook was pre-tested on 10 videos, discussed, and then tested on
5 more videos, which were finally excluded from the analysis, i.e., they were
treated as uncoded. Whenever uncertainties arose, we discussed the coding
guidelines until consensus was found. In the end, the codebook contained a nom-
inal measurement of the video’s genre (live demonstration, information video,
art production, reference video, or other). In addition, we used the analytical
scheme presented in the theory section of this paper to build a content analyti-
cal category of characteristics that indicate harmful or helpful depictions of the
Momo Challenge. First, we dummy-coded all harmful depictions. This included
whether or not the video showed interactions with Momo accounts (text-based
and voice-based interactions were coded separately), verbalized that Momo is an
entity to be feared, and/or provided contact details of Momo accounts. Second,
we coded all danger-related characteristics that counteract these possible harm-
ful effects, namely the prompt of trigger warnings (automated or by YouTuber)
and the provision of counseling contacts. Finally, we calculated the intra-rater
reliability by reassessing 10 videos after two months had passed. Intra-rater reli-
ability (Holsti) was more than 0.9 on all variables except for automated trig-
ger warnings (0.72). However, this exception is easily explained by the fact
that YouTube added additional trigger warnings to their videos after celebrities
demanded that the platform take action.
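Holsti's coefficient is computed as twice the number of agreements divided by the total number of coding decisions; with the same units coded twice by the same coder, it reduces to simple percent agreement. A minimal sketch with hypothetical codings (not the study's actual data):

```python
def holsti(first_pass, second_pass):
    """Holsti's coefficient of reliability: 2 * agreements / (n1 + n2).
    With the same units coded in both passes, this is percent agreement."""
    agreements = sum(a == b for a, b in zip(first_pass, second_pass))
    return 2 * agreements / (len(first_pass) + len(second_pass))

# Hypothetical two passes over 10 re-coded videos for one dummy variable:
# one disagreement out of ten decisions -> 0.9.
pass_one = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
pass_two = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
print(holsti(pass_one, pass_two))  # 0.9
```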
After the coding procedure, we combined the count of these depictions to form
a danger index. To this end, we added up all helpful depictions that may reduce the
likelihood of suffering from negative effects (trigger warnings, counseling) and sub-
tracted this number from the sum of harmful depictions (text- and voice-based inter-
actions, fear, contact details). The asymmetric index scales from −2 (very unprob-
lematic) to 4 (very problematic). By subtracting the count of helpful depictions we
assume that harmful content is compensated by helpful content, which reduces the
overall extent of harmful depictions. This is especially important with regard to edu-
cational content that might portray harmful depictions of NSSI and suicidal idea-
tion but also provides trigger warnings, information on counseling, and encourages
adolescents to talk to their parents about what they have just watched. For exam-
ple, an educational video that portrays a young adult engaging in text interactions
with “Momo” would be rated 1 (unproblematic) if it also displays trigger warnings
and counseling information on whom to turn to in case of involvement in the chal-
lenge or NSSI. We believe that with an asymmetric sum index the assessment of the
extent of harmful depictions becomes more intuitive and accurate, which prevents
alarmism.
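The index construction can be expressed compactly. The sketch below uses our own variable names, not the codebook's, and assumes each depiction is stored as a 0/1 dummy; we treat the two trigger-warning variants (automated or by YouTuber) as a single helpful dummy, consistent with the index's lower bound of −2.

```python
HARMFUL = ("text_interaction", "voice_interaction", "fear", "contact_details")
HELPFUL = ("trigger_warning", "counseling")

def danger_index(video):
    """Asymmetric sum index: count of harmful depictions minus count of
    helpful depictions. All values are 0/1 dummies; range is -2 to 4."""
    return sum(video[k] for k in HARMFUL) - sum(video[k] for k in HELPFUL)

# A hypothetical live demonstration with text interaction, fear
# verbalization, and contact details, mitigated only by a trigger warning:
video = {"text_interaction": 1, "voice_interaction": 0, "fear": 1,
         "contact_details": 1, "trigger_warning": 1, "counseling": 0}
print(danger_index(video))  # 2
```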
Network analysis
For the network analysis, we imported our data files into Gephi, an open-source
network visualization software. Gephi implements a modularity algorithm
that outperforms comparable clustering algorithms in terms of simplicity, speed, and
precision. Blondel et al. (2008) developed this algorithm to optimize community
identification within large social networks. By employing the modularity algorithm
to our data (for a complete documentation, please refer to the supplementary mate-
rial), we were able to distinguish seven communities within the Momo Challenge
network. For network visualization, we unfolded the Momo Challenge network
using the common Force Atlas 2 algorithm, while filtering out isolated network
components consisting of three or fewer nodes. Filtering such marginal components
has proven to be a vital technique among network researchers because the network
visualization would otherwise become cluttered (Hansen et al. 2011).
Results
We found 209 Momo Challenge videos, with an average reach of 524,671 views and
3,242 comments (568,108 views and 3851 comments in the sample of our content
analysis, n = 105). The most watched video attracts more than 11 million views. Even
by YouTube standards, these are impressive engagement rates (Lewis et al. 2011).
Of the videos, 34% are information videos (news clips or talk shows), which serve
a fact-based, informative purpose. Another 31% of the videos present art produc-
tions (e.g., short films, animated storylines, Let’s Plays, and make-up tutorials). Live
demonstrations of how YouTubers try to establish contact with Momo accounts also
make up an essential part of the content (24%). Additionally, we identified refer-
ence videos, which focus on YouTube celebrities reacting to other channels’ Momo-
related content (8%) and three videos that did not fit into any of these categories
(other).
The distribution of views and comments is highly skewed within the Momo Chal-
lenge network, with one striking extreme value (#1 in Fig.1). This extreme value
#1 represents a video in which a YouTuber presents a set of distinct ‘cursed’ phone
numbers that are supposed to cause the death of the caller. Because the video deals
with a wide range of cursed numbers in general, it reaches a wider audience than
those that focus exclusively on the Momo Challenge. For comparison, the second-
and third-placed videos attract approximately 3 to 4 million views. Since the
video #1 is an extreme outlier in terms of the number of views and comments, it was
excluded from subsequent calculations. After filtering, the average number of views
and comments for the content analysis sample decreased to 471,899 views and 3076
comments.
Figure1 also shows that few videos receive the lion’s share of the views and com-
ments, while most videos struggle to attract any audience at all. This trend is rein-
forced by a strong, positive correlation between engagement rates (see Table1) such
as views and comments (ρ = 0.619, p < 0.001), views and likes (ρ = 0.666, p < 0.001),
and views and dislikes (ρ = 0.728, p < 0.001), suggesting that Momo Challenge
videos that attract a high number of viewers are also more likely to be engaged with.
User engagement in turn makes the video more likely to be viewed, leading to a
rich-get-richer effect (Borghol et al. 2013; Welbourne and Grant 2016).
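Spearman's ρ, used for the correlations reported throughout this section, is Pearson's correlation computed on ranks. A self-contained sketch with hypothetical view and comment counts (illustration only, not the study's data):

```python
def rank(values):
    """1-based ranks, averaging ranks for tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average position of the tied block, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank-transformed data."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

views = [100, 5000, 250, 80000, 1200]
comments = [2, 40, 1, 600, 15]
print(round(spearman(views, comments), 3))  # 0.9
```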
The extent ofharmful andhelpful depictions oftheMomo Challenge (RQ1)
In order to answer the first research question of this study, namely the extent to
which the YouTube videos studied feature harmful and helpful depictions of the
Momo Challenge, we have to consider descriptive statistics. Table2 presents the
proportion of videos that feature harmful or helpful depictions, broken down by
video genre.
Let’s first look at harmful depictions. In 47% of the videos, YouTubers exchange text
messages with alleged Momo accounts. Additionally, phone or video calls are shown in
15% of the videos. In sum, about 53% of the videos under study demonstrate at least
some kind of interaction and 57% verbalize that Momo is an entity to be feared. Yet,
this does not stop the videos’ creators from further promoting the challenge by pass-
ing on alleged Momo phone numbers (25%).
Fig. 1 View and comment count of videos related to the Momo Challenge. The extreme value #1 was excluded from all further calculations that related to viewer and comment count
Table 1 Bivariate correlation for danger predicting engagement (n = 104)

                  1         2         3         4         5
1 Danger index    1
2 Views           0.232**   1
3 Comments        0.373***  0.619***  1
4 Likes           0.397***  0.666***  0.759***  1
5 Dislikes        0.363***  0.728***  0.652***  0.831***  1

**p ≤ .01. ***p ≤ .001. One-tailed
The danger index ranges from −2 for ‘very unproblematic’ to 4 for ‘very problematic’
The extreme value #1 from Fig. 1 was excluded prior to the analysis
Helpful depictions, on the other hand, are rare. Only 11% of the videos prompt
an automated trigger warning that is aimed at preventing these media depictions
from harming viewers. In addition, less than a quarter of YouTubers include a per-
sonal (oral or written) trigger warning that the following content could cause dis-
tress to vulnerable people and/or that viewers should not try the challenge them-
selves. Taken together, about 29% of all videos present some form of warning to
their viewers. Besides taking a video down or flagging it with a trigger warning, the
provision of counseling contacts (e.g., lifelines, specialized websites) can also help
to deal with potentially harmful media depictions. Nonetheless, this kind of support
is rather unpopular within the community of YouTubers, as only about 5% of the
Momo Challenge videos refer to further counseling.
Not surprisingly, live demonstrations tend to show the greatest extent of harmful
depictions and therefore can be considered as the most problematic of all genres.
Almost every live demonstration shows interactions with Momo accounts (96%) and
about half of them also feature contact details of these accounts (50%). Importantly,
not a single live demonstration offers counseling numbers for its viewers, which can
only be stumbled upon in information (8%) and reference videos (25%). As a conse-
quence, live demonstrations are most likely to be flagged as inappropriate by users
(24%). The creators of live demonstrations seem to be aware of this development,
causing about a quarter of them to give an (additional) oral/written trigger warning
(28%). However, live demonstrations are not the only place for YouTubers to verbal-
ize that Momo is an entity to be feared (50%); in fact, it is more likely done in art
productions (79%). In art productions, the protagonists’ inner monologs are often
presented to the viewer, giving direct insights into their emotions, i.e., fears.
If we combine these harmful and helpful depictions, we can see that Momo
Challenge videos score an average of 1.14 points on the danger index (SD = 1.09,
scale ranges from −2 to 4, Table3). As Fig.2 shows, videos that exclusively show
helpful depictions of the Momo Challenge (danger index of −2 or −1) are sparse
(4.8%), while more than two thirds of the videos include at least some form of harm-
ful depiction (danger index of 1 to 4). This result suggests that young people are
at some risk when they navigate Momo Challenge content on YouTube. However,
a quarter of these videos are safe (danger index of 0), some even helpful. From a
Table 2 Proportion of videos featuring harmful and helpful depictions of the Momo Challenge, by video genre (n = 104)

Genre               Interactions  Contact       Fear  Trigger warning (%)        Counseling
                    (%)           details (%)   (%)   Automated   By YouTuber    (%)
Live demonstration  96            50            50    25          25             0
Art production      58            6             79    6           24             0
Information video   28            28            39    8           14             8
Reference video     38            25            63    13          25             25
Overall             53            25            57    11          20             5

Coded as 1 for yes and 0 for no. The extreme value #1 from Fig. 1 was excluded prior to calculation
policy perspective, it would therefore be unjustified to place publishers of self-harm
or suicide game content under general suspicion.
Danger level andengagement rates (H1)
We assumed that more harmful depictions of the Momo Challenge will be positively
related to engagement rates, even when controlling for other content characteris-
tics such as genre (H1). From Table1, we can already deduce that all engagement
rates show a positive correlation with the danger level of a video. View count and
danger index share a rather weak relationship (ρ = 0.232, p < 0.01), but comments
(ρ = 0.373, p < 0.001), dislikes (ρ = 0.363, p < 0.001), and likes (ρ = 0.397, p < 0.001)
reach a weak to moderate association. To test H1, we ran Ordinary Least Square
(OLS) regressions that treated each engagement rate as separate dependent variable
and the danger index as the independent variable. As a control, the video genre was
introduced into the regression in the form of dummy variables. However, since we
manually coded the danger index based on the video content, we have to ensure that
our prediction is not limited by a high correlation between video genre and dan-
ger index, i.e., multicollinearity. The tolerance of all variables in the regression is
greater than 0.10 and the variance inflation factor (VIF) values are well below 10.0
(see Table4), so the collinearity diagnostics indicate no cause for concern (Pituch
Table 3 Average danger and engagement rates, by video genre (n = 104)

Genre               Danger index   Views                 Comments        Likes             Dislikes
                    M ± SD         M ± SD                M ± SD          M ± SD            M ± SD
Live demonstration  1.83 ± 1.007   546,513 ± 878,490     5037 ± 9265     25,053 ± 49,731   1571 ± 2965
Art production      1.24 ± 0.792   745,660 ± 955,917     3831 ± 8819     12,644 ± 24,843   877 ± 1326
Information video   0.69 ± 1.117   157,902 ± 411,520     395 ± 957       1371 ± 3857       183 ± 461
Reference video     0.88 ± 1.458   648,701 ± 1,127,778   6953 ± 13,169   14,524 ± 26,620   656 ± 1125
Overall             1.14 ± 1.092   471,899 ± 815,878     3076 ± 7767     13,485 ± 35,363   836 ± 1895

The danger index ranges from −2 (very unproblematic) to 4 (very problematic)
The extreme value #1 from Fig. 1 was excluded prior to calculation
Fig. 2 Number of videos related to the Momo Challenge, sorted by danger level ranging from −2 (very unproblematic) to 4 (very problematic)
and Stevens 2016, p. 77; Tabachnick and Fidell 2014, p. 124). The results of all
regressions are shown in Table 5.
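As a side note, tolerance and VIF are reciprocals (VIF_j = 1/(1 − R²_j) = 1/tolerance_j), so the VIF values in Table 4 can be recovered directly from the reported tolerances:

```python
# Tolerance and VIF are reciprocals: VIF_j = 1 / (1 - R^2_j) = 1 / tolerance_j.
tolerances = {
    "live_demonstration": 0.311,
    "art_production": 0.287,
    "information_video": 0.282,
    "danger_index": 0.839,
}
vifs = {name: 1 / tol for name, tol in tolerances.items()}
for name, vif in vifs.items():
    print(f"{name}: VIF = {vif:.3f}")
# Reproduces Table 4 up to rounding in the third decimal;
# all values are well below the conventional cutoff of 10.
```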
With an adjusted R2 of 0.085, a statistically significant regression equation
was found for the model comprising video views (F(4,101) = 3.317, p = 0.014).
Similarly, the regression equation for video comments was statistically sig-
nificant (F(4,101) = 5.260, p = 0.001) with an adjusted R2 of 0.146. A statisti-
cally significant regression equation was also found for likes (F(4,101) = 5.403,
p = 0.001) with an adjusted R2 of 0.150, and for dislikes (F(4,101) = 4.604,
p = 0.002) with an adjusted R2 of 0.126. For all of these models, the dan-
ger level proved to be a significant positive predictor with weak to moderate
strength (p < 0.01 and p < 0.001), even when controlling for video genre. The
only exception is the model comprising video views, where the danger index
is significant only at the 0.10 level. Although the danger
index is significantly correlated with views in the bivariate analysis (p < 0.01,
see Table1), this association decreases when controlling for other video charac-
teristics like genre. Why are views the only engagement rates that can no longer
be predicted at a 0.05 significance level when introducing controls to the anal-
ysis? Our best guess is that the differences between the engagement rates can
be explained by the fact that the decision to watch a video is made before the
content of the video—and thus its danger level—is even known. Most likely,
viewers can anticipate the danger level of a video only to a limited extent
when they decide whether to watch it. As the danger level of
the video becomes less important for the decision-making process, more mani-
fest content characteristics, such as the video genre, become influential and,
thus, disturb the relationship between danger index and views. In contrast, com-
ments, likes, and dislikes can only be published after the video has been clicked
and—at least in parts—viewed. Therefore, a verdict about the video’s danger
level has already been reached when a comment is posted or the like/dislike but-
ton is pressed. From this perspective, it seems logical that the danger index is
better suited as a predictor for comments, likes, and dislikes than for views.
Despite this exception for video views, a higher extent of harmful depictions of
the Momo Challenge predicts higher engagement rates, thus confirming H1. As
YouTube’s recommendation system rewards high engagement rates, there is an even
greater likelihood that such harmful depictions will later be offered to new viewers.
Consolidating this argument, Smith et al. (2018) were also able to find that about
Table 4 Multicollinearity analysis of independent variables (n = 104)

Measure               Tolerance   VIF
Controls
  Live demonstration  0.311       3.214
  Art production      0.287       3.480
  Information video   0.282       3.547
Danger index          0.839       1.192
Table 5 Regressions for danger predicting engagement (n = 101)

Views
                    B          SE B      β        t
Live demonstration  −232,531   331,183   −0.120   −0.702
Art production      46,985     312,711   0.027    0.150
Information video   −466,241   309,167   −0.272   −1.508
Danger index        136,010†   78,271    0.181    1.738
Constant            529,691    287,632            1.842
Adj. R² = 0.085; F(4,101) = 3.317*, p = 0.014

Comments
                    B          SE B      β        t
Live demonstration  −4289      3050      −0.233   −1.406
Art production      −4031      2880      −0.241   −1.400
Information video   −6111*     2847      −0.373   −2.146
Danger index        2475***    720       0.346    3.434
Constant            4786       2649               1.807
Adj. R² = 0.146; F(4,101) = 5.260***, p = 0.001

Likes
                    B          SE B      β        t
Live demonstration  1835       11,615    0.026    0.158
Art production      −5212      10,967    −0.081   −0.475
Information video   −11,515    10,843    −0.184   −1.061
Danger index        9071***    2745      0.332    3.304
Constant            6586       10,088             0.652
Adj. R² = 0.150; F(4,101) = 5.403***, p = 0.001

Dislikes
                    B          SE B      β        t
Live demonstration  484        682       0.119    0.711
Art production      56         644       0.015    0.087
Information video   −392       636       −0.109   −0.617
Danger index        448**      161       0.284    2.781
Constant            264        592               0.446
Adj. R² = 0.126; F(4,101) = 4.604**, p = 0.002

†p ≤ .10. *p ≤ .05. **p ≤ .01. ***p ≤ .001
Video genre was represented as four dummy variables with reference videos serving as the reference group
The danger index ranges from −2 for ‘very unproblematic’ to 4 for ‘very problematic’
The extreme value #1 from Fig. 1 as well as the three videos that were coded as ‘other’ were excluded prior to the analysis
60% of the YouTube users come across content promoting dangerous behavior
through the recommendation system at least sometimes.
Momo Challenge network visualization andcommunity identification (RQ2)
Up to this point, we analyzed the extent of harmful and helpful depictions of the
Momo Challenge that can be encountered when watching a single video. However,
this approach does not really capture the reality of YouTube use. In reality, users
move from video to video. Regarding the Momo Challenge, community analysis
helps to improve the understanding of what media diet and content preferences spe-
cific audiences expose themselves to. If these media diets show a great extent of
harmful depictions, their respective communities should be avoided. We asked what
Fig. 3 Visualization of the Momo Challenge on YouTube. Each node represents a video. Each edge represents a shared commenter between two videos. Node size represents the video’s number of views. Edge size represents the number of shared commenters. Color indicates community membership. Videos without a shared commenter were excluded from the visualization. Created in Gephi using Force Atlas 2
communities have formed around the Momo Challenge videos and what content
preferences they show (RQ2).
In Figs. 3 and 4, we visualized the Momo Challenge network with regard to
shared commenters, respective communities, and danger levels. Each tie between
two videos indicates content similarity and a greater likelihood for a video being
recommended after having watched the other. The graph shows that the Momo
Challenge network comes with seven communities, visualized by their correspond-
ing color.
1. The yellow community is the most dominant within the Momo Challenge net-
work, which means that the lion’s share of video content about the Momo Chal-
lenge comes from this community. It predominantly consists of live demonstration
content with high engagement rates (as indicated by the nodes’ size) and dense
Fig. 4 Second visualization of the Momo Challenge on YouTube. Color indicates danger level from −2 (very unproblematic) to 4 (very problematic)
connections. As we have stated before, live demonstrations show a greater extent
of harmful depictions. Thus, users who engage with videos from the yellow com-
munity increase the possibility of being (repeatedly) confronted with harmful
media depictions.
2. In contrast, viewers moving within the green community expose themselves
to more helpful depictions. This community is the second most dominant and
consists primarily of information videos (and some educational art productions).
They have a small audience and score low to medium on the danger index. Yet,
the green community provides a striking number of boundary-spanning connections
to other communities. On the one hand, this may indicate that users who watch
Momo-related content also want to inform themselves further about the challenge
and therefore seek out information videos. On the other hand, it may also be the
case that users who hear about the Momo Challenge first through information
videos are prone to checking out the original content.
3. The red community, which consists of video game material, is somewhat sepa-
rated from the rest of the network. Some of these videos show how gamers try to
establish contact with Momo accounts. A reference video of a gaming vlogger
reacting to Momo videos connects this community to the rest of the network.
Overall, the red community depicts Momo as a pop cultural character and not
as a potentially harmful challenge. Nevertheless, adolescents could get curious
about where the horror figurine originated from and search for more informa-
tion.
4. The other four communities, orange, cyan, blue, and gray, are small and not very
dominant within the network:
The orange community consists of Indian short films that are often flagged
with trigger warnings. Overall, they serve an educational purpose, but do
include troublesome media depictions of self-inflicted injuries that should
only be watched under supervision of an educator.
The cyan community is based on videos uploaded by a single YouTube chan-
nel, a gaming vlog. In the course of a month, the YouTuber had been playing
a Momo horror game, then went on to present several Momo chatbots and
apps, and finally uploaded an information video explaining that the Momo
Challenge has caused the death of a child, thereby warning his viewers to stay
away from the challenge. Generally, this video series is an accurate example
of how many YouTubers handled the Momo Challenge, treating it as a funny
dare at first and, after the release of media reports on teen deaths linked to the
challenge, warning their viewers.
The blue community consists of the most problematic videos within the
Momo Challenge network. They claim to show real WhatsApp chats with
Momo, often split into several sequels, which include disturbing pictures of
horror figures and self-harm. The titles of these videos indicate that they are
18+ content, information voluntarily provided by the authors. These videos
were the first to be removed by YouTube when the social networking service
took action against Momo-related content.
Finally, the gray community consists of isolates that are neither connected to
the rest of the network nor receive many views.
Obviously, the two most dominant communities on YouTube could hardly be
more incongruent. While the yellow community prefers harmful depictions and
live demonstrations of the Momo Challenge, the second most dominant community
(green) is interested in fact-based, less engaging, and also more helpful depictions.
This finding reinforces the impression that it would be unjustified to place publish-
ers of self-harm or suicide game content under general suspicion, but that a more
differentiated assessment is necessary before demonetizing or deleting content.
Discussion
As with many social media challenges before, the Momo Challenge was just a tem-
porary phenomenon. However, suicide games still expose adolescents to inappropri-
ate video content and self-harming ideation, scare youngsters that horror figures like
Momo could turn up at night and hurt them, and keep parents worried. Copycats
may spread telephone numbers and fake conversations to scare youngsters or steal
private information. In a confirmed case, a 14-year-old Romanian girl even faked
conversations with a Blue Whale curator while engaging in self-harming behavior.
She wanted to show the conversations to her classmates to attract attention
(Lupariello et al. 2019). If adolescents and parents are made to believe that suicide games
are real, they have real consequences. Thus, Khasawneh et al. (2019) have argued
“that it is urgent to monitor social media posts related to BWC [Blue Whale Chal-
lenge] and similar self-harm challenges (e.g., the Momo Challenge)” (p. 888).
Of course, the Momo Challenge is neither the first suicide game nor is it likely
to be the last on YouTube (or similar short video apps), “where challenge videos
have become a weird trope of doing the craziest thing to get views” (Mazhari 2018).
This trend is problematic because many of these YouTubers appeal to a young audi-
ence that does not understand that these challenges are made for views and likes
only (Mazhari 2018). In 2020, only two years after the peak of the Momo Chal-
lenge, another self-harm and another suicide game have already been launched:
The Skull Breaker Challenge and the Jonathan Galindo Challenge. While the Skull
Breaker Challenge can be considered the heir of the Tide Pod Challenge, the
Jonathan Galindo Challenge is the direct successor of the Momo Challenge. Both
the Skull Breaker and the Tide Pod Challenge are self-harm challenges in which
dangerous stunts are posted to look cool or funny, in this case when two children
kick the legs out from under a third to make him fall over, or when children eat
toxic detergent pods. The Jonathan Galindo Challenge is a suicide game that mimics all the
characteristics of the Momo Challenge, except that the name and profile pictures of
the predatory accounts have been exchanged. Now the Jonathan Galindo accounts
show an image of a man wearing a dog mask reminiscent of a twisted version of the
Disney character Goofy.
Comparing the Momo Challenge with its predecessor, the Blue Whale Chal-
lenge, and its successor, the Jonathan Galindo Challenge, the Momo Challenge can
be described as a turning point in how suicide games are organized. While the Blue
Whale Game targeted introverts, who had already developed depression or suicidal
tendencies (Lupariello et al. 2019; Mukhra et al. 2017), the Momo Challenge aimed
at no homogeneous audience. The participants of the Blue Whale Challenge
were recruited and encouraged to perform self-harming tasks
through social media messages from strangers (Lupariello et al. 2019; Mukhra et al.
2017). As confirmed cases show, some children even posted blue whales on their
social media accounts to attract the attention of unknown curators (Lupariello et al.
2019). A confirmed case of an Indian boy also involved a download link to a mobile
phone app that provided him with self-harming tasks and, thus, replaced the curator
for future interactions (Balhara et al. 2018). As hospital reports show, curiosity and
the desire to attract attention were the main drivers for young people to participate
in the Blue Whale Challenge, in addition to the prevalence of depression (Balhara
et al. 2018; Lupariello et al. 2019). Building on these motivations, the Momo Challenge
optimized the dissemination mechanism of suicide games. This new challenge used
snowball effects to gain reach, i.e., the adolescents no longer came into contact with
the challenge through strangers but through people they trusted, e.g., peers, close
friends, or influencers on YouTube. Through this type of distribution, even adoles-
cents who have never suffered from psychiatric disorders (e.g., depression) in the
past are confronted with self-harming ideation. This mechanism also allows suicide
games to spread virally, which makes them much more difficult to handle. After all,
reactions from social networking sites, educators, and parents only occur with delay.
For the same reason, the successor to the Momo Challenge, the Jonathan Galindo
Challenge, was able to spread virally again, although YouTube had already taken
action against its predecessor.
Implications forpractice
If these challenges do not disappear and only tend to reappear under a new name
and become viral again, how should we react in the future? When YouTube became
aware of the problem of the Momo Challenge in March 2019, the online video-
sharing platform decided to demonetize all videos on the subject (Alexander
2019). About two years after the data collection, 41% of the videos in our sample
were deleted and another 13% became access-restricted, which means that viewers
must give their consent to watch the possibly disturbing following content before
they can access the video. Live demonstrations were most affected. While 72% of
all live demonstration videos have disappeared from the platform and another 16%
got access-restricted, 70% of all information videos are still available without any
restriction. However, these videos were also demonetized, even information videos
from well-known news companies (Alexander 2019).
Overall, YouTube’s approach to restricting content is very much in line with our
findings. Remember that in our content analysis (n = 105) we found that about 30%
of the videos related to the Momo Challenge can be viewed without safety con-
cerns, and 5% of them even offer exclusively helpful depictions of the challenge
that are likely to trigger positive Papageno effects. By preferentially banning live
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
SN Soc Sci (2021) 1:86 Page 23 of 30 86
demonstrations from the platform, YouTube has focused its efforts on content that often presents harmful depictions of the challenge. We welcome the fact that YouTube has taken action against the Momo Challenge. Only two points of criticism
remain. The first one is that YouTube needed half a year before becoming aware
of the challenge and taking action. In order to prevent future suicide games from
becoming viral in the first place, faster action is required. Second, the decision to
demonetize all videos related to the Momo Challenge seems disproportionate. If the
content of suicide games is demonetized as a matter of principle, then opportunities to educate young people about suicide games and show them ways they can seek help are missed, because the creation of such content is no longer financially viable. YouTube’s machine learning algorithms, which are currently taking over the task of automatically identifying and demonetizing problematic content, are prone to ambiguous decisions and mistakes, a problem that has already been discussed by Kain (2017). For example, YouTubers have found creative loopholes to outsmart the machine learning algorithms in Jonathan Galindo Challenge videos. By paraphrasing suicide game names, e.g., as “the challenge for the whales who just so happen to be blue”, creators ensure that YouTube’s speech recognition cannot automatically demonetize the videos or even ban the entire channel.
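The evasion tactic described above can be illustrated with a minimal sketch of keyword-based flagging; the term list and transcripts below are invented for illustration and are not YouTube’s actual system:

```python
# Minimal keyword-based flagger; paraphrases slip through exact term matching.
FLAGGED_TERMS = {"blue whale challenge", "momo challenge", "jonathan galindo"}

def is_flagged(transcript: str) -> bool:
    """Return True if the transcript literally contains a known challenge name."""
    text = transcript.lower()
    return any(term in text for term in FLAGGED_TERMS)

direct = "Today we try the Momo Challenge and show you what happened."
paraphrase = "Today we try the challenge for the whales who just so happen to be blue."

print(is_flagged(direct))      # True: the literal name is present
print(is_flagged(paraphrase))  # False: the paraphrase evades the term list
```

A system relying only on such literal matches has to be extended (e.g., with semantic models or human review) every time creators coin a new euphemism, which is exactly the loophole described above.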
We do not have the ideal solution to this problem. However, videos could be
checked for specific features that indicate more harmful depictions of a suicide
game. In addition to depictions of violence or verbal cues like challenge names, it
could be examined whether these videos show or read out telephone numbers, provide chat transcripts, or show facial expressions that suggest fear. The
source/creator of the content and the provision of counseling contacts could also
be considered (though the latter should not be confused with telephone numbers of
predatory accounts), so that information videos are not demonetized just because
they feature the challenge. In the end, it is also in the interest of an online video-
sharing platform like YouTube to keep as many videos online as possible and to
monetize them. Similarly, parents should not panic when they discover that their
children have been exposed to suicide game content. Instead, they should engage
with their children and try to fathom what they saw and how it was presented. The
knowledge that not every social media content about suicide games has to be harm-
ful enables parents to have open and informed discussions with their children and
provide appropriate support based on the specific content.
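The screening idea proposed in this section can be expressed as a simple rule-based score; the feature names, weights, and thresholds below are our own illustrative assumptions, not a deployed system:

```python
# Hypothetical rule-based screen combining the features discussed above.
def harm_score(video: dict) -> int:
    """Sum harmful indicators; counseling contacts count as helpful."""
    score = 0
    if video.get("shows_violence"):                score += 2
    if video.get("shows_phone_numbers"):           score += 2  # predatory contact details
    if video.get("provides_chat_transcripts"):     score += 1
    if video.get("fearful_facial_expressions"):    score += 1
    if video.get("provides_counseling_contacts"):  score -= 2  # helpful, not predatory
    return score

def moderation_action(video: dict) -> str:
    score = harm_score(video)
    if score >= 3:
        return "restrict"        # e.g., age gate or manual review
    if score >= 1:
        return "demonetize"
    return "keep_monetized"      # information videos stay financially viable

info_video = {"provides_counseling_contacts": True}
live_demo = {"shows_violence": True, "shows_phone_numbers": True,
             "fearful_facial_expressions": True}

print(moderation_action(info_video))  # keep_monetized
print(moderation_action(live_demo))   # restrict
```

The point of the sketch is the asymmetry it encodes: information videos offering counseling contacts stay monetized, while live demonstrations with predatory contact details are restricted, instead of treating all challenge-related content identically.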
Implications forliterature andtheory
On YouTube, over 200 videos dealt with the Momo Challenge around the turn of 2018/2019. As is typical for the platform, a few of these videos received the lion’s share of the views and comments and were thus further promoted. As we have shown, the videos under study contain depictions that are suspected of eliciting harmful effects. In fact, more harmful depictions of the suicide game are significantly correlated with the videos’ engagement rates. In addition, most of the videos demonstrate
interactions with alleged Momo accounts and even pass on contact details. By not
only providing contact data, but also showing how role models establish contact,
these videos might dare teens to imitate their role models and test their own courage. It is also regrettable that only 29% of all surveyed videos have implemented some kind of trigger warning and less than 5% refer to counseling contacts, two
characteristics often theorized to be associated with helpful Papageno effects. Thus,
viewers may not only be unprepared for what content will follow but are also not
provided with references on where to get support when feeling distressed.
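The association reported above is a Spearman rank correlation. As a sketch of the computation (with invented data points, not our coded sample), rho is simply the Pearson correlation of the rank-transformed values:

```python
# Sketch of the Spearman rank correlation relating a video's coded level of
# harmful depiction to an engagement metric; the data points are invented.

def ranks(values):
    """Rank values from 1..n (assumes no ties in this illustration)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank-transformed data."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

harm_level = [0, 1, 2, 3, 4, 5]     # coded extent of harmful depiction
likes      = [3, 10, 8, 25, 30, 70]  # invented engagement counts
print(round(spearman_rho(harm_level, likes), 3))  # 0.943
```

Because only the ranks enter the calculation, the coefficient is robust to the heavy-tailed engagement distributions typical of YouTube, which is why a rank-based measure suits this kind of data. A production analysis would additionally handle tied ranks.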
Besides these empirical findings, this paper also provides a theoretical contribu-
tion to the research on suicide games. The W.H.O. has already released guidelines
on how to report on suicide both for non-fictional media depictions (W.H.O. 2008)
and for fictional media depictions (W.H.O. 2019). Similar guidelines for social
media environments were released by SPRC (2014). By combining the literature on
Werther and Papageno effects with these guidelines (RRS), we derive an analyti-
cal scheme to assess the extent to which social media portray harmful and helpful
depictions of sensitive topics like suicidal ideation and suicide games. In this study,
we used this scheme to analyze YouTube content, but it could also be used to ana-
lyze various social media content. To date, it still poses a challenge to identify con-
tent on social media dealing with self-harm and/or suicide (George 2019). This is
also evident in the context of suicide games, where short, explorative reports have
only just begun to assess the dissemination of related social media content (Kha-
sawneh etal. 2019; Sumner etal. 2019). We underpin these analyses theoretically by
drawing assumptions about the consequences of these depictions based on the literature on Werther and Papageno effects. We are aware that our scheme merely allows identifying content characteristics that are suspected of causing harmful or helpful effects; we have no proof that these effects will actually be evoked. Nevertheless,
all of our assumptions rest on well-established theory and empirical evidence.
Limitations
In this study, we had to rely on the YouTube API. With few studies analyzing the
inner workings of such APIs (e.g., Diakopoulos 2014; Driscoll and Walker 2014; González-Bailón et al. 2012; Rieder et al. 2018), little is known about the bias with which the YouTube algorithm produces search results. At the very least, Rieder et al. (2018) showed that the ordering of search results is quite stable over time. It is assumed that these natural search results are fairly well reflected by crawls with the API, with only small variations (Driscoll and Walker 2014; Rieder et al. 2018). These variations will most likely manifest themselves in a skew toward central, densely connected network hubs for search results, while data crawls will include peripheral users more accurately (González-Bailón et al. 2012). In the light of these
uncertainties, we decided to complement our crawl data with data from a manual
search. This proved to be not only worthwhile, but necessary, as we were able to
identify 98 videos that the crawl missed, which is a much greater deviation than
anticipated. Through this two-step procedure, we created a data set that reflects the
video selection viewed by users on YouTube at the time of data collection (turn of
2018/2019) as adequately as possible. During the course of our analysis, numerous
videos have been provided with trigger warnings, deleted, or newly uploaded. As a
result, this analysis represents a snapshot of how the network was laid out at the turn
of 2018/2019.
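The two-step sampling procedure described above amounts to a union of the API crawl and the manual search, deduplicated by video ID; the IDs below are placeholders, not real videos:

```python
# Sketch of combining the API crawl with the manual search (placeholder IDs).
crawl_ids = {"vidA", "vidB", "vidC"}           # returned by the YouTube API crawl
manual_ids = {"vidB", "vidC", "vidD", "vidE"}  # found by the manual search

combined_sample = crawl_ids | manual_ids       # final, deduplicated sample
missed_by_crawl = manual_ids - crawl_ids       # videos only the manual search found

print(len(combined_sample))     # 5 videos in total
print(sorted(missed_by_crawl))  # ['vidD', 'vidE']
```

In our case, this second step surfaced 98 videos the crawl had missed, which is why the set difference, not just the union size, is worth inspecting when validating an API-based sample.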
Another limitation of our study is that while we can demonstrate the effect of a
video’s danger level on its engagement rates, we cannot provide any insight into how
these effects come about. Adolescents may process the depiction of harmful content
in very different ways. While some may express their thoughts in comments, others
may dislike the video to express themselves. And a third group might do nothing
or even like the video. Our study cannot shed light on the emotional and cognitive
processes that lead to active engagement with content, nor can it explain why ado-
lescents turn to a particular form of engagement. As is so often the case in explora-
tory research, this study merely contributes to the literature by providing evidence of
an association between harmful depictions of a suicide game and engagement rates.
Future research might address how and when this association occurs, leading to a
deeper understanding of the mechanisms by which the effect operates.
As a final limitation, it should be noted that we cannot provide definitive proof
that the audience of Momo Challenge videos is young. However, our proxy measure
and a look at the platform’s overall demographic structure and media usage trends of
teenagers (e.g., Anderson and Jiang 2018) have given us reason to believe that we are
indeed investigating a young audience.
Future prospects
The present study relies on basic media effects approaches (Werther effect and
Papageno effect). Actual effect sizes are hard to measure in this context since they
are usually traced by intra-extra media analyses. Therefore, future research could
focus on surveys, experiments, or qualitative interviews asking how participants assess the impact of harmful media content and how they feel when engaging
with it. What consequences does (prolonged) exposure have? What coping strate-
gies do young viewers develop? It is important that ethical aspects are taken into
account when conducting these studies and that the approval of an ethics committee
is obtained in advance.
Future studies might also address the responsibility of social media influencers.
What is their self-image and their perception of professional responsibility? How
can they be effectively incentivized for not further promoting and popularizing
highly problematic media depictions of sensitive topics? Researchers might also
take on the challenge of seeking out alternative solutions, such as creating awareness
material and guidelines for the content creators of online video-sharing platforms on
how to produce more favorable media depictions of sensitive content.
Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/s43545-021-00065-1.
Funding Open Access funding provided by University of Zurich.
Data availability The data that support the findings of this study are openly available in the Harvard Dataverse at https://doi.org/10.7910/DVN/6ESJEF, reference number UNF:6:E6MifV5V4sBd8xqWyzBFww== [fileUNF].
Compliance with ethical standards
Conflict of interest This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. We declare that we have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
Adeane A (2019) Blue Whale: What is the truth behind an online ’suicide challenge’? Bbc.Com. https://www.bbc.com/news/blogs-trending-46505722
Alexander J (2019) YouTube is demonetizing all videos about Momo. Theverge.Com. https://www.theverge.com/2019/3/1/18244890/momo-youtube-news-hoax-demonetization-comments-kids
Alhabash S, Baek J, Cunningham C, Hagerstrom A (2015) To comment or not to comment? How virality, arousal level, and commenting behavior on YouTube videos affect civic behavioral intentions. Comput Hum Behav 51:520–531. https://doi.org/10.1016/j.chb.2015.05.036
Alluri A (2017) Why is Blue Whale hysteria gripping India? Bbc.Com. https://www.bbc.com/news/world-asia-india-40960593
Anderson M, Jiang J (2018) Teens, social media & technology. https://www.pewinternet.org/2018/05/31/teens-social-media-technology-2018/
Arendt F (2019) Suicide on Instagram—content analysis of a German suicide-related hashtag. Crisis 40(1):36–41. https://doi.org/10.1027/0227-5910/a000529
Arendt F, Scherr S (2017) The impact of a highly publicized celebrity suicide on suicide-related online information seeking. Crisis 38(3):207–209. https://doi.org/10.1027/0227-5910/a000455
Arendt F, Till B, Niederkrotenthaler T (2016) Effects of suicide awareness material on implicit suicide cognition: a laboratory experiment. Health Commun 31(6):718–726. https://doi.org/10.1080/10410236.2014.993495
Balhara YPS, Bhargava R, Pakhre A, Bhati N (2018) The Blue Whale Challenge: the first report on a consultation from a health care setting for carrying out tasks accessed through a mobile phone application. Asia-Pacific Psychiatry 10(3):12317
Bastone N (2019) YouTube criticized for recommending self-harm videos with graphic images. Businessinsider.Com. https://www.businessinsider.com/youtube-criticized-for-recommending-self-harm-videos-in-searches-2019-2?r=US&IR=T
Batimes.com.ar (2018) Police suspect 12-year-old girl’s suicide linked to WhatsApp terror game Momo. Batimes.Com.Ar. http://www.batimes.com.ar/news/argentina/police-suspect-12-year-old-girls-suicide-linked-to-whatsapp-terror-game-momo.phtml
Beam RA, John SL, Yaqub MM (2018) We don’t cover suicide … (except when we do cover suicide). Journal Stud 19(10):1447–1465. https://doi.org/10.1080/1461670X.2017.1279563
Bednarek M (2016) Investigating evaluation and news values in news items that are shared through social media. Corpora 11(2):227–257
Blondel VD, Guillaume J-L, Lambiotte R, Lefebvre E (2008) Fast unfolding of communities in large networks. J Stat Mech: Theory Exp P10008(10):1–12
Borghol Y, Ardon S, Carlsson N, Eager D, Mahanti A (2013) The untold story of the clones: content-agnostic factors that impact YouTube video popularity. In: Proceedings of the 18th ACM SIGKDD international conference on knowledge discovery and data mining, Beijing
Buelens T, Luyckx K, Kiekens G, Gandhi A, Muehlenkamp J, Claes L (2019) Investigating the DSM-5 criteria for non-suicidal self-injury disorder in a community sample of adolescents. J Affect Disorders. https://doi.org/10.1016/j.jad.2019.09.009
Chanda A (2018) Momo Challenge spooks Bengal cops. Newindianexpress.Com. http://www.newindianexpress.com/thesundaystandard/2018/sep/02/momo-challenge-spooks-bengal-cops-1866273.html
Cooper L (2019) Children targeted by terrifying viral Momo images in Peppa Pig and Fortnite videos. 9news.Com.Au. https://www.9news.com.au/2019/02/28/12/14/momo-game-online-children-targeted-peppa-pig-fortnite-youtube-videos-viral-threats-violence-safety-warnings
Davidson T (2018) Momo suicide challenge: deaths of boy, 16, and girl, 12, linked to sick WhatsApp game. Mirror.Co.Uk. https://www.mirror.co.uk/news/world-news/momo-suicide-challenge-deaths-boy-13185367
Diakopoulos N (2014) Algorithmic accountability reporting: on the investigation of black boxes. Tow Center for Digital Journalism, New York. https://doi.org/10.7916/D8ZK5TW2
Driscoll K, Walker S (2014) Working within a black box: transparency in the collection and production of big Twitter data. Int J Commun 8:1745–1764
Ernst M, Kallenbach-Kaminski L, Kaufhold J, Negele A, Bahrke U, Hautzinger M, Leuzinger-Bohleber M (2019) Suicide attempts in chronically depressed individuals: what are the risk factors? Psychiatry Res. https://doi.org/10.1016/j.psychres.2019.112481
Evon D (2017) Is the ‘Blue Whale’ game responsible for dozens of suicides in Russia? Snopes.Com. https://www.snopes.com/fact-check/blue-whale-game-suicides-russia/
Farokhmanesh M (2018) YouTube is the preferred platform of today’s teens. Theverge.Com. https://www.theverge.com/2018/5/31/17382058/youtube-teens-preferred-platform
Ferber R (2018) Teenage suicide: the first victim of the Momo Challenge in France? Today.Rtl.Lu. https://today.rtl.lu/news/world/1255399.html
Frei A, Schenker T, Finzen A, Dittmann V, Kraeuchi K, Hoffmann-Richter U (2003) The Werther effect and assisted suicide. Suicide Life-Threat Behav 33(2):192–200
Gartland F (2017) No proven link between Blue Whale game and suicides, says expert. Irishtimes.Com. https://www.irishtimes.com/news/social-affairs/no-proven-link-between-blue-whale-game-and-suicides-says-expert-1.3084251
George M (2019) The importance of social media content for teens’ risks for self-harm. J Adolesc Health 65(1):9–10. https://doi.org/10.1016/j.jadohealth.2019.04.022
González-Bailón S, Wang N, Rivero A, Borge-Holthoefer J, Moreno Y (2012) Assessing the bias in communication networks sampled from Twitter. SSRN Electr J. https://arxiv.org/ftp/arxiv/papers/1212/1212.1684.pdf
Gould M, Jamieson P, Romer D (2003) Media contagion and suicide among the young. Am Behav Sci 46(9):1269–1284
Greer CRH (2007) News media, victims and crime. In: Davies P, Francis P, Greer CRH (eds) Victims, crime and society. SAGE Publications, London, pp 21–49
Hagerstrom A, Alhabash S, Kononova A (2014) Emotional dimensionality and online ad virality: investigating the effects of affective valence and content arousingness on processing and effectiveness of viral ads. In: Proceedings of the 2014 conference of the American Academy of Advertising
Hansen D, Shneiderman B, Smith MA (2011) Analyzing social media networks with NodeXL: insights from a connected world. MK, Burlington
Hoffner C, Buchanan M (2005) Young adults’ wishful identification with television characters: the role of perceived similarity and character attributes. Media Psychol 7(4):325–351
John A, Hawton K, Gunnell D, Lloyd K, Scourfield J, Jones PA, Dennis MS (2017) Newspaper reporting on a cluster of suicides in the UK. Crisis 38(1):17–25. https://doi.org/10.1027/0227-5910/a000410
Johnston AC, Warkentin M (2010) Fear appeals and information security behaviors: an empirical study. MIS Q 34(3):549. https://doi.org/10.2307/25750691
Jonas K, Brömer P (2002) Die sozial-kognitive Theorie von Bandura [The social-cognitive theory by Bandura]. In: Frey D, Irle M (eds) Theorien der Sozialpsychologie: Band 2: Gruppen-, Interaktions- und Lerntheorien, vol 2. Huber, Bern, pp 277–299
Kain E (2017) YouTube wants content creators to appeal demonetization, but it’s not always that easy. Forbes.Com. https://www.forbes.com/sites/erikkain/2017/09/18/adpocalypse-2017-heres-what-you-need-to-know-about-youtubes-demonetization-troubles/
Khan ML (2017) Social media engagement: what motivates user participation and consumption on YouTube? Comput Hum Behav 66:236–247. https://doi.org/10.1016/j.chb.2016.09.024
Khasawneh A, Chalil Madathil K, Dixon E, Wisniewski P, Zinzow H, Roth R (2019) An investigation on the portrayal of Blue Whale Challenge on YouTube and Twitter. Proc Human Factors Ergon Soc Ann Meet 63(1):887–888. https://doi.org/10.1177/1071181319631179
Kitching C (2018) Momo suicide challenge: teen’s death linked to sick game that orders players to do dangerous tasks. Mirror.Co.Uk. https://www.mirror.co.uk/news/world-news/momo-suicide-challenge-teens-death-13136275
Ladhari R, Massa E, Skandrani H (2020) YouTube vloggers’ popularity and influence: the roles of homophily, emotional attachment, and expertise. J Retail Consum Serv 54:1–11. https://doi.org/10.1016/j.jretconser.2019.102027
Lewis SP, Heath NL, St Denis JM, Noble R (2011) The scope of nonsuicidal self-injury on YouTube. Pediatrics 127(3):e552–e557. https://doi.org/10.1542/peds.2010-2317
Lupariello F, Curti SM, Coppo E, Racalbuto SS, Di Vella G (2019) Self-harm risk among adolescents and the phenomenon of the “Blue Whale Challenge”: case series and review of the literature. J Forensic Sci 64(2):638–642. https://doi.org/10.1111/1556-4029.13880
Luxton DD, June JD, Fairall JM (2012) Social media and suicide: a public health perspective. Am J Public Health 102(S2):195–200. https://doi.org/10.2105/AJPH.2011.300608
Maheshwari S (2017) On YouTube Kids, startling videos slip past filters. Nytimes.Com. https://www.nytimes.com/2017/11/04/business/media/youtube-kids-paw-patrol.html?_r=0
Mann JJ, Apter A, Bertolote J, Beautrais A, Currier D, Haas A, Hendin H (2005) Suicide prevention strategies: a systematic review. JAMA 294(16):2064–2074. https://doi.org/10.1001/jama.294.16.2064
Markiewitz A, Arendt F, Scherr S (2020a) #suizid: Zur Darstellung von Suizid in sozialen Netzwerken und den möglichen Auswirkungen auf Jugendliche [#suicide: on the depiction of suicide in social network sites and the possible consequences on children and adolescents]. Kinder- und Jugendschutz in Wissenschaft und Praxis 65(1):19–25
Markiewitz A, Arendt F, Scherr S (2020b) Increasing adherence to media guidelines on responsible reporting on suicide: suggestions from qualitative interviews with German journalists. Journalism Studies 21(4):494–511. https://doi.org/10.1080/1461670X.2019.1686412
Mazhari E (2018) The strange story of how Tide Pod eating went viral. Forbes.Com. https://www.forbes.com/sites/quora/2018/02/05/the-strange-story-of-how-tide-pod-eating-went-viral/
Miller AB, Prinstein MJ (2019) Adolescent suicide as a failure of acute stress-response systems. Ann Rev Clin Psychol 15:425–450. https://doi.org/10.1146/annurev-clinpsy-050718-095625
Mueller AS (2017) Does the media matter to suicide? Examining the social dynamics surrounding media reporting on suicide in a suicide-prone community. Soc Sci Med 180:152–159. https://doi.org/10.1016/j.socscimed.2017.03.019
Mukhra R, Baryah N, Krishan K, Kanchan T (2017) Blue Whale Challenge: a game or crime? Sci Eng Ethics 25:285
Needham L (2019) Momo Challenge: Kim Kardashian pleads with YouTube to help end the craze. Mirror.Co.Uk. https://www.mirror.co.uk/3am/celebrity-news/momo-challenge-kim-kardashian-pleads-14063549
Newindianexpress.com (2018) Man gets death threat for refusing to play Momo Challenge in West Bengal. Newindianexpress.Com. http://www.newindianexpress.com/nation/2018/aug/28/man-gets-death-threat-for-refusing-to-play-momo-challenge-in-west-bengal-1864105.html
Niederkrotenthaler T, Voracek M, Herberth A, Till B, Strauss M, Etzersdorfer E et al (2010) Role of media reports in completed and prevented suicide: Werther v. Papageno effects. Br J Psychiatry 197(3):234–243. https://doi.org/10.1192/bjp.bp.109.074633
Niederkrotenthaler T, Arendt F, Till B (2015) Predicting intentions to read suicide awareness stories: the role of depression and characteristics of the suicidal role model. Crisis 36(6):399–406. https://doi.org/10.1027/0227-5910/a000344
O’Keeffe GS, Clarke-Pearson K (2011) The impact of social media on children, adolescents, and families. Pediatrics 127(4):800–804. https://doi.org/10.1542/peds.2011-0054
Paolillo JC (2008) Structure and network in the YouTube core. In: Proceedings of the 41st annual Hawaii international conference on system sciences (HICSS), Washington, DC
Phillips DP (1974) The influence of suggestion on suicide: substantive and theoretical implications of the Werther effect. Am Sociol Rev 39(3):340–354. https://doi.org/10.2307/2094294
Pirkis J, Blood RW, Beautrais A, Burgess P, Skehans J (2006) Media guidelines on the reporting of suicide. Crisis 27(2):82–87. https://doi.org/10.1027/0227-5910.27.2.82
Pituch KA, Stevens JP (2016) Applied multivariate statistics for the social sciences: analyses with SAS and IBM’s SPSS. Routledge Taylor and Francis Group, New York
Qustodio (2020) Connected more than ever. Apps and digital natives: the new normal. https://qweb.cdn.prismic.io/qweb/e59c2e0f-ef4f-4598-b330-10c430e2ec71_Qustodio+2020+Annual+Report+on+Children%27s+Digital+Habits.pdf
Rieder B, Matamoros-Fernández A, Coromina Ò (2018) From ranking algorithms to ‘ranking cultures’: investigating the modulation of visibility in YouTube search results. Convergence 24(1):50–68. https://doi.org/10.1177/1354856517736982
Schäfer M, Quiring O (2015) The press coverage of celebrity suicide and the development of suicide frequencies in Germany. Health Commun 30(11):1149–1158. https://doi.org/10.1080/10410236.2014.923273
Schäfer R, Althaus D, Brosius H-B, Hegerl U (2006) Suizidberichte in Nürnberger Printmedien: Häufigkeit und Form der Berichterstattung vor und nach der Implementierung eines Medienguides [Media coverage on suicide in Nuremberg’s daily papers: frequency and form of the reporting before and during media intervention with guidelines]. Psychiatrische Praxis 33(3):132–137. https://doi.org/10.1055/s-2005-915474
Scherr S (2016) Depression – Medien – Suizid [Depression, media, suicide]. Springer Fachmedien Wiesbaden, Wiesbaden. https://doi.org/10.1007/978-3-658-11162-5
Schimmack U, Derryberry D (2005) Attentional interference effects of emotional pictures: threat, negativity, or arousal? Emotion 5(1):55–66. https://doi.org/10.1037/1528-3542.5.1.55
Schneider M (2018) 13-year-old dies after days in coma. Today.Rtl.Lu. https://today.rtl.lu/news/world/1262948.html
Smith MA, Ceni A, Milic-Frayling N, Shneiderman B, Mendes Rodrigues E, Leskovec J, Dunne C (2010) NodeXL: a free and open network overview, discovery and exploration add-in for Excel 2007/2010/2013/2016. https://www.smrfoundation.org
Smith A, Toor S, van Kessel P (2018) Many turn to YouTube for children’s content, news, how-to lessons. http://www.pewinternet.org/2018/11/07/many-turn-to-youtube-for-childrens-content-news-how-to-lessons/
Stack S (2000) Media impacts on suicide: a quantitative review of 293 findings. Social Sci Q 81(4):957–971
Sumner SA, Galik S, Mathieu J, Ward M, Kiley T, Bartholow B, Mork P (2019) Temporal and geographic patterns of social media posts about an emerging suicide game. J Adolesc Health 65(1):94–100
Tabachnick BG, Fidell LS (2014) Using multivariate statistics. Pearson Education, Harlow, Essex
Takeuchi S, Mochizuki Y, Masaki H, Takasawa N, Yamazaki K (2005) Stimulus preceding negativity represents arousal induced by affective picture. Int Congr Ser 1278:385–388. https://doi.org/10.1016/j.ics.2004.11.135
Tarlton A (2019) Kim Kardashian publicly pleads for YouTube to address videos targeting kids. It’s instructing kids to kill themselves. Fatherly.Com. https://www.fatherly.com/news/kim-kardashian-youtube-momo-challenge-kids-videos/
Tellis GJ, MacInnis DJ, Tirunillai S, Zhang Y (2019) What drives virality (sharing) of online digital content? The critical role of information, emotion, and brand prominence. J Market 83(4):1–20. https://doi.org/10.1177/0022242919841034
Térrasse M, Gorin M, Sisti D (2019) Social media, e-health, and medical ethics. Hastings Center Report 49(1):24–33. https://doi.org/10.1002/hast.975
The Suicide Prevention Resource Center (2014) Social media guidelines for mental health promotion and suicide prevention. http://www.sprc.org/resources-programs/social-media-guidelines-mental-health-promotion-and-suicide-prevention
Theweek.co.uk (2018) What is the Momo suicide challenge? Police forces warn children and parents about the popular online game. https://www.theweek.co.uk/96248/what-is-the-momo-suicide-challenge-and-is-it-dangerous
Till B, Niederkrotenthaler T, Herberth A, Vitouch P, Sonneck G (2010) Suicide in films: the impact of suicide portrayals on nonsuicidal viewers’ well-being and the effectiveness of censorship. Suicide Life-Threat Behav 40(4):319–327. https://doi.org/10.1521/suli.2010.40.4.319
Timm-Garcia J, Hartung K (2017) Family finds clues to teen’s suicide in Blue Whale paintings. Cnn.Com. https://edition.cnn.com/2017/07/17/health/blue-whale-suicide-game/index.html
Twenge JM, Joiner TE, Rogers ML, Martin GN (2018) Increases in depressive symptoms, suicide-related outcomes, and suicide rates among U.S. adolescents after 2010 and links to increased new media screen time. Clin Psychol Sci 6(1):3–17. https://doi.org/10.1177/2167702617723376
Uvais NA (2019) Obsessive-compulsive disorder with suicide obsessions triggered by news reports about the Blue Whale game. Adolesc Psychiatry 8(3):241–244. https://doi.org/10.2174/2210676608666180820152633
Valkenburg PM, Peter J (2013) The differential susceptibility to media effects model. J Commun 63(2):221–243. https://doi.org/10.1111/jcom.12024
Wasserman D, Wasserman C (2009) Oxford textbook of suicidology and suicide prevention. Oxford University Press, Oxford
Waterson J (2019) Viral Momo Challenge is a malicious hoax, say charities. Theguardian.Com. https://amp.theguardian.com/technology/2019/feb/28/viral-momo-challenge-is-a-malicious-hoax-say-charities
Webb S (2018a) What is Momo? Parents warned over sick WhatsApp suicide game that could be next Blue Whale. Mirror.Co.Uk. https://www.mirror.co.uk/news/world-news/what-momo-parents-warned-over-13018367
Webb S (2018b) Momo horror: teen self-harms after family is threatened through sick social media suicide game. Foxnews.Com. https://www.foxnews.com/tech/momo-horror-teen-self-harms-after-family-is-threatened-through-sick-social-media-suicide-game
Welbourne DJ, Grant WJ (2016) Science communication on YouTube: factors that affect channel and video popularity. Public Understand Sci 25(6):706–718
World Health Organization (2008) Preventing suicide: a resource for media professionals. http://www.who.int/mental_health/prevention/suicide/resource_media.pdf
World Health Organization (2017) World health statistics 2017: monitoring health for the SDGs
World Health Organization (2019) Preventing suicide: a resource for filmmakers and others working on stage and screen. https://www.who.int/publications-detail/preventing-suicide-a-resource-for-filmmakers-and-others-working-on-stage-and-screen
Yaqub MM, Beam RA, John SL (2017) We report the world as it is, not as we want it to be: journalists’ negotiation of professional practices and responsibilities when reporting on suicide. Journalism 32(3):1–17. https://doi.org/10.1177/1464884917731957
Yoon S, Kleinman M, Mertz J, Brannick M (2019) Is social network site usage related to depression? A meta-analysis of Facebook-depression relations. J Affect Disorders. https://doi.org/10.1016/j.jad.2019.01.026
YouTube (2019) We want to clear something up regarding the Momo Challenge. https://twitter.com/YouTube/status/1100820993671495680