Meaningful Noise: Understanding Sound Effects in Computer Games
Inger Ekman
Hypermedia Laboratory, University of Tampere, Finland
inger.ekman@uta.fi
ABSTRACT
The various roles of sound in computer games are not yet well
understood, and sound remains an underused artistic potential in
many games. This study presents a framework for understanding
game sounds to remedy the situation. The framework is based on
examining sound effects and distinguishing between diegetic and
non-diegetic sounds as signals and referents. In games, diegetic is
something that belongs to the game world, whereas non-diegetic
is something from outside the game’s fictive environment. The
framework provides a tool for classifying game sounds, but may
also be used as a design tool.
Keywords
Sound effects; sound design; computer games.
1. INTRODUCTION
Computer games are often thought of as a principally visual
medium. Sound, on the other hand, is regularly given only
minimal attention compared to other forms of content. Moreover,
the role of sound is often that of being a mere decoration, and
sound is seldom used as an element relevant for playing. In fact,
many games are even fully playable with the sounds turned off.
Compared to the extensive use of visual information, sound
remains an underused potential.
There has also been remarkably little discussion on game audio
design and game sound research. Consequently, the function of
sound in computer games is still not well understood. This can
be seen as a distinct gap in the game development literature: to
date there are only a few books dealing specifically with game
audio [11, 14] and even these focus not so much on design as on
sound and music production, tools and career strategies.
The purpose of this paper is to investigate the role of sound
effects in computer games and provide a framework that
highlights the different signifying functions handled by sounds.
The investigation will revolve strongly around the notion of
diegetic sound. In film studies, the term diegetic refers to
something that belongs to the story world. Diegetic sound, thus,
is a sound that belongs to the diegesis, the fictive world [2]. In
computer games, a sound can similarly be thought of as diegetic
if it is interpreted as being real in the game world. However,
computer game sound is seldom purely diegetic. In this
paper, I will demonstrate how analyzing sounds in relation to the
game's diegesis can help create an understanding of the different
roles that sound effects can have in a game.
The structure of this paper is as follows: First is a section on
related work in the area of game sound, sound design and game
sound typologies. Based on the notion of diegetic sound, I will
then construct a framework that displays the various
relationships possible between sound signals and referents.
Finally, I conclude with a discussion on using the framework in
design and discuss its impact with regard to game sound theory.
2. RELATED RESEARCH
To date, there has been remarkably little research on game audio.
Studying sound in games has mostly been motivated by game
development for visually impaired players; see for example [4, 5,
16]. These studies investigate the usability of sound for playing,
but primarily focus on sound-only games. Even if some studies
[8, 16] also consider the use of sound as an alternative form of
interaction for sighted players, they do not consider game sound
in combination with the visual components of gaming. Nor do
these studies discuss why and by what means sound is understood
in games, or what meaning sound is given in different contexts.
Consequently, sound design is mostly viewed as an accessibility
issue - a view also reflected in the IGDA accessibility white
paper [9] - rather than an aesthetic realm of its own.
In addition to academic papers, there also exists a handful of
essays about interactive sound, written by composers and game
sound designers. However, these papers are for the most part
concerned with interactive music composition [7, 12, 17], and
only briefly touch on the issue of sound effect design. Prince [13]
also discusses sound effects, but mainly in relation to sound
manipulation techniques. Bernstein’s [1] classification of sounds
according to their function in games is a rare exception, and I
will return to it in the next subsection.
2.1 The Meaning of Sound in Games
There have been few attempts at systematically describing the
roles sound can take in a game, and some of these have taken
place outside academic publication forums. In his blog on audio
gaming, Folmann [6] presents a game sound typology. Due to the
writer’s focus on sound production, however, the typology does
not serve to categorize sounds according to how they are
interpreted in a game.
Stockburger [15] provides a more detailed categorization of game
sounds. Nevertheless, apart from the interesting analysis of the
spatial information functions of sound, this typology also remains
focused on technical issues related to audio production rather
than on how players understand sounds.
A more suitable distinction is the one by game sound designer
and composer Bernstein [1], which distinguishes three types of
game sound. First, sounds can directly signify the event that is
causing them, as when a ball hits the ground with a thwack.
Second, sounds can be indirect, in which case a certain sound
signifies an event in the game without being caused by it. Third,
sounds can be environmental, in which case they serve to convey a
sense of game world presence. This categorization is closer to the
player’s perspective, since it attempts to define sounds in terms
of how they relate to a player’s actions and game-world events.
The categorization is not intended as a full typology, but it will
serve as a starting point for the framework in this paper.
3. CONSTRUCTING A FRAMEWORK
FOR UNDERSTANDING SOUND
From a semiotic viewpoint, the meaning of signs can be
examined by looking at two aspects: the sign, that is, the sound
signal, and the thing the signal points to, the signified event. In
game sound, the signal is the sound itself, as heard by the player,
whereas the referent is the thing being told by the sound.
As Bernstein's [1] classification suggests, one way of analyzing
the various types of game sounds is to look at what kinds of
events sounds signify, or simply put, what information the
sounds give to the players. We will therefore proceed with a
closer look at the sound signals, and, on the other hand, the
signified game events.
The means for doing this will be the notion of diegesis
mentioned earlier. In the introduction, the term diegetic was
defined as something that is real within the game world. Non-
diegetic, on the other hand, is something that is not part of the
fictive world of the game. Examples of non-diegetic parts of
gaming are the physical environment where a player is situated
while playing as well as (in most cases) the game’s interface.
Table 1. Outline of a framework of sounds in computer
games. There are four main types of signal–referent
relationships.

                        Diegetic signal     Non-diegetic signal
Diegetic referent       Diegetic sounds     Symbolic sounds
Non-diegetic referent   Masking sounds      Non-diegetic sounds
We can start by sketching a framework based on the signal–
referent distinction, as shown in Table 1. The framework shows
four main types of relationships that can exist between the signal
and the referent of a sound. The function of a sound depends on
the nature of this relation, that is, on how the signal and the
referent stand in relation to the game world’s diegesis. Next, the
framework will be explained after which I will conclude with a
discussion on how the framework can be used to guide design
decisions in the sound design process.
3.1 Diegetic and Non-diegetic Signals
As noted earlier, the basis for the framework is the distinction
between diegetic and non-diegetic on the one hand, and between
signal and referent on the other. How, then, can we define
whether a sound is diegetic or not? The issue is mostly one of
interpretation. The question that has to be answered is whether
the sound that the player hears is to be considered real within the
story and whether it is a sound that exists in the game world.
One approach to deciding whether a sound is real is to evaluate
whether a sound has a source within the game. Another approach
is to evaluate, whether the sound behaves in a realistic manner in
the game. One way to decide whether or not this is the case is to
study the reactions of other characters to determine if they can
hear the sound or not. If the sound is real, then it should be
coming from somewhere and other characters in the game world
should, in theory, be able to hear it (assuming, of course, that
they are able to hear at all and close enough to the source).
Consequently, if a sound has no in-game source, then the sound
signal is non-diegetic. Also, sounds should be considered non-
diegetic if they are treated in such a way that they do not seem
real within the game world.
3.2 Diegetic and Non-diegetic Referents
The referent of a certain sound is the meaning it carries. In
games the referent will often be a certain event within the game.
But it can also be something less clearly identified, such as
ongoing processes like the presence of a character or place. The
referent can also be information about internal state, as when a
certain sound is signifying emotions experienced by some
character. The relevant point is whether the event or information
signified by a certain sound is real within the diegesis, that is,
whether it is concerned with things that exist as real in the game
world. Whereas a change in the state of the game engine is not
diegetic, the emotions of a character are as real as anything
fictive can ever be, and thus should be considered diegetic.
3.3 Different Game Sound Types
Depending on the sound signal and the referent’s relations to the
game world (diegetic or non-diegetic), four different
combinations of signal–referent relationships can arise. I will
refer to these sound types as diegetic, symbolic, masking and
non-diegetic.
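The two-by-two mapping of Table 1 can be summarized in a small sketch (Python is used here purely for illustration; the type names follow Table 1, while the function and parameter names are my own):

```python
from enum import Enum


class SoundType(Enum):
    """The four sound types of Table 1."""
    DIEGETIC = "diegetic"
    SYMBOLIC = "symbolic"
    MASKING = "masking"
    NON_DIEGETIC = "non-diegetic"


def classify(signal_is_diegetic: bool, referent_is_diegetic: bool) -> SoundType:
    """Map a signal/referent pair onto one of the four sound types."""
    if signal_is_diegetic and referent_is_diegetic:
        return SoundType.DIEGETIC          # real sound, real event
    if not signal_is_diegetic and referent_is_diegetic:
        return SoundType.SYMBOLIC          # non-diegetic sound, real event
    if signal_is_diegetic and not referent_is_diegetic:
        return SoundType.MASKING           # real sound masking a technical event
    return SoundType.NON_DIEGETIC          # e.g. interface sounds


# A guard's audible footsteps: diegetic signal, diegetic referent.
print(classify(True, True).value)    # prints "diegetic"
# A menu click: non-diegetic signal, non-diegetic referent.
print(classify(False, False).value)  # prints "non-diegetic"
```

The point of the sketch is simply that the sound type is fully determined by the two diegesis judgments; the judgments themselves, as discussed above, remain a matter of interpretation.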
3.3.1 Diegetic Sounds
With diegetic sounds, both the sound signal and referent are
diegetic. These sounds are real within the game world and they
signify events or information that is real in the game. This
seemingly trivial definition has one crucial implication for
design: it requires that the sound has real effects within the game
world. This is currently not very common in computer games,
and not many games actually use this kind of functional sound.
However, herein lies the power of the notion of diegetic sound.
When a sound is diegetic in the sense we have defined above, it
becomes part of the game structure instead of being merely a
decoration.
To name an example, in the Thief series (Eidos Interactive 1998-
2004) the sounds of a player character’s footsteps are diegetic
and play a crucial role in playability. Guards can hear the player
move and there is a comparatively stable and understandable
relationship between how loud the footsteps sound to the player
and how well they can be heard by guards at different distances.
This way, the game uses the properties of diegetic sound to
create interesting new playing experiences. The design solution
has created an interesting game mechanic: the player has to
sneak silently along corridors in order not to get caught.
Sneaking as a concept would in fact be impossible without an
understanding of whether a sound can be heard by other
characters or not. [10]
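The stable relationship between audible loudness and guard perception can be illustrated with a minimal sketch; the inverse-square attenuation and the threshold value are illustrative assumptions, not Thief's actual model:

```python
def perceived_loudness(source_loudness: float, distance: float) -> float:
    """Attenuate a sound over distance (simple inverse-square falloff)."""
    return source_loudness / max(distance, 1.0) ** 2


def guard_hears(footstep_loudness: float, distance_to_guard: float,
                hearing_threshold: float = 0.1) -> bool:
    """A guard hears the player when the attenuated footstep
    exceeds the guard's hearing threshold."""
    return perceived_loudness(footstep_loudness, distance_to_guard) >= hearing_threshold


# Running (loud) is heard at five meters; sneaking (quiet) is not.
print(guard_hears(10.0, 5.0))  # prints True
print(guard_hears(1.0, 5.0))   # prints False
```

Because the same attenuation governs what the player hears and what the guards react to, the player can use loudness itself as information, which is exactly what makes sneaking a meaningful activity.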
3.3.2 Symbolic Sounds
Symbolic sounds have diegetic referents, but the actual sound
signals are non-diegetic. These kinds of sounds are very common
in computer games. One example is the use of music to
accompany the player’s actions in the game. These sounds relate
to events within the game, while the signals remain non-diegetic.
Sometimes, the distinction between diegetic and non-diegetic
signal is not as easy to draw. Without knowledge about the game
designer’s view of the world, it may sometimes be impossible to
decide whether sounds belong to the world, or whether they are
sounds from outside the game world. The decision is somewhat
like determining whether the squeaking sound of two cartoon
characters shaking hands is the actual sound of their hands
rubbing against each other (diegetic), or rather a (non-diegetic)
sound superimposed on their otherwise silent handshake.
Similarly, many of the sounds in Pac-Man (Namco 1980) may be
considered either diegetic or non-diegetic, depending on how we
choose to interpret the maze-world in which Pac-Man exists.
This discussion is parallel to a situation in film studies, where
the sounds of events on screen have been substituted to make
them sound ‘bigger than life’. For example, a common practice
in film is to use cornstarch to create the sound of walking in
snow, since in reality, walking on snow does not make much
sound. Film sound theorist Chion calls these ‘rendered’ sounds,
the embellished and boosted variants of the ‘real’ sounds of the
events [3]. Nevertheless, even if some sounds are artificial, they
can be perceived as real within the story. More importantly, the
way we interpret them lies not only in the sound signals, but
largely in how the sounds are treated. In a game, part of this
treatment is how other characters react to sounds. In most current
games, non-player characters show no reaction to sounds heard
by the player. Thus, at the moment, most games use non-diegetic
signals, which can be inferred from the way they are treated.
3.3.3 Masking Sounds
Sometimes a sound signal is diegetic, but it signifies a non-
diegetic event. In this kind of relationship, the sound is used to
mask a non-diegetic message with a diegetic signal, hence the
name. A common example of masking sound is when a player
triggers a monster in the game and is notified of this by, for
example, a growl or shout from the monster in question. The
sound is, essentially, played because the player has entered a
certain hot spot. In many games, the reason for the sound is not
related to whether the monster actually can see the player, or vice
versa, so the signified event is non-diegetic. However, the sound
is masking this technicality and notifying the player of the event
with a diegetic, in-game growl.
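The hot-spot logic behind such a masking sound can be sketched as follows (the circular trigger shape, the names, and the sample file are hypothetical, chosen only to make the non-diegetic referent explicit):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class HotSpot:
    """A trigger volume: entering it is the non-diegetic referent."""
    x: float
    y: float
    radius: float
    sound: str  # the diegetic signal played to mask the trigger event


def check_trigger(player_x: float, player_y: float,
                  spot: HotSpot) -> Optional[str]:
    """Return the masking sound when the player enters the hot spot.
    The growl suggests a monster reacting, but what it actually
    signifies is the player crossing an invisible boundary."""
    if (player_x - spot.x) ** 2 + (player_y - spot.y) ** 2 <= spot.radius ** 2:
        return spot.sound
    return None
```

The sketch makes the asymmetry visible: nothing in the trigger test involves the monster's perception, yet the sound the player hears is an in-world growl.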
Masking sound is often used to make something that is
essentially non-diegetic fit in with the rest of the gaming
experience. However, this way of thinking about sounds
highlights an interesting part of game expression. In a sense,
game sounds (or visuals, for that matter) are never signifiers of
‘real’ events, but are always constructs covering up the technical
functionality of the game engine. Thus, essentially, all game-
initiated sounds could be thought of as masking sounds, since the
sounds are constructions. The exception is the player’s actions,
since these events are the only ones that truly take place within
the diegesis.
Nevertheless, some game-initiated events are more diegetic than
others. Consider for example the sound of a door creaking open
or the footsteps of a character in the stairwell. Both events are
game-initiated and technically they have nothing to do with the
fictive world of the game. However, these events can be
interpreted as belonging to the diegesis, at least much more so
than the event of monster proximity triggering mentioned before.
Thus, when comparing sounds within a game, some events are
closer to the diegesis than others.
3.3.4 Non-diegetic Sounds
The last alternative in the framework is that neither the signal
nor the referent is diegetic. Here, the sound effect signals an
event that is not real within the game world. However, instead of
masking this with a diegetic signal, as in the previous category,
the chosen signal is now also non-diegetic.
The most common example of this kind of sound is the sound of
the game interface. With interface sounds, neither the referent
(e.g. a menu-choice) nor the sound signifying it (dry pop) claims
to exist in the game world. The sounds are communicating
information from outside the game. Naturally, they may be
designed so as to suit the general mood of the game. One
example would be choosing hollow metallic sounds for the menu
of a horror game. Regardless of the general design of these
sounds, they are nevertheless unconcerned with the story world,
disconnected from the game environment itself.
Despite this disconnection, non-diegetic sounds are also often
part of the sounds of playing. For example, in most games,
background music is non-diegetic. Like in movies, the player
accepts the symphonic sounds as something from outside the
story and does not anticipate finding an orchestra perched on a
nearby hilltop or balcony. Another common form of non-diegetic
sound is the narrator’s voice. This is not so often used in games,
but it does exist. For example, the game Broken Sword: The
Sleeping Dragon (THQ 2003) features characters who will
narrate the player's actions. The narration is also in past tense,
which takes it further from the now-and-here of playing. For
example, in reply to something that cannot be done in the game,
the female character will say, "I thought about doing that, but
decided against it".
4. CONCLUSIONS
The potential of sound has recently been recognized, and
games are starting to use sounds in new and interesting ways.
One direction has been to use sounds so that they are relevant for
playing the game. This development involves both games in
which listening is important for playing and games where players
produce or manipulate sound as part of the gaming activity.
There are several ways in which a sound can become relevant for
playing. For example, sound could be used as a primary source of
in-game information. This would allow creating games in which
part of playing is to actively gather information by listening to
the sounds of the game (for comparison’s sake, think of the many
games in which the player visually scans the environment for
objects of interest). On the other hand, games can also use sound
manipulation and the production of sound as a core game
mechanic. This way of thinking about sound facilitates
interesting new gaming forms, in which a player plays by making
sound, or as a contrast, by being quiet.
One reason for the unrealistic reactions to game sound lies in the
difficulty of making game characters with perceptive abilities,
such as hearing. In the face of such technical limitations, the
distinction can be thought of as a more philosophical one: is the
sound such that, if there were hearing characters, it could be
heard? However, the question is becoming more practical with
the recent increase in online gaming. Since online games allow
players to play with (or against) other players instead of
computers, there is a possibility to create games in which the
sounds of a player’s actions do matter, without having to
implement complex procedures for handling perception.
For example, in multiplayer gaming environments, game sounds
can become important for playing simply by making them
available for other players to hear. Conversely, if sounds are not
transmitted, they will remain non-diegetic and meaningless from
a playing perspective. Finally, from a functional viewpoint it is
enough if there is even one character in the game world that
potentially can hear the player’s sounds. The fact that there is at
least someone to hear the player’s actions (and use this
information as a basis for their actions, whether for collaboration
or opposition) is enough to make the sounds relevant for playing.
This function can be seen especially within the genre of first-
person shooter games like, for example, Quake (1996-2001),
Doom (1993-2002) and Unreal Tournament (1999-2004). In
these games, sounds are already relevant for playing, calling for
game activities such as stealthy movement and silent attacks.
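The multiplayer case described above can be sketched very simply: a sound becomes diegetic, in the functional sense, the moment at least one other player receives it. The flat 2D positions, single audible range, and all names below are illustrative assumptions:

```python
import math
from dataclasses import dataclass
from typing import List


@dataclass
class Player:
    name: str
    x: float
    y: float


def broadcast_sound(emitter: Player, players: List[Player],
                    audible_range: float) -> List[str]:
    """Deliver a sound event only to players within hearing range.
    If no other player receives it, the sound stays non-diegetic
    and meaningless from a playing perspective."""
    return [p.name for p in players
            if p is not emitter
            and math.hypot(p.x - emitter.x, p.y - emitter.y) <= audible_range]


alice = Player("alice", 0.0, 0.0)
bob = Player("bob", 3.0, 4.0)      # 5 units away: in range
carol = Player("carol", 30.0, 40.0)  # 50 units away: out of range
print(broadcast_sound(alice, [alice, bob, carol], 10.0))  # prints ['bob']
```

No model of perception is needed here: transmitting the signal to another human listener is enough to make the emitter's sounds matter for play.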
This paper presents a framework for classifying game sounds.
The framework illustrates how game sounds function, and shows
what elements affect the meaning attributed to a sound. By
highlighting how sounds are interpreted, the framework can be
used to support the sound design process. Since the framework
builds heavily upon the notion of diegesis, it is most suitable for
analyzing and designing games involving fictive worlds.
Understanding the ways in which sound functions in games is
also important for the emerging field of academic game sound
studies. This paper redefines the notion of diegetic sound to fit
the active-participation frame of computer gaming by addressing
the way players make sense of game sound as they interact with
the game. At the same time, examining game sound from the
viewpoint of a game's diegesis will hopefully provide common
ground for discussion between film studies and game studies.
5. REFERENCES
[1] Bernstein, D. 1997. Creating an Interactive Audio
Environment. Game Developer Magazine, October.
[2] Bordwell, D. & Thompson, K. 1985. Fundamental
Aesthetics of Sound in the Cinema. In Weis, E. & Belton, J.
(eds.) Film Sound, Theory and Practice. Columbia
University Press. 181–199.
[3] Chion, M. 1994. Audio-vision: Sound on Screen. Columbia
University Press.
[4] Eriksson, Y. & Gärdenfors, D. 2004. Computer Games for
Children with Visual Impairments. Proc. 5th Intl Conference
of Disability, Virtual Reality & Associated Technologies.
79–86.
[5] Friberg, J. & Gärdenfors, D. 2004. Audio Games: New
Perspectives on Game Audio. Proc. Advances in Computer
Entertainment Technology.
[6] Folmann, T. 2004. Dimensions of Game Audio.
Unpublished. Available at
http://www.itu.dk/people/folmann/2004/11/dimensions-of-game-audio.html
[Accessed October 28, 2005.]
[7] Griffin, D. 1998. Musical Techniques for Interactivity.
Gamasutra, 2 (18).
[8] Hadrup, R., Jakobsen, P. S., Juul, M. S., Lings, D.,
Magnúsdóttir, Å. 2004. Designing an Auditory W-LAN
based Game.
http://www2.hku.nl/~audiogam/ag/articles/Designing%20an%20Auditory%20W-LAN%20based%20Game.pdf
[Accessed October 28, 2005.]
[9] IGDA. 2004. Accessibility in Games: Motivations and
Approaches. International Game Developers Association
white paper.
http://www.igda.org/accessibility/IGDA_Accessibility_WhitePaper.pdf
[Accessed October 28, 2005.]
[10] Leonard, T. 1999. Postmortem: Looking Glass’s Thief: The
Dark Project. Game Developer Magazine, July.
[11] Marks, A. 2001. Game Audio. CMP Books, Lawrence.
[12] Miller, M. 1997. Producing Interactive Audio: Thoughts,
Tools, and Techniques. Game Developer Magazine,
October.
[13] Prince, B. 1996. Tricks and Techniques for Sound Effect Design.
Computer Game Developers Conference. Available at
http://www.gamasutra.com/features/sound_and_music/081997/sound_effect.htm
[Accessed October 28, 2005.]
[14] Sanger, A. 2003. The Fat Man on Game Audio: Tasty
Morsels of Sonic Goodness. New Riders.
[15] Stockburger, A. 2003. The Game Environment from an
Auditive Perspective. In Proc. Level Up, DIGRA Utrecht
Universiteit.
[16] Targett, S. & Fernström, M. 2003. Audio Games: Fun for
All? All for Fun? Proc. International Conference on
Auditory Display.
[17] Whitmore, G. 2003. Design With Music In Mind: A Guide
to Adaptive Audio for Game Designers. Gamasutra, May
29. Available at
http://www.gamasutra.com/resource_guide/20030528/whitmore_pfv.htm
[Accessed October 28, 2005.]
... Sound in video games can be divided into two parts: (1) diegetic sounds which reflect events in the game world and (2) non-diegetic sounds which do not refer to an event in the game world [10]. In a study, Grimshaw et al. [12] found that both types of sound have a significant but different effect on the players' game experience: While diegetic sounds increased the immersion, non-diegetic sounds decreased tension and the negative effect associated with the game. ...
... In video games visual latency is known to negatively influence player performance and experience [4,6,9] starting at 25 ms [23]. While it is clear that audio is an essential part of video games to increase immersion [10,12] and performance [30], the effects of auditory latency are unknown. Therefor, currently, it is unclear if standalone high auditory latency in video games leads to the same systematic decrease of game experience and performance as visual latency. ...
... While visual discrimination between non-diegetic user interfaces and diegetic elements of a virtual world seen through a Virtual Reality (VR) headset is aided by depth perception, the aural dimension presents a challenge for the player to assess whether a sound is diegetic or not. Building on Bernstein's [1] analysis of audio in terms of what information sounds provide the player, Ekman classifies sounds in relationship to the diegesis of their referent as 'the thing being told by the sound' [2]. She presents two approaches to such an assessment: (i) whether its apparent source is itself diegetic; or (ii) whether the non-player inhabitants of the virtual world react to the sound. ...
... Should these expectations fail to manifest themselves, such as when a voice is heard by the player but no speaker is seen or no reactions to it come from diegetic characters, the player's understanding of the virtual space is challenged, undermining narrative engagement [8]. Following Ekman [2], this disembodied voice is thus perceived to be non-diegetic, which negatively influences the VR player's immersion, as they are reminded of their non-diegetic existence. Embodied speech, on the other hand, supports the player's (tele)presence, especially through the use of the second-person 'you' which helps create the feeling of being addressed through aesthetic-reflexive involvement [9] and thus present in the virtual world, which in turn increases immersion [10][11][12]. ...
Chapter
Self-identification is a key factor for the immersion of the VR interactive narrative player. Diegetic non-protagonist narrators, touched-up heterodiegetic narrations with internal focalization, and casting the player in a ‘virtual sidekick’ role are suggested by the literature to support self-identification. This paper analyses the use of second-person voice and level of interactivity in two VR productions. In one, minimal use of the second person to address the player and negligible agency results in limited telepresence in a 360-video VR tour of a concentration camp accompanying a Holocaust survivor. In the second, use of a touched-up heterodiegetic narration with internal focalization heightens immersion levels but self-identification of the player as sidekick suffers as the narrative’s forward drive shifts between narrator, protagonist and antagonist. Future empirical work should explore the impact of second-person voice and interaction on the resultant self-identification and immersion.
... They can be used to give a feedback to the students about their own actions (Rogers et al., 2018). Sound effects can be used as a primary source for in-game information (Ekman, 2005), like the player-character's health status (Robb et al., 2017). ...
... Sound effects and background music have a positive impact on presence and engagement (Rogers et al., 2018). Sound effects and background music also affect time perception and user's performance and behavior (Cassidy & MacDonald, 2010;Ekman (2005);. showed that sound effects have a stronger impact on the immersive characteristics of virtual reality environments, comparing to visual detail. ...
Article
Full-text available
Students’ engagement in E-learning applications is considered an important factor for learning. There is an evidence in the literature on the influence of students’ engagement on their learning outcomes and achievement. Sound utilization in E-learning applications is expected to influence the students’ engagement in such applications. However, research is required to provide more evidences on such influence. The current study adopted the arousal theory, to check whether adding sound’s elements (voiceovers, background music and sound effects) to the E-learning applications would improve the students’ engagement in such applications. The participants are non-English speaking undergraduate students (N = 272) from a public university in Jordan. They were randomly assigned to the study groups to use one of the eight E-learning applications with different levels of sound’s elements to measure the students’ engagement in these groups. The study has employed the Analysis of Covariance, ANCOVA, to investigate the proposed hypotheses, and compare the students’ engagement among the study groups. The results showed that the sound’s elements, especially voiceovers, positively influence the students' engagement, when controlling for students’ age, gender, and their prior experience with E-learning applications. These results contribute to theory by confirming the arousal theory in the context of students' engagement in E-learning applications. Practitioners and designers can also be informed, by our results, in designing more engaging educational applications.
... The corpus also contains only three studies focusing on games (Bombeke et al. 2018;Rogers et al. 2018;Wedoff et al. 2019) which is surprising given not only the current size of the market for VR games (Grand View Research 2020) but also considering that most of the studies were published between 2017 and 2020, by which point the VR games market was already worth several million USD (Statista 2022). The role of audio in games is often noted as being underutilised (Ekman 2005;Ribeiro et al. 2020) and research is disproportionately slanted toward certain genres, especially horror games (Ribeiro et al. 2020;Rogers et al. 2018). While the influence of our choice of databases must be acknowledged as a potential influence on the representation of this field in the corpus, there is still a notable discrepancy between the size of the market (as a proxy measure for interest in the field) and the number of studies focusing on audio. ...
Article
Full-text available
The use of virtual reality (VR) has seen significant recent growth and presents opportunities for use in many domain areas. The use of head-mounted displays (HMDs) also presents unique opportunities for the implementation of audio feedback congruent with head and body movements, thus matching intuitive expectations. However, the use of audio in VR is still undervalued and there is a lack of consistency within audio-centedd research in VR. To address this shortcoming and present an overview of this area of research, we conducted a scoping review (n = 121) focusing on the use of audio in HMD-based VR and its effects on user/player experience. Results show a lack of standardisation for common measures such as pleasantness and emphasize the context-specific ability of audio to influence a variety of affective, cognitive, and motivational measures, but are mixed for presence and generally lacking for social experiences and descriptive research.
... While work on auditory latency in games is limited, research on auditory latency in the field of digital instruments showed that a low latency of 20 ms negatively influences musicians' performances [26]. It is clear, that audio is an essential part of video games to increase immersion [17,21] and performance [39] but the effects of auditory latency are unknown. In particular, it is unclear if high auditory latency in video games leads to the same systematic decrease of game experience and performance as visual latency. ...
Conference Paper
Full-text available
Latency is inherently part of every interactive system and is particularly critical in video games. Previous work shows that visual latency above 25 ms reduces game experience and player performance. However, latency does not only affect visual perception but also may influence auditory elements of video games. It is unclear if auditory latency impairs the gaming experience and player performance with the same magnitude as visual latency. Therefore, we conducted an experiment with 24 participants playing a first-person shooter game. Participants played with four levels (0 ms, 40 ms, 270 ms, and 500 ms) of controlled auditory latency to reveal effects on game experience and player performance. Our analysis shows that auditory latency in video games increases the perceived tension, decreases positive feelings towards the game, and on its highest tested level (500 ms), even causes significantly stronger associations with negative feelings towards the game. Furthermore, we found that the negative effects of auditory latency are particularly pronounced for high-skilled players. We conclude that auditory latency negatively affects video games and their players. Therefore, researchers should investigate it with the same rigor as visual latency
... The games explained in this study are based on simple responsive systems, but some advanced and intelligent sound engines are also available to produce sound-based interactive games. Ekman (2005) has presented a study on the taxonomy of different sounds used in computer games. The architecture proposed in the study focuses on the working of sounds and the components affecting the meaning of sound used in a game. ...
Article
Full-text available
Music in games adds appeal and has helped the gaming industry spread. Music plays a vital role in attracting user attitudes toward gaming, and the gaming industry benefits from in-game music even more than other industries, such as film. The evolution of game music began a few decades ago and has brought many enhancements to gaming. Nowadays, players often prefer music-based games to others, because playing them releases stress and keeps the player's mind at ease. Music in games has changed earlier gaming styles and helps the young generation learn something new from different games. It is used to attract user attitudes and ultimately involve users in gaming while they play, and to make games more attractive and memorable so as to sustain users' interest in playing. In addition, background music is a primary part of successful games, since both game developers and players hope that video games can be more persuasive. A review of previous work shows that music in games positively impacts players' performance and achievement, improves their skills, and keeps them relaxed during play. The current research applies a decision support system (DSS) to evaluate the role of music in games for sustained effectiveness, and the results demonstrate the efficacy of the study.
... Game music has been found to influence immersion [135,170,197], tension/anxiety [31], risk-taking behavior [163], and concentration [94]. Game sound effects, often an important source of feedback [53,91,96,147,161], affect immersion [73] and performance [32]. Additionally, the effects of audio are often contextually dependent on game genre [95], device type [164], and preferences [170]. ...
Article
Full-text available
Avatar identification is one of the most promising research areas in games user research. Greater identification with one's avatar has been associated with improved outcomes in the domains of health, entertainment, and education. However, existing studies have focused almost exclusively on the visual appearance of avatars. Yet audio is known to influence immersion/presence, performance, and physiological responses. We perform one of the first studies to date on avatar self-similar audio. We conducted a 2 x 3 (similar/dissimilar x modulation upwards/downwards/none) study in a Java programming game. We find that voice similarity leads to a significant increase in performance, time spent, similarity identification, competence, relatedness, and immersion. Similarity identification acts as a significant mediator variable between voice similarity and all measured outcomes. Our study demonstrates the importance of avatar audio and has implications for avatar design more generally across digital applications.
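The study design described above is a 2 × 3 factorial: voice similarity crossed with pitch modulation. A minimal sketch of enumerating its cells follows; the condition labels are assumptions based on the abstract, not the authors' code.

```python
# Hedged sketch of the paper's 2 x 3 design: voice similarity x modulation.
from itertools import product

SIMILARITY = ("similar", "dissimilar")
MODULATION = ("upwards", "downwards", "none")


def study_conditions():
    """Enumerate all cells of the similarity x modulation factorial design."""
    return list(product(SIMILARITY, MODULATION))
```

Enumerating the cells this way yields the six conditions to which participants can be assigned.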
Conference Paper
Kawaii is the Japanese concept of cute++, a global export with local characteristics. Recent work has explored kawaii as a feature of user experience (UX) with social robots, virtual characters, and voice assistants, i.e., kawaii vocalics. Games have a long history of incorporating characters that use voice as a means of expressing kawaii. Nevertheless, no work to date has evaluated kawaii game voices or mapped out a model of kawaii game vocalics. In this work, we explored whether and how a model of kawaii vocalics maps onto game character voices. We conducted an online perceptions study (N=157) using 18 voices from kawaii characters in Japanese games. We replicated the results for computer voice and discovered nuanced relationships between gender and age, especially youthfulness, agelessness, gender ambiguity, and gender neutrality. We provide our initial model and advocate for future work on character visuals and within play contexts.
Article
Full-text available
With the development and advancement of information technology, artificial intelligence (AI) and machine learning are applied in every sector of life. Among these applications, music has gained particular attention in the last couple of years: AI-based innovative and intelligent techniques are revolutionising the music industry and make it convenient for composers to compose music of high quality. Artificial intelligence and music (AIM) is an emerging field used to generate and manage sounds for different media, such as the Internet and games. Sound effects in games are very effective and can be made more attractive by implementing AI approaches, and the quality of a game's sounds directly impacts the productivity and experience of the player. With computer-assisted technologies, game designers can create sounds for different scenarios or situations, such as horror and suspense, and provide gamers with information; practical, well-produced game audio can also guide visually impaired players through events in the game. Good knowledge of musicology is essential for the creation and composition of music, and thanks to AIM there are many intelligent, interactive tools available for learning music efficiently and effectively, providing learners with a reliable, interactive environment based on artificial intelligence. The current study presents a detailed overview of the literature available in this area of research, analysing it from various perspectives to provide evidence for researchers to devise novel solutions in the field.
Chapter
Juiciness describes exaggerated, redundant audio/visual feedback in games that creates a better player experience. As computer games are principally a visual medium, sound is an underused potential for creating juiciness. This study aims to explore juicy audio. A mixed-methods approach is used to investigate the influence of juicy audio on the player's experience of presence, and how players affectively experience and evaluate the juicy audio. Two versions of a game were created: one with juicy audio effects and one without. Results show a significant effect of juicy audio on presence as expressed in immersion and sensory fidelity, with participants experiencing more presence in the juicy audio condition. Regarding the affective evaluation of juicy audio, three themes are identified: association & expectation, pragmatic quality, and describing sounds. The latter is an interesting direction for future research, as we appear to lack a shared, intuitive vocabulary for game sounds.
Article
Full-text available
In this paper we investigate if it is possible to create entertaining computer games that use only non-speech aural feedback and if such games could be used for skills acquisition or in therapeutic applications. To answer these questions we developed two computer games, Os & Xs (Tic Tac Toe) and Mastermind, representing all necessary information through auditory display. User testing confirmed that the games were playable and early indications are that the games can be entertaining, particularly for the blind community. Testing also suggested that playing audio games could assist in increasing both memory and ability to concentrate, thus showing potential for both skills acquisition and therapeutic applications.
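Representing a board game entirely through auditory display, as in the Os & Xs game above, requires mapping board positions to distinguishable sounds. The sketch below is one hypothetical such mapping (the function name and semitone scheme are assumptions, not the authors' implementation): each of the nine cells gets its own pitch, one semitone apart.

```python
# Hypothetical auditory display for an audio-only Os & Xs (Tic Tac Toe) board:
# every cell of the 3x3 grid is assigned a distinct tone, rising by one
# semitone per cell from top-left to bottom-right.
BASE_HZ = 220.0  # pitch of the top-left cell (A3)


def cell_pitch_hz(row, col, base_hz=BASE_HZ):
    """Map a 3x3 board cell (0-indexed) to a tone frequency in Hz."""
    semitones = row * 3 + col  # 0..8, left-to-right, top-to-bottom
    return base_hz * 2 ** (semitones / 12)
```

A synthesizer or audio engine would then play the cell's tone (varying timbre or duration to distinguish O from X) whenever that cell is selected or announced.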
Article
The Swedish Library of Talking Books and Braille (TPB) has published web-based computer games for children with different kinds of visual impairments. As the target groups have very different needs when it comes to the use of graphics and sound, TPB have developed two kinds of games. Image-based games aim to encourage children with partial sight to practise recognising visual objects, while sound-based games also intend to be accessible without relying on vision. Based on the results of two pilot studies, this paper discusses central design issues of the graphical and sound-based interfaces for this type of applications.
Article
This paper is based on a thesis written by the five students above from the department of Design, Communication and Media at the IT University of Copenhagen. The aim of this paper is to examine the potential of sound to create new user experiences and alternative modes of interaction. Our analysis is based upon the design process of an auditory location-based game we have designed and the different theories in the fields of aesthetics and interaction design by which our game is affected. Our main focus is how to create an immersive game universe through the use of sound only, which we explore by using sound as the only parameter in the game. We have constructed a multi-player game that mainly uses sound for the interface and for the creation of atmosphere and suspense. Also, the player is physically present instead of being represented by an avatar. The game is called Dark Circus and employs a mobile setup and a multiple-speaker system; it is intended to be played wherever these are made available. The sound system is based on adaptive audio and designed for a generic context – that is, the sounds can be exchanged according to the context of the particular game. Firstly, the paper addresses the design of the game, the basic technological requirements for implementation, and the gameplay. Secondly, we discuss more general aspects of sound, premises for sound design, and how sound affects the user experience.
Conference Paper
This paper discusses the design of audio games, a quite new computer game category that originates from games for players with visual impairments as well as from mainstream music games. In the TiM project (Tactile Interactive Multimedia), SITREC develops three sound-based games that point out new directions for game audio design. The TiM games demonstrate different ways in which games can be designed around an auditory experience. Several unique features of audio games are presented emphasising unexplored potentials for interactivity and future development areas are suggested. SITREC proposes an approach to the design of auditory interfaces that takes three listening modes into consideration: casual listening, semantic listening and reduced listening. A semiotic model is presented that illustrates this view on sound object design and ways in which sounds can be combined. The discourse focuses on issues of continuous display, musicality and clarity, and introduces the notion of "spatialised game soundtracks," as opposed to separated background music and game effect sounds. The main challenge when developing auditory interfaces is to balance functionality and aesthetics. Other important issues are the inclusion of meta-level information in order to achieve a high level of complexity and to provide elements of open-endedness. This refers to planning the overall gameplay, as well as to designing individual sound objects and combining them into complex, interactive soundscapes.
Marks, A. 2001. Game Audio. CMP Books, Lawrence.
Bordwell, D. & Thompson, K. 1985. Fundamental Aesthetics of Sound in the Cinema. In Weis, E. & Belton, J. (eds.) Film Sound: Theory and Practice. Columbia University Press. 181-199.
Leonard, T. 1999. Postmortem: Looking Glass's Thief: The Dark Project. Game Developer Magazine, July.