Figure 3 - uploaded by Stephen Anthony Brewster
The Phase II menu screen 

Source publication
Conference Paper
Full-text available
An evaluation of earcons was carried out to see whether they are an effective means of communicating information in sound. An initial experiment showed that earcons were better than unstructured bursts of sound and that musical timbres were more effective than simple tones. A second experiment was then carried out which improved upon some of the w...

Context in source publication

Context 1
... testing the subjects, the screen was cleared and some of the earcons were played back. The subject had to supply whatever information they could about the type, family and number of the file represented by the earcon they heard. When scoring, a mark was given for each correct piece of information supplied. This time earcons were created for menus. Each menu had its own timbre, and the items on each menu were differentiated by rhythm, pitch or intensity. The screen shown to the users to learn the earcons is given in Figure 3. The subjects were tested in the same way as before but this time had to supply information about menu and ...
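As an illustration of that menu-earcon structure (a shared timbre per menu, with rhythm, pitch or intensity distinguishing the items), a minimal parameter sketch might look like the following; the menu names, instruments and note values are hypothetical, not taken from the paper:

```python
# Illustrative sketch only: one way to parameterise menu earcons so that each
# menu shares a timbre while its items differ in rhythm, pitch or intensity.
# All concrete values here are hypothetical, not from Brewster et al. (1993).
from dataclasses import dataclass

@dataclass
class Earcon:
    timbre: str        # instrument shared by every item on a menu
    rhythm: tuple      # note durations in seconds
    pitches: tuple     # MIDI note numbers
    intensity: float   # 0.0 (quiet) .. 1.0 (loud)

MENU_EARCONS = {
    ("File", "Open"):  Earcon("marimba", (0.25, 0.25), (60, 64), 0.8),
    ("File", "Save"):  Earcon("marimba", (0.50,),      (60,),    0.8),
    ("Edit", "Copy"):  Earcon("violin",  (0.25, 0.25), (67, 67), 0.6),
    ("Edit", "Paste"): Earcon("violin",  (0.25, 0.50), (67, 72), 0.6),
}
```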

Similar publications

Conference Paper
Full-text available
This paper describes a novel form of display using crossmodal output. A crossmodal icon is an abstract icon that can be instantiated in one of two equivalent forms (auditory or tactile). These can be used in interfaces as a means of non-visual output. This paper discusses how crossmodal icons can be constructed and the potential benefits they bring...

Citations

... and its length was approximately 1900 ms. The earcon used in the present study followed the design guidelines of Brewster et al. (1993). It was a single pair of 240 ms beeps at 2700 Hz, with a 100 ms interval between the two beeps, whose effectiveness as a TOR was demonstrated in the study of Petermeijer et al. (2017b). ...
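The beep-pair earcon described in this excerpt (two 240 ms tones at 2700 Hz separated by a 100 ms gap) is concrete enough to sketch in code; the sample rate and the short fades used to avoid clicks are assumptions added here, not part of the cited studies:

```python
# Rough synthesis of the cited TOR earcon: two 240 ms, 2700 Hz beeps with a
# 100 ms gap. The 44.1 kHz sample rate and 10 ms fades are assumptions.
import numpy as np

SR = 44100  # samples per second

def beep(freq_hz, dur_s, ramp_s=0.01):
    t = np.arange(int(dur_s * SR)) / SR
    tone = np.sin(2 * np.pi * freq_hz * t)
    ramp = int(ramp_s * SR)                    # short fade-in/out to avoid clicks
    env = np.ones_like(tone)
    env[:ramp] = np.linspace(0.0, 1.0, ramp)
    env[-ramp:] = np.linspace(1.0, 0.0, ramp)
    return tone * env

gap = np.zeros(int(0.1 * SR))                  # 100 ms of silence
tor_earcon = np.concatenate([beep(2700, 0.24), gap, beep(2700, 0.24)])
# ~580 ms of mono audio, ready to be written to a WAV file or played back.
```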
Article
With the era of automated driving approaching, designing an effective auditory takeover request (TOR) is critical to ensure automated driving safety. The present study investigated the effects of speech-based (speech and spearcon) and non-speech-based (earcon and auditory icon) TORs on takeover performance and subjective preferences. The potential impact of the non-driving-related task (NDRT) modality on auditory TORs was considered. Thirty-two participants were recruited in the present study and assigned to two groups, with one group performing the visual N-back task and another performing the auditory N-back task during automated driving. They were required to complete four simulated driving blocks corresponding to four auditory TOR types. The earcon TOR was found to be the most suitable for alerting drivers to return to the control loop because of its advantageous takeover time, lane change time, and minimum time to collision. Although participants preferred the speech TOR, it led to relatively poor takeover performance. In addition, the auditory NDRT was found to have a detrimental impact on auditory TORs. When drivers were engaged in the auditory NDRT, the takeover time and lane change time advantages of earcon TORs no longer existed. These findings highlight the importance of considering the influence of auditory NDRTs when designing an auditory takeover interface. The present study also has some practical implications for researchers and designers when designing an auditory takeover system in automated vehicles.
... Specifically, there was an HMI prompt at the point of switching status indicating that action needed to be taken. In the auditory user interfaces (UIs), non-speech auditory cues were used, including alerts (see Section 2.4) and other earcons (concise abstract sounds representing certain events, Brewster et al. (1993)). ...
Article
This study investigated how drivers can manage the takeover when silent or alerted failure of automated lateral control occurs after monotonous hands-off driving with partial automation. Twenty-two drivers with varying levels of prior ADAS experience participated in the driving simulator experiment. The failures were injected into the driving scenario on curved road segments, accompanied by either a visual-auditory alert or no change in HMI. Results indicated that drivers could rarely maintain lane-keeping when automated steering was disabled silently, but most drivers safely managed the alerted failure situation within the ego-lane. The silent failure yielded significantly longer takeover time and generally worse lateral control quality. In contrast, poor longitudinal control performance was observed in alerted conditions due to more brake usage. An expert-based controllability assessment method was introduced to this study. The silent lateral failure situation during monotonous hands-off driving was rated as uncontrollable, while the alerted situation was basically controllable. Participants showed their preferences for the TORs, and the importance of conveying TOR reasons was also demonstrated. Relevance to industry: The results and implications of this study provided insights into the design and development of automated driving systems to prevent critical consequences. The comprehensive method of controllability assessment can benefit the automated driving system evaluation.
... Komatsu et al. [1] proposed a method to change the number and time interval of auditory stimuli to reduce the waiting time in computer task processing. There are also methods that add sounds to a progress bar to present the current processing state [23] and methods that use characteristic sounds [24][25][26]. ...
Article
Full-text available
There are situations where manipulating subjective time would be desirable, such as reducing waiting time, and there are many studies to manipulate subjective time. However, it is not easy to use previous methods in various situations because most of them use visual and auditory information. This study proposes a method to manipulate the subjective time by the tactile stimuli from wrist-worn devices. We designed three types of tactile stimuli presentation methods that change the number, the duration, and the time interval of the stimuli. The evaluation result clarified the elements of the tactile stimuli that intentionally changed the subjective time and confirmed that our method can change the subjective time by about 23% (from -6% to +17%). Since few studies have focused on the phenomenon in which the subjective time changes depending on the tactile stimuli from information devices, our findings can contribute to designing information devices and user experiences.
... An earcon is a concise sound that symbolically represents an object or event; the sound has no inherent connection with the object it represents (Brewster et al., 1993). Earcons are flexible in design because sound parameters (such as pitch and timbre) can be manipulated (Cao et al., 2010). ...
Article
With the development of connected vehicles, in-vehicle auditory alerts enable drivers to effectively avoid hazards by quickly presenting critical information in advance. Auditory icons can be understood quickly, evoking a better user experience. However, as collision warnings, the design and application of auditory icons still need further exploration. Thus, this study aims to investigate the effects of internal semantic mapping and external acoustic characteristics (compression and dynamics design) on driver performance and subjective experience. Thirty-two participants (17 females) experienced 15 types of warnings — (3 dynamics: mapping 0 vs. 1 vs. 2) × (5 warning types: original iconic vs. original metaphorical vs. compressed iconic vs. compressed metaphorical auditory icon vs. earcon) — in a simulator. We found that compression design was effective for rapid risk avoidance, which was more effective in iconic and highly pitch-dynamic sounds. This study provides additional ideas and principles for the design of auditory icon warnings.
... Based on our findings, we summarize the information required by PVI: (1) Existence of the robot: whether there is a robot nearby; (2) Proximity: how far away the robot is; (3) Movement status: whether the robot is moving or when it will start moving; (4) Moving information: in which direction the robot is moving or intends to move, and at what speed the robot is moving. Unlike prior work that focused on visualizing the robot's intents, robot designers should explore how to effectively convey this information to PVI in an accessible and unobtrusive manner, for example, using earcons [17] to present the existence of different robots. ...
Preprint
Full-text available
Mobile service robots have become increasingly ubiquitous. However, these robots can pose potential accessibility issues and safety concerns to people with visual impairments (PVI). We sought to explore the challenges faced by PVI around mainstream mobile service robots and identify their needs. Seventeen PVI were interviewed about their experiences with three emerging robots: vacuum robots, delivery robots, and drones. We comprehensively investigated PVI's robot experiences by considering their different roles around robots -- direct users and bystanders. Our study highlighted participants' challenges and concerns about the accessibility, safety, and privacy issues around mobile service robots. We found that the lack of accessible feedback made it difficult for PVI to precisely control, locate, and track the status of the robots. Moreover, encountering mobile robots as bystanders confused and even scared the participants, presenting safety and privacy barriers. We further distilled design considerations for more accessible and safe robots for PVI.
... When an avatar entered or exited a bubble, the user heard an earcon with a verbal description of the avatar's information. Earcons were utilized as they are brief, abstract, and distinctive (C3) sounds that encode particular information [4]. We used different earcons to represent an avatar's moving dynamics between different bubbles. ...
... We used a two-beat earcon with the tone of the last beat increased (or decreased) to indicate an avatar entering (or leaving) the Social Bubble. For the Conversation Bubble, since avatars in this bubble probably required more immediate attention, we used four-beat earcons with the tone of the last beat increased or decreased to represent an avatar entering or leaving this bubble. To distinguish friend and stranger avatars, we adjusted the pitch and speed of the earcons, so that higher-pitched, faster earcons indicated friends, while normal pitch and speed indicated strangers. ...
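A minimal sketch of that mapping, with the beat count, last-beat direction, and friend/stranger pitch and speed treatment following the description above; the numeric scale factors themselves are hypothetical:

```python
# Sketch of the VRBubble earcon mapping described above. Structure follows the
# text (2-beat Social vs. 4-beat Conversation earcons, last beat raised for
# "enter" and lowered for "leave", higher/faster earcons for friends); the
# concrete scale factors are hypothetical.
def vrbubble_earcon(bubble, event, relationship):
    """Return (beat_count, last_beat_shift, pitch_scale, speed_scale)."""
    beats = {"social": 2, "conversation": 4}[bubble]
    last_beat_shift = +1 if event == "enter" else -1   # raise or lower the final beat
    if relationship == "friend":
        pitch_scale, speed_scale = 1.25, 1.5           # higher pitch, faster playback
    else:
        pitch_scale, speed_scale = 1.0, 1.0            # normal pitch and speed
    return beats, last_beat_shift, pitch_scale, speed_scale

# Example: a friend entering the Conversation Bubble.
print(vrbubble_earcon("conversation", "enter", "friend"))   # (4, 1, 1.25, 1.5)
```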
Preprint
Full-text available
Social Virtual Reality (VR) is growing for remote socialization and collaboration. However, current social VR applications are not accessible to people with visual impairments (PVI) due to their focus on visual experiences. We aim to facilitate social VR accessibility by enhancing PVI's peripheral awareness of surrounding avatar dynamics. We designed VRBubble, an audio-based VR technique that provides surrounding avatar information based on social distances. Based on Hall's proxemic theory, VRBubble divides the social space with three Bubbles -- Intimate, Conversation, and Social Bubble -- generating spatial audio feedback to distinguish avatars in different bubbles and provide suitable avatar information. We provide three audio alternatives: earcons, verbal notifications, and real-world sound effects. PVI can select and combine their preferred feedback alternatives for different avatars, bubbles, and social contexts. We evaluated VRBubble and an audio beacon baseline with 12 PVI in a navigation and a conversation context. We found that VRBubble significantly enhanced participants' avatar awareness during navigation and enabled avatar identification in both contexts. However, VRBubble was shown to be more distracting in crowded environments.
... The focal point of the research community has been generating non-visual representations for visual content. This is done either in the form of rich and descriptive alt-text [19] or using auditory cues such as earcons and their derivatives (spearcons, lyricons, etc.) [12]. ...
Preprint
Full-text available
Visual cues such as structure, emphasis, and icons play an important role in efficient information foraging by sighted individuals and make for a pleasurable reading experience. Blind, low-vision and other print-disabled individuals miss out on these cues since current OCR and text-to-speech software ignore them, resulting in a tedious reading experience. We identify four semantic goals for an enjoyable listening experience, and identify syntactic visual cues that help make progress towards these goals. Empirically, we find that preserving even one or two visual cues in aural form significantly enhances the experience for listening to print content.
... Sound and music have a long history in HCI research, with early work investigating the use of sound to present information [5,6], followed by the coupling of auditory stimuli to user interfaces [7][8][9], and more recently an increasing focus on sound and user experiences [37,38]. As an illustrative example, Brewster et al. describe in their seminal work how short sounds in a user interface are effective for communicating information [9]. In recent years, we have seen an increase in research concerning the lack of tangibility and unexplored spatial characteristics of sound, for example utilising users' prior experiences with balls as a metaphor for music creation [13], enabling mid-air gestural interaction with virtual sound sources [40], or providing interaction with music through a tangible desktop interface [32]. ...
... Within HCI, we have seen a trend of going from sound used to enhance interface experiences (e.g. [7][8][9][21]) towards considering sound and music as objects for interaction [13,32,40]. Where prior work studied particular qualities of sound or interaction modalities, sound zones transcend into the 'behaviour' of sound in a confined space and how users may perceive and experience this modality [34]. ...
... Because these sounds were short in duration and musical in nature, they closely resemble previous earcon-like designs [4,7]. By examining the animation of each Reaction as they occurred in time, we identified the exact moments of change in their gestural trajectories. ...
Article
Full-text available
Facebook Reactions are a collection of animated icons that enable users to share and express their emotions when interacting with Facebook content. The current design of Facebook Reactions utilizes visual stimuli (animated graphics and text) to convey affective information, which presents usability and accessibility barriers for visually-impaired Facebook users. In this paper, we investigate the use of sonification as a universally-accessible modality to aid in the conveyance of affect for blind and sighted social media users. We discuss the design and evaluation of 48 sonifications, leveraging Facebook Reactions as a conceptual framework. We conducted an online sound-matching study with 75 participants (11 blind, 64 sighted) to evaluate the performance of these sonifications. We found that sonification is an effective tool for conveying emotion for blind and sighted participants, and we highlight sonification design strategies that contribute to improved efficacy. Finally, we contextualize these findings and discuss the implications of this research with respect to HCI and the accessibility of online communities and platforms.
... Earcons are short, abstract, non-verbal sounds of a musical nature used to provide information and feedback on computer operations or interactions (Blattner et al., 1989; Brewster et al., 1993; Amer et al., 2013; Larsson and Niemand, 2015). For example, the rising "login" melody and the descending "logout" melody in the Windows operating system are formed by different combinations of high and low tones. ...
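As a toy illustration of that contour idea (these note sequences are hypothetical, not the actual Windows melodies), rising and falling earcons can be written as short pitch sequences:

```python
# Toy illustration of contour-coded earcons: an ascending sequence for login
# and a descending one for logout. MIDI note numbers are hypothetical.
EARCON_CONTOURS = {
    "login":  [60, 64, 67, 72],   # rising:  C4 -> E4 -> G4 -> C5
    "logout": [72, 67, 64, 60],   # falling: C5 -> G4 -> E4 -> C4
}

def earcon_for(event):
    """Return the note sequence played for a given system event."""
    return EARCON_CONTOURS[event]

print(earcon_for("login"))   # [60, 64, 67, 72]
```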
Article
Full-text available
Auditory warnings have been shown to interfere with verbal working memory. However, the impact of different types of auditory warnings on working memory tasks must be further researched. This study investigated how different kinds of auditory warnings interfered with verbal and spatial working memory. Experiment 1 tested the potential interference of auditory warnings with verbal working memory. Experiment 2 tested the potential interference of auditory warnings with spatial working memory. Both experiments used a 3 × 3 mixed design: auditory warning type (auditory icons, earcons, or spearcons) was between groups, and task condition (no-warning, identify-warning, or ignore-warning) was within groups. In Experiment 1, earcons and spearcons but not auditory icons worsened the performance on the verbal serial recall task in the identify-warning condition, compared with that in the no-warning or ignore-warning conditions. In Experiment 2, only identifying earcons worsened the performance on the location recall task compared with performance without auditory warnings or when auditory warnings were ignored. Results are discussed from the perspective of working memory resource interference, and their practical application to the selection and design of auditory warning signals is considered.