Figure 1 - available from: Universal Access in the Information Society
Users’ average performance time (in seconds) per Web search engine


Source publication
Article
Full-text available
This paper presents a case study on the usability evaluation of navigation tasks by people with intellectual disabilities. The aim was to investigate the factors affecting usability by comparing their user-Web interactions and underlining the difficulties observed. For that purpose, two distinct study phases were performed: the first consiste...
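The comparison summarized in Figure 1 reduces to averaging task-completion times grouped by search engine. A minimal sketch of that aggregation (the engine names and timings below are hypothetical placeholders, not the study's data):

```python
from collections import defaultdict

def average_time_per_engine(trials):
    """Average task-completion time (in seconds) per search engine.

    trials: iterable of (engine, seconds) pairs, one per completed task.
    Returns a dict mapping engine name to mean completion time.
    """
    totals = defaultdict(lambda: [0.0, 0])  # engine -> [sum of seconds, task count]
    for engine, seconds in trials:
        totals[engine][0] += seconds
        totals[engine][1] += 1
    return {engine: s / n for engine, (s, n) in totals.items()}

# Hypothetical timings for illustration only.
trials = [("Google", 42.0), ("Google", 38.0), ("Bing", 55.0), ("Bing", 61.0)]
print(average_time_per_engine(trials))  # {'Google': 40.0, 'Bing': 58.0}
```

Grouping before averaging (rather than averaging a single pooled list) is what lets the per-engine differences the figure reports become visible.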

Citations

... Due to a limited vocabulary and impaired writing and reading skills, many search systems are inaccessible (Sitbon et al., 2014). Speech recognition is not an option because of her unclear pronunciation (Rocha et al., 2017). ...
Article
Full-text available
Introduction. In studies of information searching, people are often characterised as having permanent physical, sensory and cognitive abilities. Situational factors that may affect human abilities are less considered, although they may have an impact on information searching behaviour. For example, a person with dyslexia may struggle with inputting correctly spelled queries. Spelling skills, however, may also be influenced by fatigue, illness, or using a mobile phone while walking. All types of users can therefore at certain times experience challenges related to query input. Method. This theoretical paper explores the concept of situated abilities in the context of information searching. Eight personas are constructed and used in a discussion of how a change of perspective on human abilities can provide a valuable contribution to research. Analysis. The personas are based upon empirical findings of user behaviour and are discussed in relation to theoretical frameworks and models. Results. Human abilities are dynamic and affected by a variety of situational factors. All people experience temporary impairments during their lives. It would therefore be purposeful to reorient information searching research by applying a situated abilities perspective. Conclusion. A situated abilities perspective may result in more inclusive search systems for all types of users.
... However, while global landmarks are very prominent, different perceptions and relations are required for navigation, especially the creation and use of a mental map. While people with cognitive disabilities are highly motivated to engage with the digital world [27], usability largely determines their success. Training in the area of wayfinding, even including landmarks, exists and is promising, but mainly focuses on people with visual impairments. ...
... Some literature reports that everyday search engines, such as Google, can be a good starting point for searching health information (Lopes and Ribeiro, 2011), and indeed some of our participants reported that they used Google for searching online health information. It has been identified that people with intellectual disability perform better with Google search, which has a simple interface that does not distract the user (Rocha et al., 2017). Our participants' responses imply that characteristics of the disseminated information itself, such as presentation and credibility, would also be important factors in promoting the accessibility of online health information. ...
Article
Purpose This study explored the current and desired use of web search, particularly for health information, by adults with intellectual disability. Design/methodology/approach The authors surveyed 39 participants who were in supported employment or attending day centers in Australia. The survey, delivered through structured interviews, increased participation and yielded data in the form of participants' narratives. The responses were analyzed through a form of thematic analysis. Findings This study's results present the participants' daily health information interests, approaches to finding information and expectations for self-sufficiency. Participants' interest was in information to stay healthy rather than purely clinical information. The participants were keen to use online information in accessible as well as entertaining and engaging formats. Supporting others close to them was a prominent intention of participants' health information access. Participants showed aspirations for an autonomous life by wanting to learn how to search. Research limitations/implications The findings of this study provide some avenues for consumer health information access to be respectful and inclusive of users with intellectual disability, both from an accessible design perspective and from a learning and support standpoint. Originality/value This study complements other human–computer interaction (HCI) studies which observe how adults with intellectual disability can be supported to engage with web search; it offers the adults' verbalized perspectives on how they wish to interact with web searching for health information, nuanced by their existing abilities and support needs.
... Indeed, voice has previously been used to facilitate HCI between learning systems and users, including those with intellectual disabilities. Example domains include navigation (Rocha et al., 2017), robotics (Gustavsson et al., 2017), and self-driving cars (Hu et al., 2019). Voice has also been considered as a supplementary mode of HCI alongside other modalities (Lee et al., 2017). ...
Article
As automated machine learning (AutoML) systems continue to progress in both sophistication and performance, it becomes important to understand the ‘how’ and ‘why’ of human-computer interaction (HCI) within these frameworks, both current and expected. Such a discussion is necessary for optimal system design, leveraging advanced data-processing capabilities to support decision-making involving humans, but it is also key to identifying the opportunities and risks presented by ever-increasing levels of machine autonomy. Within this context, we focus on the following questions: (i) What does HCI currently look like for state-of-the-art AutoML algorithms, especially during the stages of development, deployment, and maintenance? (ii) Do the expectations of HCI within AutoML frameworks vary for different types of users and stakeholders? (iii) How can HCI be managed so that AutoML solutions acquire human trust and broad acceptance? (iv) As AutoML systems become more autonomous and capable of learning from complex open-ended environments, will the fundamental nature of HCI evolve? To consider these questions, we project existing literature in HCI into the space of AutoML; this connection has, to date, largely been unexplored. In so doing, we review topics including user-interface design, human-bias mitigation, and trust in artificial intelligence (AI). Additionally, to rigorously gauge the future of HCI, we contemplate how AutoML may manifest in effectively open-ended environments. This discussion necessarily reviews projected developmental pathways for AutoML, such as the incorporation of high-level reasoning, although the focus remains on how and why HCI may occur in such a framework rather than on any implementational details. Ultimately, this review serves to identify key research directions aimed at better facilitating the roles and modes of human interactions with both current and future AutoML systems.
... Lastly, the usability for people with disabilities is considered important. While they are highly motivated to interact with the digital world (Rocha et al., 2017), usability is essential and depends on the quality of the application as well. ...
... About 14.29% of the returned technologies evaluated the usability and/or UX of software using voice-based interfaces (such as a search engine that performs a search according to the word spoken by the user). Rocha et al. [34] presented an experiment in which the main task was to perform a voice search. One of the evaluation technologies used was direct observation. ...
... Chatzidaki and Xenos [73] used tasks to evaluate efficiency (quantitative) and interviews to evaluate participants' opinions (qualitative). [A study-by-criteria matrix follows in the original, mapping references [23]–[35] to evaluation criteria labelled a–ak; its flattened layout is not reproducible here.] ...
Article
Full-text available
Natural user interface (NUI) is considered a recent topic in human–computer interaction (HCI) and provides innovative forms of interaction, which are performed through natural movements of the human body like gestures, voice, and gaze. In the software development process, usability and user eXperience (UX) evaluations are a relevant step, since they evaluate several aspects of the system, such as efficiency, effectiveness, user satisfaction, and immersion. Thus, the goal of the authors’ systematic mapping study (SMS) is to identify usability and UX evaluation technologies used by researchers and developers in software with NUIs. Their SMS selected 56 papers containing evaluation technologies for NUI. Overall, the authors identified 30 different usability and UX evaluation technologies for NUI. The analysis of these technologies reveals that most of them are used to evaluate software in general, without considering the specificities of NUI. Besides, most technologies evaluate only one aspect, Usability or UX; in other words, these technologies do not consider Usability and UX together. For future work, the authors intend to develop an evaluation technology for NUIs that fills the gaps identified in their SMS and combines Usability and UX.
... Considering also that some educators and therapists were starting to use the new console inputs and computer games during their sessions with the children [5][6][7][8], like Nintendo Wii Fit, there were only a few studies explaining how to evaluate the usability of the devices with children with Trisomy 21 and which presented accessibility guidelines [9][10][11][12][13][14][15]. ...
Article
Full-text available
After a literature review published by Nascimento et al. (2017), the research team noticed the lack of studies focused on game controllers’ accessibility during use by children with Down syndrome. In view of that, this research describes a mobile game development and its usability analyses, which were created to evaluate the accessibility of touchscreen gestural interfaces. The methodology was organized into three steps: bibliographic research and the definition of the project guidelines, the game development, and its evaluation. The guidelines used were based on a study by Nascimento et al. (2019) of the impairments that children can have, their game preferences found in Prena’s article (2014), games accessibility guidelines for people with intellectual deficiency from the Includification Book (2012), a manual of touchscreen gestural interfaces from Android and iOS, and a game development framework from Schuytema (2008). Then, for the usability analyses, the team decided to first submit the game to a group of experts in order to make some improvements before submitting it to the audience. In this way, two evaluations were done: a heuristic test with usability specialists and a cognitive walkthrough with health professionals. The list of heuristics used in the tests was created by a mash-up of the Breyer evaluation (2008) and the recommendations of the Able Games Association (2012), and the cognitive walkthrough followed the Preece, Sharp and Rogers (2007) recommendations. The results reveal some challenges in the field and adjustments, mainly in the narrative, game goals and interface feedback, that should be addressed as soon as possible.
... Voice is completely natural because, in addition to being a human characteristic, users generally do not know the restricted features of the software and interact as if they were talking to another person [3]. Voice-based interaction has been used to facilitate the communication of people with disabilities [1,13], to interact with robots [6,12], or combined with other types of interaction [9,11]. ...
... Some studies that employ voice interaction focus their work on people who have a disability, be it physical or mental [1,13]. For these authors, it is important that these people can perform everyday tasks on the internet, such as searching on Google. ...
Conference Paper
Full-text available
Natural User Interface provides the possibility of interaction with software through direct actions of the human body, such as gesture and voice. In the case of voice, it is possible for the user to interact with the system quickly and directly, making the interaction simple and thus providing accessibility. Several studies were conducted with voice as a form of interaction between user and system. Nevertheless, these studies are generally analyzed from a quantitative perspective, and few perform a detailed qualitative analysis. The goal of this paper was to qualitatively evaluate an exploratory study with voice-based interaction. For this, two steps of the Grounded Theory (GT) method were used to obtain solid results. From these results, guidelines were proposed for the creation of digital text by voice, so that prospective users have this form of interaction available when writing their texts.
... Voice and speech technologies have potential to overcome barriers presented by typing and spelling, however, research exploring how persons with ID access information using interactive voice activated services or search engines is limited [23,6]. In this study, we explore how voice activated technologies can be used by people with ID to access online information. ...
... Other research has suggested that layouts need to be simple and minimize distractions and clutter [23], and that participants with ID clicked on image results over text results. Rocha et al. [22] found that participants were better guided by cartoonish images than by text, as the text was not able to capture their attention but rather confused them. ...
... Rocha et al. [23] found that participants had more success when typing their queries on Google rather than using their voice to search Google. They were told to search for simple things like cat, dog and bread (in Portuguese). ...
Conference Paper
People with intellectual disability are keen users of information technology, but the need for spelling and typing skills often presents a barrier to information and media search and access. The paper presents a study to understand how people with intellectual disabilities can use Voice Activated Interfaces (VAIs) to access information and assist in daily activities. The study involves observations and video analysis of 18 adults with intellectual disability using VAIs and performing 4 tasks: calibrating the VAIs, using a voice assistant (Siri or Google) to search images, using voice to query Youtube, and using the voice assistant to perform a daily task (managing a calendar, finding directions, etc.). 72% of participants stated that this was their preferred form of input. 50% could perform all four tasks they attempted with successful outcomes, and 55% completed three of the tasks. We identify the main barriers and opportunities for existing VAIs and suggest future improvements, mainly around the audio feedback given to participants. Notably, we found that participants' mental model of the VAIs was that of a person, the implications of which include users speaking in long polite sentences and expecting voice responses and feedback about the state of the device. We suggest ways that VAIs can be adjusted so that they are more inclusive.
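Completion rates like those reported above (50% finished all four tasks, 55% three) come from a simple tally over per-participant task outcomes. A minimal sketch of that tally, using hypothetical outcome data rather than the study's own, and assuming "three of the tasks" means at least three successes:

```python
def share_completing_at_least(outcomes, k):
    """Fraction of participants who completed at least k of their tasks.

    outcomes: list of per-participant lists of booleans (True = task succeeded).
    """
    if not outcomes:
        return 0.0
    hits = sum(1 for tasks in outcomes if sum(tasks) >= k)
    return hits / len(outcomes)

# Hypothetical outcomes for 4 participants, 4 tasks each.
outcomes = [
    [True, True, True, True],    # all four tasks succeeded
    [True, True, True, False],   # three of four
    [True, False, True, True],   # three of four
    [False, False, True, False], # one of four
]
print(share_completing_at_least(outcomes, 4))  # 0.25
print(share_completing_at_least(outcomes, 3))  # 0.75
```

The "at least k" reading makes the two percentages nest (everyone who completed four also completed three), which is why the reported figures need not sum to 100%.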