Article

Learnability Evaluation of the Markup Language for Designing Applications Controlled by Gaze

Authors:
  • Nicolaus Copernicus University in Toruń

Abstract

We present the Gaze Interaction Markup Language (GIML) and its interpreter, the Gaze-Controlled Application Framework, which make the development of personalized gaze-based software accessible to people with no programming background. Caregivers can use this software to find new ways of communicating with people who have various physical disabilities, for example by designing personalized alternative communication boards; it also allows psychologists and cognitive scientists to easily develop experiments using eye trackers. In this paper, we present a study assessing the learnability of GIML. The aim is to check whether it is possible to learn the basics of the language during a one-hour hands-on keyboard session. To confirm this, two groups of novices with no prior experience of GIML were asked to learn about GIML and complete a tutorial. We then asked them to perform a set of tasks to test that they had mastered the basics of GIML. The results show that even people with no programming experience can learn the basics of the proposed language.
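To give a feel for the architecture described above (a caregiver-authored markup definition interpreted by a runtime framework), the Python sketch below loads a small board description. The XML tag and attribute names, and the loader itself, are hypothetical illustrations only; they are not the actual GIML syntax or the Gaze-Controlled Application Framework API.

```python
# A minimal sketch of how a gaze-application framework might interpret a
# caregiver-authored board definition. The markup below is a hypothetical
# illustration only; it is NOT the actual GIML syntax described in the paper.
import xml.etree.ElementTree as ET

BOARD_XML = """
<board name="breakfast" rows="2" cols="2" dwellTimeMs="800">
    <cell row="0" col="0" text="I am hungry"  speak="I am hungry"/>
    <cell row="0" col="1" text="I am thirsty" speak="I am thirsty"/>
    <cell row="1" col="0" text="Yes"          speak="Yes"/>
    <cell row="1" col="1" text="No"           speak="No"/>
</board>
"""

def load_board(xml_text: str) -> dict:
    """Parse a board definition into a simple structure a runtime could render."""
    root = ET.fromstring(xml_text)
    return {
        "name": root.get("name"),
        "dwell_ms": int(root.get("dwellTimeMs", "1000")),
        "cells": [
            {
                "row": int(cell.get("row")),
                "col": int(cell.get("col")),
                "text": cell.get("text"),
                "speak": cell.get("speak"),
            }
            for cell in root.findall("cell")
        ],
    }

if __name__ == "__main__":
    board = load_board(BOARD_XML)
    print(f"Loaded board '{board['name']}' with {len(board['cells'])} cells")
```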


Article
Full-text available
The perspective of our review was to evaluate eye-gaze interaction, a main tool of assistive technology, for people with cerebral palsy, supporting communication and personal development for degrees of disability that involve motor impairment. Purpose: To bring to the field alternative possibilities from the literature for better integration of people with disabilities. Methods: Systematic review. Results: We revealed the substantial impact of assistive technology on cerebral palsy patients and their degree of integration; it eases the caregiver's dedication, and the devotion of training and companionship are vital to reduce the level of abandonment. Conclusion: Early eye-gaze interaction initiated the idea of infrared eye-trackers as better solutions in the field of communication, personal interaction with others, personal development and even employment. The popularity of the eye-tracking industry is, for the present, cost-dependent, with prices remaining expensive for disabled people. For cerebral palsy, eye-gaze has taken only small steps, but with a crucial impact on quality of life. INTRODUCTION: Cerebral palsy, according to a report accepted in 2007, is the most common cause of childhood "permanent disorders of the development of movement and posture, causing activity limitation, that are attributed to non-progressive disturbances that occurred in the developing fetal or infant brain."(1) It is not a disease in the traditional sense, but describes a clinical picture of children who share a non-progressive brain injury, a lesion acquired ante-, peri- or early postnatally, in the infant's brain. The condition causes limitations in activities because of the motor disorders, which are accompanied by disturbances of communication, coordination, sensation, perception, cognition and behaviour, as well as epilepsy and musculoskeletal and respiratory problems. All these factors and their distribution classify cerebral palsy as a functional disability. Meeting the needs of disabled people, especially those with cerebral palsy, is not a resource-rich domain. It involves several outstanding technologies, but their high cost makes them hard to obtain, so the situation is challenging.(2) The main goals in the management of cerebral palsy are enhancing children's neurological development to maximize their mobility, reducing spasticity and hypertonia, speech therapy for better communication, physiotherapy for scoliosis and respiratory deficiencies caused by musculoskeletal problems, and addressing other co-morbidities. As part of this multidisciplinary effort we must take action and consider the rapid evolution of technology, especially assistive technology. Tools such as wheelchairs/electric wheelchairs, AAC technology and text-to-speech devices, the actual resources in the literature, were highlighted in our review.
Conference Paper
Full-text available
Several methods of gaze control of video playback were implemented in the MovEye application. Two versions of MovEye are almost ready: one for watching online movies from the YouTube service and one for watching movies stored as files on local drives. We have two goals: the social one is to help people with physical disabilities to control and enrich their immediate environment; the scientific one is to compare the usability of several gaze control methods for video playback for healthy and disabled users. This paper presents our gaze control applications. Our next step will be conducting accessibility and user experience (UX) tests with both healthy and disabled users. In the long term, this research could lead to the implementation of gaze control in TV sets and other video playback devices.
Conference Paper
Full-text available
The Web is essential for most people, and its accessibility should not be limited to conventional input sources like mouse and keyboard. In recent years, eye tracking systems have greatly improved and are beginning to play an important role as an input medium. In this work, we present GazeTheWeb, a Web browser accessible solely by eye gaze input. It effectively supports all browsing operations such as search, navigation and bookmarks. GazeTheWeb is based on a Chromium-powered framework, comprising Web extraction to classify interactive elements and application of gaze interaction paradigms to represent these elements.
Article
Full-text available
Objective: To establish the impact of a gaze-based assistive technology (AT) intervention on activity repertoire, autonomous use, and goal attainment in children with severe physical impairments, and to examine parents' satisfaction with the gaze-based AT and with services related to the gaze-based AT intervention. Methods: Non-experimental multiple case study with before, after, and follow-up design. Ten children with severe physical impairments without speaking ability (aged 1-15 years) participated in gaze-based AT intervention for 9-10 months, during which period the gaze-based AT was implemented in daily activities. Results: Repertoire of computer activities increased for seven children. All children had sustained usage of gaze-based AT in daily activities at follow-up, all had attained goals, and parents' satisfaction with the AT and with services was high. Discussion: The gaze-based AT intervention was effective in guiding parents and teachers to continue supporting the children to perform activities with the AT after the intervention program.
Article
Full-text available
Thanks to advances in electric wheelchair design, persons with motor impairments due to diseases such as amyotrophic lateral sclerosis (ALS) have tools to become more independent and mobile. However, an electric wheelchair generally requires considerable skill to learn how to use and operate. Moreover, some persons with motor disabilities cannot drive an electric wheelchair manually (even with a joystick), because they lack the physical ability to control their hand movement (as is the case with people with ALS). In this paper, we propose a novel system that enables a person with a motor disability to control a wheelchair via eye gaze and that provides continuous, real-time navigation in unknown environments. The system comprises a Permobile M400 wheelchair, eye tracking glasses, a depth camera to capture the geometry of the ambient space, a set of ultrasound and infrared sensors to detect obstacles in close proximity that are out of the field of view of the depth camera, a laptop placed on a flexible mount for maximized comfort, and a safety off switch to turn off the system whenever needed. First, a novel algorithm is proposed to support continuous, real-time target identification, path planning, and navigation in unknown environments. Second, the system utilizes a novel N-cell grid-based graphical user interface that adapts to input/output interface specifications. Third, a calibration method for the eye tracking system is implemented to minimize calibration overheads. A case study with a person with ALS is presented, and interesting findings are discussed. The participant showed improved performance in terms of calibration time, task completion time, and navigation speed for navigation trips between the office, dining room, and bedroom. Furthermore, debriefing the caregiver also showed promising results: the participant enjoyed a higher level of confidence driving the wheelchair and experienced no collisions throughout the experiment.
Article
Full-text available
This paper provides a brief report on families' experiences of eye gaze technology as one form of augmentative and alternative communication (AAC) for individuals with Rett syndrome (RTT), and the advice, training and support they receive in relation to it. An online survey exploring communication and AAC was circulated to 190 Dutch families; of the 67 questionnaires that were returned, 63 had answered questions relating to eye gaze technology. These 63 were analysed according to parameters including: experiences during trial periods and longer-term use; expert knowledge, advice and support; funding; communicative progress; and family satisfaction. Twenty respondents were using or had previous experience of using an eye gaze system at the time of the survey, and 28 of those with no prior experience wanted to try a system in the future. Following a trial period, 11 systems had been funded through health insurance for long-term use, and two families had decided a system was not appropriate for them. Levels of support during trials and following long-term provision varied. Despite frustrations with the technology, satisfaction with the systems was higher than satisfaction with the support. The majority of families reported progress in their child's skills with longer-term use. These findings suggest that although eye gaze technologies offer potential to individuals with RTT and their families, greater input from suppliers and knowledgeable AAC professionals is essential for individuals and families to benefit maximally. Higher levels of training and support should be part of the 'package' when an eye gaze system is provided.
Article
Full-text available
Assessment of mother-child interactions is a core issue of early child development and psychopathology. This paper focuses on the concept of "synchrony" and examines (1) how synchrony in mother-child interaction is defined and operationalized; and (2) the contribution that the concept of synchrony has brought to understanding the nature of mother-child interactions. For the period between 1977 and 2013, we searched several databases using the keywords "synchrony", "interaction" and "mother-child". We focused on studies examining parent-child interactions among children aged 2 months to 5 years. From the 63 relevant studies, we extracted study description variables (authors, year, design, number of subjects, age); assessment conditions and modalities; and main findings. The most common terms referring to synchrony were mutuality, reciprocity, rhythmicity, harmonious interaction, turn-taking and shared affect; all were used to characterize the mother-child dyad. As a consequence, we propose defining synchrony as a dynamic and reciprocal adaptation of the temporal structure of behaviors and shared affect between interactive partners. Three main types of assessment methods for studying synchrony emerged: (1) global interaction scales with dyadic items; (2) specific synchrony scales; and (3) micro-coded time-series analyses. It appears that synchrony should be regarded as a social signal per se, as it has been shown to be valid in both normal and pathological populations. Better mother-child synchrony is associated with familiarity (vs. an unknown partner), a healthy mother (vs. a mother with a pathology), typical development (vs. psychopathological development), and more positive child outcomes. Synchrony is a key feature of mother-infant interactions. Adopting an objective approach to studying synchrony is not a simple task, given the available assessment tools and synchrony's temporality and multimodal expression. We propose an integrative approach combining clinical observation and engineering techniques to improve the quality of synchrony analysis.
Chapter
Full-text available
This section considers the application of eye movements to user interfaces, both for analyzing interfaces (measuring usability and gaining insight into human performance) and as an actual control medium within a human-computer dialogue. The two areas have generally been reported separately, but this book seeks to tie them together. For usability analysis, the user's eye movements while using the system are recorded and later analyzed retrospectively, but the eye movements do not affect the interface in real time. As a direct control medium, the eye movements are obtained and used in real time as an input to the user-computer dialogue. They might be the sole input, typically for disabled users or hands-busy applications, or they might be used as one of several inputs, combining with mouse, keyboard, sensors, or other devices. Interestingly, the principal challenges for both retrospective and real-time eye tracking in human-computer interaction (HCI) turn out to be analogous. For retrospective analysis, the problem is to find appropriate ways to use and interpret the data; it is not nearly as straightforward as it is with more typical task performance, speed, or error data. For real-time use, the problem is to find appropriate ways to respond judiciously to eye movement input, and avoid over-responding; it is not nearly as straightforward as responding to well-defined, intentional mouse or keyboard input. We will see in this chapter how these two problems are closely related. These uses of eye tracking in HCI have been highly promising for many years, but progress in making good use of eye movements in HCI has been slow to date. We see promising research work, but we have not yet seen wide use of these approaches in practice or in the marketplace. We will describe the promises of this technology, its limitations, and the obstacles that must still be overcome. Work presented in this book and elsewhere shows that the field is indeed beginning to flourish.
Article
Full-text available
In this paper we present the development of an augmentative system for people with movement disabilities (mostly people with cerebral palsy) to communicate with the people around them through a human-computer interaction mechanism. We developed an assistive technology application based on gaze tracking to select symbols in communication boards, which represent words or ideas, so that users can easily create phrases for the patient's daily needs. This kind of communication board is already used by people with cerebral palsy; here we intend to extend its use to people with absolutely no motor coordination. An important improvement of the proposed system with respect to existing solutions is the ability to work in the presence of voluntary or involuntary head movements.
Article
Full-text available
Young infants are capable of integrating auditory and visual information and their speech perception can be influenced by visual cues, while 5-month-olds detect mismatch between mouth articulations and speech sounds. From 6 months of age, infants gradually shift their attention away from eyes and towards mouth in articulating faces, potentially to benefit from intersensory redundancy of audio-visual (AV) cues. Using eye-tracking, we investigated whether 6- to 9-month-olds show a similar age-related increase of looking to the mouth, while observing congruent and/or redundant vs. mismatched and non-redundant speech cues. Participants distinguished between congruent and incongruent AV cues as reflected by the amount of looking to the mouth. They showed an age-related increase in attention to the mouth, but only for non-redundant, mismatched AV speech cues. Our results highlight the role of intersensory redundancy and audio-visual mismatch mechanisms in facilitating the development of speech processing in infants under 12 months of age.
Article
Full-text available
This paper introduces the work of the COGAIN “communication by gaze interaction” European Network of Excellence that is working toward giving people with profound disabilities the opportunity to communicate and control their environment by eye gaze control. It shows the need for developing eye gaze based communication systems, and illustrates the effectiveness of newly developed COGAIN eye gaze control systems with a series of case studies, each showing differing aspects of the benefits offered by gaze control. Finally, the paper puts forward a strong case for users, professionals and researchers to collaborate towards developing gaze based communication systems to enable and empower people with disabilities.
Article
Full-text available
Eye movement-based analysis can enhance traditional performance, protocol, and walk-through evaluations of computer interfaces. Despite a substantial history of eye movement data collection in tasks, there is still a great need for an organized definition and evaluation of appropriate measures. Several measures based upon eye movement locations and scanpaths were evaluated here, to assess their validity for assessment of interface quality. Good and poor interfaces for a drawing tool selection program were developed by manipulating the grouping of tool icons. These were subsequently evaluated by a collection of 50 interface designers and typical users. Twelve subjects used the interfaces while their eye movements were collected. Compared with a randomly organized set of component buttons, well-organized functional grouping resulted in shorter scanpaths, covering smaller areas. The poorer interface resulted in more, but similar duration, fixations than the better interface. Whereas the poor interface produced less efficient search behavior, the layout of component representations did not influence their interpretability. Overall, data obtained from eye movements can significantly enhance the observation of users' strategies while using computer interfaces, which can subsequently improve the precision of computer interface evaluations. Relevance to industry: The software development industry requires improved methods for the objective analysis and design of software interfaces. This study provides a foundation for using eye movement analysis as part of an objective evaluation tool for many phases of interface analysis. The present approach is instructional in its definition of eye movement-based measures, and is evaluative with respect to the utility of these measures.
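As a concrete illustration of the location- and scanpath-based measures discussed above, the sketch below computes fixation count, mean fixation duration, and total scanpath length from a list of fixations. The fixation record format and the sample values are assumptions for this sketch, not data or code from the cited study.

```python
# Illustrative computation of simple eye-movement measures (fixation count,
# mean fixation duration, scanpath length). The fixation record format is an
# assumption for this sketch, not the format used in the cited study.
from math import hypot

# (x_px, y_px, duration_ms) for each fixation, in temporal order
fixations = [
    (120, 340, 210),
    (410, 355, 180),
    (430, 120, 260),
    (615, 130, 240),
]

def fixation_count(fix):
    return len(fix)

def mean_fixation_duration_ms(fix):
    return sum(d for _, _, d in fix) / len(fix)

def scanpath_length_px(fix):
    """Sum of Euclidean distances between consecutive fixation locations."""
    return sum(
        hypot(x2 - x1, y2 - y1)
        for (x1, y1, _), (x2, y2, _) in zip(fix, fix[1:])
    )

print(f"{fixation_count(fixations)} fixations, "
      f"mean duration {mean_fixation_duration_ms(fixations):.0f} ms, "
      f"scanpath {scanpath_length_px(fixations):.0f} px")
```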
Conference Paper
Full-text available
In this paper, we present an attentive windowing technique that uses eye tracking, rather than manual pointing, for focus window selection. We evaluated the performance of 4 focus selection techniques: eye tracking with key activation, eye tracking with automatic activation, mouse and hotkeys in a typing task with many open windows. We also evaluated a zooming windowing technique designed specifically for eye-based control, comparing its performance to that of a standard tiled windowing environment. Results indicated that eye tracking with automatic activation was, on average, about twice as fast as mouse and hotkeys. Eye tracking with key activation was about 72% faster than manual conditions, and preferred by most participants. We believe eye input performed well because it allows manual input to be provided in parallel to focus selection tasks. Results also suggested that zooming windows outperform static tiled windows by about 30%. Furthermore, this performance gain scaled with the number of windows used. We conclude that eye-controlled zooming windows with key activation provides an efficient and effective alternative to current focus window selection techniques.
Conference Paper
Full-text available
Bookmarks are a valuable webpage re-visitation technique, but it is often difficult to find desired items in extensive bookmark collections. This experiment used response-time measures and eye-movement tracking to investigate how different information structures within bookmarks influence their salience and recognizability. Participants were presented with a series of news websites. The task following presentation of each site was to find the bookmark indexing the previously-seen page as quickly as possible. The Informational Structure of bookmarks was manipulated (top-down vs. bottom-up verbal organizations), together with the Number of Informational Cues present (one, two or three). Only this latter factor affected gross search times: Two cues were optimal, one cue was highly sub-optimal. However, more detailed eye-movement analyses of fixation behaviour on target items revealed interactive effects of both experimental factors, suggesting that the efficacy of bookmark recognition is crucially dependent on having an optimal combination of information quantity and information organization.
Conference Paper
Full-text available
Chronic neurological disorders in their advanced phase are characterized by a progressive loss of mobility (use of upper and lower limbs), speech and social life. Some of these pathologies, such as amyotrophic lateral sclerosis and multiple sclerosis, are paradigmatic of these deficits. High-technology communication instruments, such as eye tracking, can be an extremely important possibility to reintroduce these patients into their family and social life, in particular when they suffer severe disability. This paper reports and describes the results of an ongoing experiment on the impact of eye tracking on the quality of life of amyotrophic lateral sclerosis patients. The aim of the experiment is to evaluate if and when eye tracking technologies have a positive impact on patients' lives.
Conference Paper
Full-text available
We introduce ViewPointer, a wearable eye contact sensor that detects deixis towards ubiquitous computers embedded in real world objects. ViewPointer consists of a small wearable camera no more obtrusive than a common Bluetooth headset. ViewPointer allows any real-world object to be augmented with eye contact sensing capabilities, simply by embedding a small infrared (IR) tag. The headset camera detects when a user is looking at an infrared tag by determining whether the reflection of the tag on the cornea of the user's eye appears sufficiently central to the pupil. ViewPointer not only allows any object to become an eye contact sensing appliance, it also allows identification of users and transmission of data to the user through the object. We present a novel encoding scheme used to uniquely identify ViewPointer tags, as well as a method for transmitting URLs over tags. We present a number of scenarios of application as well as an analysis of design principles. We conclude eye contact sensing input is best utilized to provide context to action.
Conference Paper
Full-text available
Eye typing provides a means of communication for severely handicapped people, even those who are only capable of moving their eyes. This paper considers the features, functionality and methods used in the eye typing systems developed in the last twenty years. Primarily concerned with text production, the paper also addresses other communication related issues, among them customization and voice output.
Conference Paper
Full-text available
Individuals with severe multiple disabilities have little or no opportunity to express their own wishes, make choices and move independently. Because of this, the objective of this work has been to develop a prototype for a gaze-driven device to manoeuvre powered wheelchairs or other moving platforms. The prototype has the same capabilities as a normal powered wheelchair, with two exceptions. Firstly, the prototype is controlled by eye movements instead of by a normal joystick. Secondly, the prototype is equipped with a sensor that stops all motion when the machine approaches an obstacle. The prototype has been evaluated in a preliminary clinical test with two users. Both users clearly communicated that they appreciated and had mastered the ability to control a powered wheelchair with their eye movements.
Conference Paper
Full-text available
This paper presents StarGazer - a new 3D interface for gaze-based interaction and target selection using continuous pan and zoom. Through StarGazer we address the issues of interacting with graph structured data and applications (i.e. gaze typing systems) using low resolution eye trackers or small-size displays. We show that it is possible to make robust selections even with a large number of selectable items on the screen and noisy gaze trackers. A test with 48 subjects demonstrated that users who have never tried gaze interaction before could rapidly adapt to the navigation principles of StarGazer. We tested three different display sizes (down to PDA-sized displays) and found that large screens are faster to navigate than small displays and that the error rate is higher for the smallest display. Half of the subjects were exposed to severe noise deliberately added to the cursor positions. We found that this had a negative impact on efficiency. However, the user remained in control and the noise did not seem to affect the error rate. Additionally, three subjects tested the effects of temporally adding noise to simulate latency in the gaze tracker. Even with a significant latency (about 200 ms) the subjects were able to type at acceptable rates. In a second test, seven subjects were allowed to adjust the zooming speed themselves. They achieved typing rates of more than eight words per minute without using language modeling. We conclude that the StarGazer application is an intuitive 3D interface for gaze navigation, allowing more selectable objects to be displayed on the screen than the accuracy of the gaze trackers would otherwise permit.
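The continuous pan-and-zoom principle described above can be sketched as a simple per-frame update that moves the view centre toward the current gaze point and zooms in until a selection threshold is reached. All gains, thresholds and data below are illustrative assumptions, not values or code from StarGazer.

```python
# Illustrative per-frame update for gaze-driven pan-and-zoom selection.
# Gains and thresholds are arbitrary assumptions, not StarGazer's actual values.
from dataclasses import dataclass

@dataclass
class View:
    cx: float = 0.5   # view centre, normalised screen coordinates
    cy: float = 0.5
    zoom: float = 1.0

PAN_GAIN = 0.08        # fraction of the gaze offset applied per frame
ZOOM_GAIN = 1.02       # multiplicative zoom per frame while gaze is steady
SELECT_ZOOM = 6.0      # zoom level at which the item under gaze is selected

def update(view: View, gaze_x: float, gaze_y: float) -> bool:
    """Advance one frame; return True when a selection should be triggered."""
    # Pan the view centre toward the (noisy) gaze point.
    view.cx += PAN_GAIN * (gaze_x - view.cx)
    view.cy += PAN_GAIN * (gaze_y - view.cy)
    # Zoom in continuously; deep zoom implies the user is dwelling on a region.
    view.zoom *= ZOOM_GAIN
    return view.zoom >= SELECT_ZOOM

view = View()
for gaze in [(0.72, 0.31)] * 120:          # simulated steady gaze samples
    if update(view, *gaze):
        print(f"select item near ({view.cx:.2f}, {view.cy:.2f})")
        break
```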
Conference Paper
Full-text available
We investigate the usability of an eye-controlled writing interface that matches the nature of human eye gaze, which always moves and is not immediately able to trigger the selection of a button. Such an interface allows the eye to move continuously, and it is not necessary to dwell upon a specific position to trigger a command. We classify writing into three categories (typing, gesturing, and continuous writing) and explain why continuous writing comes closest to the nature of human eye gaze. We propose Quikwriting, which was originally designed for handhelds, as a method for text input that best meets the requirements of eye gaze controlled input. We adapt its design for usage with eye gaze. Based on the results of a first study, we formulate some guidelines for the design of future Quikwriting-based eye gaze controlled applications.
Article
Full-text available
Research on gaze and eye contact was organized within the framework of Patterson's (1982) sequential functional model of nonverbal exchange. Studies were reviewed showing how gaze functions to (a) provide information, (b) regulate interaction, (c) express intimacy, (d) exercise social control, and (e) facilitate service and task goals. Research was also summarized that describes personal, experiential, relational, and situational antecedents of gaze and reactions to gaze. Directions were given for a functional analysis of the relation between gaze and physiological responses. Attribution theories were integrated into the sequential model for making predictions about people's perceptions of their own gazing behavior and the gazing behavior of others. Data on people's accuracy in reporting their own and others' gaze were presented and integrated with related findings in attribution research. The sequential model was used to analyze research studies measuring the interaction between gaze and personal and contextual variables. Methodological and measurement issues were discussed and directions were outlined for future research.
Article
Full-text available
Infants acquire language with remarkable speed, although little is known about the mechanisms that underlie the acquisition process. Studies of the phonetic units of language have shown that early in life, infants are capable of discerning differences among the phonetic units of all languages, including native- and foreign-language sounds. Between 6 and 12 mo of age, the ability to discriminate foreign-language phonetic units sharply declines. In two studies, we investigate the necessary and sufficient conditions for reversing this decline in foreign-language phonetic perception. In Experiment 1, 9-mo-old American infants were exposed to native Mandarin Chinese speakers in 12 laboratory sessions. A control group also participated in 12 language sessions but heard only English. Subsequent tests of Mandarin speech perception demonstrated that exposure to Mandarin reversed the decline seen in the English control group. In Experiment 2, infants were exposed to the same foreign-language speakers and materials via audiovisual or audio-only recordings. The results demonstrated that exposure to recorded Mandarin, without interpersonal interaction, had no effect. Between 9 and 10 mo of age, infants show phonetic learning from live, but not prerecorded, exposure to a foreign language, suggesting a learning process that does not require long-term listening and is enhanced by social interaction.
Chapter
Experts from a range of disciplines assess the foundations and implications of a novel action-oriented view of cognition. Cognitive science is experiencing a pragmatic turn away from the traditional representation-centered framework toward a view that focuses on understanding cognition as “enactive.” This enactive view holds that cognition does not produce models of the world but rather subserves action as it is grounded in sensorimotor skills. In this volume, experts from cognitive science, neuroscience, psychology, robotics, and philosophy of mind assess the foundations and implications of a novel action-oriented view of cognition. Their contributions and supporting experimental evidence show that an enactive approach to cognitive science enables strong conceptual advances, and the chapters explore key concepts for this new model of cognition. The contributors discuss the implications of an enactive approach for cognitive development; action-oriented models of cognitive processing; action-oriented understandings of consciousness and experience; and the accompanying paradigm shifts in the fields of philosophy, brain science, robotics, and psychology. Contributors: Moshe Bar, Lawrence W. Barsalov, Olaf Blanke, Jeannette Bohg, Martin V. Butz, Peter F. Dominey, Andreas K. Engel, Judith M. Ford, Karl J. Friston, Chris D. Frith, Shaun Gallagher, Antonia Hamilton, Tobias Heed, Cecilia Heyes, Elisabeth Hill, Matej Hoffmann, Jakob Hohwy, Bernhard Hommel, Atsushi Iriki, Pierre Jacob, Henrik Jörntell, Jürgen Jost, James Kilner, Günther Knoblich, Peter König, Danica Kragic, Miriam Kyselo, Alexander Maye, Marek McGann, Richard Menary, Thomas Metzinger, Ezequiel Morsella, Saskia Nagel, Kevin J. O'Regan, Pierre-Yves Oudeyer, Giovanni Pezzulo, Tony J. Prescott, Wolfgang Prinz, Friedemann Pulvermüller, Robert Rupert, Marti Sanchez-Fibla, Andrew Schwartz, Anil K. Seth, Vicky Southgate, Antonella Tramacere, John K. Tsotsos, Paul F. M. J. Verschure, Gabriella Vigliocco, Gottfried Vosgerau
Conference Paper
We present cascading dwell gaze typing, a novel approach to dwell-based eye typing that dynamically adjusts the dwell time of keys in an on-screen keyboard based on the likelihood that a key will be selected next, and the location of the key on the keyboard. Our approach makes unlikely keys more difficult to select and likely keys easier to select by increasing and decreasing their required dwell times, respectively. To maintain a smooth typing rhythm for the user, we cascade the dwell time of likely keys, slowly decreasing the minimum allowable dwell time as a user enters text. Cascading the dwell time affords users the benefits of faster dwell times while causing little disruption to users' typing cadence. Results from a longitudinal study with 17 non-disabled participants show that our dynamic cascading dwell technique was significantly faster than a static dwell approach. Participants were able to achieve typing speeds of 12.39 WPM on average with our cascading technique, whereas participants were able to achieve typing speeds of 10.62 WPM on average with a static dwell time approach. In a small evaluation conducted with five people with ALS, participants achieved average typing speeds of 9.51 WPM with our cascading dwell approach. These results show that our dynamic cascading dwell technique has the potential to improve gaze typing for users with and without disabilities.
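A minimal sketch of the dwell-time adjustment idea described above: each key's dwell time is scaled by how likely the language model thinks that key is, and the minimum allowable dwell is lowered gradually ("cascaded") as the user keeps entering text. The constants, scaling rule and probability values are assumptions for illustration, not the study's implementation.

```python
# Illustrative dynamic dwell-time assignment for an on-screen keyboard.
# Probabilities and timing constants are assumptions, not the study's values.
BASE_DWELL_MS = 600       # static dwell time used as the starting point
MIN_DWELL_MS = 250        # hard floor for the cascaded minimum
CASCADE_STEP_MS = 25      # how much the floor drops per key entered

def dwell_time_ms(p_next: float, current_floor_ms: float) -> float:
    """More likely keys get shorter dwell, unlikely keys get longer dwell."""
    # Scale inversely with likelihood: p=1 -> well below base, p=0 -> well above.
    scaled = BASE_DWELL_MS * (1.5 - p_next)
    return max(current_floor_ms, min(scaled, 2 * BASE_DWELL_MS))

def cascade(current_floor_ms: float) -> float:
    """Slowly lower the minimum dwell as the user keeps entering text."""
    return max(MIN_DWELL_MS, current_floor_ms - CASCADE_STEP_MS)

floor = BASE_DWELL_MS
for key, p in [("t", 0.35), ("h", 0.80), ("e", 0.92), ("q", 0.01)]:
    print(f"key '{key}': dwell {dwell_time_ms(p, floor):.0f} ms (floor {floor:.0f} ms)")
    floor = cascade(floor)
```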
Chapter
Humans learn motor skills over an extended period of time, in parallel with many other cognitive changes. The ways in which action cognition develops and links to social and executive cognition are under investigation. Recent literature is reviewed which finds evidence that infants advance from chaotic movement to adult-like patterns in the first two or three years of life, and that their motor performance continues to improve and develop into the teenage years. Studies of links between motor and cognitive systems suggest that motor skill is weakly linked to executive function and more robustly predicts social skill. Few, if any, models account directly for these patterns of results, so the different categories of models available are described.
Chapter
Weekly observations of thirteen mother-infant dyads, from one to five months of age, were used to examine whether the type and timing of maternal changes in infant postural position were related to infant gaze, facial expression and reaching. Results show that mothers changed the infant's position more frequently when the infant was gazing away from them compared to when the infant was gazing at them, and when the infant was facially either neutral or positive. When the infants gazed away, mothers alternated their position between sitting upright while facing the infant and upright while facing in the direction of the infant's gaze. Some mothers' choice of postural position matched the infant's developmental changes in gaze, while other mothers lagged behind. These mismatches of gaze and postural position continued until the developmental onset of reaching. Social-developmental processes are discussed with respect to the role of postural and sensorimotor factors using a dynamic systems perspective.
Article
Assistive technology includes equipment, devices and software solutions that increase the functional capabilities of people with disabilities and improve the quality of their lives. The article presents assistive technology for people with cerebral palsy. These include mobility aids that enable people with cerebral palsy to walk independently. For those who cannot walk, proper seating is very important. People who cannot propel a manual wheelchair can control an electric wheelchair with various controls. There are several augmentative and alternative communication devices for people with cerebral palsy who are not able to speak. Finally, environmental control systems are presented.
Article
Dasher is a promising fast assistive gaze communication method. However, previous evaluations of Dasher have been inconclusive. Either the studies have been too short, involved too few participants, suffered from sampling bias, lacked a control condition, used an inappropriate language model, or a combination of the above. To rectify this, we report results from two new evaluations of Dasher carried out using a Tobii P10 assistive eye-tracker machine. We also present a method of modifying Dasher so that it can use a state-of-the-art long-span statistical language model. Our experimental results show that compared to a baseline eye-typing method, Dasher resulted in significantly faster entry rates (12.6 wpm versus 6.0 wpm in Experiment 1, and 14.2 wpm versus 7.0 wpm in Experiment 2). These faster entry rates were possible while maintaining error rates comparable to the baseline eye-typing method. Participants' perceived physical demand, mental demand, effort and frustration were all significantly lower for Dasher. Finally, participants significantly rated Dasher as being more likeable, requiring less concentration and being more fun.
Article
Significance Infants discriminate speech sounds universally until 8 mo of age, then native discrimination improves and nonnative discrimination declines. Using magnetoencephalography, we investigate the contribution of auditory and motor brain systems to this developmental transition. We show that 7-mo-old infants activate auditory and motor brain areas similarly for native and nonnative sounds; by 11–12 mo, greater activation in auditory brain areas occurs for native sounds, whereas greater activation in motor brain areas occurs for nonnative sounds, matching the adult pattern. We posit that hearing speech invokes an Analysis by Synthesis process: auditory analysis of speech is coupled with synthesis that predicts the motor plans necessary to produce it. Both brain systems contribute to the developmental transition in infant speech perception.
Article
In natural behavior, visual information is actively sampled from the environment by a sequence of gaze changes. The timing and choice of gaze targets, and the accompanying attentional shifts, are intimately linked with ongoing behavior. Nonetheless, modeling of the deployment of these fixations has been very difficult because they depend on characterizing the underlying task structure. Recently, advances in eye tracking during natural vision, together with the development of probabilistic modeling techniques, have provided insight into how the cognitive agenda might be included in the specification of fixations. These techniques take advantage of the decomposition of complex behaviors into modular components. A particular subset of these models casts the role of fixation as that of providing task-relevant information that is rewarding to the agent, with fixation being selected on the basis of expected reward and uncertainty about environmental state. We review this work here and describe how specific examples can reveal general principles in gaze control.
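The reward-and-uncertainty account of fixation selection described above can be illustrated with a toy scorer that prefers gaze targets whose information is both valuable to the current task and currently uncertain. The candidate targets, numbers and scoring rule below are illustrative assumptions, not a model taken from the review.

```python
# Toy illustration of reward- and uncertainty-weighted fixation selection.
# The candidate targets, rewards, and weighting are assumptions for this sketch.
candidates = {
    # target: (expected task reward, uncertainty about its current state)
    "oncoming_car": (0.9, 0.8),
    "speedometer":  (0.4, 0.2),
    "pedestrian":   (0.7, 0.9),
    "radio":        (0.1, 0.1),
}

UNCERTAINTY_WEIGHT = 0.5   # how strongly uncertainty drives gaze (assumed)

def fixation_value(reward: float, uncertainty: float) -> float:
    """Targets that matter to the task AND are uncertain are worth fixating."""
    return reward * (1.0 + UNCERTAINTY_WEIGHT * uncertainty)

next_target = max(candidates, key=lambda t: fixation_value(*candidates[t]))
print("next fixation target:", next_target)
```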
Article
Language input is necessary for language learning, yet little is known about whether, in natural environments, the speech style and social context of language input to children impact language development. In the present study we investigated the relationship between language input and language development, examining both the style of parental speech, comparing 'parentese' speech to standard speech, and the social context in which speech is directed to children, comparing one-on-one (1:1) to group social interactions. Importantly, the language input variables were assessed at home using digital first-person perspective recordings of the infants' auditory environment as they went about their daily lives (N = 26, 11- and 14-month-olds). We measured language development using (a) concurrent speech utterances, and (b) word production at 24 months. Parentese speech in 1:1 contexts is positively correlated with both concurrent speech and later word production. Mediation analyses further show that the effect of parentese speech in 1:1 contexts on infants' later language is mediated by concurrent speech. Our results suggest that both the social context and the style of speech in language addressed to children are strongly linked to a child's future language development.
Conference Paper
Infant language acquisition research faces the challenge of dealing with subjects who are unable to provide spoken answers to research questions. To obtain comprehensible data from such subjects, eye tracking is a suitable research tool, as the infants' gaze can be interpreted as behavioural responses. The purpose of the current study was to investigate the amount of training necessary for participants to learn an audio-visual contingency and present anticipatory looking behaviour in response to an auditory stimulus. Infants (n=22) and adults (n=16) were presented with training sequences, every fourth of which was followed by a test sequence. Training sequences contained implicit audio-visual contingencies consisting of a syllable (/da/ or /ga/) followed by an image appearing on the left/right side of the screen. Test sequences were identical to training sequences except that no image appeared. The latency in time to first fixation towards the non-target area during test sequences was used as a measurement of whether the participants had grasped the contingency. Infants were found to present anticipatory looking behaviour after 24 training trials. Adults were found to present anticipatory looking behaviour after 28-36 training trials. In future research a more interactive experiment design will be employed in order to individualise the amount of training, which will increase the time span available for testing.
Article
There is little dispute that the main channels of intercommunication of people with the world at large are: sight, sound, and touch; and for people with other people: eye-contact, speech, gesture. Advanced human-computer interfaces increasingly implicate speech i/o, and touch or some form of manual input.
Article
This paper presents a theoretical account of the sequence and duration of eye fixation during a number of simple cognitive tasks, such as mental rotation, sentence verification, and quantitative comparison. In each case, the eye fixation behavior is linked to a processing model for the task by assuming that the eye fixates the referent of the symbol being operated on.
Article
In this paper, we propose a teaching method as an alternative to the concurrent think-aloud (CTA) method for usability evaluation. In the teaching method, the test participant, after becoming familiar with the system, demonstrates it to a seemingly naive user (a confederate) and describes how to accomplish certain tasks. In a study that compared the teaching and the CTA methods for evaluating usability of human-computer interactive tasks, the results indicated that the number of verbalizations elicited using the teaching method far exceeded those elicited using the CTA method. Also, the concurrent verbalizations were dominated by the participants' interactive behavior and provided little insight into the participants' thought processes or search strategies, which were easily captured using the teaching method.
Article
To date, several eye input methods have been developed, which, however, are usually designed for specific purposes (e.g. typing) and require dedicated graphical interfaces. In this paper we present Eye-S, a system that allows general input to be provided to the computer through a pure eye-based approach. Thanks to the "eye graffiti" communication style adopted, the technique can be used both for writing and for generating other kinds of commands. In Eye-S, letters and general eye gestures are created through sequences of fixations on nine areas of the screen, which we call hotspots. Being usually not visible, such sensitive regions do not interfere with other applications, which can therefore exploit all the available display space.
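The hotspot-sequence idea can be sketched as mapping each fixation to one of nine screen regions (a 3x3 grid) and matching the resulting sequence against a gesture dictionary. The grid layout and the example gesture definitions below are assumptions for illustration, not the actual Eye-S alphabet.

```python
# Illustrative recognition of "eye graffiti" gestures as sequences of fixations
# on a 3x3 grid of screen hotspots. The gesture dictionary is an assumed
# example, not the actual Eye-S letter definitions.
SCREEN_W, SCREEN_H = 1920, 1080

def hotspot(x: float, y: float) -> int:
    """Map a fixation at pixel (x, y) to one of nine hotspots, numbered 0-8."""
    col = min(int(3 * x / SCREEN_W), 2)
    row = min(int(3 * y / SCREEN_H), 2)
    return 3 * row + col

# Hypothetical gesture dictionary: hotspot sequences -> commands/letters.
GESTURES = {
    (0, 4, 8): "L",          # top-left -> centre -> bottom-right
    (2, 4, 6): "open_menu",  # top-right -> centre -> bottom-left
}

def recognize(fixations):
    sequence = tuple(hotspot(x, y) for x, y in fixations)
    return GESTURES.get(sequence)

# Three fixations sweeping diagonally across the screen.
print(recognize([(200, 150), (960, 540), (1800, 1000)]))  # -> "L"
```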
Conference Paper
In recent years, eye tracking systems have greatly improved, beginning to play an important role in the assistive technology field. Eye tracking relates to the capability of some devices to detect and measure eye movements, with the aim to precisely identify the user's gaze direction on a screen. The acquired data can then be exploited to provide commands to the computer. In this paper we present WeyeB, a browsing system which allows the two basic Web surfing operations, namely page scrolling and link selection, to be easily performed without using the hands.
Conference Paper
A Human-Computer Interaction (HCI) system that is designed for individuals with severe disabilities to simulate control of a traditional computer mouse is introduced. The camera-based system monitors a user's eyes and allows the user to simulate clicking the mouse using voluntary blinks and winks. For users who can control head movements and can wink with one eye while keeping their other eye visibly open, the system allows complete use of a typical mouse, including moving the pointer, left and right clicking, double clicking, and click-and-dragging. For users who cannot wink but can blink voluntarily the system allows the user to perform left clicks, the most common and useful mouse action. The system does not require any training data to distinguish open eyes versus closed eyes. Eye classification is accomplished online during real-time interactions. The system had an accuracy of 8027/8306 = 96.6% in classifying sub-images with open or closed eyes and successfully allows the users to simulate a traditional computer mouse.
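A minimal sketch of how detected eye states could be mapped to mouse actions, with a duration threshold to separate voluntary from spontaneous blinks. The state labels, threshold and action mapping are assumptions for this sketch, not the cited system's implementation.

```python
# Illustrative mapping from per-frame eye-state classifications to mouse
# actions. Thresholds and state labels are assumptions for this sketch.
VOLUNTARY_MS = 400   # closures shorter than this are treated as spontaneous blinks

def action_for(state: str, closed_duration_ms: int):
    """Decide a mouse action from the classified eye state and closure length."""
    if closed_duration_ms < VOLUNTARY_MS:
        return None                      # ignore spontaneous blinks
    return {
        "left_closed_right_open": "right_click",
        "right_closed_left_open": "left_click",
        "both_closed": "left_click",     # blink-only users: left click only
    }.get(state)

# Simulated classifier output: (state, how long the closure has lasted in ms)
events = [
    ("both_closed", 120),                 # spontaneous blink -> ignored
    ("right_closed_left_open", 520),      # deliberate wink -> click
    ("both_closed", 650),                 # long voluntary blink -> left click
]
for state, dur in events:
    print(state, "->", action_for(state, dur))
```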
Article
To make computer systems easier to use, we need behavioral data which enable us to pinpoint what specific needs and problems users may have. Recently, the "thinking-aloud protocol" method was adopted as a technique for studying user behaviours in interactive computer systems. In the present paper, the "question-asking protocol" method is proposed as a viable alternative to the thinking-aloud method where the application of the latter is difficult or even inappropriate. It is argued that question-asking protocols shed light on (1) what problems users experience in what context, (2) what instructional information they come to need, (3) what features of the system are harder to learn, and (4) how users may come to understand or misunderstand the system.
Article
Nowadays home automation, with its increased availability and reliability and its ever-decreasing costs, is gaining momentum and starting to become a viable solution for enabling people with disabilities to interact autonomously with their homes and to communicate better with other people. However, especially for people with severe mobility impairments, there is still a lack of tools and interfaces for effective control of and interaction with home automation systems, and general-purpose solutions are seldom applicable due to the complexity, asynchronicity, time-dependent behavior, and safety concerns typical of the home environment. This paper focuses on user-environment interfaces based on eye tracking technology, which is often the only viable interaction modality for such users. We propose an eye-based interface tackling the specific requirements of smart environments, already outlined in a public Recommendation issued by the COGAIN European Network of Excellence. The proposed interface has been implemented as a software prototype based on the ETU universal driver, thus being potentially able to run on a variety of eye trackers, and it is compatible with a wide set of smart home technologies handled by the Domotic OSGi Gateway. A first interface evaluation with user testing sessions has been carried out, and results show that the interface is quite effective and usable without discomfort by people with almost regular eye movement control.
Conference Paper
For 11 studies, we find that the detection of usability problems as a function of the number of users tested or heuristic evaluators employed is well modeled as a Poisson process. The model can be used to plan the amount of evaluation required to achieve desired levels of thoroughness or benefits. Results of early tests can provide estimates of the number of problems left to be found and the number of additional evaluations needed to find a given fraction. With quantitative evaluation costs and detection values, the model can estimate the numbers of evaluations at which optimal cost/benefit ratios are obtained and at which marginal utility vanishes. For a "medium" example, we estimate that 16 evaluations would be worth their cost, with the maximum benefit/cost ratio at four.
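As a worked illustration of using such a model for planning, the sketch below computes the expected fraction of problems found after n independent evaluations under a Poisson-process assumption. The per-evaluation detection rate is an assumed example value, not one reported in the paper.

```python
# Worked example of planning evaluations with a Poisson-process model of
# problem detection. The detection rate lambda below is an assumed value.
from math import exp

LAMBDA = 0.3   # assumed mean detections of a given problem per evaluation

def fraction_found(n_evaluations: int, lam: float = LAMBDA) -> float:
    """Expected fraction of existing problems detected at least once."""
    return 1.0 - exp(-lam * n_evaluations)

for n in (1, 4, 8, 16):
    print(f"{n:2d} evaluations -> {fraction_found(n):.0%} of problems expected found")
```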
Article
Contrasting results have been reported regarding the phonetic acquisition of bilinguals. A lack of discrimination has been observed for certain native contrasts in 8-month-old Catalan-Spanish bilingual infants (Bosch & Sebastián-Gallés, 2003a), though not in French-English bilingual infants (Burns, Yoshida, Hill & Werker, 2007; Sundara, Polka & Molnar, 2008). At present, the data for Catalan-Spanish bilinguals constitute an exception in the early language acquisition literature. This study contributes new findings that show that Catalan-Spanish bilingual infants do not lose the capacity to discriminate native contrasts. We used an adaptation of the anticipatory eye movement paradigm (AEM; McMurray & Aslin, 2004) to explore this question. In two experiments we tested the ability of infants from Catalan and Spanish monolingual families and from Catalan-Spanish bilingual families to discriminate a Spanish-Catalan common and a Catalan-specific vowel contrast. Results from both experiments revealed that Catalan-Spanish bilingual infants showed the same discrimination abilities as those shown by their monolingual peers, even in a phonetic contrast that had not been discriminated in previous studies. Our results demonstrate that discrimination can be observed in 8-month-old bilingual infants when tested with a measure not based on recovery of attention. The high ratio of cognates in Spanish and Catalan may underlie the reason why bilinguals failed to discriminate the native vowels when tested with the familiarization-preference procedure but succeeded with the AEM paradigm.
Article
The last decade has produced an explosion in neuroscience research examining young children's early processing of language that has implications for education. Noninvasive, safe functional brain measurements have now been proven feasible for use with children starting at birth. In the arena of language, the neural signatures of learning can be documented at a remarkably early point in development, and these early measures predict performance in children's language and pre-reading abilities in the second, third, and fifth year of life, a finding with theoretical and educational import. There is evidence that children's early mastery of language requires learning in a social context, and this finding also has important implications for education. Evidence relating socio-economic status (SES) to brain function for language suggests that SES should be considered a proxy for the opportunity to learn and that the complexity of language input is a significant factor in developing brain areas related to language. The data indicate that the opportunity to learn from complex stimuli and events are vital early in life, and that success in school begins in infancy.
Article
In everyday life, eye movements enable the eyes to gather the information required for motor actions. They are thus proactive, anticipating actions rather than just responding to stimuli. This means that the oculomotor system needs to know where to look and what to look for. Using examples from table tennis, driving and music reading we show that the information the eye movement system requires is very varied in origin and highly task specific, and it is suggested that the control program or schema for a particular action must include directions for the oculomotor and visual processing systems. In many activities (reading text and music, typing, steering) processed information is held in a memory buffer for a period of about a second. This permits a match between the discontinuous input from the eyes and continuous motor output, and in particular allows the eyes to be involved in more than one task.