Figure 1. The normal menu interface condition.

Source publication
Conference Paper
This paper examines factors that affect performance of a basic menu selection task by users who are visually healthy and users with Diabetic Retinopathy (DR) in order to inform better interface design. Interface characteristics such as multimodal feedback, Windows® accessibility settings, and menu item location were investigated. Analyses of Varian...

Contexts in source publication

Context 1
... and utility of these solutions, for a variety of users with visual impairments, is far from completely understood and has not been documented. Another strategy sometimes employed to aid computer users with visual and other types of impairments is the use of multimodal feedback. By supplying redundant information about interaction features and events in the form of non-visual feedback, computer users with impaired vision may experience more engaging interaction than if the information were conveyed solely through visual means. By using multimodal feedback, important information in the dialogue between human and computer may become more salient, and therefore more meaningful, to the user.

Multimodal feedback has been investigated in a variety of applications in human-computer interaction (HCI), such as steering and targeting tasks [5, 14], drag-and-drop interactions [10], and scrolling operations [11]. Multimodal feedback takes advantage of untapped human sensory channels to provide additional and/or redundant information during interaction. The most common forms of multimodal feedback are auditory and haptic, which utilize the senses of hearing and touch, respectively. This feedback combination has great potential to improve the performance of users with visual impairments [17]. In a study by Jacko et al. [10], the addition of auditory and haptic information to visual feedback in a drag-and-drop task involving users with Age-related Macular Degeneration yielded better performance than visual feedback alone. Multimodal feedback also has the potential to enhance interaction for users with a variety of impairments, or none at all [17], because the additional modalities, if used appropriately, can provide enhanced sensory input without increasing the complexity of the task or introducing new demands on the user. Because of this, multimodal feedback is often regarded less as an assistive technology and more as a transparent, universal interface enhancement compared with other approaches for people with visual impairments.

The primary objective of the study presented in this paper was to determine what potential benefits, if any, the Windows® accessibility settings and multimodal feedback, alone and in combination, could offer users with visual impairments performing a common direct manipulation task. A secondary objective was to establish an understanding of additional factors that contribute to performance deficits for users with visual impairments, in order to generate recommendations for design. In this study, the Windows® accessibility settings and multimodal feedback were implemented in relatively standard configurations so that a high level of expertise or effort was not required from those who would be using or implementing the enhancements.

The study focused on one key interaction task, menu selection, with a diverse group of people including individuals with DR and age-matched participants (controls) who had no ocular disease. Twenty-nine volunteers from the Nova Southeastern University (NSU) College of Optometry patient pool and associates of NSU staff participated in the study and were assigned to one of three groups, depending on ocular condition. The Control Group (n=9) consisted of participants who had no limiting ocular pathology. Another group (Group 1, n=9) was formed from participants who had visual acuity comparable to the Control Group but were diabetic with evidence of retinopathy. 
The final group (Group 2, n=6) consisted of diabetic participants who also had evidence of retinopathy, but whose acuity was worse than that of the Control Group and Group 1 (see Table 1 for the visual acuities of each group). All participants were screened for adequate computer experience, as this has been shown to have a significant effect on performance, even for simple tasks [6, 7]. Five participants were excluded from the ANOVA analyses: three because they did not meet the acuity requirements for Group 2, a fourth because of failing to meet the acuity requirements for the Control Group, and a fifth due to inadequate computer experience. While acuity and other visual characteristics differed between the groups, the three groups were statistically equivalent with respect to age, gender, education level, mental and physical health (as assessed by the SF-12™ Health Survey [18]), and dexterity (when adjusted for differences in acuity). Group profiles are presented in Table 1.

The task consisted of a series of basic menu navigation and selection actions under combinations of two interface conditions, each with two levels (see Table 2). A menu interface was designed for the study to emulate selected menus from the Microsoft Word™ menu bar. The interface consisted of three menus containing five items each. One of the menus with which participants interacted is shown in Figures 1 and 2. These figures depict the relative sizes of the Normal (Figure 1) and Windows® accessibility (Figure 2) conditions. The normal menu interface (Figure 1) had smaller, black text on a light background with a blue highlight. Figure 2 shows the menu interface with the Windows® accessibility settings applied, resulting in larger, bold, white text on a black background with a purple highlight. Participants interacted with each of these menu interfaces both with and without auditory and haptic feedback, resulting in four possible interaction conditions (see Table 2).

The multimodal feedback was intended to reflect the level of feedback currently commercially available, and was therefore implemented as follows: the auditory feedback was a simple, brief, abstract sound, and the haptic feedback was a short mechanical vibration generated by a motor inside a Saitek™ TouchForce™ [15] optical mouse, which was otherwise very similar to currently available two-button optical mice. Both types of feedback occurred when the mouse crossed a boundary between menus or items. It should be noted that the forms of auditory and haptic feedback remained the same regardless of the participants’ proximity to the target menu or item; the feedback generation software was unaware of the position of the target menu and item. This lack of “intelligent” feedback is representative of the commercially available PC peripherals integrating multimodal feedback that this experiment endeavored to evaluate empirically.

Within each condition, the participant was instructed to locate each menu item once, yielding 15 trials per condition. The goal menu item appeared centered in the bottom half of the screen in 36-pt Arial bold font. No participants reported difficulties reading the goal menu items. Three menus were permanently located in the upper-left corner of the screen in left, middle, and right positions. The placement of menus and items was consistent throughout the experiment. 
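To make the boundary-triggered feedback concrete, the following is a minimal sketch (in Python) of how such “unintelligent” multimodal feedback can be driven by cursor movement. It is an illustration under stated assumptions, not the study’s actual implementation: the Rect layout and the play_tone/pulse_mouse_motor stand-ins are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    left: int
    top: int
    right: int
    bottom: int

    def contains(self, x: int, y: int) -> bool:
        return self.left <= x < self.right and self.top <= y < self.bottom

def play_tone() -> None:
    print("audio: brief abstract tone")      # stand-in for real audio output

def pulse_mouse_motor() -> None:
    print("haptic: short vibration pulse")   # stand-in for the mouse's motor

class FeedbackTracker:
    """Fires both feedback types whenever the cursor crosses a region boundary.

    As in the study, the feedback is identical for every region and ignores
    where the target menu item actually is.
    """
    def __init__(self, regions):
        self.regions = regions
        self.current = None

    def on_mouse_move(self, x: int, y: int) -> None:
        hit = next((i for i, r in enumerate(self.regions)
                    if r.contains(x, y)), None)
        if hit != self.current:              # a menu/item boundary was crossed
            self.current = hit
            if hit is not None:
                play_tone()
                pulse_mouse_motor()

# Three menus of five items each, echoing the interface described above.
items = [Rect(100 * m, 20 * i, 100 * (m + 1), 20 * (i + 1))
         for m in range(3) for i in range(5)]
tracker = FeedbackTracker(items)
tracker.on_mouse_move(50, 10)    # enters menu 0, item 0 -> tone + pulse
tracker.on_mouse_move(55, 12)    # still inside the same item -> no feedback
tracker.on_mouse_move(150, 30)   # crosses into menu 1, item 1 -> tone + pulse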
When a participant selected a menu item, either correctly or incorrectly, a new goal was presented in text at the bottom of the screen. Participants performed practice trials for each condition at the start of the experiment to ensure that they felt comfortable with the task before completing the experimental trials. Once the task commenced, Total Time (TT) was recorded for each trial, and error measures, including Missed Opportunities (MO), were recorded as well. The order in which the conditions were presented and the order of the trials were randomized for each participant to control for learning and location effects.

A graph of the average Total Times (TT) per trial across conditions for each group is shown in Figure 3. A repeated measures ANOVA was used to analyze TT between conditions. Because the initial analysis revealed that the data failed to meet the assumption of normally distributed error terms, an inverse transformation (1/TT) was applied to the data; the transformed value can be interpreted as the rate of task completion. The main effects and two-way interaction effects from the ANOVA are presented in Table 3. The results indicate significant main effects of acuity group, Windows® accessibility settings, and multimodal feedback on 1/TT. Interpretation is complicated by the significant interaction between acuity group and the Windows® accessibility settings, indicating that the effect of the settings was not consistent across groups, as can be seen in Figure 3.

Because of the interaction effect, ninety-five percent confidence intervals were used to identify specific significant differences (p<0.05). The confidence intervals shown in Figure 4 indicate that the Control Group and Group 1, when examined in isolation, performed similarly in conditions with and without the Windows® accessibility settings, whereas Group 2 performed significantly better in the conditions utilizing the Windows® accessibility settings (as shown by non-overlapping confidence intervals). Note that larger values in Figure 4 indicate better performance (less time taken), because of the inverse transformation. This interaction indicates that the Windows® accessibility settings had a larger effect on Group 2, which had more advanced DR and reduced acuity, than on the two groups with normal acuity.

Surprisingly, multimodal feedback had a significant main effect (with no significant interaction with acuity group or Windows® accessibility settings), indicating that its presence slightly impeded performance overall. However, the mean increase in TT due to multimodal feedback was only 137 msec (0.137 sec), which is of little practical significance, though interesting nonetheless. One of the error measures examined in this study was Missed Opportunities (MO), which represents instances in which participants had an opportunity to make a correct selection but moved away from the target menu item instead. MO was calculated as the number of times per trial that the correct menu ...
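As a worked illustration of the transformed-time analysis described above, the sketch below applies the 1/TT inverse transformation and runs a within-subjects repeated measures ANOVA with statsmodels. The data frame, its column names, and the generated times are hypothetical assumptions; note also that statsmodels’ AnovaRM handles only within-subjects factors, so the study’s between-subjects acuity grouping would require a mixed-model approach instead.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Hypothetical long-format data: one mean Total Time (TT, in seconds) per
# participant for each accessibility x feedback condition (a balanced design).
rows = []
for subject in range(24):
    for accessibility in ("normal", "windows_accessibility"):
        for feedback in ("visual_only", "multimodal"):
            tt = rng.lognormal(mean=0.8, sigma=0.3)   # placeholder times
            rows.append((subject, accessibility, feedback, tt))
df = pd.DataFrame(rows, columns=["subject", "accessibility", "feedback", "tt"])

# Inverse transformation: 1/TT is a rate of task completion, used because the
# raw times violated the ANOVA's normally-distributed-errors assumption.
df["rate"] = 1.0 / df["tt"]

result = AnovaRM(df, depvar="rate", subject="subject",
                 within=["accessibility", "feedback"]).fit()
print(result)   # F statistics and p-values for main effects and interaction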

Citations

Chapter
Multimodal interaction aims at more flexible, more robust, more efficient, and more natural interaction than can be achieved with traditional unimodal interactive systems. To achieve this, the developer needs design support in order to select appropriate modalities, find appropriate modality combinations, and implement promising modality adaptation strategies. This paper presents a first set of patterns for multimodal interaction, focusing on patterns for “fast input”, “robust interaction”, and “flexible interaction”. Before these patterns are outlined in detail, an introduction to the field of multimodal interaction is given and the pattern identification process on which this work is based is presented.
Conference Paper
This study explores factors affecting handheld computer interaction for older adults with Age-related Macular Degeneration (AMD). This is largely uncharted territory, as empirical investigations of human-computer interaction (HCI) concerning users with visual dysfunction and/or older adults have focused primarily on desktop computers. For this study, participants with AMD and visually-healthy controls used a handheld computer to search, select and manipulate familiar playing card icons under varied icon set sizes, inter-icon spacing and auditory feedback conditions. While all participants demonstrated a high rate of task completion, linear regression revealed several relationships between task efficiency and the interface, user characteristics and ocular factors. Two ocular measures, severity of AMD and contrast sensitivity, were found to be highly predictive of efficiency. The outcomes of this work reveal that users with visual impairments can effectively interact with GUIs on small displays in the presence of low-cost, easily implemented design interventions. This study presents a rich data set and is intended to inspire future work exploring the interactions of individuals with visual impairments with non-traditional information technology platforms, such as handheld computers.
Conference Paper
Multimodal interaction aims at more flexible, more robust, more efficient, and more natural interaction than can be achieved with traditional unimodal interactive systems. To achieve this, the developer needs design support in order to select appropriate modalities, find appropriate modality combinations, and implement promising modality adaptation strategies. This paper presents a first sketch of an emerging pattern language for multimodal interaction and focuses on patterns for "flexible interaction" and patterns for "robust interaction". This work is part of a thesis project on pattern-based usability engineering for multimodal interaction.
Article
This paper examines factors that affect performance on a basic menu selection task by users who are visually healthy and users with Diabetic Retinopathy (DR) in order to inform better interface design. Linear and logistic regression models were used to examine various contextual factors that influenced task efficiency (time) and accuracy (errors). Interface characteristics such as multimodal feedback, Windows® accessibility settings, and menu item location were investigated along with various visual function and participant characteristics. Results indicated that Windows® accessibility settings and other factors, including age, computer experience, visual acuity, contrast sensitivity, and menu item location, were significant predictors of task performance.
Article
This study investigates the effectiveness of two design interventions, the Microsoft® Windows® accessibility settings and multimodal feedback, aimed at the enhancement of a menu selection task, for users with diabetic retinopathy (DR) with stratified levels of visual dysfunction. Several menu selection task performance measures, both time- and accuracy-based, were explored across different interface conditions and across groups of participants stratified by different degrees of vision loss. The results showed that the Windows® accessibility settings had a significant positive impact on performance for participants with DR. Moreover, multimodal feedback had a negligible effect for all participants. Strategies for applying multimodal feedback to menu selection are discussed, as well as the potential benefits and drawbacks of the Windows® accessibility settings.
Article
This study investigates factors affecting handheld human–computer interaction (HCI) for older adults with Age-related Macular Degeneration (AMD). This is largely uncharted territory, as empirical investigations of HCI concerning users with visual dysfunction and/or older adults have focused primarily on desktop computers. For this study, participants with AMD and visually healthy controls used a handheld computer to search, select and manipulate familiar playing card icons under varied icon set sizes, inter-icon spacing and auditory feedback conditions. While all participants demonstrated a high rate of task completion, linear regression revealed several relationships between task efficiency and the interface, user characteristics and ocular factors. Two ocular measures, severity of AMD and contrast sensitivity, were found to be highly predictive of efficiency. The outcomes of this work reveal that users with visual impairments can effectively interact with graphical user interfaces on small displays in the presence of low-cost, easily implemented design interventions. Furthermore, results demonstrate that the detrimental influence of AMD and contrast sensitivity on handheld technology interaction can be offset by such interventions. This study presents a rich data set and is intended to inspire future work characterizing and modeling the interactions of individuals with visual impairments with non-traditional information technology platforms and contexts.