Fig 4 - uploaded by Yiqin Lu
The standard keyboard. Each key of the keyboard is a 1×1 square. The origin is at the centroid of the key 'Q'. For example, the centric coordinates of 'Q', 'P', 'A', 'L', 'Z' and 'M' are (0, 0), (9, 0), (0.25, 1), (8.25, 1), (0.75, 2) and (6.75, 2), respectively.

Source publication
Article
Full-text available
Eyes-free input is desirable for ubiquitous computing, since interacting with mobile and wearable devices often competes for visual attention with other devices and tasks. In this paper, we explore eyes-free typing on a touchpad using one thumb, wherein a user taps on an imaginary QWERTY keyboard while receiving text feedback on a separate screen....

Context in source publication

Context 1
... as a standard QWERTY keyboard; however, the specific keyboard location on the touchscreen and the keyboard size (in both X- and Y-axes) should be further determined. For the sake of convenience, we normalized the coordinates of a standard keyboard by defining the origin to be at the centroid of 'Q' and each key to be a 1 × 1 square (as shown in Fig. ...
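To make the normalization concrete, here is a minimal sketch (illustrative code, not taken from the paper) that reconstructs the centroid coordinates implied by the caption of Fig. 4; the per-row x-offsets (0, 0.25, 0.75) follow from the example coordinates of 'Q', 'A' and 'Z'.

```python
# Minimal sketch (not the authors' code): the normalized key centroids, with
# each key a 1 x 1 square and the origin at the centroid of 'Q'.
ROWS = [
    ("QWERTYUIOP", 0.0),   # top row starts at x = 0
    ("ASDFGHJKL", 0.25),   # home row shifted right by a quarter of a key
    ("ZXCVBNM", 0.75),     # bottom row shifted right by three quarters
]

def key_centroids():
    """Map each letter to its (x, y) centroid on the normalized keyboard."""
    centroids = {}
    for y, (letters, x_offset) in enumerate(ROWS):
        for i, letter in enumerate(letters):
            centroids[letter] = (x_offset + i, float(y))
    return centroids

if __name__ == "__main__":
    c = key_centroids()
    # Matches the caption: Q=(0, 0), P=(9, 0), A=(0.25, 1), L=(8.25, 1), Z=(0.75, 2), M=(6.75, 2)
    for key in "QPALZM":
        print(key, c[key])
```

A full implementation would then scale and translate this normalized layout to whatever keyboard location and size are chosen on the touchscreen.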

Citations

... Overall, manipulation in eyes-free mode results from the coordination of spatial memory and proprioceptive sense under feedforward control [8]. Spatial memory and muscle memory transferred with practice and experience can promote eyes-free interaction [27]. ...
Article
Full-text available
This paper describes the exploration of a new category of touchscreen interface. An eyes-free interface harnesses innate human abilities and product affordances to allow reduced levels of visual attention. Interface design for eyes-free interaction with a featureless screen is highly challenging; however, it can be achieved by simplifying and optimizing menu layout patterns to take advantage of innate human abilities including proprioception and spatial memory. This opens up a range of possibilities for peripheral device control under one-handed thumb mobile interaction. To this end, two experiments with different modes of presentation were conducted to understand the effect of interface configurations on performance accuracy caused by spatial memory and proprioception. Spatial performance results from the interaction effect of both cognitive abilities on an eyes-free interface. Vertical, horizontal, diagonal, and curved layouts with different spacing patterns have been tested in both tap and draw input modes. The results revealed that evenly spaced button alignment close to the reference frame with symmetrical patterns within a square interface area and a comfortable thumb range positively affect accuracy. The conclusions describe how alignment patterns and the mode of presentation affect visual perception and spatial integration, and a framework for the development of an eyes-free interface is set out.
... Nonetheless, they are the second most popular skills explored due to three major assistive technologies that support students: Braille displays, haptic technologies, and screen readers. For this particular study, the use of Braille and its corresponding transition to m-Braille reflects an adaptation to mobile learning and the incorporation of Braille in digital form, given that texting is key to mobile communication and connectivity but at the same time needs to become an eyes-free tool (Lu et al., 2017; Romero et al., 2011). ...
Article
Full-text available
The purpose of this literature review is to identify, categorize, and critically appraise research papers related to mobile apps for visually impaired people in language learning. This study intended to identify the language skills, the affordances, and the limitations encountered while designing and implementing the apps. Hence, a systematic review in the Scopus database and the virtual libraries of IEEE, SAGE, ERIC, and Science Direct, adhering to the PRISMA methodology, produced 274 research papers, and after the application of the different phases, a detailed analysis was performed using 17 articles. The results revealed that Information Communication Technologies, assistive technologies, and electronic accessibility features contributed to the usability guidelines and the current evolution toward modern language learning mobile applications for visually impaired users. The revised work also revealed how writing, reading, and spelling became more demanding for this particular special need, and that grammar-based and traditional activities are replaced by other communicative approaches. The emphasis was on speaking and listening skills due to these being less demanding in terms of technical requirements. The findings of this review provide insights for instructional designers to construct inclusive language learning apps which consider the three essential dimensions needed to achieve it: technological, pedagogical and psychological, in addition to appropriate affordances necessary for both sighted and visually impaired users.
... In fact, since muscle memory (from familiarity with the QWERTY layout) can bring the thumbs close to the target keys, the number of alternatives a user's thumb faces is probably reduced to three to six keys, i.e., the target key and its immediate neighbors. Nevertheless, eyes-free touch driven by muscle memory is considered fairly inaccurate on small keys (4 to 6 mm) [31][32][33], so users still need extra effort to avoid typos. This can be interpreted to mean that keeping the target keys in view has a relatively greater impact than usual on key entry with a QWERTY soft keyboard, and it could significantly affect users' subjective satisfaction; this mechanism parallels the finding in this study that the zones less likely to be visually occluded by the thumbs showed relatively shorter task completion times. ...
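As a rough, illustrative check of that "three to six keys" estimate (an assumption-laden sketch, not code or data from the cited study), one can count the keys of the normalized layout above whose centroids fall within an assumed touch-error radius of the target key:

```python
# Sketch with illustrative assumptions only: count how many keys lie within an
# assumed error radius of a target key's centroid on the normalized 1 x 1 layout.
import math

ROWS = [("QWERTYUIOP", 0.0), ("ASDFGHJKL", 0.25), ("ZXCVBNM", 0.75)]
CENTROIDS = {ch: (x0 + i, float(y))
             for y, (row, x0) in enumerate(ROWS)
             for i, ch in enumerate(row)}

def candidates(target, radius=1.2):
    """Keys whose centroids lie within `radius` key widths of the target's centroid.

    The default radius of 1.2 key widths is a made-up value, chosen only to show
    how a small error radius yields a handful of candidate keys.
    """
    tx, ty = CENTROIDS[target]
    return sorted(k for k, (x, y) in CENTROIDS.items()
                  if math.hypot(x - tx, y - ty) <= radius)

print(candidates("G"))  # ['B', 'F', 'G', 'H', 'T', 'V'] -- the target plus nearby keys
```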
Article
Full-text available
This research aims to examine touch performance and user satisfaction depending on key location in a QWERTY soft keyboard during two-thumb key entry on a smartphone. Thirty-three college students who were smartphone users were recruited, and an experimental program was implemented to measure their task completion time, the number of touch errors, and user satisfaction during key entry. The QWERTY layout was split into 15 zones to assign absolute positions for reliable statistical analysis. The results showed that significantly longer task completion times were observed more prevalently in the zones in the periphery (p < 0.0001). In addition, relatively higher subjective satisfaction ratings were found in the zones in the center area of the QWERTY layout (p < 0.0001). It seemed that both of the results were improved in the zones that participants could immediately see without moving the thumbs, before touch interaction. Meanwhile, touch error frequencies failed to show statistical significance among the zones (p = 0.3195).
... Text entry is an essential interaction technique for HMDs. Much work on text entry has been done, including physical device-based techniques with keyboards (Hutama et al., 2021;Jiang et al., 2018;Knierim et al., 2018;McGill et al., 2015;Walker et al., 2016;2017), controllers (Boletsis & Kongsvik, 2019;Chen et al., 2019;Jiang & Weng, 2020;Yu et al., 2018), and touchscreens (Grubert et al., 2018;Gugenheimer et al., 2016;Kim & Kim, 2017;Lu et al., 2017); hands-free speech (Bowman et al., 2002;Pick et al., 2016); head-based (Majaranta et al., 2009;Yu et al., 2017) and gaze-based (Rajanna & Hansen, 2018) techniques; and hand-based techniques (gesture, micro-gesture, and hand trace mid-air input techniques). Aim-and-shoot techniques (HTC vive, 2020) and index finger pinch gesture-based techniques (Oculus Quest, 2020) are commonly used for text entry in commercial HMDs. ...
Article
Full-text available
This paper presents PinchText, a mid-air technique with a condensed keys-based keyboard, which combines hand positions and pinch gestures, enabling one-handed text entry for head-mounted displays (HMDs). Firstly, we conduct Study 1 to collect and analyze the typing data of PinchText with two arm postures and two movement directions, obtaining the range of hand positions corresponding to the middle key set. Then, we conduct Study 2, a 6-block experiment, finding that PinchText with Hand-Up Vertical (UpV) and Hand-Down Vertical (DownV) modes could achieve speeds of 12.71 and 11.14 words per minute (WPM), respectively, with uncorrected error rates below 0.5% in both modes, which is 71% faster than the index finger pinch-based technique. Finally, Study 3 is conducted to explore the potential of reducing the size of the decoupled visual keyboard of PinchText, verifying that the occlusion of the virtual keyboard can be decreased. Overall, PinchText is an efficient, easy-to-learn, and comfortable text entry technique for HMDs.
... Imaginary keyboards have also been explored in eyes-free conditions, where users enter text without looking at the keyboard while receiving text feedback from a distant display. BlindType [18] leverages the thumb's muscle memory to type on a touchpad and can lead to a typing rate of 17-23 WPM. Their work also reported that a classical decoding algorithm was capable of supporting such eyes-free text entry. ...
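For context on what such a "classical" statistical decoder typically does (a generic sketch under assumed parameters, not the BlindType implementation), a common formulation scores each candidate word by a Gaussian likelihood of the observed tap sequence around the ideal key centroids and picks the best-scoring word:

```python
# Generic sketch of a classical tap-sequence decoder (not BlindType's code):
# score each candidate word by isotropic Gaussian likelihoods of its taps
# around the intended key centroids. SIGMA and the vocabulary are assumed.
import math

ROWS = [("QWERTYUIOP", 0.0), ("ASDFGHJKL", 0.25), ("ZXCVBNM", 0.75)]
CENTROIDS = {ch: (x0 + i, float(y))
             for y, (row, x0) in enumerate(ROWS)
             for i, ch in enumerate(row)}
SIGMA = 0.6  # assumed touch noise, in key widths

def log_likelihood(word, taps):
    """Log-probability (up to a constant) of the tap sequence given the word."""
    if len(word) != len(taps):
        return float("-inf")
    total = 0.0
    for ch, (tx, ty) in zip(word.upper(), taps):
        cx, cy = CENTROIDS[ch]
        total -= ((tx - cx) ** 2 + (ty - cy) ** 2) / (2 * SIGMA ** 2)
    return total

def decode(taps, vocabulary):
    """Return the vocabulary word that best explains the observed taps."""
    return max(vocabulary, key=lambda w: log_likelihood(w, taps))

# Two noisy taps near 'H' and 'I' decode to "hi" against this tiny vocabulary.
print(decode([(5.1, 1.2), (7.2, 0.1)], ["hi", "ok", "no"]))
```

A practical decoder would additionally weight each candidate by a language-model prior over words.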
... Since users perform different typing patterns under gesture-like (G-Type) and tap-like (D-Type and E-Type) methods according to prior studies [18,39], we only captured typing behaviors from two input modalities (D-Type and G-Type) in this study. That is, we did not include E-Type because D-Type and E-Type were likely to have similar endpoint distributions, given that they are both characterized by head pointing and character-level entry (i.e., text entry is done character by character). ...
... They use software keyboards on the touchscreens of their tablets. However, there is a gap between touchscreen keyboards and physical keyboards in terms of fatigue [27], switching of visual attention [10,36,47], and typing speed [17,45]. Users can rest their fingers on a physical keyboard but cannot rest them on a touchscreen keyboard, because touching the screen causes misrecognition. ...
... More and more people use software keyboards on the touchscreens of tablets [53]. However, tablet keyboards cannot compare to physical keyboards in usability [10,27,36,47] and efficiency [12,17,45]. On physical keyboards, users can rest their fingers on the buttons, which is a crucial usability factor. ...
... allowing for eyes-free text entry on the mobile device while showing the text in AR (cf. [41]). This could in turn reduce switching costs. ...
Conference Paper
Full-text available
Mobile intervention studies employ mobile devices to observe participants' behavior change over several weeks. Researchers regularly monitor high-dimensional data streams to ensure data quality and prevent data loss (e.g., missing engagement or malfunctions). The multitude of problem sources hampers possible automated detection of such irregularities - providing a use case for interactive dashboards. With the advent of untethered head-mounted AR devices, these dashboards can be placed anywhere in the user's physical environment, leveraging the available space and allowing for flexible information arrangement and natural navigation. In this work, we present the user-centered design and the evaluation of IDIAR: Interactive Dashboards in AR, combining a head-mounted display with the familiar interaction of a smartphone. A user study with 15 domain experts for mobile intervention studies shows that participants appreciated the multimodal interaction approach. Based on our findings, we provide implications for research and design of interactive dashboards in AR.
... Although eyes-free interaction has been employed on a variety of devices (e.g. phones [6,43,46], wearables [13,50,66]) and has been shown to be beneficial when interacting with large data visualizations via a smartwatch [30], we found no prior work investigating eyes-free interaction with mobile devices for AR HMDs. ...
Conference Paper
Full-text available
Recent research in the area of immersive analytics demonstrated the utility of head-mounted augmented reality devices for visual data analysis. However, it can be challenging to use the mid-air gestures supported by default to interact with visualizations in augmented reality (e.g. due to limited precision). Touch-based interaction (e.g. via mobile devices) can compensate for these drawbacks, but is limited to two-dimensional input. In this work we present STREAM: Spatially-aware Tablets combined with Augmented Reality Head-Mounted Displays for the multimodal interaction with 3D visualizations. We developed a novel eyes-free interaction concept for the seamless transition between the tablet and the augmented reality environment. A user study reveals that participants appreciated the novel interaction concept, indicating the potential for spatially-aware tablets in augmented reality. Based on our findings, we provide design insights to foster the application of spatially-aware touch devices in augmented reality and research implications indicating areas that need further investigation.
... The gesture traces were displayed on the phone's front screen to provide feedback, as shown in Figure 2. By default, the keyboard layout was not shown on the screen. We adopted such a design because there is evidence that many users are able to input text on an imaginary keyboard on a phone [66], remote control [67], or hand-held touchpad [37], given the dominance of the Qwerty layout and users' familiarity with it. In case a user could not recall the location of a particular key, keeping the index finger still for 300 ms on the back screen would bring up a Qwerty layout on the front screen. ...
Conference Paper
Back-of-device interaction is a promising approach to interacting on smartphones. In this paper, we create a back-of-device command and text input technique called BackSwipe, which allows a user to hold a smartphone with one hand, and use the index finger of the same hand to draw a word-gesture anywhere at the back of the smartphone to enter commands and text. To support BackSwipe, we propose a back-of-device word-gesture decoding algorithm which infers the keyboard location from back-of-device gestures, and adjusts the keyboard size to suit the gesture scales; the inferred keyboard is then fed back into the system for decoding. Our user study shows BackSwipe is feasible and a promising input method, especially for command input in the one-hand holding posture: users can enter commands at an average accuracy of 92% with a speed of 5.32 seconds/command. The text entry performance varies across users. The average speed is 9.58 WPM with some users at 18.83 WPM; the average word error rate is 11.04% with some users at 2.85%. Overall, BackSwipe complements the extant smartphone interaction by leveraging the back of the device as a gestural input surface.
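For intuition on what inferring the keyboard location and size from a gesture can involve (a generic sketch under assumed details, not the BackSwipe algorithm itself), one simple formulation fits a uniform scale and translation that best align the gesture's anchor points with the ideal key centroids of a candidate word, e.g. by least squares:

```python
# Generic sketch (assumed formulation, not BackSwipe's code): estimate a
# keyboard scale and offset by least-squares alignment between observed
# gesture anchor points and the ideal key centroids of a candidate word.
ROWS = [("QWERTYUIOP", 0.0), ("ASDFGHJKL", 0.25), ("ZXCVBNM", 0.75)]
CENTROIDS = {ch: (x0 + i, float(y))
             for y, (row, x0) in enumerate(ROWS)
             for i, ch in enumerate(row)}

def fit_keyboard(points, word):
    """Return (scale, tx, ty) minimizing sum ||p_i - (scale * c_i + t)||^2."""
    ideal = [CENTROIDS[ch] for ch in word.upper()]
    n = len(points)
    mpx = sum(p[0] for p in points) / n
    mpy = sum(p[1] for p in points) / n
    mcx = sum(c[0] for c in ideal) / n
    mcy = sum(c[1] for c in ideal) / n
    num = sum((p[0] - mpx) * (c[0] - mcx) + (p[1] - mpy) * (c[1] - mcy)
              for p, c in zip(points, ideal))
    den = sum((c[0] - mcx) ** 2 + (c[1] - mcy) ** 2 for c in ideal)
    scale = num / den if den else 1.0
    return scale, mpx - scale * mcx, mpy - scale * mcy

# Hypothetical anchor points of a gesture for "the", drawn at twice the
# normalized key size and offset by (10, 20) in touch coordinates.
points = [(18.0, 20.0), (20.5, 22.0), (14.0, 20.0)]
print(fit_keyboard(points, "the"))  # approximately (2.0, 10.0, 20.0)
```

The fitted scale and offset could then be used to re-map the gesture onto the normalized keyboard and re-run the decoder, in the spirit of the feedback loop the abstract describes.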