Fig. 1(a): Virtual Keyboard: pointer pointing at the sub circle.

Source publication
Article
Full-text available
Virtual keyboards or on-screen keyboards are commonly used as a means of augmentative communication by people with severe speech and motion disabilities. Any such virtual keyboard is characterized by its key layout design and method of access. In this paper we present a virtual keyboard that can support multiple modes of access and has an optimum layou...

Contexts in source publication

Context 1
... The pointer is rotated until the desired sub circle is highlighted, and the user then triggers the access switch (Fig. ...
Context 2
... The sub circle expands and the arrow is again rotated until the desired element is highlighted; when the user triggers the access switch again, the desired key is selected. In case the user has selected the wrong sub circle, s/he can use the 'Go Back' element to return to the main menu (Fig. ...
Context 3
... The Working of the VK Fig. 1 (a) and (b) show the graphical user interface (GUI) of the proposed VK, developed in Visual Studio 2008. It consists of eight sub circles contained in a larger circle and an arrow that rotates in a clockwise direction. Each sub circle consists of a group of keys representing letters, numbers, punctuation, navigation keys and other keyboard keys. There is a side bar that lists all the predicted words; they can be accessed using the prediction sub circle. The arrow can be rotated either by the user or can be set to rotate automatically, and it will highlight each sub circle or element for a fixed scan period set by the user. Initially the arrow points vertically to the first sub circle. The user can compose the text in the following ...
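The rotate-and-select interaction described in this context can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical mock-up, not the paper's implementation (which was built in Visual Studio 2008): the sub-circle contents, the `scan` / `type_one_key` names and the demo switch are all illustrative assumptions.

```python
import itertools
import time

# Illustrative layout: eight sub circles, each holding a group of keys
# (letters, numbers, punctuation, etc.); a 'Go Back' element is added at
# the second level, as described in the source.
SUB_CIRCLES = [
    list("ETAOIN"), list("SRHLDC"), list("UMFPGW"), list("YBVKXJ"),
    list("QZ0123"), list("456789"), list(".,?!;:"), ["SPACE", "BACKSPACE"],
]


def scan(items, switch_pressed, scan_period):
    """Highlight each item in turn for one scan period; return the item that
    is highlighted when the access switch is triggered."""
    for item in itertools.cycle(items):
        print("highlighted:", item)
        time.sleep(scan_period)
        if switch_pressed():
            return item


def type_one_key(switch_pressed, scan_period=1.0):
    """Two-level selection: first choose a sub circle, then a key inside it."""
    group = scan(SUB_CIRCLES, switch_pressed, scan_period)           # level 1
    choice = scan(group + ["GO BACK"], switch_pressed, scan_period)  # level 2
    return None if choice == "GO BACK" else choice


# Demo: a stand-in access switch that fires on every third highlight.
presses = itertools.cycle([False, False, True])
print("selected:", type_one_key(lambda: next(presses), scan_period=0.1))
```

A real access switch (or another trigger such as a BCI command) would replace the stand-in `switch_pressed` callable, and the scan period would be user-configurable as described above.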

Similar publications

Conference Paper
Full-text available
This paper presents a methodology and a computer tool to check if a dwelling meets both the needs of a disabled future occupant and the wishes he expressed about the layout of his dwelling. The major steps of the methodology are a translation of the needs and wishes into predefined criteria that can be handled by the software and the modelling of t...
Article
Full-text available
The representation of people with disabilities in the media is an issue that is not much emphasized. This article studies the representation of people with disabilities in internet publications, especially in Northern Cyprus. This research tries to emphasize the way the media discuss the issue of disabled people, alongside showing how disabled peop...

Citations

... However, the reported performance did not include a typing speed specification. For the frequency-based organizational system and the alphabetical layout, only task execution time was recorded in [25]. This particular study demonstrates how a frequency-based organization strategy can reduce the time required to complete a task. ...
Article
Full-text available
People with disabilities have new and advanced methods to communicate through virtual keyboard applications and other communication tools. In this paper, we utilized a novel deep reinforcement learning-based technique for determining the location of the accessible options in a gaze-controlled tree-based menu selection system. A virtual English keyboard has been incorporated into the layout of the new user interface, which also includes improved requests for text modification through gaze. The two methods used to manage the system are: 1) eye tracking for typing on the virtual keyboard and 2) eye tracking with a soft-switch device, utilizing deep reinforcement learning. Simulation results show that the DRL-based algorithm outperforms baseline techniques in terms of total loss and accuracy.
... • Developing an application for comparing the accessibility of different screen regions for eye-gaze movements in the interface, to minimize error rate. Prabhu [21] presented a virtual keyboard that can support multiple modes of input access, with an optimum layout based on the frequency of occurrence of letters in English text. They developed a method for optimal character placement to obtain an appropriate scan time. ...
Article
Full-text available
BACKGROUND: Users with Severe Speech and Motor Impairment (SSMI) often use a communication chart through their eye gaze or limited hand movement, and caretakers interpret their communication intent. Significant research has already been conducted to automate this communication through electronic means. Developing electronic user interfaces and interaction techniques for users with SSMI poses significant challenges, as research on their ocular parameters found that such users suffer from nystagmus and strabismus, limiting the number of elements that can be placed on a computer screen. This paper presents an optimized eye gaze controlled virtual keyboard for the English language with an adaptive dwell time feature for users with SSMI.

OBJECTIVE: Present an optimized eye gaze controlled English virtual keyboard that follows both a static and a dynamic adaptation process. The virtual keyboard can automatically adapt to reduce eye gaze movement distance and the dwell time for selection, and help users with SSMI type better without any intervention of an assistant.

METHODS: Before designing the virtual keyboard, we undertook a pilot study to determine the screen region that would be most comfortable for SSMI users to operate. We then proposed an optimized two-level English virtual keyboard layout obtained through a genetic algorithm (static adaptation), followed by a dynamic adaptation process that tracks users' interaction and reduces dwell time based on a Markov model-based algorithm. Further, we integrated the virtual keyboard into a web-based interactive dashboard that visualizes real-time Covid data.

RESULTS: Using our proposed virtual keyboard layout for English, the average task completion time for users with SSMI was 39.44 seconds in the adaptive condition and 29.52 seconds in the non-adaptive condition. Overall typing speed was 16.9 lpm (letters per minute) for able-bodied users and 6.6 lpm for users with SSMI, without using any word completion or prediction features. A case study with an elderly participant with SSMI found a typing speed of 2.70 wpm (words per minute) and 14.88 lpm (letters per minute) after 6 months of practice.

CONCLUSIONS: With the proposed layout for the English virtual keyboard, the adaptive system increased typing speed statistically significantly for able-bodied users compared with the non-adaptive version, while for 6 users with SSMI, task completion time was reduced by 8.8% in the adaptive version compared with the non-adaptive one. Additionally, the proposed layout was successfully integrated into a web-based interactive visualization dashboard, thereby making it accessible for users with SSMI.
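The "static adaptation" step above optimizes the two-level layout with a genetic algorithm, but the abstract does not give the operators or the cost function. The sketch below is a mutation-only, illustrative stand-in: it only shows a frequency-times-slot-cost objective being minimized, with toy frequencies, slot costs and GA parameters (all assumptions, not the paper's values).

```python
import random

# Minimal, mutation-only evolutionary sketch of the "static adaptation" idea:
# search for a layout in which frequent letters occupy low-cost slots (i.e.,
# slots needing little gaze travel). Everything below is a placeholder.
LETTERS = "ETAOINSHRDLCUMWFGYPBVKJXQZ"
FREQ = {ch: 26 - i for i, ch in enumerate(LETTERS)}                        # toy frequency weights
SLOT_COST = [abs(i - len(LETTERS) // 2) + 1 for i in range(len(LETTERS))]  # toy gaze cost per slot


def fitness(layout):
    """Expected cost of a layout: sum of letter weight times its slot cost."""
    return sum(FREQ[ch] * SLOT_COST[i] for i, ch in enumerate(layout))


def mutate(layout):
    """Swap the letters in two randomly chosen slots."""
    a, b = random.sample(range(len(layout)), 2)
    chars = list(layout)
    chars[a], chars[b] = chars[b], chars[a]
    return "".join(chars)


def optimize(generations=300, pop_size=40):
    pop = ["".join(random.sample(LETTERS, len(LETTERS))) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)              # lower cost is better
        parents = pop[: pop_size // 2]     # keep the best half
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return min(pop, key=fitness)


best = optimize()
print(best, fitness(best))
```

A fuller implementation would add crossover, use measured gaze-travel costs for the target users, and layer the Markov-model dwell-time adaptation on top; this sketch only illustrates the objective being minimized.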
... Vijit Prabhu and Girijesh Prasad introduced a virtual keyboard with multi-modal access for people with disabilities [11]. The main advantage of this keyboard is that it can be operated through an access switch, such as a brain-computer interface (BCI). ...
Conference Paper
Full-text available
Although much work has been done and improved solutions have been proposed for the design and development of keyboard layouts for non-disabled people, very few approaches have addressed the same issue for the physically handicapped group. In this paper, we propose a novel text entry system, namely BADHON, for a specific user base who are unable to use regular keyboards because of limited hand mobility and require a solution to interact with the computer using thumb toe and ankle movement. In order to resolve the inevitable trade-off between faster typing-speed acquisition and the inconvenience of memorizing a new layout, two different layouts have been proposed. One combines a movement time model and a linguistic model to ensure a good words per minute (WPM) count, and the other significantly reduces the burden of carrying a manual while using the layout by keeping the alphabet's orientation as easy as possible.
... For the adaptive dwell time in asynchronous mode, we consider two rules where Δt₀ can change between Δmin₀ and Δmax₀. In this study, Δmin₀ and Δmax₀ correspond to 1 s and 5 s, respectively [43,53]. Initially, Δt₀ is set to 2000 ms. ...
... With the adaptive trial period (i.e., trial duration Δt₁) in synchronous mode, we consider three rules, where Δt₁ can change between Δmin₁ and Δmax₁. In this study, Δmin₁ and Δmax₁ correspond to 1 s and 5 s, respectively [43,53]. Initially, Δt₁ is set to 2000 ms. ...
Article
Full-text available
The usability of virtual keyboard based eye-typing systems is currently limited due to the lack of adaptive and user-centered approaches leading to low text entry rate and the need for frequent recalibration. In this work, we propose a set of methods for the dwell time adaptation in asynchronous mode and trial period in synchronous mode for gaze based virtual keyboards. The rules take into account commands that allow corrections in the application, and it has been tested on a newly developed virtual keyboard for a structurally complex language by using a two-stage tree-based character selection arrangement. We propose several dwell-based and dwell-free mechanisms with the multimodal access facility wherein the search of a target item is achieved through gaze detection and the selection can happen via the use of a dwell time, soft-switch, or gesture detection using surface electromyography in asynchronous mode; while in the synchronous mode, both the search and selection may be performed with just the eye-tracker. The system performance is evaluated in terms of text entry rate and information transfer rate with 20 different experimental conditions. The proposed strategy for adapting the parameters over time has shown a significant improvement (more than 40%) over non-adaptive approaches for new users. The multimodal dwell-free mechanism using a combination of eye-tracking and soft-switch provides better performance than adaptive methods with eye-tracking only. The overall system receives an excellent grade on adjective rating scale using the system usability scale and a low weighted rating on the NASA task load index, demonstrating the user-centered focus of the system.
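The excerpts above only state the bounds for the adaptive dwell time (Δt₀ clamped to [Δmin₀, Δmax₀] = [1 s, 5 s], starting at 2000 ms), not the full adaptation rules. The sketch below therefore assumes a deliberately simple update, shortening the dwell after clean selections and lengthening it after corrections, purely to illustrate the clamping behaviour; the step size and the success/correction signal are illustrative assumptions.

```python
# Sketch of bounded dwell-time adaptation for the asynchronous mode.
# Bounds and initial value follow the excerpt above; the step size and the
# simple success/correction rule are assumptions, not the authors' rules.
DT_MIN, DT_MAX = 1.0, 5.0      # Δmin0, Δmax0 in seconds
STEP = 0.2                     # illustrative adjustment per selection (seconds)


def update_dwell(dt: float, correction_issued: bool) -> float:
    """Return the next dwell time Δt0, clamped to [Δmin0, Δmax0]."""
    dt = dt + STEP if correction_issued else dt - STEP
    return min(DT_MAX, max(DT_MIN, dt))


# Example: start at Δt0 = 2000 ms, then four clean selections and one correction.
dt = 2.0
for correction in (False, False, False, False, True):
    dt = update_dwell(dt, correction)
    print(f"Δt0 = {dt:.1f} s")
```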
... However, the performance was not reported in terms of typing speed. Only task completion times for the alphabetical layout and the frequency-based arrangement layout were reported [30]. This particular study showed that the time to complete a task can be reduced by a frequency-based arrangement layout. ...
... Second, we develop a strategy that can locate individual symbols at a particular location of the GUI. These issues can be resolved collectively by designing a multi-level virtual keyboard [33], and the characters can be placed on the layout based on their probability of occurrence and the constraints of the input device [30]. However, the probability of characters in a large corpus is not readily available for the Hindi language and must be determined. ...
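Estimating the character probabilities mentioned in this excerpt is straightforward once a corpus is available. The snippet below is a minimal sketch; the short sample string and the Devanagari filter range (U+0900–U+097F) are illustrative stand-ins for a large Hindi corpus.

```python
from collections import Counter

# Minimal sketch: estimate per-character probabilities from a corpus so that
# frequent characters can be placed in the cheapest layout positions.
# The sample text below stands in for a large Hindi corpus file.
sample_text = "नमस्ते दुनिया यह एक छोटा उदाहरण पाठ है"

counts = Counter(ch for ch in sample_text if "\u0900" <= ch <= "\u097f")
total = sum(counts.values())
probabilities = {ch: n / total for ch, n in counts.most_common()}

for ch, p in probabilities.items():
    print(f"{ch}: {p:.3f}")
```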
Article
Full-text available
Virtual keyboard applications and alternative communication devices provide new means of communication to assist disabled people. To date, virtual keyboard optimization schemes based on script-specific information along with a multimodal input access facility are limited. In this work, we propose a novel method for optimizing the position of the displayed items in gaze-controlled tree-based menu selection systems by considering a combination of letter frequency and command selection time. The optimized graphical user interface (GUI) layout has been designed for a Hindi language virtual keyboard based on a menu wherein 10 commands provide access to type 88 different characters, along with additional text editing commands. The system can be controlled in two different modes: eye-tracking alone and eye-tracking with an access soft-switch. Five different keyboard layouts have been presented and evaluated with ten healthy participants. Further, the two best performing keyboard layouts have been evaluated with eye-tracking alone on ten stroke patients. The overall performance analysis demonstrated significantly superior typing performance, high usability (87% SUS score), and low workload (NASA TLX score of 17) for the letter frequency and time-based organization with script-specific arrangement design. This work represents the first optimized gaze-controlled Hindi virtual keyboard, which can be extended to other languages.
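The abstract combines letter frequency with command selection time; a simple way to illustrate that idea is a greedy assignment that gives the most frequent characters to the commands with the shortest selection times. The frequencies, command names and timings below are placeholders, not the paper's measured values.

```python
# Greedy sketch: map the most frequent characters to the fastest-to-select
# menu commands. All numbers and names are placeholders for illustration.
char_freq = {"क": 0.061, "र": 0.055, "न": 0.049, "त": 0.046, "स": 0.041,
             "ल": 0.038, "म": 0.036, "ह": 0.033, "द": 0.031, "प": 0.028}
command_time = {"cmd1": 0.9, "cmd2": 1.1, "cmd3": 1.3, "cmd4": 1.5, "cmd5": 1.7,
                "cmd6": 1.9, "cmd7": 2.1, "cmd8": 2.3, "cmd9": 2.5, "cmd10": 2.7}

# Sort characters by descending frequency and commands by ascending time,
# then pair them up; expected cost = sum(freq * time) over the assignment.
chars = sorted(char_freq, key=char_freq.get, reverse=True)
commands = sorted(command_time, key=command_time.get)
assignment = dict(zip(commands, chars))
expected_cost = sum(char_freq[c] * command_time[cmd] for cmd, c in assignment.items())

print(assignment)
print(f"expected selection cost: {expected_cost:.3f}")
```

By the rearrangement inequality, pairing the highest frequencies with the shortest selection times minimizes this linear cost; the actual optimization in the paper additionally respects script-specific arrangement constraints.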
... Although the eyes are not meant to be output organs but rather to observe [33], the possibilities that the eyes offer to a quadriplegic cannot be overlooked. An abundance of eye-controlled interfaces have already been developed [34][35][36][37][38], allowing users to type on an on-screen keyboard [34,39], draw [40], compose music [41], browse the Internet, etc., and thus enable a person to stay in contact with the world and communicate his/her needs. ...
... Gaze controlled systems can be based on dwell time [37], blinks [42,43], winks [44] as well as various types of switches [39]. The commercially available Tobii PCEye Go [45] is a peripheral eye tracker that runs on standard Windows computers and tablets, allowing the user to work with any application that is usually controlled by a standard computer mouse or through touch. ...
... Various alternative layouts and activation techniques have been proposed in the past [39,[54][55][56][57], but most of these are limited with regard to the variety of keys that are available. ...
Article
Full-text available
A gaze-controlled system was developed to address the specific needs of a person in an advanced stage of Multiple Sclerosis. This patient scored 9 on the EDSS and is representative of a group of patients who can reason normally but have no functional use of their upper limbs. The developer and the patient were connected through remote desktop control and could brainstorm a concept to perfection through iterative trial and error. The most important lesson learnt is the importance of user control towards maximum independence. Eventually, the patient was enabled to control his computer efficiently using a novel approach to mouse control together with an on-screen keyboard. He could browse the internet, read e-books, type documents, send and receive e-mails and text messages, draw in a paint application and control his TV through a specially adapted remote control, all by using his eyes to activate commands. The findings suggest that a well-designed eye tracking system can fulfil the mental and communication needs of patients in this specific category of disability.
... Among the solutions that have been found are Augmentative and Alternative Communication Systems (SAAC), which are assistive technology tools for people who cannot use oral language and who instead use communication channels that apply non-verbal linguistic techniques (signs, symbols, pictograms) and/or non-linguistic channels (gestures, facial expression, gaze, posture). These tools have interfaces that include voice commands or push-buttons of adapted size and ergonomics, which facilitate the transmission of an idea or a message for interaction with the environment (Prabhu, 2009). ...
Article
Full-text available
This article presents an empirical investigation into the impact of a computer system called TEVI (Teclado Virtual, "virtual keyboard") designed to facilitate communication for children with language problems caused by a physical disability. The research methodology was based on four phases: in the first phase, exploratory research was carried out through surveys and unstructured interviews with authorities and ten teachers from two special education centres in two provinces of Ecuador. In the second phase, the computer system was developed. The system was structured in three modules: data input (user interface), information processing (database manager) and data output (natural language processing algorithms). In the third phase, a preliminary evaluation of the prototype was carried out with specialists and a person with a motor disability. Finally, in the fourth phase, an experimental evaluation was conducted with 75 school-age children, 80% of whom had language problems due to their physical disability. The evaluation instrument comprised a pictogram panel and a virtual keyboard with predictive text. From the times recorded in the activities set with the teachers, and after a univariate general linear statistical analysis, a 57.3% reduction in information entry time was observed when using the pictogram panel compared with the virtual keyboard. A qualitative analysis gathered events and comments from teachers, who stated that TEVI constitutes a potential tool for developing learning strategies.
... [Table 1 of the citing survey appears at this point in the citation context: a summary of text-entry methods from 2000–2013 (e.g., GRAFIS [3], Dasher [140], GazeTalk [35], EdgeWrite [145], BlinkWrite2 [5], Prabhu [105]), classified by target group (TG, see Section 7.1 of that survey), selection technique (D: direct, S: scanning, P: pointing), character layout (S: static, D: dynamic), use of language model (Pl: letter-level prediction, Pw: word-level prediction, Al: ambiguous keyboard with letter-level disambiguation, Aw: ambiguous keyboard with word-level disambiguation), interaction modality (see Section 7), evaluation (A: able-bodied participants, D: disabled participants, L: longitudinal, S: simulation), and typing rate in words per minute (WPM).] ... of impact on the users' physical ability to enter text. ...
Article
Full-text available
This paper provides an overview of 150 publications regarding text input for motor-impaired people and describes the current state of the art. It focuses on common techniques of text entry, including selection of keys, approaches to character layouts, use of language models, and interaction modalities. These aspects of text entry methods are further analyzed, and examples are given. The paper also provides an overview of reported evaluations by describing experiments that can be conducted to assess the performance of a text entry method. Following this overview, a summary of 61 text entry methods for motor-impaired people found in the related literature is presented, classifying those methods according to the aforementioned aspects and the reported evaluation. This overview was assembled with the aim of providing a starting point for new researchers in the field of accessible text entry. The text entry methods are also categorized according to their suitability for various conditions of the users.
... The eye-typing system developed by Mackenzie and Zhang [13] implemented both letter and word prediction to increase the text entry rate and to minimize the eye movements needed for search and key presses. Prabhu and Prasad [18] designed a virtual keyboard that can be accessed through eye gaze, a BCI, or some muscular activity. Eye-S [19] used eye-graffiti to write letters and numbers on the computer screen. ...
... A hex-o-spell keyboard layout was implemented (Cecotti, 2011; Prabhu and Prasad, 2011), given its previous deployment in a TCD BCI (Lu et al., 2014). Letters were arranged into six boxes situated at the vertices of a hexagon, such that the most frequently occurring letters in English could be selected in the least amount of time (Fig. 2a). ...
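The frequency-driven hexagonal grouping mentioned here can be sketched in a few lines; the frequency order, the box-filling rule and the step-count cost model below are illustrative assumptions rather than the cited papers' exact design.

```python
# Sketch: place the 26 English letters into six hexagon boxes so that the
# most frequent letters sit in the boxes that are reached soonest when scanning.
FREQ_ORDER = "ETAOINSHRDLCUMWFGYPBVKJXQZ"    # most to least frequent (illustrative)
NUM_BOXES = 6
PER_BOX = -(-len(FREQ_ORDER) // NUM_BOXES)   # ceil(26 / 6) = 5

# Fill the boxes in order, so the most frequent letters land in the first boxes.
boxes = [list(FREQ_ORDER[i:i + PER_BOX]) for i in range(0, len(FREQ_ORDER), PER_BOX)]


def selection_steps(letter):
    """Two-stage cost: steps to reach the letter's box plus steps inside it."""
    for b, box in enumerate(boxes):
        if letter in box:
            return (b + 1) + (box.index(letter) + 1)
    raise ValueError(letter)


print(boxes)
print("steps for E:", selection_steps("E"))   # in the first box: cheapest
print("steps for Z:", selection_steps("Z"))   # in the last box: most costly to reach
```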
Article
Brain computer interfaces (BCI) can provide communication opportunities for individuals with severe motor disabilities. Transcranial Doppler ultrasound (TCD) measures cerebral blood flow velocities and can be used to develop a BCI. A previously implemented TCD BCI system used verbal and spatial tasks as control signals; however, the spatial task involved a visual cue that awkwardly diverted the user's attention away from the communication interface. Therefore, vision-independent right-lateralized tasks were investigated. Using a bilateral TCD BCI, ten participants controlled an on-screen keyboard online using a left-lateralized task (verbal fluency), a right-lateralized task (fist motor imagery or 3D-shape tracing), and unconstrained rest. 3D-shape tracing was generally more discernible from the other tasks than was fist motor imagery. Verbal fluency, 3D-shape tracing and unconstrained rest were distinguished from each other using a linear discriminant classifier, achieving a mean agreement of κ = 0.43 ± 0.17. These rates are comparable to the best offline three-class TCD BCI accuracies reported thus far. The online communication system achieved a mean information transfer rate (ITR) of 1.08 ± 0.69 bits/min, with values reaching up to 2.46 bits/min, thereby exceeding the ITR of previous online TCD BCIs. These findings demonstrate the potential of a three-class online TCD BCI that does not require visual task cues.