Fig 7 - uploaded by Maraim Alnefaie
SingleTapBraille user interface

Source publication
Chapter
Touchscreen technology has brought about significant improvements for both sighted and visually impaired people. Visually impaired people tend to use touchscreen devices because these devices support a screen reader function, providing a cheaper, smaller alternative to dedicated screen reader machines. However, most of the available touchscreen keyboa...

Similar publications

Article
The use of digital environments for both learning and assessment is becoming prevalent. This often leads to incongruent situations, in which the study medium (e.g., printed textbook) is different from the testing medium (e.g., online multiple‐choice exams). Despite some evidence that incongruent study‐test situations are associated with inferior achiev...

Citations

... Learning letters requires a user to press "1" [30]. Improved SingleTap Braille uses different swipe gestures for inserting upper-case letters, lower-case letters, adding spaces, backspaces, and Grade 2 Braille [31]. Braille Tap introduced four different gestures to perform arithmetic operations: swiping from bottom to top for addition and subtraction, swiping from top to bottom for division and multiplication, swiping from left to right for clear operation, and swiping from right to left for backspace [32]. ...
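The four-gesture scheme attributed to Braille Tap [32] above can be sketched as a simple lookup table. This is an illustrative sketch only; the direction names and the `dispatch` helper are assumptions for the example, not the authors' API:

```python
# Illustrative mapping of the four swipe gestures described for Braille Tap [32]
# to their arithmetic-mode actions. Names are assumptions for this sketch.
GESTURE_ACTIONS = {
    "swipe_bottom_to_top": "addition/subtraction",
    "swipe_top_to_bottom": "division/multiplication",
    "swipe_left_to_right": "clear",
    "swipe_right_to_left": "backspace",
}

def dispatch(gesture: str) -> str:
    """Return the action bound to a recognized swipe gesture."""
    return GESTURE_ACTIONS.get(gesture, "unknown gesture")
```

A dispatch table like this keeps the gesture-to-action binding in one place, so adding or remapping gestures does not touch the recognition code.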
... Audio feedback poses privacy issues in some scenarios [36]. Voice feedback is provided to inform the user of the task's completion in [20,25,31,32,37]. On completion of the task, vibro-tactile feedback is provided by [20] and [21]. ...
Article
Smart devices are effective in helping people with impairments overcome their disabilities and improve their living standards. Braille is a popular method of communication used by visually impaired people. Touchscreen smart devices can take Braille input and instantaneously convert it into a natural language. Most of these schemes require location-specific input, which is difficult for visually impaired users. In this study, a position-free, accessible touchscreen-based Braille input algorithm is designed and implemented for visually impaired people. It aims to place the least burden on the user, who is only required to tap the dots needed for a specific character. Users input English Braille Grade 1 data (a–z) using a newly designed application, yielding a dataset of 1258 images. Classification was performed using deep learning techniques, with the data split 70%/30% for training and validation. The proposed method was thoroughly evaluated on a dataset collected from visually impaired people using Deep Learning (DL) techniques, and the results were compared with classical machine learning techniques such as Naïve Bayes (NB), Decision Trees (DT), SVM, and KNN. The multi-class problem was divided into two categories, i.e., Category-A (a–m) and Category-B (n–z). Performance was evaluated using Sensitivity, Specificity, Positive Predictive Value (PPV), Negative Predictive Value (NPV), False Positive Rate (FPR), Total Accuracy (TA), and Area under the Curve (AUC). The GoogLeNet model achieved the highest performance, followed by the Sequential model, SVM, DT, KNN, and NB. The results show that the proposed Braille input method for touchscreen devices is effective and that the deep learning method can predict the user's input with high accuracy.
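The evaluation metrics this abstract lists (Sensitivity, Specificity, PPV, NPV, FPR, Total Accuracy) all derive from the cells of a binary confusion matrix for the Category-A vs Category-B split. A minimal sketch, with hypothetical counts rather than the study's actual results:

```python
def binary_metrics(tp, fp, tn, fn):
    """Compute standard binary-classification metrics from the four
    cells of a confusion matrix (true/false positives and negatives)."""
    return {
        "sensitivity": tp / (tp + fn),          # TPR, a.k.a. recall
        "specificity": tn / (tn + fp),          # TNR
        "ppv": tp / (tp + fp),                  # Positive Predictive Value
        "npv": tn / (tn + fn),                  # Negative Predictive Value
        "fpr": fp / (fp + tn),                  # False Positive Rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),  # Total Accuracy
    }

# Hypothetical counts for a Category-A (positive) vs Category-B split
m = binary_metrics(tp=90, fp=15, tn=85, fn=10)
```

Note that sensitivity and FPR are computed over different denominators (actual positives vs actual negatives), which is why both are reported alongside total accuracy.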
... It is difficult for visually disabled individuals to communicate with the world. Recently, visually impaired and blind individuals have been shifting from conventional reading and writing in Braille to using computers with input peripherals and applications [6][7], [10][11][12], [22]. A growing body of research has been investigating ways to exploit the user's ability to perform touch and multi-touch gesture input on mobile phones [23][24][25][26][27]. ...
Article
Touchscreen interaction systems are in high demand for new innovations in visual sensing, such as familiar and easily traced virtual keyboards on the screen, three-dimensional gesture communication methods, and RFID sensing. Despite the existence of these interaction methods, visually impaired people struggle to gain easy access to touchscreens. The main goal of this research is to overcome the navigation problems that blind people face while interacting with touchscreens. In this study we aimed to develop a Braille sketch, a gesture-based input method for touchscreen smartphones for visually impaired people. Using Braille codes to perform gestures on a touchscreen makes visually impaired people comfortable, because Braille is their basis for communication. The optimization procedure is the act of maximizing or minimizing a real function by systematically choosing input parameters from an available pool and computing the value of the function. Here, we take as variables the hand-finger gesture features, such as the coordinate values on the x and y axes, swipe threshold speed, swipe minimum distance, pixel rate, and speed along X and Y. To increase the performance of the system, we optimize the number of hidden layers and neurons using the Crow Search Algorithm (CSA). The ANN with CSA achieves the Optimal Hidden Layer and Neuron (OHLN) configuration to predict the correct gesture outputs. These strategies present a solution that automatically recognizes hand gestures so that impaired individuals can easily communicate with sighted people. The proposed model gives high accuracy with optimal performance metrics compared to other existing models.
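The hyperparameter tuning described above relies on the Crow Search Algorithm. A minimal generic CSA minimizer is sketched below on a toy objective; this illustrates the algorithm itself, not the authors' ANN tuning code, and all function names and parameter defaults are assumptions:

```python
import numpy as np

def crow_search(fitness, dim, bounds, n_crows=20, n_iter=100,
                awareness_prob=0.1, flight_length=2.0, seed=0):
    """Minimize `fitness` with the Crow Search Algorithm (CSA).

    Each crow remembers the best position it has found. A crow follows
    a randomly chosen crow's memory unless that crow is 'aware'
    (probability `awareness_prob`), in which case it moves randomly.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_crows, dim))   # current positions
    mem = pos.copy()                                  # best-known positions
    mem_fit = np.array([fitness(p) for p in mem])

    for _ in range(n_iter):
        for i in range(n_crows):
            j = rng.integers(n_crows)                 # crow to follow
            if rng.random() >= awareness_prob:
                # chase crow j's remembered position
                new = pos[i] + rng.random() * flight_length * (mem[j] - pos[i])
            else:
                # crow j is aware of being followed: move randomly
                new = rng.uniform(lo, hi, size=dim)
            pos[i] = np.clip(new, lo, hi)
            f = fitness(pos[i])
            if f < mem_fit[i]:                        # update memory
                mem[i], mem_fit[i] = pos[i].copy(), f

    best = int(np.argmin(mem_fit))
    return mem[best], mem_fit[best]

# Toy use: minimize the 2-D sphere function over [-5, 5]^2
best_x, best_f = crow_search(lambda x: float(np.sum(x**2)), dim=2, bounds=(-5, 5))
```

In the study's setting, `fitness` would instead train and score an ANN for a candidate hidden-layer/neuron configuration, with the crow positions encoding those hyperparameters.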
Article
Braille is used as a mode of communication all over the world. Technological advancements are transforming the way Braille is read and written. This study developed an English Braille pattern identification system using robust machine learning techniques on the English Braille Grade-1 dataset. The dataset was collected using a touchscreen device from visually impaired students of the National Special Education School Muzaffarabad. For better visualization, the dataset of 26 English Braille characters was divided into two classes, class 1 (1–13, a–m) and class 2 (14–26, n–z). A position-free Braille text entry method was used to generate synthetic data, and N = 2512 cases were included in the final dataset. Support Vector Machine (SVM), Decision Trees (DT), and K-Nearest Neighbor (KNN) classifiers with Reconstruction Independent Component Analysis (RICA) and PCA-based feature extraction methods were used for Braille-to-English character recognition. Compared to PCA, the Random Forest (RF) algorithm, and Sequential methods, better results were achieved using the RICA-based feature extraction method. The evaluation metrics used were the True Positive Rate (TPR), True Negative Rate (TNR), Positive Predictive Value (PPV), Negative Predictive Value (NPV), False Positive Rate (FPR), Total Accuracy, Area Under the Receiver Operating Curve (AUC), and F1-Score. A statistical test was also performed to justify the significance of the results.
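The KNN baseline used in this two-class setup can be sketched from scratch. The synthetic feature vectors below merely stand in for RICA-extracted features; this is not the study's dataset or code:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points under Euclidean distance (plain K-Nearest Neighbor)."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Synthetic two-class feature vectors standing in for RICA features
rng = np.random.default_rng(1)
class1 = rng.normal(0.0, 0.5, size=(50, 4))   # stands in for class 1 (a-m)
class2 = rng.normal(3.0, 0.5, size=(50, 4))   # stands in for class 2 (n-z)
X = np.vstack([class1, class2])
y = np.array([1] * 50 + [2] * 50)

pred = knn_predict(X, y, query=np.full(4, 3.0))
```

In practice the query vector would be the RICA (or PCA) features of an unseen Braille entry, and `k` would be chosen by cross-validation.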