Figure (from Mobile Information Systems): Schematic diagram of the laboratory and home environments. (a) Home environment and (b) lab environment.

Source publication
Article
Full-text available
Wi-Fi sensing for gesture recognition systems is a fascinating and challenging research topic. We propose a multitask sign language recognition framework called Wi-SignFi, which accounts for gestures in the real world associated with various objects, actions, or scenes. The proposed framework comprises a convolutional neural network (CNN) and a K-nearest neighbour (KNN) ...
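
As context for the CNN+KNN design, below is a minimal sketch of the general pattern: a CNN embeds each Wi-Fi CSI sample, and a KNN classifies the embeddings. The tensor shapes, layer sizes, and k value are illustrative assumptions, not Wi-SignFi's actual configuration.

    import torch
    import torch.nn as nn
    from sklearn.neighbors import KNeighborsClassifier

    class CSIFeatureExtractor(nn.Module):
        """Small CNN mapping a CSI 'image' (antennas x subcarriers x time) to a feature vector."""
        def __init__(self, in_channels=3, feat_dim=128):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(64, feat_dim)

        def forward(self, x):
            return self.fc(self.conv(x).flatten(1))

    # Placeholder CSI tensors and labels; shapes are hypothetical stand-ins.
    train_csi = torch.randn(100, 3, 30, 200)
    train_labels = torch.randint(0, 10, (100,)).numpy()
    test_csi = torch.randn(20, 3, 30, 200)

    model = CSIFeatureExtractor()
    model.eval()
    with torch.no_grad():
        train_feats = model(train_csi).numpy()
        test_feats = model(test_csi).numpy()

    # KNN classifies in the learned feature space; k=5 is an arbitrary choice here.
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(train_feats, train_labels)
    predictions = knn.predict(test_feats)

In practice the CNN would first be trained with a classification loss before its features are handed to the KNN; this sketch only shows how the two modules connect.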

Similar publications

Article
Full-text available
Sign languages are visual languages used as the primary communication medium for the Deaf community. The signs comprise manual and non-manual articulators such as hand shapes, upper body movement, and facial expressions. Sign Language Recognition (SLR) aims to learn spatial and temporal representations from the videos of the signs. Most SLR studies...
Conference Paper
Full-text available
Sign languages (SLs) are an essential form of communication for hearing-impaired people. However, a communication barrier still exists between the deaf community and the hearing population due to the lack of accurate automated SL communication systems. In this work, a novel SL communication system running as a mobile application has been developed...
Chapter
Full-text available
India is home to approximately 63 million people in the Deaf and Hard of Hearing (DHH) community. Indian Sign Language (ISL) is used by the deaf community all over India, so there is a need for proper learning aids, which in turn require high-performance recognition models. This work aims to recognize Indian Sign Language using a Convolutional Neural Network ...
Research Proposal
Full-text available
The deaf community uses sign language when communicating with non-deaf people. It can be challenging for the general public to understand the gestures used in sign language, but sign language can be translated into a form that is easily understood. This research is based on capturing different images and videos, preprocessing ...
Chapter
Full-text available
Sign languages (SLs) are an essential form of communication for hearing-impaired people. However, a communication barrier still exists between the deaf community and the hearing population due to the lack of accurate automated SL communication systems. In this work, a novel SL communication system running as a mobile application has been developed...

Citations

... The model works better on smaller datasets, and its performance is optimised by lowering the learning rate. A multitask sign language recognition framework built on a CNN and a K-nearest neighbour (KNN) module was proposed by [13]. The novel concept of wireless sensing has been used for sign language recognition; although the model achieves 99.9% accuracy on smaller datasets, it takes longer to train. ...
... So, Eqs. (11) and (12), with bias-corrected mean and variance, can be written as Eq. (13). ...
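
The cited Eqs. (11)-(13) are not reproduced on this page. For context, the standard Adam-style bias correction of the first- and second-moment estimates, which RAdam-type optimisers such as the paper's RADAM_NORM build on, reads as follows (a sketch of the textbook form, not necessarily the paper's exact notation):

\[
m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2,
\]
\[
\hat{m}_t = \frac{m_t}{1-\beta_1^{t}}, \qquad
\hat{v}_t = \frac{v_t}{1-\beta_2^{t}}, \qquad
\theta_{t+1} = \theta_t - \eta\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t}+\epsilon},
\]

where \(g_t\) is the gradient at step \(t\), \(\beta_1, \beta_2\) are the decay rates, \(\eta\) is the learning rate, and dividing by \(1-\beta_i^{t}\) removes the bias toward zero in the early steps.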
Article
Sign language translation through deep learning is a popular topic among researchers nowadays. It opens the doors of communication for deaf and mute people by translating sign language gestures. The translation of input sign language gestures into text is called a sign language translation system (SLTS). In this paper, an optimised machine-learning-based SLTS for Indian Sign Language (ISL) is proposed to facilitate deaf-mute persons. Further, the paper presents a simulation analysis of the impact of the number of convolution layers, the stride size, the number of epochs, and the activation function on the accuracy of ISL gesture translation. An optimised ISL translation system (ISLTS) for fingerspelled alphanumeric data of 36 classes, using a convolutional neural network (CNN) with a novel RADAM_NORM optimiser, is proposed. The system has been implemented on two datasets: the first is a customised ISL alphanumeric dataset taken from Kaggle; the second, prepared by the authors, consists of 36 classes and nearly 50K images. The accuracy of the proposed ISLTS is 99.446% on the first dataset and 97.889% on the second.
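
To make the tuned hyperparameters concrete, here is a minimal Keras sketch of a 36-class fingerspelling CNN that exposes the knobs the abstract mentions (number of convolution layers, stride, activation function, epochs). All sizes are illustrative assumptions, and stock Adam stands in for the paper's custom RADAM_NORM optimiser.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_isl_cnn(num_classes=36, input_shape=(64, 64, 1), stride=1, activation="relu"):
        # Two convolution blocks; the paper varies the number of such layers,
        # the stride, and the activation to study their effect on accuracy.
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, 3, strides=stride, activation=activation),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, strides=stride, activation=activation),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation=activation),
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",  # stand-in for RADAM_NORM
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    model = build_isl_cnn()
    # model.fit(train_images, train_labels, epochs=20)  # epochs is another tuned hyperparameter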
Article
Full-text available
Sign language recognition attempts to recognize meaningful hand gesture movements and is a significant solution for intelligent communication across societies with speech and hearing impairments. Nevertheless, understanding dynamic sign language from video-based data remains a challenging task in hand gesture recognition. However, real-time gesture recognition on low-power edge devices with limited resources has become a topic of research interest. Therefore, this work presents a memory-efficient deep-learning pipeline for identifying dynamic sign language on embedded devices. Specifically, we recover hand posture information to obtain a more discriminative 3D key point representation. Further, these properties are employed as inputs for the proposed attention-based embedded long short-term memory networks. In addition, the Indian Sign Language dataset for calendar months is also proposed. The post-training quantization is performed to reduce the model’s size to improve resource consumption at the edge. The experimental results demonstrate that the developed system has a recognition rate of 99.7% and an inference time of 500 ms on a Raspberry Pi-4 in a real-time environment. Lastly, memory profiling is performed to evaluate the performance of the model on the hardware.
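
As an illustration of the post-training quantization step, below is a minimal sketch using TensorFlow Lite, one common toolchain for Raspberry Pi deployment (this page does not state which toolchain the authors used). Here `model` and `representative_data` are hypothetical stand-ins for the trained network and a small calibration subset.

    import tensorflow as tf

    def quantize_for_edge(model, representative_data, out_path="slr_model.tflite"):
        # Convert a trained Keras model to TFLite with default (weight) quantization.
        converter = tf.lite.TFLiteConverter.from_keras_model(model)
        converter.optimizations = [tf.lite.Optimize.DEFAULT]

        def rep_gen():
            # Yield single-sample float32 batches so the converter can
            # calibrate activation ranges for integer quantization.
            for sample in representative_data:
                yield [sample[None, ...].astype("float32")]

        converter.representative_dataset = rep_gen
        tflite_model = converter.convert()
        with open(out_path, "wb") as f:
            f.write(tflite_model)
        return tflite_model

Quantizing weights and activations shrinks the model file and reduces RAM use at inference time, which is what makes sub-second latency on a device like the Raspberry Pi 4 plausible.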