S.I.: HYBRID ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING TECHNOLOGIES
Deep learning-based sign language recognition system for static signs
Ankita Wadhawan¹ · Parteek Kumar¹
Received: 3 December 2018 / Accepted: 18 December 2019 / Published online: 1 January 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020
Abstract
Sign language is an efficacious means of communication, and its automatic recognition is an active area of research in computer vision. Earlier work on Indian Sign Language (ISL) recognition has focused on a small set of highly differentiable hand signs, and therefore covers only a few signs of the ISL. This paper deals with robust modeling of static signs in the context of sign language recognition using deep learning-based convolutional neural networks (CNNs). In this research, a total of 35,000 images of 100 static signs were collected from different users. The efficiency of the proposed system is evaluated on approximately 50 CNN models. The results are also evaluated for different optimizers, and the proposed approach achieves the highest training accuracy of 99.72% on colored images and 99.90% on grayscale images. The performance of the proposed system is also evaluated in terms of precision, recall and F-score. The system demonstrates its effectiveness over earlier works in which only a few hand signs are considered for recognition.
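The per-class precision, recall and F-score reported in the evaluation can be derived from true and predicted labels in the standard way; the following is a minimal illustrative sketch (the function name and label format are our own, not taken from the paper):

```python
from collections import Counter

def precision_recall_f1(y_true, y_pred, labels):
    """Per-class precision, recall and F1-score from parallel label lists."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1        # correct prediction for class t
        else:
            fp[p] += 1        # p was predicted but the true class was t
            fn[t] += 1        # t was missed
    scores = {}
    for c in labels:
        prec = tp[c] / (tp[c] + fp[c]) if (tp[c] + fp[c]) else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if (tp[c] + fn[c]) else 0.0
        f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
        scores[c] = (prec, rec, f1)
    return scores

# Example on a toy two-class case:
# precision_recall_f1(["A", "A", "B", "B"], ["A", "B", "B", "B"], ["A", "B"])
# gives class A precision 1.0, recall 0.5 and class B precision 2/3, recall 1.0.
```

Macro-averaging these per-class values over all 100 sign classes gives a single summary figure of the kind reported in the paper.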
Keywords Sign language · Data acquisition · Convolutional neural network · Max-pooling · Softmax · Optimizer
1 Introduction
Sign language is a complete and intricate language that comprises signs formed by hand movements in combination with facial expressions. It is a natural language used for communication by people with little or no hearing. A sign language can convey letters, words or sentences through different hand signs. This mode of communication makes it easier for hearing-impaired people to express their views and helps bridge the communication gap between hearing-impaired people and others.
Humans have used sign language to communicate since ancient times; hand gestures are as old as human civilization itself [1]. Hand signs are especially useful for expressing words or feelings, and people around the world continue to communicate with hand signals despite the development of writing conventions.
In recent times, much research has been devoted to developing systems that classify signs of different sign languages into their respective classes. Such systems have found applications in games, virtual reality environments, robot control and natural language communication. At present, Indian Sign Language systems are still at a developing stage, and no sign language recognition system is available for recognizing signs in real time. There is, therefore, a need to develop a complete recognizer that identifies the signs of Indian Sign Language.
The automatic recognition of human signs is a complex multidisciplinary problem that has not yet been completely solved. In past years, a number of approaches involving machine learning techniques were used for sign language recognition. Since the advent of deep learning techniques, there have been attempts to recognize
✉ Ankita Wadhawan
ankita.wadhawan@thapar.edu

Parteek Kumar
parteek.bhatia@thapar.edu

¹ Computer Science and Engineering Department, Thapar Institute of Engineering and Technology, Patiala, Punjab, India
Neural Computing and Applications (2020) 32:7957–7968
https://doi.org/10.1007/s00521-019-04691-y