
Conversion of Sign Language to Spoken Sentences by means of a Sensory Glove

Authors: Pietro Cavallo and Giovanni Saggio (Electronic Engineering Department, University of Tor Vergata, Roma, Italy)

Abstract

Everyday communication between deaf and hearing people remains an open problem, despite the significant progress that Sign Language Recognition (SLR) has made in recent years. We address this problem by proposing a portable, low-cost system that has proven effective in translating gestures into written or spoken sentences. The system relies on a home-made sensory glove, used to measure hand gestures, and on Wavelet Analysis (WA) and a Support Vector Machine (SVM) to classify hand movements. In particular, we devoted our efforts to translating the Italian Sign Language (LIS, Lingua Italiana dei Segni), applying WA for feature extraction and SVM for the classification of one hundred different dynamic gestures. The proposed system is light and neither intrusive nor obtrusive, so it can be easily used by deaf people in everyday life, and it has demonstrated valid results in terms of sign-to-word conversion.
Index Terms—Machine intelligence, Pattern analysis, Human Computer Interaction, Support Vector Machines, Data glove, Sign Language Recognition, Italian Sign Language, LIS
I. INTRODUCTION
Similarly to spoken languages, Sign Languages (SLs) are complete and powerful forms of communication, adopted by millions of deaf people all over the world. SLs differ across regions and countries: American Sign Language (ASL), Japanese Sign Language (JSL), German Sign Language (GSL), Lingua Italiana dei Segni (LIS, the Italian Sign Language), and so on. However, every SL commonly relies on the interpretation of gestures and postures, which also play a fundamental role in non-verbal communication. SL comprehension is generally limited to a restricted part of the population, so deaf people remain largely cut off from social interactions with hearing persons, and body-language/non-verbal communication is mostly perceived as "feelings" rather than consciously understood. These are the main reasons why a system for "Automatic" Sign Language Recognition (A-SLR) is welcome and a growing effort is being devoted to realizing it [1]. A-SLR could allow deaf people to communicate without limitations, could assign unambiguous interpretations to non-verbal communication, and could be the basis of a new form of human-computer interaction, since the most natural way of interacting with a computer would be through speech and gestures rather than the currently adopted interfaces such as keyboard and mouse. In particular, the integration of A-SLR with automatic text writing or speech synthesis modules can furnish a "speaking aid" to deaf people.
Our purpose is to realize a system capable of measuring static and dynamic human postures and classifying them as "words" organized into "sentences", in order to "translate" SLs into written or spoken languages. To this aim, a great challenge comes from measuring human postures with acquisition devices that are both comfortable and easy to use. In particular, we focus our attention on hand postures and movements, since SL is mostly, even if not exclusively, based on them: SL is made of hand gestures together with body and facial expressions, but the latter are not strictly fundamental.
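To make the processing chain behind this idea concrete, the following is a minimal sketch of a wavelet-feature plus SVM gesture classifier. It assumes each gesture arrives as a fixed-length multi-channel time series (one channel per glove sensor); the choice of the 'db4' wavelet, the decomposition level, the sub-band energy features, and all function names are our illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a wavelet-feature + SVM gesture classifier.
# Assumptions (not from the paper): gestures are resampled to a fixed
# length, a 'db4' wavelet at level 3 is used, and the features are the
# energies of the wavelet sub-bands of each sensor channel.
import numpy as np
import pywt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wavelet_features(gesture: np.ndarray) -> np.ndarray:
    """gesture: (n_samples, n_sensors) time series -> 1-D feature vector."""
    feats = []
    for channel in gesture.T:                          # one sensor at a time
        coeffs = pywt.wavedec(channel, "db4", level=3)
        feats.extend(np.sum(c ** 2) for c in coeffs)   # sub-band energies
    return np.asarray(feats)

def train_classifier(gestures, labels):
    """gestures: list of (n_samples, n_sensors) arrays; labels: sign names."""
    X = np.stack([wavelet_features(g) for g in gestures])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)                                 # multi-class SVM
    return clf
```

A trained classifier of this shape maps each new glove recording to one sign label; concatenating consecutive labels then yields the word sequence of a sentence.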
Currently, hand movements are commonly measured by motion-tracking techniques based on digital cameras. These systems offer interesting results in terms of accuracy, but suffer from a small active range and from drawbacks related to portability. To overcome these problems, new measuring systems have been developed, in particular those based on sensory gloves (i.e. gloves equipped with sensors capable of converting hand movements into electrical signals).
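As a concrete illustration of what "converting hand movements into electrical signals" involves, the sketch below maps raw ADC readings from resistive flex sensors to joint angles through a per-sensor linear calibration; the two calibration postures (hand flat, hand fully bent) and all numeric values are hypothetical, not taken from the paper.

```python
# Hedged sketch: per-sensor linear calibration of flex-sensor ADC counts
# to joint angles. Calibration postures and values are hypothetical.
import numpy as np

class FlexSensorCalibration:
    def __init__(self, adc_flat: np.ndarray, adc_bent: np.ndarray,
                 angle_bent_deg: float = 90.0):
        # ADC counts recorded with the hand flat (0 deg) and fully bent.
        self.adc_flat = adc_flat
        self.gain = angle_bent_deg / (adc_bent - adc_flat)

    def to_angles(self, adc: np.ndarray) -> np.ndarray:
        """Raw ADC counts (one per sensor) -> joint angles in degrees."""
        return (adc - self.adc_flat) * self.gain

# Example with two hypothetical sensors:
cal = FlexSensorCalibration(adc_flat=np.array([510.0, 498.0]),
                            adc_bent=np.array([820.0, 790.0]))
print(cal.to_angles(np.array([660.0, 650.0])))  # mid-flexion angles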
The first sensory gloves on the market [2-3] hindered movement, were uncomfortable, and could measure only a very small number of Degrees Of Freedom (DOF) of the human hand. Nowadays, commercial sensory gloves are quite light, comfortable and capable of measuring up to 22 DOFs [4], covering flexion-extension and abduction-adduction movements of the fingers and the spatial arrangement of the wrist. However, their cost generally remains too high (tens of thousands of dollars) for them to be widely adopted in everyday scenarios. Thus, new sensory gloves have been created by research groups all over the world.
... While humans can easily solve their hand manipulation tasks using skills and experience [1], object manipulation algorithms of robotic hand control systems must have human-like manipulation and real-time operation capabilities, even if the degrees of freedom (DoF) of robotic hands are often fewer than the 27 DoFs of a human hand. To this aim, sensory gloves have become widespread in recent years, equipped with resistive flex sensors and/or inertial sensors to recognize hand gestures [3], and in particular applied to help deaf-mute people by discriminating the Italian Sign Language [4], or the American Sign Language with capacitive strain sensors that add hand proprioception and tactile sensing [5], as well as sequences of hand gestures [6], or to measure the finger joint angles [7]. More advanced electronic gloves, also equipped with tactile sensors, provide realistic human-hand-like features for the prosthetic hands of hand amputees [8], haptic feedback for remote object sensing [9,10], or gestural control in medical telepresence [11]. ...
... Although each sEMG electrode usually covers more than one muscle, as will be shown below, the Myo armband has been successfully applied to classify predetermined patterns for hand gesture recognition [17,18], or as a human-robot interface [19], reaching performances similar to those of data gloves equipped with integrated flex and inertial (IMU) sensors [4]. ...
Article
This study compares the simultaneous measurements of finger joint angles obtained with a myoelectric armband (Myo), composed of eight surface electromyography (sEMG) sensors mounted on an elastic support, and a data glove, equipped with ten flex sensors on the metacarpal and proximal finger joints. The flexion angles of all finger joints in four hand postures, that is, open hand, closed hand, and grasping two 3D-printed molds of different sizes, were measured with a manual goniometer and used to create, for each finger joint, a linear model from the measurement of the corresponding flex sensor in an electronic glove, as well as a regression model from the simultaneous measurements of the 8 sEMG sensors of the Myo armband. The regression models were extracted by testing different algorithms from the Matlab Regression Learner Toolbox. The performance of the models of the two wearable devices was evaluated and compared by applying a standard test, taken from the literature on sensory gloves, to evaluate the repeatability, reproducibility and reliability of finger joint measurements. These results were also compared with those reported by published works that followed the same standard test using data gloves based on different sensing technologies. This work aims to demonstrate that sEMG armbands can be applied to register the static posture of each finger joint with almost the same accuracy as sensory gloves.
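As a rough illustration of the two model families this study compares, the sketch below fits a per-joint linear model from one flex-sensor reading and a per-joint regression from eight simultaneous sEMG channels. It is written under our own assumptions about data layout, uses scikit-learn instead of the Matlab Regression Learner Toolbox the authors actually used, and all numeric values are placeholders.

```python
# Hedged sketch of the two per-joint models compared above.
# Data layout, values, and the use of scikit-learn are our assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

# Glove model: one flex-sensor reading -> one joint angle (4 postures).
flex = np.array([[0.10], [0.45], [0.70], [0.95]])  # normalized sensor values
angle = np.array([5.0, 40.0, 65.0, 88.0])          # goniometer angles (deg)
glove_model = LinearRegression().fit(flex, angle)

# Armband model: 8 simultaneous sEMG channels -> the same joint angle.
semg = np.random.rand(4, 8)                        # placeholder sEMG features
armband_model = LinearRegression().fit(semg, angle)

print(glove_model.predict([[0.5]]))                # predicted joint angle
print(armband_model.predict(semg[:1]))
```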
... The sensory glove has found very diverse applications, among which the real-time control of a granular sound synthesis process (Costantini et al., 2010), the monitoring of hand rehabilitation (Park et al., 2009; Mohan et al., 2013) or clinical hand assessment (Williams et al., 2000), human-computer interaction (Saggio et al., 2012; Berlia and Santosh, 2014), sign-to-language conversion (Cavallo and Saggio, 2014), objective surgical skill assessment (Saggio et al., 2015), serious games for the training of rescue teams (Mugavero et al., 2014), tele-robotic manipulation for astronauts (Saggio and Bizzarri, 2014), and so on. As far as we know, this is the first time a sensory glove has been utilized for Air Force or Army purposes. ...
Conference Paper
An electronic demonstrator was designed and developed to automatically interpret the signalman's arm-and-hand visual signals. It was based on an "extended" sensory glove, i.e. a glove equipped with sensors to measure finger/wrist/forearm movements, electronic circuitry to acquire, condition and feed the measured data to a personal computer, SVM-based routines to classify the visual signals, and a graphical interface to represent the classified data. The aim was to furnish the Italian Air Force with a tool for ground-to-ground or ground-to-air communication that does not depend on the vehicle drivers or aircraft pilots having a full view of the signalman, and that provides information redundancy to improve airport security.
Conference Paper
In aircraft scenarios the proper interpretation of communication meanings is mandatory for security reasons. In particular, some communications between the signalman and the pilot rely on arm-and-hand visual signals, which can be prone to misunderstanding in some circumstances, for instance because of low visibility. This work equips the signalman with wearable sensors to collect data related to the signals, and interprets such data by means of an SVM classification. In this way, the pilot can count both on his/her own evaluation and on the automatic interpretation of the visual signal (redundancy increases safety), and all the communications can be stored for later querying if necessary. Results indicate that the system performs with a classification accuracy ranging from 94.11 ± 5.54% to 97.67 ± 3.53%, depending on the type of gesture examined.
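Accuracy figures of the mean ± standard-deviation kind quoted above are typically obtained by averaging over cross-validation folds; the following is a minimal sketch of how such numbers can be computed (data and names are placeholders, not the authors' code).

```python
# Hedged sketch: classification accuracy as mean +/- std over
# stratified cross-validation folds. Data and names are placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

def report_accuracy(X: np.ndarray, y: np.ndarray) -> str:
    """X: (n_gestures, n_features); y: gesture labels (>= 5 per class)."""
    scores = cross_val_score(SVC(kernel="rbf"), X, y,
                             cv=StratifiedKFold(n_splits=5))
    return f"{100 * scores.mean():.2f} +/- {100 * scores.std():.2f} %"
```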