Abstract—Everyday communication between deaf and hearing people remains a largely unsolved problem, despite the considerable progress that Sign Language Recognition (SLR) has made in recent years. Here we address this problem by proposing a portable, low-cost system that has proven effective in translating gestures into written or spoken sentences. The system relies on a home-made sensory glove to measure hand gestures, and on Wavelet Analysis (WA) and a Support Vector Machine (SVM) to classify hand movements. In particular, we devoted our efforts to translating the Italian Sign Language (LIS, Lingua Italiana dei Segni), applying WA for feature extraction and an SVM for the classification of one hundred different dynamic gestures. The proposed system is light and neither intrusive nor obtrusive, so that deaf people can easily use it in everyday life, and it has demonstrated valid results in terms of sign-to-word conversion.
Index Terms—Machine intelligence, pattern analysis, human-computer interaction, support vector machines, data glove, sign language recognition, Italian Sign Language, LIS
I. INTRODUCTION
Similarly to spoken languages, Sign Languages (SLs) are complete and powerful forms of communication, adopted by millions of deaf people all over the world. SLs differ among regions and states: American Sign Language (ASL), Japanese Sign Language (JSL), German Sign Language (GSL), Lingua Italiana dei Segni (LIS, the Italian Sign Language), etc. However, every SL commonly relies on the interpretation of gestures and postures, which also play a fundamental role in non-verbal communication. SL comprehension is generally limited to a restricted part of the population, so deaf people remain largely excluded from social interactions with hearing persons, and body-language/non-verbal communication is mainly confined to “feelings” rather than conscious understanding. These are the main reasons why a system for “Automatic” Sign Language Recognition (A-SLR) is welcome, and why a growing effort is being devoted to realizing it [1]. A-SLR could allow deaf people to communicate without limitations, could assign unambiguous interpretations to non-verbal communication, and could form the basis of a new kind of human-computer interaction, since the most natural way of interacting with a computer would be through speech and gestures, rather than the currently adopted interfaces such as keyboard and mouse. In particular, the integration of A-SLR with automatic text-writing or speech-synthesis modules can furnish a “speaking aid” to deaf people.
Our purpose is to realize a system capable of measuring static and dynamic human postures and classifying them as “words” organized into “sentences”, in order to “translate” SLs into written or spoken languages. To this aim, a great challenge comes from measuring human postures with acquisition devices that are both comfortable and easy to use. In particular, we focus our attention on hand postures and movements, since SL is mostly, even if not exclusively, based on them: SL is made of hand gestures plus body and facial expressions, but the latter are not strictly fundamental.
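As a rough illustration of the measure-then-classify pipeline just described, the sketch below extracts single-level Haar wavelet features from a synthetic finger-flexion signal and assigns it to the nearest class centroid. This is only a toy stand-in, not the method of this paper: the signals, the single-level Haar feature choice, and the nearest-centroid classifier are assumptions for illustration (the actual system uses Wavelet Analysis together with an SVM).

```python
import math

def haar_features(signal):
    """Single-level Haar wavelet transform: return the approximation
    coefficients, a coarse 'shape' descriptor of the gesture signal
    (signal length is assumed even)."""
    return [(signal[i] + signal[i + 1]) / math.sqrt(2)
            for i in range(0, len(signal) - 1, 2)]

def classify(features, centroids):
    """Nearest-centroid classifier: pick the label whose centroid is
    closest (Euclidean distance) to the feature vector. This is a
    simple stand-in for the SVM used in the paper."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Hypothetical per-class feature centroids, e.g. averaged from
# recorded glove signals for two signs.
centroids = {
    "sign_A": haar_features([0.1, 0.2, 0.8, 0.9, 0.8, 0.9, 0.1, 0.2]),
    "sign_B": haar_features([0.9, 0.8, 0.2, 0.1, 0.2, 0.1, 0.9, 0.8]),
}

sample = [0.12, 0.18, 0.85, 0.88, 0.79, 0.91, 0.11, 0.19]  # resembles sign_A
print(classify(haar_features(sample), centroids))  # -> sign_A
```

In a full system, each recognized label would then be emitted as a word and concatenated into a written or spoken sentence.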
Currently, hand movements are commonly measured by motion-tracking techniques based on digital cameras. These systems offer interesting results in terms of accuracy, but suffer from a small active range and from drawbacks related to portability. To overcome these problems, new measuring systems have been developed, in particular those based on sensory gloves (i.e. gloves equipped with sensors capable of converting hand movements into electrical signals).
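A common way such a glove converts a finger bend into an electrical signal is a resistive flex sensor placed in a voltage divider and sampled by an ADC. The sketch below inverts that chain, turning a raw ADC code back into an estimated bend angle; all component values (3.3 V supply, 10-bit ADC, 47 kΩ divider resistor, a 25–100 kΩ sensor range over 0–90°) are illustrative assumptions, not specifications of the glove described in this paper.

```python
V_SUPPLY = 3.3       # supply voltage (V), assumed
ADC_MAX = 1023       # 10-bit ADC full scale, assumed
R_FIXED = 47_000.0   # fixed divider resistor (ohm), assumed
R_FLAT = 25_000.0    # sensor resistance when the finger is flat, assumed
R_BENT = 100_000.0   # sensor resistance at a 90-degree bend, assumed

def adc_to_angle(adc_code):
    """Estimate the finger bend angle (degrees) from a raw ADC code.

    The flex sensor forms the upper leg of a voltage divider, so the
    ADC sees  V = V_SUPPLY * R_FIXED / (R_FIXED + R_sensor).
    We invert that relation for R_sensor, then map the resistance
    linearly onto the 0-90 degree range, clamping to valid angles."""
    v = adc_code * V_SUPPLY / ADC_MAX
    r_sensor = R_FIXED * (V_SUPPLY - v) / v
    angle = 90.0 * (r_sensor - R_FLAT) / (R_BENT - R_FLAT)
    return max(0.0, min(90.0, angle))
```

One such channel per measured joint, sampled continuously, yields the multi-channel gesture signals that the recognition stage classifies.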
The first sensory gloves on the market [2-3] obstructed movement, were uncomfortable, and could measure only a very small number of Degrees Of Freedom (DOFs) of the human hand. Nowadays, commercial sensory gloves are quite light, comfortable and capable of measuring up to 22 DOFs [4], covering flexion-extension and abduction-adduction movements of the fingers and the spatial arrangement of the wrist. However, the cost generally remains too high (tens of thousands of dollars) for wide adoption in everyday scenarios. Thus, new sensory gloves have been created by research groups all over the world.
Conversion of Sign Language to Spoken
Sentences by Means of a Sensory Glove
Pietro Cavallo
Electronic Engineering Department, University of Tor Vergata, Roma, Italy
Email: p.cavallo85@gmail.com
Giovanni Saggio
Electronic Engineering Department, University of Tor Vergata, Roma, Italy
Email: saggio@uniroma2.it
Manuscript received August 27, 2013; revised January 21, 2014;
accepted January 23, 2014.
JOURNAL OF SOFTWARE, VOL. 9, NO. 8, AUGUST 2014
© 2014 ACADEMY PUBLISHER
doi:10.4304/jsw.9.8.2002-2009