South Asian Studies
A Research Journal of South Asian Studies
Vol. 30, No. 2, July–December 2015, pp. 367-379.
Speak Pakistan: Challenges in Developing Pakistan Sign
Language using Information Technology
Nabeel Sabir Khan
University of Management and Technology, Lahore, Pakistan.
Adnan Abid
University of Management and Technology, Lahore, Pakistan.
Kamran Abid
University of the Punjab, Lahore, Pakistan.
Uzma Farooq
University of Management and Technology, Lahore, Pakistan.
Muhammad Shoaib Farooq
University of Management and Technology, Lahore, Pakistan.
Hamza Jameel
University of Management and Technology, Lahore, Pakistan.
Abstract
Sign Language (SL) is the primary communication channel among deaf people. It is a visual-gestural language, and deaf people use it as their main medium of communication. Different countries have their own sign languages: the United States of America has American Sign Language (ASL), China has Chinese Sign Language (CSL), India has Indian Sign Language (ISL), and, similarly, Pakistan has Pakistan Sign Language (PSL). Most developed nations have addressed the needs of their hearing-impaired people by launching Information Technology projects that reduce the gap between deaf and hearing people. In Central and South Asia, considerable work has been conducted on ISL and CSL. However, Pakistan Sign Language remains linguistically under-investigated, in the absence of any structured information about the language's contents and grammar, or tools and services for communication. Hence, the major contributions of this research are to highlight, in the light of the existing literature, the challenges in bridging this communication gap for the Pakistani deaf community, and to propose an Information Technology based architectural framework identifying the major components needed to build applications that may help bridge the gap between the deaf and hearing people of the country.
Keywords: Pakistan Sign Language; Sign Language; American Sign
Language; Sign Language Translation; PSL; HamNoSys.
Introduction
The most common and natural mode of human communication is speech, but a large part of the world's population suffers from hearing or speech disabilities, which creates a huge communication gap between these people and the hearing community. Sign language exists to bridge this gap: it is composed of gestures, or visual representations, of several different types.
Spoken languages vary from region to region, and about 6,909 spoken languages exist in the world (Linguistic Society of America, 2015). Similarly, gestural languages (sign languages) vary from region to region, and about 138 sign languages are known today (Pakistan Sign Language, 2015). Among them, American Sign Language (ASL) and British Sign Language (BSL) are used in English-speaking countries, while Indian Sign Language (ISL) and Chinese Sign Language (CSL) are also among the well-known sign languages. The grammars of these gesture-based languages differ from the grammars of spoken or written languages: gesture-based languages are built on shapes and concepts, whereas spoken and written languages are built on words and grammar rules, so the two types of languages have significantly different grammatical structures (Debevc et al., 2014) (Marschark et al., 2004).
The field of Information Technology (IT) strongly influences human life, and many tools, technologies, and devices have been built to help mankind resolve different problems. Similarly, people have worked on bridging the gap between deaf and hearing people with the help of IT. The idea behind such IT tools and services is to enable a deaf person to communicate with a hearing person and vice versa. There are numerous scenarios where such services can minimize or eliminate this communication gap.
Motivational Example: Consider a deaf person who wants to read an online newspaper written in ordinary English or Urdu. He would not be able to do so, as he does not understand the grammatical structure of English/Urdu. However, if the same content is shown to him using gestures in the respective sign language, he will understand it easily. An application that converts the written text into sign language, and in turn renders that sign language through an avatar performing the gestures, can resolve this issue.
The rest of the article is organized as follows: the next section explains the general concepts related to sign language. This is followed by the challenges identified in the light of the current state of the art in enabling the Pakistani deaf community to communicate with hearing people in scenarios like the one presented in the motivational example. The major components of an Information Technology infrastructure to bridge this gap are presented in the next section. Lastly, we present the conclusion and possible future directions for this research.
Concepts Involved in Sign Language
A sign language uses manual communication and body language to convey meaning, as opposed to acoustically conveyed sound patterns (Sign Language, 2015); deaf people communicate using signs. Each particular sign represents a distinct letter, word, or phrase of the corresponding spoken language. For example, the sign for the word "What" in different sign languages is shown in Figure 1.
Figure 1: The sign for "What" in different sign languages (ASL, BSL, PSL).
Gestures
Sign languages use gestures to form the sign for a particular unit, e.g. a letter, word, or phrase. These gestures are decomposed into manual and non-manual gestures. Manual gestures consist of hand shape, movement, location (Hall et al., 2015), (Al Qodri et al., 2012), and orientation, whereas non-manual gestures consist of facial expression, head movement, posture and orientation of the body (Al Qodri et al., 2012), shoulder raising, and mouthing, as shown in Figure 2. Non-manual markers are mostly used along with manual markers.
Figure 2: Components of Gestures
Manual gestures have two attributes, hands and dynamism, as shown in Figure 3. Hands denotes the number of hands participating in performing the gesture, whereas dynamism has two possible values, namely static and dynamic. A static sign is performed in one continuous flow with a constant movement of the hands, whereas a dynamic sign involves variable movements of manual and non-manual markers, i.e. it is performed as a combination of two or more signs. Therefore, a manual gesture can be single-handed static, single-handed dynamic, double-handed static, or double-handed dynamic in nature.
Figure 3: Attributes of Manual Features
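To make the taxonomy of Figures 2 and 3 concrete, the following minimal Python sketch shows one possible in-memory encoding of a sign's manual and non-manual attributes. The class and field names are illustrative assumptions, not part of the paper's framework.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Hands(Enum):           # number of hands used (Figure 3)
    SINGLE = 1
    DOUBLE = 2

class Dynamism(Enum):        # static vs. dynamic signs (Figure 3)
    STATIC = "static"
    DYNAMIC = "dynamic"

@dataclass
class ManualGesture:
    """Manual attributes of a sign (Figure 2)."""
    hand_shape: str
    movement: str
    location: str
    orientation: str
    hands: Hands = Hands.SINGLE
    dynamism: Dynamism = Dynamism.STATIC

@dataclass
class NonManualGesture:
    """Non-manual markers (Figure 2); all optional, since they
    usually accompany rather than replace manual markers."""
    facial_expression: Optional[str] = None
    head_movement: Optional[str] = None
    body_posture: Optional[str] = None
    shoulder_raising: bool = False
    mouthing: Optional[str] = None

@dataclass
class Sign:
    gloss: str                                    # e.g. "WHAT"
    manual: ManualGesture
    non_manual: Optional[NonManualGesture] = None
```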
Sign Writing Notations
Like spoken languages, sign languages can be written down with the help of sign writing notation systems. Different notation systems exist for representing signs, but to date no notation has been accepted as a standard for sign languages. The main advantages of using sign language notation systems are:
- They help represent the words of a natural language in a format that can later be used in translating text into sign language animations.
- They make the translation system scalable.
- They reduce storage space, since signs can be stored as text rather than as images or animations.

Many notation systems are used for sign language writing, among which the four most widely used are Stokoe, Gloss, SignWriting, and HamNoSys (Hutchinson, 2012). The basic representation of these widely used sign writing notation symbols is shown in Figure 4.
Figure 4: Widely used sign writing notation symbols: (a) Stokoe, (b) Gloss, (c) SignWriting, (d) HamNoSys.
A comparison of these notations is provided in Table 1. The values in the table clearly show that, among these four notations, HamNoSys is the most suitable choice because it provides the following advantages over the others:
- It is not dependent on any particular sign language, so any sign language gesture can be represented in this notation.
- It can represent both the manual and non-manual features of a particular sign.
- It is used for both academic and research purposes.
- Its representation is linear, so instead of storing pictures we can store sign language gestures in a textual format, which minimizes storage requirements.
- It can be represented in both ASCII and Unicode, so gestures are easy to represent and store in a computer.

We therefore take HamNoSys as the standard sign writing notation in the rest of the article.
Table 1: Comparison of widely used Sign Writing Notations

| Sign Writing Notation System | Sign Language Dependent | Non-Manual Features Support | Objective | Arrangement | Computer Compatibility |
|---|---|---|---|---|---|
| Stokoe | Yes | No | Dictionary or Academic | Linear | Custom font or ASCII codes |
| GLOSS | Yes | Yes | Academic | Linear | Custom font or ASCII codes |
| SignWriting | No | Some | Public use | Pictorial | ASCII or Unicode |
| HamNoSys | No | Yes | Academic | Linear | Custom font or Unicode |
HamNoSys Sign Writing Notation System
The Hamburg Notation System (HamNoSys) was introduced by the University of Hamburg, Germany, in 1985 (Sign Language Phonology, 2015). It has its own predefined notations and phonetic transcriptions for the definition of signs and gestures, shown in Figure 4(d). It provides a way to write signs in a computer-understandable format that is easy to interpret and process. HamNoSys originated from the Stokoe writing notation system, and it provides an alphabetic system to define different sign language parameters such as hand shape, hand movement, hand location, and hand orientation (Symbol Font for ASL, 2015).
HamNoSys has four basic components, including three sub-components, as shown in Figure 5. The components shown in solid boxes, Initial Configuration and Action/Movement, are mandatory for the representation of a sign in HamNoSys. The Initial Configuration component comprises handshape, location, and orientation. The attributes shown in boxes with dotted borders, Symmetry Operator and Non-Manual Features, are optional. From Figure 5 we can conclude that the HamNoSys notation is capable of representing all components of gestures, manual and non-manual, as described in Figure 2. The Initial Configuration component can represent all manual gesture attributes, including hand shape, movement, location, and orientation. The Non-Manual Features component can represent facial expression, head tilting, mouthing, and shoulder raising. The Symmetry Operator component represents whether the gesture is single-handed or double-handed, as explained in Figure 3. The last component, Action/Movement, represents the dynamism of the gesture, i.e. whether the gesture is static or dynamic.
Figure 5: HamNoSys Components of Sign Gesture
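As a hypothetical illustration of Figure 5, the sketch below encodes a HamNoSys entry with its two mandatory components (Initial Configuration and Action/Movement) and its two optional ones (Symmetry Operator and Non-Manual Features); the linear `to_string` rendering reflects the property that lets HamNoSys be stored as plain text. All names are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InitialConfiguration:
    """Mandatory component (solid box in Figure 5)."""
    handshape: str        # placeholder for a HamNoSys handshape symbol
    location: str
    orientation: str

@dataclass
class HamNoSysSign:
    # Mandatory components
    initial_configuration: InitialConfiguration
    action: str                               # movement of the sign
    # Optional components (dotted boxes in Figure 5)
    symmetry_operator: Optional[str] = None   # single- vs. double-handed
    non_manual: Optional[str] = None          # facial expression, mouthing, ...

    def to_string(self) -> str:
        """Linear, text-only rendering: the property that allows
        HamNoSys gestures to be stored compactly in ASCII/Unicode."""
        parts = [self.symmetry_operator,
                 self.initial_configuration.handshape,
                 self.initial_configuration.location,
                 self.initial_configuration.orientation,
                 self.action,
                 self.non_manual]
        return "".join(p for p in parts if p)
```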
Current State-of-the-art and Challenges
There are more than a hundred sign languages in the world today. Generally, every country has its own sign language: American, British, Japanese, and Indian sign languages exist, and, similarly, Pakistan's sign language is called Pakistan Sign Language (PSL). According to an estimate by the World Health Organization, over 5% of the world's population, more than 360 million people, have disabling hearing loss, of whom 328 million are adults and 32 million are children (World Health Organization, 2015). A significant part of the deaf population is young, and sign language recognition systems could turn them into useful human resources for certain positions. According to data given by the Population Census Organization of Pakistan, more than 3.3 million people in the country are disabled, and among them 0.25 million suffer from hearing loss, which amounts to 7.4% of the overall disabled population (Population Census Organization, 2015).
Research on sign language gesture recognition has been conducted in many different ways. The approaches are either hardware-based, using data gloves, Kinect, or other sensor-based devices (Mohandes et al., 2007), or based on computer vision, using digital cameras and image processing algorithms (Rashid et al., 2009) (Khan et al., 2014). Some elementary work related to Pakistan Sign Language has been conducted in both directions. A system named "Boltay Haath" aims at recognizing Pakistan Sign Language using data gloves as its interface (Alvi et al., 2004), and a vision-based approach to recognizing Pakistan Sign Language alphabets has been presented by (Khan et al., 2014).

Machine translation has recently gained popularity and is widely used to convert natural language (NL) text into a given sign language. An early work in this regard was conducted by Grieve-Smith (Grieve-Smith, 2002). Similarly, a linguistic analysis of the issues that may occur during such machine translation has been presented by (Speers, 2002). A grammatical approach based on synchronous tree adjoining grammar for translating English into ASL has been proposed by (Zhao et al., 2000), and further enhanced by (Huenerfauth, 2004). Another dimension involves statistical machine translation of sign languages; work on German Sign Language using statistical machine translation has been presented in (Bungeroth et al., 2004), while the TGT system translates Polish texts into sign language (Suszczanska et al., 2002). Example-based translation is yet another variant that has been explored for sign languages. Similarly, in South Africa the South African Sign Language Machine Translation (SASL-MT) project was conducted to enable the deaf community of the country with a machine translation system from English to SASL (Van Zijl et al., 2003).
It is clear from the literature review that the people and governments of many countries have worked, in many different ways, to enable their hearing-impaired populations to communicate with hearing people. Translation from sign language to natural language, and vice versa, has been the core idea, and it has been implemented in many ways. Unfortunately, no significant work has been done for Pakistan Sign Language in this regard, and there is great room for research at various levels. Based on the approaches discussed earlier, we have laid down the following challenges that should be addressed to help the Pakistani deaf community communicate and use sources of information:
- Lack of linguistic information: PSL has not been properly investigated linguistically.
- Absence of a standard sign corpus covering the different granularity units of the language.
- No standard grammar rules for sentence creation in PSL.
- No sign writing notation exists for PSL.
- Automating all of this requires evaluation, and no evaluation corpus exists to test such systems.
Proposed Framework
This research presents a framework that centralizes all standardized gestures, their equivalent HamNoSys representations, and the grammar rules of PSL, and then uses these rules to convert Urdu/English text into sign animations performed by an avatar. The architecture of the proposed framework is presented in Figure 6, which shows all major components of the system and their interactions with one another. The diagram shows that the system consists of different layers, including a Storage Layer, a Middleware Layer, and an Application Layer.
Figure 6: Architectural Framework
Components and services
The system is divided into the following major components:
- Storage
- Middleware
- Services and APIs

The Storage component involves the following two sub-components:
- Standard Sign Bank
- Evaluation Corpus
Standard Sign Bank: In order to make translation possible from text to sign language, and vice versa, we need a corpus containing the gestures for all words along with their HamNoSys representations. Like natural languages, sign language varies from region to region, so the same word may have different gestures in PSL. We will store each word along with all of its possible HamNoSys representations and designate one of them as the standard for that word. For this standardization we will consult Pakistan Sign Language experts and interpreters, so that the standard sign bank can be accepted by all PSL researchers and developers, and so that services and applications built on it will be accepted by the whole deaf community of the country. While building this data bank we will also ensure that the different granularities of data units, i.e. letters, words, and phrases, are incorporated with the consent of PSL experts. This standard corpus also helps when translating from sign to text: even if a person uses a non-standard gesture during communication, the system is still capable of mapping that gesture to the appropriate HamNoSys. It is pertinent to mention that we are not storing images or animations for the signs, but a digitized textual format of each gesture, namely its HamNoSys. This makes the system storage-efficient and comprehensive, and it supports translation from sign language to natural language, and vice versa.
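A minimal sketch of such a sign bank is given below, built on plain Python dictionaries; the placeholder strings stand in for real HamNoSys transcriptions, and the class and method names are illustrative assumptions, not part of the paper.

```python
from collections import defaultdict

class SignBank:
    """Standard Sign Bank sketch: every word keeps all of its regional
    HamNoSys variants, with one variant designated as standard. Only
    textual HamNoSys is stored -- no images or animations."""

    def __init__(self):
        self._variants = defaultdict(list)   # word -> [hamnosys, ...]
        self._standard = {}                  # word -> standard hamnosys
        self._reverse = {}                   # any hamnosys -> word

    def add(self, word, hamnosys, standard=False):
        self._variants[word].append(hamnosys)
        self._reverse[hamnosys] = word
        if standard or word not in self._standard:
            self._standard[word] = hamnosys

    def standard_sign(self, word):
        """Lookup used during NL -> PSL translation."""
        return self._standard.get(word)

    def word_for(self, hamnosys):
        """Lookup used during PSL -> NL translation: even a
        non-standard regional gesture maps back to the right word."""
        return self._reverse.get(hamnosys)

# Hypothetical usage with placeholder HamNoSys strings:
bank = SignBank()
bank.add("teacher", "<hamnosys-variant-1>", standard=True)
bank.add("teacher", "<hamnosys-variant-2>")
assert bank.word_for("<hamnosys-variant-2>") == "teacher"
```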
Evaluation Corpus: Research needs to be evaluated, and such evaluation requires tests. This invites us to create several gold-standard corpora for testing and evaluating all the services and tools that we intend to develop. The corpus contains sentences of all possible categories of the language, along with their correct translations according to the grammar rules, so that the accuracy of the provided services and tools can be measured and their results improved.
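The paper does not fix an evaluation metric, but word error rate (WER) is one standard choice for scoring translation output against such a gold-standard corpus; a self-contained sketch follows.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance over words, normalized by the reference
    length -- a common way to score a translator against a
    gold-standard evaluation corpus."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Illustrative: gold PSL gloss vs. hypothetical system output.
print(word_error_rate("I from Lahore", "I Lahore"))  # ~0.33
```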
Middleware Layer
This layer is the core of the whole framework. It consists of the following components:
- Language Translator
  - Natural Language to Pakistan Sign Language translation
  - Pakistan Sign Language to Natural Language translation
- Grammar
  - PSL Grammar
  - Natural Language Grammar (Urdu, English), plug-in based
- Sentence Manipulator
  - Filter (stop-word removal / stemmer / lemmatizer)
  - Plugger (adds missing words)
- Video to HamNoSys Generator

The Language Translator module consists of two sub-modules: the Natural Language (NL) to Sign Language (SL) converter, which converts text into sign language animation, and the SL to NL converter, which converts a video of SL into NL text. These sub-components are explained below.
Natural Language to Pakistan Sign Language Translation
This component is responsible for translating an English/Urdu sentence into its equivalent sign language sentence. The module takes a sentence as input and, using the external tagger service, performs morphological analysis of the sentence and converts it into lexical units. It then communicates with the NL grammar component to verify the sentence structure and generate a parse tree. The generated parse tree is fed to the Filter sub-component of the Sentence Manipulator, which removes stop words such as the articles a, an, and the, as well as prepositions. The filtered tree becomes the input of the PSL grammar module, which converts the filtered NL sentence into an equivalent PSL sentence. The PSL sentence is then matched against the Sign Bank to obtain the HamNoSys for each tagged lexical unit. Finally, the generated HamNoSys is passed to the external Avatar service to generate the sign animations for the input sentence.
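The end-to-end path just described can be summarized in the following sketch, where each step stands in for a framework component (tagger service, grammar modules, Sign Bank, avatar service); the stop-word set and the trivial reordering are placeholders, since the real PSL grammar has yet to be defined.

```python
STOP_WORDS = {"a", "an", "the", "is", "am", "are"}   # illustrative only

def nl_to_psl(sentence: str, sign_bank) -> list:
    """Sketch of the NL -> PSL path, assuming a sign_bank object with
    a standard_sign(word) lookup (see the SignBank sketch above)."""
    # 1. Tagger service: morphological analysis into lexical units
    units = sentence.lower().split()       # placeholder for real tagging
    # 2. NL grammar: verify structure and build a parse tree (omitted)
    # 3. Filter: drop stop words that PSL does not use
    content = [u for u in units if u not in STOP_WORDS]
    # 4. PSL grammar: reorder into a valid PSL sentence,
    #    e.g. "I am from Lahore" -> "I from Lahore" (Table 2)
    psl_sentence = content                 # identity reordering here
    # 5. Sign Bank: look up the standard HamNoSys for each unit
    hamnosys = [sign_bank.standard_sign(u) for u in psl_sentence]
    # 6. The avatar service would render each HamNoSys as animation
    return hamnosys
```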
Pakistan Sign Language to Natural Language Translation: To make two-way communication possible, this module takes a video as input. The video is passed to the external video processing service, which preprocesses it and segments all the gestures it contains. Each segmented gesture is passed to the Video to HamNoSys Generator, which generates its corresponding HamNoSys. The corresponding words for each gesture are then fed to the Plugger module, which adds the missing words according to the grammar of the natural language using certain algorithms, after which the SL to NL module generates the appropriate sentence.
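The reverse path can be sketched symmetrically; every argument below is a stand-in for a framework component (video segmentation service, Video to HamNoSys Generator, Sign Bank, Plugger), so the function only fixes the order of operations described above.

```python
def psl_video_to_nl(video, segmenter, vhg, sign_bank, plugger):
    """Sketch of the PSL -> NL path, under the assumption that the
    four collaborators are supplied as callables/objects."""
    words = []
    for gesture_clip in segmenter(video):    # one clip per gesture
        hamnosys = vhg(gesture_clip)         # visual features -> HamNoSys
        word = sign_bank.word_for(hamnosys)  # HamNoSys -> word
        if word:
            words.append(word)
    # The plugger restores the function words PSL omits, e.g.
    # ["I", "from", "Lahore"] -> "I am from Lahore".
    return plugger(words)
```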
PSL Grammar: The grammar module is subdivided into two sub-modules: PSL grammar, and NL grammar for English and Urdu. Grammar is the building block of any language's sentence structure, and every spoken language has some grammatical structure for sentence formation. Besides being an important component in its own right, the grammatical structure helps in verifying the syntax of a given sentence. Like all sign languages of the world, PSL has its own grammatical rules for the construction of valid PSL sentences. The PSL grammar module is used by the NL to PSL converter to convert an NL sentence into its equivalent PSL sentence.
NL grammar of English and Urdu: This sub-component will be implemented as a plug-in for the Urdu and English languages. Its major task is to check the validity of the natural language sentence. Since it is a plug-in, it can be replaced with a plug-in for any other language for which a grammar exists, so it can also work for regional languages like Punjabi, Pashto, etc. To understand the differences between the sentence structures of PSL and English, consider the examples shown in Table 2.
Table 2: Comparison of the structure of English sentences with PSL equivalents.

| English sentence and structure | PSL sentence and structure |
|---|---|
| I am from Lahore | I from Lahore / I from Lahore I / From Lahore I |
| I am a teacher | I teacher I / I teacher / Teacher I |
The variations among valid PSL sentence formats make it obvious that grammar is the most important module for accurate conversion from one language to another. To the best of our knowledge, no such formal grammar exists for PSL.
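Since no formal PSL grammar exists yet, the sketch below hard-codes just one Table 2 pattern as a rewrite rule, purely to illustrate what a grammar-driven reordering step would do; it is not a proposal for the actual grammar.

```python
def english_to_psl_order(sentence: str) -> str:
    """Illustrative rewrite rule: drop the copula and articles,
    keeping the content-word order (one Table 2 pattern)."""
    dropped = {"am", "is", "are", "a", "an", "the"}
    words = [w for w in sentence.split() if w.lower() not in dropped]
    return " ".join(words)

print(english_to_psl_order("I am from Lahore"))  # -> "I from Lahore"
print(english_to_psl_order("I am a teacher"))    # -> "I teacher"
```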
Sentence Manipulator: This component transforms a natural language sentence into a Pakistan Sign Language sentence, and vice versa. It involves two sub-components, namely Filter and Plugger. The Filter reads the parse tree of a natural language sentence and removes stop words and other unnecessary details that deaf people do not use when reading or writing English/Urdu. The Plugger works in the opposite direction, from a Pakistan Sign Language sentence to a natural language sentence: it uses certain algorithmic techniques to add the missing information to the PSL sentence and transform it into an equivalent NL sentence.
Video to HamNoSys Generator (VHG): The video processing service tracks and segments all the gestures in a sign language video. The gestures are then given to the VHG, which identifies the hand shape, orientation, palm location, and movements, and maps these features to the appropriate HamNoSys representation. This HamNoSys is then used to generate words, and the NL grammar, along with the Plugger, converts those words into valid NL sentences.
External Services: There are certain services that are external to the system.
Tagger: The Language Translator uses the tagger service to break a sentence into its morphological structure and to tag the input sentence with parts of speech. The tagged result is further used by the grammar component to perform syntactic and semantic analysis of the input.
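The paper leaves the tagger as an external service; for English input, an off-the-shelf tagger such as NLTK's (an assumption here, not named by the authors) could fill this role, as the sketch below shows.

```python
import nltk

# One-time downloads of the tokenizer and tagger models:
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def tag(sentence: str):
    """Part-of-speech tags for the Language Translator, e.g.
    'I am from Lahore' ->
    [('I', 'PRP'), ('am', 'VBP'), ('from', 'IN'), ('Lahore', 'NNP')]."""
    return nltk.pos_tag(nltk.word_tokenize(sentence))
```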
Video Segmentation: The Sign to NL module uses this service to split the input video into segments based on gesture identification. That is, it creates a separate video segment for each identified gesture, which is then processed further by the VHG to generate HamNoSys.
Avatar Generation: This service is used when converting text/audio to sign language. It takes the HamNoSys of the tagged words from the Sign Bank and then uses that HamNoSys to generate an avatar animation for each sign.
Services and APIs: The proposed middleware, together with the external services, can be used by developers to build applications for deaf people, for instance newspaper reading, mobile message reading, and email writing. Such applications can help the deaf community bridge the communication gap and obtain better job opportunities.
Conclusion and Future Work
In this research we have presented a literature review of the work done in different countries to enable their hearing-impaired populations to communicate and to help them access information in many different ways. The role of Information Technology in achieving such milestones cannot be denied. The analysis shows the various dimensions in which people have been working to resolve these issues. The unfortunate part is that no significant work exists for Pakistan Sign Language. We have highlighted this gap and the challenges which should be addressed to help the Pakistani deaf community. Apart from emphasizing the challenges, as another principal contribution of this research we have proposed a general architectural framework which can help translate English or Urdu text/voice into animations of Pakistan Sign Language, and vice versa.
In the future, we intend to address these challenges by implementing the components of the proposed architectural framework. To this end, we intend to start with text to sign language translation, followed by defining and refining the grammatical structure of Pakistan Sign Language. We shall also develop a standard corpus for Pakistan Sign Language at all granularity levels, including letters, words, and phrases. We also plan to develop APIs and services for developers and the deaf community, respectively. Lastly, we shall develop an evaluation corpus for testing all these services and tools for their effectiveness and accuracy.
References
Al Qodri Maarif, H., Akmeliawati, R., & Bilal, S. (2012, July). Malaysian Sign Language
database for research. In Computer and Communication Engineering (ICCCE), 2012
International Conference on (pp. 798-801). IEEE.
Alvi, A. K., Azhar, M. Y. B., Usman, M., Mumtaz, S., Rafiq, S., Rehman, R. U., & Ahmed,
I. (2004). Pakistan sign language recognition using statistical template
matching. International Journal of Information Technology, 1(1), 1-12.
Bungeroth, J., & Ney, H. (2004, May). Statistical sign language translation. In Workshop
on representation and processing of sign languages, LREC (Vol. 4).
Debevc, M., Milošević, D., & Kožuh, I. (2014). A Comparison of Comprehension
Processes in Sign Language Interpreter Videos with or without Captions. PloS
one, 10(5), e0127577-e0127577.
Grieve-Smith, A. B. (2002). SignSynth: A sign language synthesis application using
Web3D and Perl. In Gesture and Sign Language in Human-Computer Interaction (pp.
134-145). Springer Berlin Heidelberg.
Hall, M. L., Ferreira, V. S., & Mayberry, R. I. (2015). Syntactic Priming in American Sign
Language. PloS one, 10(3), e0119611.
Huenerfauth, M. (2004, May). A multi-path architecture for machine translation of English
text into American Sign language animation. In Proceedings of the Student Research
Workshop at HLT-NAACL 2004 (pp. 25-30). Association for Computational
Linguistics.
Hutchinson, J. (2012). Literature Review: Analysis of Sign Language Notations for Parsing
in Machine Translation of SASL.
Khan, N., Shahzada, A., Ata, S., Abid, A., Khan, Y., & Shoaib Farooq, M. (2014). A Vision Based Approach for Pakistan Sign Language Alphabets Recognition. Pensee, 76(3).
Linguistic Society of America, http://www.linguisticsociety.org/content/how-many-languages-are-there-world (Retrieved on 6-August-2015)
Marschark, M., Sapere, P., Convertino, C., Seewagen, R., & Maltzen, H. (2004).
Comprehension of sign language interpreting: Deciphering a complex task
situation. Sign Language Studies, 4(4), 345-368.
Mohandes, M., & Buraiky, S. (2007). Automation of the Arabic sign language recognition
using the powerglove. AIML Journal, 7(1), 41-46.
Pakistan Sign Language, http://psl.org.pk/faq/ (Retrieved on 10-July-2015)
Population Census Organization, Govt. of Pakistan, http://www.census.gov.pk/index.php (Retrieved on 12-June-2015)
Rashid, O., Al-Hamadi, A., Panning, A., & Michaelis, B. (2009). Posture recognition using combined statistical and geometrical feature vectors based on SVM.
Sign Language, https://en.wikipedia.org/wiki/Sign_language (Retrieved on 2-June-2015)
Sign Language Phonology, http://www.ling.fju.edu.tw/phono/sign.htm (Retrieved on 25-May-2015)
Speers, D. (2002). Representation of American sign language for machine translation.
Suszczańska, N., Szmal, P., & Francik, J. (2002, February). Translating Polish texts into sign language in the TGT system. In 20th IASTED International Multi-Conference, Applied Informatics AI (pp. 282-287).
Symbol Font for ASL, http://aslfont.github.io/Symbol-Font-For-ASL/ways-to-write.html
(Retrieved on 20-June-2015)
Van Zijl, L., & Barker, D. (2003, February). South African sign language machine translation system. In Proceedings of the 2nd International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa (pp. 49-52). ACM.
World Health Organization, http://www.who.int (Retrieved 15-March-2015)
Zhao, L., Kipper, K., Schuler, W., Vogler, C., Badler, N., & Palmer, M. (2000). A machine
translation system from English to American Sign Language. In Envisioning machine
translation in the information future (pp. 54-67). Springer Berlin Heidelberg.
Biographical Note
Nabeel Sabir Khan University of Management and Technology, Lahore,
Pakistan.
Adnan Abid University of Management and Technology, Lahore, Pakistan.
Kamran Abid University of the Punjab, Lahore, Pakistan.
Uzma Farooq University of Management and Technology, Lahore, Pakistan.
Muhammad Shoaib Farooq University of Management and Technology, Lahore,
Pakistan.
Hamza Jameel University of Management and Technology, Lahore, Pakistan.
________________________
Sign synthesis (also known as text-to-sign) has recently seen a large increase in the number of projects under development. Many of these focus on translation from spoken languages, but other applications include dictionaries and language learning. I will discuss the architecture of typical sign synthesis applications and mention some of the applications and prototypes currently available. I will focus on SignSynth, a CGI-based articulatory sign synthesis prototype I am developing at the University of New Mexico. SignSynth takes as its input a sign language text in ASCII-Stokoe notation (chosen as a simple starting point) and converts it to an internal feature tree. This underlying linguistic representation is then converted into a three-dimensionala nimation sequence in Virtual Reality Modeling Language (VRML or Web3D), which is automatically rendered by a Web3D browser.