Figure 1 - uploaded by Al Hussain Akoum
The 26 hand signs of the American Sign Language (ASL) alphabet.


Source publication
Article
Full-text available
In a general overview, signed language is a technique used by deaf people for communication. It is a three-dimensional language that relies on visual gestures and moving hand signs that represent letters and words. Gesture recognition has always been a relatively challenging subject that is adherent to the individual on both academic and demo...

Contexts in source publication

Context 1
... target of this effort is to construct a system that can classify particular hand gestures and extract the corresponding letters. This dynamic system is based on the American Sign Language alphabet (Figure 1). ...
Context 2
... output pixel comprises the average value of all pixels of the 3-by-3 neighboring region. The procedural steps for 2D median filtering are summarized in the following chart (Figure 10). ...
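The averaging and median steps described above can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the paper's code; the function names are my own:

```python
import numpy as np

def filter_3x3(image, reduce_fn):
    """Apply a 3x3 neighborhood filter (e.g. mean or median) to a 2D grayscale
    image. Border pixels are handled by replicating the image edges."""
    padded = np.pad(image, 1, mode="edge")
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = reduce_fn(padded[i:i + 3, j:j + 3])
    return out

def mean_filtered(image):
    # Each output pixel is the average of its 3x3 neighborhood.
    return filter_3x3(image, np.mean)

def median_filtered(image):
    # Each output pixel is the neighborhood median, which suppresses
    # isolated salt-and-pepper noise without blurring edges as much.
    return filter_3x3(image, np.median)
```

The median variant removes a single noisy pixel entirely, while the mean only spreads it out, which is why median filtering is usually preferred before binarization.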
Context 3
... can be a series of binary, grayscale, or true-color images. The advantage of this technique is, on one hand, its ability to specify the word's length, minimizing spelling mistakes, and on the other hand, its preservation of all hand gestures, thus retaining the hand's shape and form throughout the procedure (Figure 11). ...
Context 4
... edge recognition algorithms include the Sobel, Canny, Prewitt, Roberts, and Fuzzy logic methods. Choosing the suitable algorithm for the task is as important as specifying the image's threshold, which determines the detector's sensitivity (Figure 12). ...
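To illustrate the role of the threshold, here is a minimal Sobel edge detector in NumPy. This is an illustrative sketch, not any of the cited implementations; only the kernel constants are the standard Sobel masks:

```python
import numpy as np

# Standard Sobel kernels for horizontal and vertical intensity gradients.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def filter2d(image, kernel):
    """Naive 3x3 sliding-window correlation with edge-replicated padding."""
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def sobel_edges(image, threshold):
    """Binary edge map: True where the gradient magnitude exceeds the
    threshold. A lower threshold makes the detector more sensitive."""
    gx = filter2d(image, KX)
    gy = filter2d(image, KY)
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold
```

Running this on a step-edge image shows only the boundary columns surviving the threshold; raising the threshold thins or removes weak edges.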
Context 5
... filter: As all edge detection results are inevitably affected by image noise, it is vital to filter out the noise to avoid false detections (Figure 13). This step slightly smooths the image to diminish the effects of noise during the procedure. ...
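The noise-suppression step can be sketched as a Gaussian blur. The kernel size and sigma below (5 and 1.4) are common illustrative defaults, not values taken from the paper:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.4):
    """Normalized 2D Gaussian kernel; sigma controls the blur strength."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return kernel / kernel.sum()  # weights sum to 1, preserving brightness

def smooth(image, size=5, sigma=1.4):
    """Blur a 2D grayscale image so that isolated noise pixels do not
    trigger false edges in the detection step that follows."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out
```

Because the kernel is normalized, flat regions pass through unchanged while a single noise spike is spread thin and falls below the edge threshold.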
Context 6
... If the pixel gradient is between the two thresholds and the pixel is adjacent to an edge pixel, it will also be considered part of the edge. This step makes a significant difference in the size of the detected edges (Figure 14). The figure above clearly shows the difference in boundary size even though the image is binary, meaning it only has edges where pixels change from logical 1 (white) to logical 0 (black). ...
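The double-threshold step with edge tracking can be sketched as follows. This is a hedged reimplementation of standard Canny-style hysteresis, not the paper's code; `low` and `high` are the two gradient thresholds:

```python
import numpy as np

def hysteresis(magnitude, low, high):
    """Double thresholding with edge tracking. Pixels at or above `high`
    are strong edges; pixels between `low` and `high` are weak and kept
    only if they connect (8-neighborhood) to a strong edge, grown
    iteratively until no more weak pixels can be promoted."""
    strong = magnitude >= high
    weak = (magnitude >= low) & ~strong
    edges = strong.copy()
    changed = True
    while changed:
        changed = False
        # Mark every pixel that has at least one edge pixel among its
        # 8 neighbors, using shifted views of a zero-padded copy.
        padded = np.pad(edges, 1)
        neighbor = np.zeros_like(edges)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                neighbor |= padded[1 + di:1 + di + edges.shape[0],
                                   1 + dj:1 + dj + edges.shape[1]]
        promote = weak & neighbor & ~edges
        if promote.any():
            edges |= promote
            changed = True
    return edges
```

A weak pixel chain attached to a strong pixel is absorbed link by link, while an isolated weak pixel is discarded, which is exactly why this step changes the size of the resulting edges.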
Context 7
... and for even more accuracy, a third level of matching is applied. (Figure 15). ...

Similar publications

Article
Full-text available
Hand gesture recognition is a crucial task for the automated translation of sign language, which enables communication for the deaf. This work proposes the usage of a magnetic positioning system for recognizing the static gestures associated with the sign language alphabet. In particular, a magnetic positioning system, which is comprised of several...
Preprint
Full-text available
Sign Language is mainly used by deaf (hard of hearing) and mute people to exchange information within their own community and with other people. It is a language where people use hand gestures to communicate, as they cannot speak or hear. Sign Language Recognition (SLR) deals with recognizing hand gesture acquisition and continues till text...
Article
Full-text available
Hand gestures are used by speech-impaired persons to communicate easily with others. Hand gesture is a visual language, different from spoken language, but it serves the same purpose. Image segmentation and feature extraction algorithms are used to recognize the hand gestures of deaf people. In this paper the proposed syst...
Conference Paper
Full-text available
In this paper we present a virtual reality based visual tool to generate 3D gesture entities performed by three-dimensional humanoid avatars compliant with HAnim 2.0 in VRML/X3D-based environments. We propose a new open and flexible representation of gestural entity transcriptions in a reusable XML format. This work represents th...
Article
Full-text available
Technologies for pattern recognition are used in various fields. One of the most relevant and important directions is the use of pattern recognition technology, such as gesture recognition, in socially significant tasks, to develop automatic sign language interpretation systems in real time. More than 5% of the world’s population—about 430 million...

Citations

... Using contour assessment, the gesture and its shape can then be detected and recognized after acquiring the contour. A contour is constituted by connected edges; Sobel, Canny, Prewitt, Roberts, and Fuzzy logic techniques are popular edge recognition algorithms (Akoum and Mawla 2015). In this paper, the Canny edge detection technique was implemented through the OpenCV image processing library to obtain the contour of the hand shape. ...
Article
Full-text available
Humans maintain and develop interrelationships through various forms of communication, including verbal and nonverbal communications. Gestures, which constitute one of the most significant forms of nonverbal communication, convey meaning through diverse forms and movements across cultures. In recent decades, research efforts aimed at providing more natural, human-centered means of interacting with computers have garnered increasing interest. Technological advancements in real-time, vision-based hand motion recognition have become progressively suitable for human–computer interaction, aided by computer vision and pattern recognition techniques. Consequently, we propose an effective system for recognizing hand gestures using time-of-flight (ToF) cameras. The hand gesture recognition system outlined in the proposed method incorporates hand shape analysis, as well as robust fingertip and palm center detection. Furthermore, depth sensors, such as ToF cameras, enhance finger detection and hand gesture recognition performance, even in dark or complex backgrounds. Hand shape recognition is performed by comparing newly recognized hand gestures with pre-trained models using a YOLO algorithm-based convolutional neural network. The proposed hand gesture recognition system is implemented in real-world virtual reality applications, and its performance is evaluated based on detection performance and recognition rate outputs. Two distinct gesture recognition datasets, each emphasizing different aspects, were employed. The analysis of results and associated parameters was conducted to evaluate the performance and effectiveness. Experimental results demonstrate that the proposed system achieves competitive classification performance compared to conventional machine learning models evaluated on standard evaluation benchmarks.
... Peijun Bao et al. [7] proposed a deep CNN to directly recognize the gesture from the whole image without using any region proposal algorithm or sliding-window mechanism. On the other hand, recognition of gestures with both written words and audible speech was proposed [8] using an image matching technique and the built-in "Find" function. Preceding analysis of different sign languages such as American, Indian [9], Arabic, and Italian has motivated researchers to work with Bangla Sign Language. ...
Article
Full-text available
Hand gestures can play an important role in Computer Vision as well as in communication through sign language, providing an interaction between human and machine. Deafness is a degree of hearing loss at which a person cannot understand speech and spoken language, so sign language users face many difficulties communicating with hearing people. Real-time hand gesture recognition is proposed in our research. Our proposed CNN model is used to communicate with deaf people and has achieved an accuracy of 94.6% in recognizing different gestures. General Terms: Hearing impaired people, Computer Vision, Hand gesture recognition.
... The proposed system helps in dimensionality reduction. Alhussain Akoum and Nour Al Mawla [9] discussed steps to take input, recognize and analyze hand gestures, and then translate them into text. In this approach, a digital camera provides the input, and the background is eliminated using thresholding and filtering. ...
Article
Sign language is the basic communication method among hearing disabled and speech disabled people. To express themselves, they require an interpreter or motion sensing devices who/which converts sign language in a few of the standard languages. However, there is no system for those who speak in the Telugu language and hence they are forced to speak in the national language over the regional language of their culture along with the same issues of cumbersome hardware or need for an interpreter. This paper proposes a system that detects hand gestures and signs from a real-time video stream that is processed with the help of computer vision and classified with object detection YOLOv3 algorithm. Additionally, the labels are mapped to corresponding Telugu text. The style of learning is transfer learning, unlike conventional CNNs, RNNs or traditional Machine Learning models. It involves applying a pre-trained model onto a completely new problem to solve the related problem statement and adapts to the new problem’s requirements efficiently. This requires lesser training effort in terms of dataset size and greater accuracy. It is the first system developed as a sign language translator for Telugu script. It has given the best results as compared to the existing systems. The system is trained on 52 Telugu letters, 10 numbers and 8 frequently used Telugu words.
... As the total process is manual, it takes much time, and there are no fixed procedures to create the sample (training and testing) images [3]-[11]. Therefore every system needs an automated process, whereas current gesture recognition systems use a manual approach to testing. ...
... Therefore every system needs an automated process, whereas current gesture recognition systems use a manual approach to testing. After analyzing a few gesture recognition systems, we found some problems and identified some solutions [3]-[7] by scrutinizing the experimental results. We present a dynamic process to test a system, aimed especially at image-processing software testing. The research team developed a model based on the common parameters for converting the sample image into different testing cases with those parameters. ...
... We experimented with five well-known gesture recognition systems to learn how the systems work, how gesture images are taken, and what conditions or bugs are not considered in each system. The first system, proposed by Akoum [3], identifies gesture activities for a hand-sign system by extracting attributes from images to create a signature from identified key points. Jarman et al. ...
Article
Full-text available
In the field of information technology, the gesture recognition system plays a very essential role. As it has achieved vast importance, it is mandatory to test a recognition system to ensure its quality by identifying bugs in the software. In our research, we suggest a dynamic testing method for gesture recognition software using dynamic image pattern generation with augmentation. The automated software testing framework is a set of processes to create new test cases for properly testing image processing software. The research intention is to generate automated test cases following a standard process, which helps to increase the performance and efficiency of the gesture recognition system. We have built the framework to properly test gesture recognition systems already on the market and report results (accuracy and defects). In our research, the team follows two software testing standards: ISO/IEC/IEEE 29119-3, to define the process for testing software, and ISO/IEC/IEEE 29119-5, to implement the techniques for software testing. We propose this framework with five major parameters: noise, rotation, background, contrast, and scale, which are the most used with every gesture recognition system. Our developed framework's phases are used to generate new test cases based on the existing gesture recognition systems' data. We work with five systems commonly used for gesture recognition experiments, and we provide a testing report with total accuracy and defects by comparing existing well-known systems' data. In the final result, our system suggests an analysis report based on the testing results and indicates what improvements an existing system needs, such as considering noised or differently scaled images, to build a robust system. GUB JOURNAL OF SCIENCE AND ENGINEERING, Vol 7, Dec 2020 P 42-50
... So we want to solve the problem by providing an automated software testing framework. To develop an automated system, we have analyzed a few gesture recognition systems [5]-[9]. After investigating the systems, we have identified some common parameters that are used to model the proposed testing framework for generating testing images as test cases. ...
... The system developed by Akoum et al. [5] is an ASL hand gesture recognition approach that extracts features from images, then finds key points and creates signatures. For testing their system, they considered rotation, contrast, and scaling of test images. ...
... We want to contribute by proposing a software testing model that will evaluate the efficiency of gesture recognition systems. By analyzing some gesture recognition systems [5]-[9], we have identified some common parameters that are considered when creating sample images: rotation, contrast, size, noise, and background are the five common features. ...
... Sign language provides a big aid and convenience in human life [1] and is used especially by deaf persons, as well as by other people to add weight to conversation. Visual representation by hands delivers a meaningful message to others [2]. Sign language consists of three forms: facial expressions, hand gestures, and body postures [1], [2]. ...
... Visual representation by hands delivers a meaningful message to others [2]. Sign language consists of three forms: facial expressions, hand gestures, and body postures [1], [2]. In our daily life, we mostly use body postures and facial expressions to deliver meaningful information to others. ...
... As we increase the cell size, the number of features also increases. In our feature extraction technique, we use cell sizes [2,2], [4,4], and finally [8,8]. 3) Statistical Feature Measurements: Based on the above two techniques, HOG and LBP, we use some additional statistical techniques for better feature extraction, as shown in Table II. We use mean, standard deviation, variance, and skewness as additional feature extraction measures. ...
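The four statistical measures mentioned can be computed from a pixel region as a small feature vector to append to the HOG/LBP descriptors. This sketch uses the standard moment-based skewness formula and is not taken from the cited paper:

```python
import numpy as np

def statistical_features(region):
    """Return (mean, std, variance, skewness) of a pixel region,
    usable as extra features alongside HOG/LBP descriptors."""
    x = np.asarray(region, dtype=float).ravel()
    mean = x.mean()
    var = x.var()
    std = np.sqrt(var)
    # Skewness: third central moment normalized by std^3;
    # zero for symmetric data, guarded against division by zero.
    skew = 0.0 if std == 0 else float(np.mean((x - mean) ** 3) / std ** 3)
    return mean, std, var, skew
```

For a symmetric region such as [1, 2, 3], the skewness is exactly zero, so the measure captures only asymmetry in the intensity distribution.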
... Gesture segmentation is the first step of the whole process; the segmentation results play a crucial role in the final recognition. Gesture feature extraction is an intermediate step based on the acquisition of a variety of feature vectors [2]. Gesture recognition is the last and most important step of the whole process. ...
Article
Full-text available
A static gesture recognition algorithm is proposed based on a recursive graph of the upper triangular image texture, motivated by the low accuracy and robustness of existing algorithms. Firstly, the fingertip localization method based on contour curvature is used to obtain the palm region and then the gesture contour model is established. Secondly, a recurrence plot of the gesture contour sequence is built, which is constructed using the central point and the starting point coordinates. Finally, the texture recognition algorithm is applied to calculate the normalized distance between the recurrence plots of the gesture. The experimental results show that the proposed algorithm can achieve higher recognition accuracy under varying complex backgrounds and illumination. At the same time, when the gesture is in rotation, translation, or scaling, the algorithm has high robustness with a small amount of computation and high efficiency.
Preprint
Full-text available
Sign language is a visual language that uses hand motions, changes in hand shape, and track information to convey meaning. It is the primary mode of communication for those with hearing and language impairments. The use of sign language for communication is limited, despite the fact that sign language recognition can help a large number of such persons deal with regular people. As a result, there is a need to create a more comfortable approach for people with hearing and language impairments to learn and work in order to improve their lives. Therefore, the basic idea behind this article is to make the communication between normal human beings and deaf people much easier. In order to recognize static gestures associated with sign language alphabet and a few commonly used words, we conducted a comprehensive research study employing the hand tracking technique Mediapipe and a gesture classification model based on Support Vector Machine (SVM). The results of the experiments are validated using Recall, F1 Score and Precision. Based on the validated results, we recommend the application of the discussed techniques for such communication. The suggested methods have high generalization qualities and deliver a classification accuracy of around 99 percent on 26 alphabet letters, numerical digits, and some regularly used words.
Article
Full-text available
Sign language, often termed “dactylology,” is a mode of communication for those who are hard of hearing. With over 2.5 billion people projected to have hearing loss by 2050, there are very few efficient real-time sign language translation (SLT) applications present today despite extensive research in the domain. The main purpose of the systematic literature review is to analyze existing research in SLT systems and obtain results that will help in building an efficient and improved SLT system. A total of 125 different research articles within the time frame of 2015–2022 were identified. The study analyzes each paper against nine main research questions. The results obtained show the unique strengths and weaknesses of the different methods used, and while the reviewed papers showed significant results, there is still room for improvement in the implementations. This systematic literature review helps in identifying suitable methods to develop an efficient SLT application, identifies research gaps in this domain, and simultaneously indicates recent trends in the field of SLT systems.