Contexts in source publication

Context 1
... this research, a black glove fitted with 6 markers is designed and constructed. Five markers are attached to the fingertips, leaving one marker at the palm center. A subject wearing the black shirt and the black glove is captured with two USB cameras installed on the desk. The 640x480-pixel color images acquired from the two cameras are then used for 3D extraction of the marker coordinates using the well-known DLT algorithm. Figure 1 shows our block ...
Context 2
... 640x480-pixel color images acquired from the two cameras are then used for 3D extraction of the marker coordinates using the well-known DLT algorithm. Figure 1 shows our block diagram. ...
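For illustration, here is a minimal NumPy sketch of the two-view DLT triangulation step described above. The 3x4 projection matrices P1 and P2 are assumed to come from a prior stereo calibration; this is a generic sketch, not the paper's implementation.

```python
import numpy as np

def triangulate_dlt(P1, P2, uv1, uv2):
    """Recover one 3D marker position from its pixel coordinates in two views
    using the Direct Linear Transformation (DLT)."""
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize to (x, y, z)
```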

Similar publications

Conference Paper
Full-text available
Finite Element Analysis (FEA) takes into account the material and geometrical properties to solve a given problem. The theory of idealization for a fuselage section, on the other hand, does not incorporate the effect of material properties and sectional length. Thus, FEA is more time-consuming but also gives more accurate results. The theory of ideal...

Citations

... Among all, American Sign Language (ASL) (see Fig. 1) is one of the most important and widely adopted languages for the deaf. Consequently, many researchers are emphasizing the development of automatic ASL translation systems [1]. Motivated by the above reasons, we propose a system capable of automatically recognizing the alphabet, i.e., the 26 letters. ...
Technical Report
Full-text available
Speech impairment is a physical disability that impairs a person's capacity to communicate verbally and audibly. People affected by it employ sign language and various alternative forms of communication. Hand gestures are the most common means of communication for the deaf, particularly in American Sign Language (ASL), which is used to represent the alphabet, numbers, and often complete words. To help the deaf communicate with everyone, a system is needed to overcome the barrier between hearing-impaired people and everyone else. Several studies on the challenges of sign language translation have been carried out. Our research proposes a model for translating the ASL alphabet from static images using deep learning methods. The suggested approach starts with pre-processed images taken from a webcam. To classify a gesture as a letter, we use a Convolutional Neural Network (CNN) architecture as a base model. In addition, we apply transfer learning on the same dataset using Keras models pre-trained on the ImageNet dataset.
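As a hedged illustration of the transfer-learning setup this report describes, here is a minimal Keras sketch. The backbone (MobileNetV2), input size, and classifier head are assumptions for the example, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # the ASL alphabet

# Frozen ImageNet backbone (MobileNetV2 chosen here only for illustration).
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False  # reuse ImageNet features; train only the new head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10) on pre-processed webcam crops.
```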
... It was usually applied to solve problems that required data processing and knowledge representation. For example, Tangsuksant et al. [12] investigated static American Sign Language alphabet recognition using a feedforward backpropagation ANN. Their research returned an average accuracy of 95% across repeated experiments. ...
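A small sketch of the kind of feedforward backpropagation classifier this citation refers to, using scikit-learn. The six-dimensional feature vectors and random labels below are placeholders, not the cited paper's data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 6))      # placeholder: one feature per glove marker
y = rng.integers(0, 26, 500)  # placeholder static-alphabet labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
# One hidden layer trained with backpropagation (SGD solver).
clf = MLPClassifier(hidden_layer_sizes=(32,), activation="logistic",
                    solver="sgd", learning_rate_init=0.01, max_iter=2000)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```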
Article
Full-text available
The deaf-mute population always feels helpless when they are not understood by others and vice versa. This is a big humanitarian problem that needs a localised solution. To solve it, this study implements a convolutional neural network (CNN) with a Convolutional Block Attention Module (CBAM) to recognise Malaysian Sign Language (MSL) from images. Two different experiments were conducted for MSL signs, using CBAM-2DResNet (2-Dimensional Residual Network) with the “Within Blocks” and “Before Classifier” placements. Metrics such as accuracy, loss, precision, recall, F1-score, confusion matrix, and training time were recorded to evaluate the models’ efficiency. The experimental results showed that the CBAM-ResNet models achieved good performance on MSL sign recognition tasks, with accuracy rates of over 90% and little variation. The CBAM-ResNet “Before Classifier” models are more efficient than the “Within Blocks” CBAM-ResNet models. Thus, the best trained CBAM-2DResNet model was chosen to develop a real-time sign recognition system translating sign language to text and text to sign language, for easy communication between deaf-mutes and other people. All experimental results indicated that the “Before Classifier” CBAM-ResNet models are more efficient at recognising MSL, and this is worth future research.
... It was usually applied to solve problems that required data processing and knowledge representation. For example, Tangsuksant, Adhan, and Pintavirooj [9] investigated static American Sign Language alphabet recognition using a feedforward backpropagation ANN. Their research returned an average accuracy of 95% across repeated experiments. ...
Article
Full-text available
The deaf-mute population always feels helpless when they are not understood by others and vice versa. To fill this gap, this study implements a CNN-based network with a Convolutional Block Attention Module (CBAM) to recognise Malaysian Sign Language in image and video recognition. The study created 2071 videos for 19 dynamic signs. Two different experiments were conducted for static and dynamic signs, using CBAM-2DResNet and CBAM-3DResNet with the ‘Within Blocks’ and ‘Before Classifier’ placements. Metrics such as accuracy, loss, precision, recall, F1-score, confusion matrix, and training time were recorded to evaluate the models’ efficiency. Results showed that the CBAM-ResNet models performed well in image and video recognition tasks, with recognition rates of over 90% and little variation. The CBAM-ResNet ‘Before Classifier’ models are more efficient than the ‘Within Blocks’ models. Thus, the best trained CBAM-2DResNet model was chosen to develop a real-time sign recognition system translating sign language to text and text to sign language, to ease communication between deaf-mutes and other people. All experimental results indicated the efficiency of the ‘Before Classifier’ CBAM-ResNet models in recognising Malaysian Sign Language, and the approach is worth future research.
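For readers unfamiliar with CBAM placement, here is a rough Keras sketch of a CBAM block and the ‘Before Classifier’ configuration these abstracts compare. The backbone, input size, and 19-class head are illustrative assumptions, not the study's exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

def cbam(x, reduction=8, kernel_size=7):
    """Convolutional Block Attention Module: channel attention, then spatial."""
    ch = x.shape[-1]
    # Channel attention: a shared MLP scores avg- and max-pooled descriptors.
    mlp = tf.keras.Sequential([
        layers.Dense(ch // reduction, activation="relu"),
        layers.Dense(ch),
    ])
    avg = mlp(layers.GlobalAveragePooling2D()(x))
    mx = mlp(layers.GlobalMaxPooling2D()(x))
    ca = layers.Reshape((1, 1, ch))(
        layers.Activation("sigmoid")(layers.Add()([avg, mx])))
    x = layers.Multiply()([x, ca])
    # Spatial attention: a 7x7 conv over stacked channel-wise avg/max maps.
    avg_map = tf.reduce_mean(x, axis=-1, keepdims=True)
    max_map = tf.reduce_max(x, axis=-1, keepdims=True)
    sa = layers.Conv2D(1, kernel_size, padding="same", activation="sigmoid")(
        layers.Concatenate()([avg_map, max_map]))
    return layers.Multiply()([x, sa])

# "Before Classifier": one CBAM between the ResNet backbone and the head,
# rather than inside every residual block ("Within Blocks").
backbone = tf.keras.applications.ResNet50(
    include_top=False, weights=None, input_shape=(112, 112, 3))
feat = cbam(backbone.output)
pooled = layers.GlobalAveragePooling2D()(feat)
out = layers.Dense(19, activation="softmax")(pooled)  # 19 signs assumed
model = tf.keras.Model(backbone.input, out)
```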
... Among these languages, American Sign Language is the first to be analyzed. For the review of this language, the literature in [15,16] was used. ...
Article
Full-text available
In the course of our research work, the American, Russian and Turkish sign languages were analyzed. A program for recognizing the Kazakh dactylic sign language using machine learning methods was implemented. A dataset of 5000 images was formed for each gesture, and gesture recognition algorithms such as Random Forest, Support Vector Machine and Extreme Gradient Boosting were applied, while two data types were combined into one database, which changed the architecture of the system as a whole. The quality of the algorithms was also evaluated. The work was carried out because scientific research on recognition systems for the Kazakh dactyl sign language is currently insufficient for a complete representation of the language. The Kazakh language has specific letters, and because of the peculiarities of the language's spelling, problems arise when developing recognition systems for the Kazakh sign language. The results showed that the Support Vector Machine and Extreme Gradient Boosting algorithms are superior in real-time performance, while the Random Forest algorithm has high recognition accuracy. The classification accuracy was 98.86% for Random Forest, 98.68% for Support Vector Machine and 98.54% for Extreme Gradient Boosting. The quality evaluation of the classical algorithms also shows high scores. The practical significance of this work lies in the fact that scientific research on gesture recognition with the updated alphabet of the Kazakh language has not yet been conducted, and the results can be used by other researchers for further work on recognition of the Kazakh dactyl sign language, as well as by researchers engaged in the development of international sign language.
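A minimal sketch of the three-classifier comparison described above, using scikit-learn and XGBoost. The feature layout and the 42-class label space are placeholder assumptions, not the authors' dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
X = rng.random((1000, 63))     # placeholder features, e.g. 21 landmarks x 3
y = rng.integers(0, 42, 1000)  # placeholder labels (42 classes assumed)

models = [
    ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("SVM", SVC(kernel="rbf", C=10)),
    ("XGBoost", XGBClassifier(n_estimators=200, eval_metric="mlogloss")),
]
for name, clf in models:
    # 5-fold cross-validated accuracy for each candidate classifier.
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```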
... The system was not constrained to sign language comprehension; it also left room for non-verbal interaction in Human-Robot Interaction (HRI) (Jalal et al. 2018). Tangsuksant et al. (2014) presented a feasible methodology for ASL recognition. They designed a glove with 6 different color markers. ...
... The sequences of areas were then used as the input to a feedforward backpropagation ANN for feature classification. The test results showed an average accuracy of 95% (Tangsuksant et al. 2014). In Bantupalli and Xie (2018), the objective was to build vision-based applications that translate Sign Language (SL) to text, thereby supporting communication between non-signers and signers. ...
Article
Full-text available
This study reviews sign language recognition systems based on different classifier techniques. Neural Network and Deep Learning-based classifiers have mostly been utilized to recognize different sign languages, and this survey reviews which classifier models best represent sign language recognition (SLR). We focus mainly on deep learning techniques and on Arabic sign language recognition systems. Numerous classifiers such as CNN, RNN, MLP, LDA, HMM, ANN, SVM, KNN and more have been applied to SLR systems. Each classifier is reviewed together with its recognition accuracy; the deep learning-based classifiers achieved the best recognition results compared with the other types of classifiers.
... RMSProp is used to improve the training of the network with learning rates that vary. RMSProp adapts automatically to the loss error function being optimized, as in (17) and (18) [17]. ...

... $v_t = \beta\, v_{t-1} + (1 - \beta)\, g_t^2$ (17), $w_{t+1} = w_t - \frac{\eta}{\sqrt{v_t} + \epsilon}\, g_t$ (18), where $v_t$ is the velocity of the weight, $\beta$ is the decay rate of the moving average, $g_t$ is the gradient and $\eta$ is the learning rate. This is useful for normalizing every weight update separately. ...
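For concreteness, a NumPy sketch of the update in (17)-(18) as reconstructed above.

```python
import numpy as np

def rmsprop_step(w, grad, v, lr=1e-3, beta=0.9, eps=1e-8):
    """One RMSProp update; v is the moving average of squared gradients."""
    v = beta * v + (1.0 - beta) * grad**2   # eq. (17)
    w = w - lr * grad / (np.sqrt(v) + eps)  # eq. (18)
    return w, v
```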
... Accuracy: The performance of this work is measured by the accuracy rate. In (19) the accuracy over the 24 static letters of ASL is computed as [18]: $\text{Accuracy} = \frac{\text{correctly classified samples}}{\text{total samples}} \times 100\%$ (19) ...
Article
Full-text available
American Sign Language (ASL) is a complex language that depends on a special standard of gesture marks. These marks are represented by the hands, assisted by facial expression and body posture. ASL is the main communication language of deaf and hard-of-hearing people in North America and other parts of the world. Deep learning is used for processing complex problems such as image recognition and computer vision, and it utilizes the Convolutional Neural Network (CNN) to solve the problem of ASL recognition. In this work, a comparative study is made between two CNN models: the first uses Stochastic Gradient Descent with Momentum (SGDM), while the second uses Root Mean Square Propagation (RMSProp). The proposed method includes resizing static ASL binary images with the bicubic function; in addition, good hand-boundary detection results are obtained using the Roberts edge detection method. Each CNN is used to classify all 24 static alphabets of ASL. Comparing the two models, SGDM optimizes the overall model and reduces oscillation of the recognition accuracy, whereas RMSProp shows a higher oscillation percentage that affects the accuracy curve. Training of the first model was slower than the second. Moreover, the first CNN model achieved better accuracy, 99.3% versus 83.3% for the second CNN model. Model 1 was able to classify all 24 static alphabets correctly, while model 2 classified only 20 letters correctly.
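A small scikit-image sketch of the preprocessing pipeline this abstract names, bicubic resizing followed by Roberts edge detection. The target size and binarization threshold are assumptions for the example.

```python
import numpy as np
from skimage.transform import resize
from skimage.filters import roberts

def preprocess(binary_hand, size=(64, 64), thresh=0.1):
    """Bicubic-resize a binary ASL hand image, then extract its boundary
    with the Roberts cross operator."""
    img = resize(binary_hand.astype(float), size, order=3,  # order=3: bicubic
                 anti_aliasing=True)
    edges = roberts(img)  # gradient magnitude from the 2x2 Roberts kernels
    return (edges > thresh).astype(np.uint8)
```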
... The system was classified using K-Nearest Neighbor and Support Vector Machine, and accuracies of 72.78% and 79.83% were achieved, respectively. Tangsuksant et al. [14] proposed a method for identifying American Sign Language alphabets. ...
Article
Full-text available
Despite the importance of sign language recognition systems, there is a lack of a systematic literature review and a classification scheme for them. This is the first identifiable academic literature review of sign language recognition systems. It provides an academic database of literature published between 2007 and 2017 and proposes a classification scheme for the research articles. Three hundred and ninety-six research articles were identified and reviewed for their direct relevance to sign language recognition systems. One hundred and seventeen research articles were subsequently selected, reviewed and classified. Each of the 117 selected papers was categorized on the basis of twenty-five sign languages and further compared along six dimensions (data acquisition techniques, static/dynamic signs, signing mode, single/double-handed signs, classification technique and recognition rate). The systematic literature review and classification process was verified independently. The findings indicate that the major research on sign language recognition has been performed on static, isolated and single-handed signs using a camera. Overall, it is hoped that the study may provide readers and researchers a roadmap to guide future research and facilitate knowledge accumulation and creation in the field of sign language recognition.
... In addition, it was implemented in software on a desktop computer (sequential in nature), and their hand descriptors contain a lot of data, which makes it computationally inefficient. Some researchers use more than one digital camera to obtain 3D perception [6]. Although it has an accuracy of 95%, the high computing requirements and the marked gloves make it impractical for real applications. ...
Article
Full-text available
This paper reports the design and analysis of an American Sign Language (ASL) alphabet translation system implemented in hardware using a Field-Programmable Gate Array. The system process consists of three stages, the first being communication with the neuromorphic camera (also called a Dynamic Vision Sensor, DVS) over the Universal Serial Bus protocol. The second stage is feature extraction from the events generated by the DVS, presenting the digital image processing algorithms developed in software, which aim to reduce redundant information and prepare the data for the third stage. The last stage is the classification of the ASL alphabet, achieved with a single artificial neural network implemented in digital hardware for higher speed. The overall result is a classification system based on the contours of the ASL signs, fully implemented in a reconfigurable device. The experimental results consist of a comparative analysis of the recognition rate among the alphabet signs using the neuromorphic camera, in order to prove the proper operation of the digital image processing algorithms. In experiments performed with 720 samples of 24 signs, a recognition accuracy of 79.58% was obtained.
... The system becomes bulky and heavy. Watcharin Tangsuksant et al. [8] translated ASL from static postures. They designed a glove with six different colored markers and developed an algorithm for alphabet classification. ...
Conference Paper
All over the world, deaf and mute people face many problems in communication. Speech- and hearing-impaired people experience various challenges in expressing themselves to others at public places. The objective of this paper is to provide a solution to this problem. To reduce the communication gap between common people and speech-impaired people, the proposed system is designed and implemented. The embedded system consists of wearable sensing gloves with flex sensors used to sense the motion of the fingers. Indian Sign Language is used for determining the words. Flex sensors and an accelerometer are mounted on the gloves; the sensed movements include angle tilt, rotation and direction changes. These signals are processed by the microcontroller, and a playback voice indicating the signs is generated through a speaker.
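A minimal MicroPython-style sketch of the sensing loop such a glove might run; the board, pin numbers, sampling rate, and matching logic are purely illustrative assumptions.

```python
# Illustrative MicroPython sketch for an ESP32-class board (pins assumed).
from machine import ADC, Pin
import time

flex = [ADC(Pin(p)) for p in (32, 33, 34, 35, 36)]  # one channel per finger

def read_fingers():
    # Higher ADC counts correspond to a more bent finger on this divider.
    return [ch.read() for ch in flex]

while True:
    bend = read_fingers()
    # In a full system, the bend pattern (plus accelerometer tilt) would be
    # matched against stored sign templates and a voice clip played back.
    print(bend)
    time.sleep_ms(100)
```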