The IMU orientation diagram: on the left, the X, Y, and Z axes of the MPU-6050, along which the accelerometer and gyroscope values are recorded; on the right, the body-axis to earth-axis conversion diagram.
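For context, here is a minimal sketch (not taken from the paper) of how body-frame accelerometer readings from the MPU-6050 could be rotated into the earth frame using roll, pitch, and yaw angles; the ZYX Euler convention and the sample values are assumptions.

```python
# A minimal sketch of rotating MPU-6050 body-frame accelerometer readings
# into the earth frame using roll/pitch/yaw angles (ZYX Euler convention).
import numpy as np

def body_to_earth(acc_body, roll, pitch, yaw):
    """Rotate a body-frame vector into the earth frame."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)

    # Rotation matrices about X (roll), Y (pitch), Z (yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])

    R = Rz @ Ry @ Rx          # body -> earth rotation
    return R @ np.asarray(acc_body)

# Example: a raw accelerometer sample (in g) with an assumed 10-degree pitch
acc_earth = body_to_earth([0.02, -0.01, 1.0], roll=0.0, pitch=np.radians(10), yaw=0.0)
print(acc_earth)
```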

Source publication
Article
Full-text available
Hand gesture recognition is one of the most widely explored areas under the human–computer interaction domain. Although various modalities of hand gesture recognition have been explored in the last three decades, in recent years, due to the availability of hardware and deep learning algorithms, hand gesture recognition research has attained renewed...

Citations

... Md. Ahasan Atick Faisal [14] examined in this research, in the context of deep learning, how well an economical data glove can classify hand motions. The authors developed a cost-effective data glove using five flex sensors, an inertial measurement unit, and a powerful controller for wireless networking and on-board computation. ...
Article
Full-text available
The development of prosthetic hands has advanced significantly in recent years, aiming to provide more intuitive and functional solutions for individuals with upper limb amputations. This research presents a novel approach to prosthetic hand control by integrating 3D hand gesture recognition with object manipulation and recognition capabilities. Our proposed system utilizes a pre-trained object recognition model, based on transfer learning, to enable the prosthetic hand to perceive and identify objects in its vicinity. The model leverages a vast dataset of objects, enabling the prosthetic hand to recognize a wide array of everyday items, thus enhancing its versatility. In addition, the prosthetic hand incorporates a sophisticated 3D hand gesture recognition system, allowing users to control the hand's movements and actions seamlessly. By recognizing specific gestures, such as grasping, lifting, and releasing, users can intuitively interact with their environment and perform various tasks with ease. This research leverages the synergy between gesture recognition and object recognition, creating a powerful framework for prosthetic hand control. The system's adaptability and versatility make it suitable for a broad range of applications, from assisting with daily tasks to enhancing the quality of life for individuals with upper limb amputations. The results of this study demonstrate the feasibility and effectiveness of combining 3D hand gesture recognition with pre-trained object recognition through transfer learning. This approach opens up new possibilities for enhancing prosthetic hand functionality and usability, ultimately improving the lives of those who rely on these devices for daily living. The proposed model combines the features of YOLO V7 object detection with pre-trained models and achieves 99.8% accuracy, compared to the existing models.
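As an illustration of the transfer-learning idea described above (not the authors' YOLO V7 pipeline), here is a hedged sketch of reusing a pre-trained torchvision backbone and retraining only a new classification head; the backbone choice, class count, and training step are assumptions.

```python
# Illustrative transfer-learning sketch: reuse a pre-trained backbone and
# train only a new classification head for object recognition.
import torch
import torch.nn as nn
from torchvision import models

num_object_classes = 20          # hypothetical number of graspable objects

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():          # freeze the pre-trained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_object_classes)  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One hypothetical training step on a batch of object images
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_object_classes, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```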
... The data obtained from wearable devices is very accurate, but their popularity is limited by the high cost of these devices. The method proposed by researchers, based on gesture and speech information for collaborative input, has optimized the real-time performance of human-computer interaction [9][10]. This lays the foundation for the design and optimization of human-computer interaction systems in education management based on AI technology, further enriching the form of m-learning. ...
Article
Full-text available
The continuous improvement and refinement of artificial intelligence (AI) technology has facilitated the broader application of human-computer interaction in the field of education management. The construction of an educational management human-computer interaction system based on AI technology can optimize and improve key parameters of educational management human-computer interaction scenarios, thereby creating a more comprehensive mobile learning (m-learning) application system. This paper builds on AI technology, analyzing gesture semantics and speech semantics and combining fusion algorithms to construct an education management human-computer interaction system. The performance changes of the system were compared with real experimental operations and the NOBOOK platform analysis. The results show that the education management human-computer interaction system constructed in this article can enhance the m-learning experience of participants. It ensures high recognition accuracy, leading to higher scores in all dimensions of indicator evaluation. Therefore, as one of the crucial forms of m-learning, the human-computer interaction system for education management based on AI can establish a foundation for the further enhancement and development of education management.
... In recent years, the evolution of wearable hand measurement devices has been evident, predominantly driven by miniaturization processes and advancements in algorithms. Notably, data gloves [4,5], including IMU [6] and bending sensors [7,8], have demonstrated significant advancements in wearability, accuracy, and stability metrics. Such advancements have consequently led to marked enhancements in the results of sign language recognition leveraging these measurement apparatus. ...
Article
Full-text available
Sign language recognition is essential in hearing-impaired people’s communication. Wearable data gloves and computer vision are partially complementary solutions. However, sign language recognition using a general monocular camera suffers from occlusion and recognition accuracy issues. In this research, we aim to improve accuracy through data fusion of 2-axis bending sensors and computer vision. We obtain the hand key point information of sign language movements captured by a monocular RGB camera and use key points to calculate hand joint angles. The system achieves higher recognition accuracy by fusing multimodal data of the skeleton, joint angles, and finger curvature. In order to effectively fuse data, we spliced multimodal data and used CNN-BiLSTM to extract effective features for sign language recognition. CNN is a method that can learn spatial information, and BiLSTM can learn time series data. We built a data collection system with bending sensor data gloves and cameras. A dataset was collected that contains 32 Japanese sign language movements of seven people, including 27 static movements and 5 dynamic movements. Each movement is repeated 10 times, totaling about 112 min. In particular, we obtained data containing occlusions. Experimental results show that our system can fuse multimodal information and perform better than using only skeletal information, with the accuracy increasing from 68.34% to 84.13%.
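One way such a CNN-BiLSTM over spliced multimodal frames could be laid out is sketched below in PyTorch; the feature dimension, layer sizes, and class count are assumptions, not the authors' configuration.

```python
# Hedged sketch of a CNN-BiLSTM classifier over spliced multimodal frames
# (skeleton + joint angles + finger curvature); all sizes are assumptions.
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, feat_dim=48, num_classes=32):
        super().__init__()
        # 1D CNN learns spatial patterns within each frame's feature vector
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # BiLSTM learns temporal dependencies across frames
        self.lstm = nn.LSTM(32 * feat_dim, 64, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 64, num_classes)

    def forward(self, x):               # x: (batch, time, feat_dim)
        b, t, f = x.shape
        h = self.cnn(x.reshape(b * t, 1, f))      # per-frame convolution
        h = h.reshape(b, t, -1)                   # back to a sequence
        out, _ = self.lstm(h)
        return self.fc(out[:, -1])                # classify from last time step

model = CNNBiLSTM()
logits = model(torch.randn(4, 30, 48))            # 4 clips, 30 frames each
print(logits.shape)                               # torch.Size([4, 32])
```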
... This method used an inertial-sensor-based data glove with 36 IMUs to collect a user's arm and hand motion data, and the accuracy can reach 99.2%. Faisal et al. [26] developed a low-cost data glove deployed with flexible sensors and an IMU, and introduced a spatial projection method that improves upon classic CNN models for gesture recognition. However, the accuracy of this method for static gesture recognition is only 82.19%. ...
Article
Full-text available
This study has designed and developed a smart data glove based on five-channel flexible capacitive stretch sensors and a six-axis inertial measurement unit (IMU) to recognize 25 static hand gestures and ten dynamic hand gestures for amphibious communication. The five-channel flexible capacitive sensors are fabricated on a glove to capture finger motion data in order to recognize static hand gestures and integrated with six-axis IMU data to recognize dynamic gestures. This study also proposes a novel amphibious hierarchical gesture recognition (AHGR) model. This model can adaptively switch between large complex and lightweight gesture recognition models based on environmental changes to ensure gesture recognition accuracy and effectiveness. The large complex model is based on the proposed SqueezeNet-BiLSTM algorithm, specially designed for the land environment, which will use all the sensory data captured from the smart data glove to recognize dynamic gestures, achieving a recognition accuracy of 98.21%. The lightweight stochastic singular value decomposition (SVD)-optimized spectral clustering gesture recognition algorithm for underwater environments that will perform direct inference on the glove-end side can reach an accuracy of 98.35%. This study also proposes a domain separation network (DSN)-based gesture recognition transfer model that ensures a 94% recognition accuracy for new users and new glove devices.
... In recent years, the evolution of wearable hand measurement devices has been evident, predominantly driven by miniaturization processes and advancements in algorithms. Notably, data gloves, including IMU [2] and bending sensors [3,4], have demonstrated significant advancements in wearability, accuracy, and stability metrics. Such advancements have consequently led to marked enhancements in the results of sign language recognition leveraging these measurement apparatuses. ...
Preprint
Full-text available
Sign language recognition is essential in hearing-impaired people's communication. It is an important concern in computer vision and has developed with the rapid progress of image recognition technology. However, sign language recognition using a general monocular camera suffers from occlusion and limited recognition accuracy. In this research, we aim to improve accuracy by using a 2-axis bending sensor as an aid in addition to image recognition. We aim to achieve higher recognition accuracy by acquiring hand keypoint information of sign language actions captured by a monocular RGB camera and adding sensor assistance. To improve sign language recognition, we propose new AI models. In addition, the dataset is small because it is an original dataset collected in our laboratory. To learn from sensor data and image data, we used MediaPipe, CNN, and BiLSTM to perform sign language recognition. MediaPipe is a method for estimating the skeleton of the hand and face provided by Google. CNN is a method that can learn spatial information, and BiLSTM can learn time-series data. Combining the CNN and BiLSTM methods yields higher recognition accuracy. We use these techniques to learn hand skeletal information and sensor data. Additionally, the 2-axis bending-sensor glove data support training the AI model. Using these methods, we aim to improve the recognition accuracy of sign language recognition by combining sensor data and hand skeleton data. Our method performed better than using only skeletal information, achieving 96.5% Top-1 accuracy.
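A hedged sketch of the MediaPipe hand-keypoint step mentioned above: extract 21 landmarks from one RGB frame and concatenate them with bending-sensor readings. The file name and the sensor channel count are placeholders.

```python
# Hedged sketch: extract hand keypoints with MediaPipe Hands from one RGB
# frame, then concatenate them with bending-sensor readings before
# feeding a classifier.
import cv2
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

frame = cv2.imread("sign_frame.jpg")                  # hypothetical image path
results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

if results.multi_hand_landmarks:
    lm = results.multi_hand_landmarks[0].landmark
    keypoints = np.array([[p.x, p.y, p.z] for p in lm]).flatten()  # 21 x 3 = 63
    sensor_reading = np.zeros(10)        # placeholder for bending-sensor channels
    fused_feature = np.concatenate([keypoints, sensor_reading])
    print(fused_feature.shape)           # (73,)
```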
... The manual extraction of sEMG signal features often leads to significant errors. Deep learning can automatically extract features at different levels from a large number of input samples, avoiding the complex and cumbersome process of manual feature extraction and selection [17]. In recent years, researchers have used various classifiers to classify sEMG, including k-nearest neighbor (KNN), linear discriminant analysis (LDA), and support vector machines (SVMs), etc. [18,19]. ...
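For reference, a brief scikit-learn comparison of the classifiers named in this excerpt (KNN, LDA, SVM) on pre-extracted sEMG feature vectors; the synthetic data and the feature and class counts are assumptions.

```python
# Illustrative scikit-learn comparison of KNN, LDA, and SVM classifiers on
# pre-extracted sEMG feature vectors; the data here is synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 24))          # 300 windows x 24 hand-crafted features
y = rng.integers(0, 5, size=300)        # 5 hypothetical gesture classes

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf", C=1.0),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation accuracy
    print(f"{name}: {scores.mean():.3f}")
```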
Article
Full-text available
Wearable surface electromyography (sEMG) signal-acquisition devices have considerable potential for medical applications. Signals obtained from sEMG armbands can be used to identify a person’s intentions using machine learning. However, the performance and recognition capabilities of commercially available sEMG armbands are generally limited. This paper presents the design of a wireless high-performance sEMG armband (hereinafter referred to as the α Armband), which has 16 channels and a 16-bit analog-to-digital converter and can reach 2000 samples per second per channel (adjustable) with a bandwidth of 0.1–20 kHz (adjustable). The α Armband can configure parameters and interact with sEMG data through low-power Bluetooth. We collected sEMG data from the forearms of 30 subjects using the α Armband and extracted three different image samples from the time–frequency domain for training and testing convolutional neural networks. The average recognition accuracy for 10 hand gestures was as high as 98.6%, indicating that the α Armband is highly practical and robust, with excellent development potential.
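A hedged sketch of how one sEMG channel could be turned into a time-frequency image of the kind the abstract mentions; the sampling rate follows the abstract, but the window settings and the synthetic signal are assumptions.

```python
# Hedged sketch: turn one sEMG channel into a time-frequency image
# (a log-power spectrogram) that could be stacked into CNN training samples.
import numpy as np
from scipy.signal import spectrogram

fs = 2000                                   # samples per second (per the abstract)
t = np.arange(0, 1.0, 1 / fs)
emg = np.random.randn(t.size) * np.sin(2 * np.pi * 60 * t)   # synthetic channel

freqs, times, Sxx = spectrogram(emg, fs=fs, nperseg=128, noverlap=64)
image = 10 * np.log10(Sxx + 1e-12)          # log-power "image" for the CNN
print(image.shape)                          # (frequency bins, time frames)
```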
... It also breaks down the spatial limitations of the hand, making multi-sensor data collection possible. Furthermore, among the multiple combinations, inertial sensors to collect hand motion and bending sensors to collect hand shape are the common approaches [23][24][25][26]. Faisal et al. [23], using the K-nearest neighbor (KNN), classified 14 static and 3 dynamic gestures for sign language recognition. ...
... Faisal et al. [23], using the K-nearest neighbor (KNN), classified 14 static and 3 dynamic gestures for sign language recognition. Faisal et al. [24] collected data from 25 subjects for 24 static and 16 dynamic American sign language gestures for validating the system. Boon Giin Lee et al. [25] used the support vector machine (SVM) to classify American sign language. ...
Article
Full-text available
There are numerous communication barriers between people with and without hearing impairments. Writing and sign language are the most common modes of communication. However, written communication takes a long time. Furthermore, because sign language is difficult to learn, few people understand it. It is difficult to communicate between hearing-impaired people and hearing people because of these issues. In this research, we built the Sign-Glove system to recognize sign language, a device that combines a bend sensor and WonderSense (an inertial sensor node). The bending sensor was used to recognize the hand shape, and WonderSense was used to recognize the hand motion. The system collects a more comprehensive sign language feature. Following that, we built a weighted DTW fusion multi-sensor. This algorithm helps us to combine the shape and movement of the hand to recognize sign language. The weight assignment takes into account the feature contributions of the sensors to further improve the recognition rate. In addition, a set of interfaces was created to display the meaning of sign language words. The experiment chose twenty sign language words that are essential for hearing-impaired people in critical situations. The accuracy and recognition rate of the system were also assessed.
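A minimal sketch of the weighted-DTW fusion idea: compute one DTW distance per sensor stream (bend sensors for hand shape, inertial data for hand motion), combine them with weights, and pick the nearest template word. The weights, sequence shapes, and template words are assumptions.

```python
# Hedged sketch of weighted-DTW fusion over two sensor streams.
import numpy as np

def dtw_distance(a, b):
    """Classic DTW between two (time, dim) sequences with Euclidean local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def fused_distance(query, template, weights=(0.6, 0.4)):
    """query/template: dicts with 'bend' and 'imu' sequences; weights sum to 1."""
    return (weights[0] * dtw_distance(query["bend"], template["bend"])
            + weights[1] * dtw_distance(query["imu"], template["imu"]))

# Nearest-template classification over a few hypothetical sign-word templates
rng = np.random.default_rng(1)
templates = {w: {"bend": rng.normal(size=(40, 5)), "imu": rng.normal(size=(40, 6))}
             for w in ["help", "danger", "water"]}
query = {"bend": rng.normal(size=(35, 5)), "imu": rng.normal(size=(35, 6))}
print(min(templates, key=lambda w: fused_distance(query, templates[w])))
```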
... IMU acquisition system diagram. [137] Md. Ahasan Atick Faisal et al. [231] One of the most actively researched areas in the field of HCI is hand gesture recognition. Although several hand gesture detection modalities have been investigated over the past three decades, recent years have seen a resurgence in the field thanks to hardware advancements and DL algorithms. ...
Article
Full-text available
Human activity recognition (HAR) has become increasingly popular in recent years due to its potential to meet the growing needs of various industries. Electromyography (EMG) is essential in various clinical and biological settings. It is a metric that helps doctors diagnose conditions that affect muscle activation patterns and monitor patients' progress in rehabilitation. Despite its widespread application, existing methods for recording and interpreting EMG data need better signal detection and more robust categorization. Recent material science and Artificial Intelligence (AI) developments have significantly improved EMG detection. With an increasingly elderly patient population, HAR is increasingly used to monitor patients' Activities of Daily Living (ADLs) in healthcare settings. It is also being used in security settings to identify suspect behavior, and surface EMG (sEMG) is a potential non-invasive approach for HAR since it monitors muscle contractions during exercise. sEMG and AI have revolutionized HAR systems in recent years. Sophisticated methods are required to recognize, break down, manufacture, and classify the EMG signals obtained from muscles. This review summarizes the various research papers on HAR with EMG. AI has made tremendous contributions to biomedical signal classification. The different approaches to preprocessing, feature extraction, reduction techniques, and Deep Learning (DL)- and Machine Learning (ML)-based classification of EMG signals are then briefly explained. We focus on the latest ML/DL methods used in HAR, the hardware involved in HAR with EMG, and EMG-based applications. We also identify open issues and future research directions that may point to new lines of inquiry for ongoing research toward EMG-based detection.
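As one concrete example of the feature-extraction stage surveyed here, a hedged sketch of common time-domain EMG features (mean absolute value, RMS, waveform length, zero crossings) computed over sliding windows; the window length and step are illustrative choices, not values from the review.

```python
# Hedged sketch of common time-domain EMG features over sliding windows.
import numpy as np

def emg_features(x, win=200, step=100):
    """x: 1-D EMG channel; returns one feature row per window."""
    rows = []
    for start in range(0, len(x) - win + 1, step):
        w = x[start:start + win]
        mav = np.mean(np.abs(w))                               # mean absolute value
        rms = np.sqrt(np.mean(w ** 2))                         # root mean square
        wl = np.sum(np.abs(np.diff(w)))                        # waveform length
        zc = np.sum(np.diff(np.signbit(w).astype(int)) != 0)   # zero crossings
        rows.append([mav, rms, wl, zc])
    return np.array(rows)

signal = np.random.randn(2000)        # synthetic 1-second channel at 2 kHz
print(emg_features(signal).shape)     # (windows, 4 features)
```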