Figure - available from: Journal of Ambient Intelligence and Humanized Computing
Anatomy of the hand: fingers’ and joints’ names

Source publication
Article
Compared to the mouse, a precise but two-dimensional interface device, hand gestures provide more degrees of freedom for users to interact with computers by employing intelligent computing methods. The Leap Motion Controller is gaining popularity due to its ability to detect and track hand joints in three dimensions. However, in some cases, the...
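
Since the abstract centers on tracking hand joints in three dimensions, a small geometric example may help. Below is a minimal Python sketch of a gesture feature computed from 3D joint positions like those a Leap-style tracker reports; the function name, input layout, and joint ordering are illustrative assumptions, not the paper's pipeline.

    import numpy as np

    def finger_extension(joints):
        """Ratio of fingertip-to-base distance over the summed bone lengths.

        joints: (4, 3) array of one finger's joint positions, ordered from the
        metacarpophalangeal (MCP) joint to the fingertip (a hypothetical layout).
        Returns ~1.0 for a fully extended finger and smaller values when curled.
        """
        tip_to_base = np.linalg.norm(joints[-1] - joints[0])
        bone_lengths = np.linalg.norm(np.diff(joints, axis=0), axis=1).sum()
        return float(tip_to_base / bone_lengths)

    # Example: a straight index finger lying along the x-axis.
    straight = np.array([[0, 0, 0], [3, 0, 0], [5, 0, 0], [6, 0, 0]], float)
    print(finger_extension(straight))  # -> 1.0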

Citations

Article
Surface electromyography (sEMG) is a significant interaction signal in the fields of human-computer interaction and rehabilitation assessment, as it can be used for hand gesture recognition. This paper proposes a novel MLHG model to improve the robustness of sEMG-based hand gesture recognition. The model utilizes multiple labels to decode the sEMG signals from two different perspectives. In the first view, the sEMG signals are transformed into motion signals using the proposed FES-MSCNN (Feature Extraction of sEMG with Multiple Sub-CNN modules). Furthermore, a discriminator FEM-SAGE (Feature Extraction of Motion with graph SAmple and aggreGatE model) is employed to judge the authenticity of the generated motion data. The deep features of the motion signals are extracted using the FEM-SAGE model. In the second view, the deep features of the sEMG signals are extracted using the FES-MSCNN model. The extracted features of the sEMG signals and the generated motion signals are then fused for hand gesture recognition. To evaluate the performance of the proposed model, a dataset containing sEMG signals and multiple labels from 12 subjects has been collected. The experimental results indicate that the MLHG model achieves an accuracy of 99.26% for within-session hand gesture recognition, 78.47% for cross-time, and 53.52% for cross-subject. These results represent a significant improvement compared to using only the gesture labels, with accuracy improvements of 1.91%, 5.35%, and 5.25% in the within-session, cross-time, and cross-subject cases, respectively.
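
To make the two-view design concrete, here is a minimal PyTorch sketch of the idea: one branch extracts deep features directly from sEMG windows, a second branch produces a motion-like representation from the same sEMG, and the two feature sets are fused before classification. Channel counts, kernel sizes, and module names are illustrative assumptions, and the adversarial FEM-SAGE discriminator is omitted; this is not the actual MLHG architecture.

    import torch
    import torch.nn as nn

    class SubCNN(nn.Module):
        """One of several small 1-D CNN feature extractors over sEMG channels."""
        def __init__(self, in_ch, out_dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(in_ch, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(32, out_dim),
            )
        def forward(self, x):  # x: (batch, electrodes, time)
            return self.net(x)

    class TwoViewClassifier(nn.Module):
        def __init__(self, emg_ch=8, feat=64, n_gestures=10):
            super().__init__()
            self.emg_encoder = SubCNN(emg_ch, feat)     # view 1: sEMG features
            self.motion_encoder = SubCNN(emg_ch, feat)  # view 2: motion-like features
            self.classifier = nn.Linear(2 * feat, n_gestures)
        def forward(self, emg):
            # Fuse the two feature views by concatenation before classification.
            fused = torch.cat([self.emg_encoder(emg), self.motion_encoder(emg)], dim=1)
            return self.classifier(fused)

    model = TwoViewClassifier()
    logits = model(torch.randn(4, 8, 200))  # 4 windows, 8 electrodes, 200 samples
    print(logits.shape)  # torch.Size([4, 10])
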
Article
In this paper, a new framework for 3D hand pose estimation from a single RGB image is proposed. The framework is composed of two blocks. The first block formulates hand pose estimation as a classification problem. Since the human hand can perform numerous poses, a single classification network would need a huge number of parameters, so we propose to classify hand poses along three different aspects: hand gesture, hand direction, and palm direction. In this way, the number of parameters is significantly reduced. The motivation behind the classification block is that the model deals with the image as a whole and extracts global features. Furthermore, the output of the classification model is a valid pose that does not include any unexpected angle at the joints. The second block estimates the 3D coordinates of the hand joints and focuses more on the details of the image pattern. RGB-based 3D hand pose estimation is an inherently ill-posed problem due to the lack of depth information in the 2D image. We propose to use the occlusion status of the hand joints to address this problem. The occlusion status of the joints has been labeled manually. Some joints are partially occluded, and we propose to compute the extent of the occlusion by semantic segmentation. Existing methods in this field have mostly used synthetic datasets, whereas all the models proposed in this paper are trained on more than 50K real images. Extensive experiments on our new dataset and two other benchmark datasets show that the proposed method achieves good performance. We also analyze the validity of the predicted poses, and the results show that the classification block increases the validity of the poses.
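
The parameter-saving argument of the classification block can be illustrated with a small sketch: rather than one softmax over every (gesture, hand direction, palm direction) combination, three small heads share a backbone. The class counts and layer sizes below are illustrative assumptions, not the paper's: 30 x 8 x 8 = 1,920 joint classes collapse to 30 + 8 + 8 = 46 output units.

    import torch
    import torch.nn as nn

    class FactorizedPoseClassifier(nn.Module):
        def __init__(self, feat=512, n_gestures=30, n_hand_dirs=8, n_palm_dirs=8):
            super().__init__()
            # Stand-in for any CNN feature extractor over the RGB image.
            self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat), nn.ReLU())
            # Three small heads instead of one head over all combinations.
            self.gesture_head = nn.Linear(feat, n_gestures)
            self.hand_dir_head = nn.Linear(feat, n_hand_dirs)
            self.palm_dir_head = nn.Linear(feat, n_palm_dirs)
        def forward(self, img):
            f = self.backbone(img)
            return self.gesture_head(f), self.hand_dir_head(f), self.palm_dir_head(f)

    model = FactorizedPoseClassifier()
    g, h, p = model(torch.randn(2, 3, 64, 64))
    print(g.shape, h.shape, p.shape)  # (2, 30) (2, 8) (2, 8)
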
Article
Deformation of 3D objects is increasingly used in human–computer interaction applications and computer games. Achieving a natural user interface, such as hand gestures, is the most important challenge in these fields. In this paper, we propose a system for 3D object deformation through natural hand gestures. The purpose of this research is to represent the whole process of hand involvement in different tasks from start to end, describe the challenges in each step, and resolve them in appropriate ways. Part of the study focuses on the constraints and rules of the hand. In the proposed system, we use a camera and colored gloves to create a vision-based method for tracking hands. Compared to similar systems, our proposed framework provides more complex deformation scenarios and more natural interactions. Furthermore, a model of the hand is created to give users a better sense of hand-object interaction. The hand model simulates the hand movements in the desktop environment. We conducted a set of experiments to evaluate different parts of the proposed system. The results in each step show that the proposed system provides precise interaction and a good experience for users deforming objects.
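
As a rough illustration of the colored-glove tracking step, the Python/OpenCV sketch below segments one glove color in HSV space and takes the blob centroid as the hand position. The HSV bounds are illustrative assumptions for a green glove and would need calibration; this is not the paper's exact tracking pipeline.

    import cv2
    import numpy as np

    def track_glove(frame_bgr, lower=(40, 80, 80), upper=(80, 255, 255)):
        """Return the (x, y) centroid of the glove-colored region, or None."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lower, np.uint8), np.array(upper, np.uint8))
        # Morphological opening removes small speckle noise from the mask.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        m = cv2.moments(mask)
        if m["m00"] == 0:
            return None
        return m["m10"] / m["m00"], m["m01"] / m["m00"]

    # Example with a synthetic frame containing a green rectangle as the "glove".
    frame = np.zeros((120, 160, 3), np.uint8)
    frame[30:60, 50:90] = (0, 200, 0)  # BGR green patch
    print(track_glove(frame))  # -> centroid near (69.5, 44.5)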