Figure 2: An Example of a Flex Sensor Based Glove [9]

Source publication
Article
Full-text available
Hand deformities often become a major obstacle to performing everyday tasks for many people around the globe. Rehabilitation procedures are widely used for strengthening the hand muscles, which in turn leads to the restoration of functionality of the affected hand. This paper conducts a survey of various wearable technologies that can be used to ac...

Context in source publication

Context 1
... to this characteristic, flex sensors are also commonly termed analog resistors. Figure 2 shows a gesture recognition glove with flex sensors embedded along its fingers [9]. Flex sensors can be manufactured using conductive-ink or fiber-optic technologies. ...
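Because a flex sensor behaves as a bend-dependent resistor, it is typically read through a voltage divider on an ADC pin. The following is a minimal sketch of that readout, not code from the surveyed glove; the resistance values, divider resistor, supply voltage, and the linear resistance-to-angle mapping are all illustrative assumptions.

# Minimal sketch: converting a flex sensor ADC reading to a bend angle.
# All electrical values below are assumptions for illustration only.

FLAT_OHMS = 25_000    # assumed sensor resistance when flat
BENT_OHMS = 100_000   # assumed resistance at a 90-degree bend
R_DIVIDER = 47_000    # assumed fixed resistor in the voltage divider
V_SUPPLY = 3.3
ADC_MAX = 4095        # 12-bit ADC assumed

def adc_to_resistance(adc_value: int) -> float:
    """Invert the divider: sensor on top, fixed resistor to ground."""
    v_out = V_SUPPLY * adc_value / ADC_MAX
    return R_DIVIDER * (V_SUPPLY - v_out) / v_out

def resistance_to_angle(r_sensor: float) -> float:
    """Linearly map resistance to a 0-90 degree bend (first-order model)."""
    frac = (r_sensor - FLAT_OHMS) / (BENT_OHMS - FLAT_OHMS)
    return max(0.0, min(1.0, frac)) * 90.0

print(resistance_to_angle(adc_to_resistance(2000)))  # about 29 degrees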

Similar publications

Article
Full-text available
We report on a case of two sisters, daughters of consanguineous parents, presenting with a similar condition of low visual acuity associated with retinal dystrophy in both eyes, together with alopecia and bone alterations or syndactyly.

Citations

... Data gloves offer high precision in capturing hand motion, making them a highly important tool for tasks that require fine-grained control and dexterity. Sensor-based systems are therefore more feasible than optical-based ones [20]. Due to their light weight, compactness, affordability, and durability, flex sensors and inertial measurement units (IMUs) are the two primary types of sensors used to measure grasp postures [16][21][22]. G. Saggio et al. [23] found that gloves with resistive flex sensors (RFSs) and IMUs are both suitable for dynamic interactions and yield results comparable to static or quasi-static assessments. ...
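To make the pairing of the two sensor types concrete, here is a hedged sketch of one glove reading combining per-finger flex angles with an IMU orientation; the class name, field names, and the five-angle layout are hypothetical, not taken from [16], [21]-[23].

# Illustrative sketch of a combined glove sample: five flex-sensor bend
# angles plus one IMU orientation quaternion per reading (assumed layout).
from dataclasses import dataclass

@dataclass
class GloveSample:
    flex_angles_deg: list[float]  # one bend angle per finger
    imu_quaternion: tuple[float, float, float, float]  # (w, x, y, z) hand pose

    def feature_vector(self) -> list[float]:
        """Concatenate finger angles and orientation for a posture classifier."""
        return [*self.flex_angles_deg, *self.imu_quaternion]

sample = GloveSample([12.0, 45.5, 50.1, 48.7, 30.2], (1.0, 0.0, 0.0, 0.0))
print(sample.feature_vector())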
Preprint
Data gloves play a crucial role in the study of human grasping and can provide insights into grasp synergies. Grasp synergies lead to the identification of underlying patterns used to develop control strategies for hand exoskeletons. This paper presents the design and implementation of a data glove that has been enhanced with instrumentation and fabricated using 3D printing technology. The glove uses flexible sensors on the fingers and force sensors integrated into the glove at the fingertips to accurately capture grasp postures and forces. The kinematics and dynamics of the human grasp, including reach-to-grasp, are examined. A comprehensive study involving 10 healthy subjects was conducted. Grasp synergy analysis was carried out to identify underlying patterns for robotic grasping. The t-SNE visualization showed clusters of grasp postures and forces, revealing similarities and patterns among different grasp types (GTs). These findings can serve as a comprehensive guide for the design and control of tendon-driven soft hand exoskeletons for rehabilitation applications, enabling the replication of natural hand movements and grasp forces.
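The t-SNE clustering step the abstract mentions can be reproduced in outline as below. This is a minimal sketch with synthetic data standing in for the 10-subject recordings; the feature layout (5 joint angles plus 5 fingertip forces) and the two "grasp type" centroids are assumptions.

# Sketch: t-SNE over concatenated joint-angle and fingertip-force features.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# 60 synthetic grasp samples drawn around two loose grasp-type centroids:
# 5 joint angles (deg) + 5 fingertip forces (N) per sample.
power = rng.normal([70] * 5 + [4] * 5, 5.0, size=(30, 10))
pinch = rng.normal([20] * 5 + [1] * 5, 5.0, size=(30, 10))
X = np.vstack([power, pinch])

embedding = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(X)
print(embedding.shape)  # (60, 2): one 2-D point per grasp sample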
... Our sensorized hand orthosis (sHO) employs the same concept, with the addition of a purpose-built encoder system for detecting the position of the fingers. This system can be used to monitor the progress of recovery or to improve rehabilitation by integrating gamification elements that boost user motivation and aid the recovery process [59]-[61]. ...
Article
Full-text available
The ageing population has increased the demand for advanced medical assistive devices with actuators and sensors to enhance independence in daily activities. However, integrating sensors into rehabilitation devices in a direct and cost-effective manner remains a challenge. This study proposes a scalable linear encoder, fabricated using 3D-printing technology. The encoder is a fully printed concept, including the electrical circuitry, and is benchmarked in a sensorized hand orthosis. It is constructed from two commercially available materials: one electrically conductive and one non-conductive. The encoder consists of a fixed scale and a sliding sensing head. The encoder's resolution and robustness are tested under various conditions, including actuation speeds, temperatures, printing repeatability, and fatigue. The results demonstrate stable operation of the encoder, providing a resolution of 1.2 mm. The sensorized hand orthosis incorporates three encoders with geometric shifts, along with a printed electrical circuit and an inserted microcontroller, for detecting finger flexion and movement direction. This setup further improves the resolution to 0.4 mm. The proposed linear encoder offers an effective and inexpensive approach to integrating sensors into medical assistive devices.
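The resolution improvement from geometric shifts can be illustrated numerically. The sketch below assumes a 2.4 mm scale pitch (one conductive plus one non-conductive 1.2 mm segment), so a single track toggles every 1.2 mm, while three tracks shifted by one third of the pitch yield a combined 3-bit state that changes every 0.4 mm, matching the figures quoted above; the exact geometry of the printed encoder is an assumption.

# Sketch: three scales shifted by pitch/3 triple the position resolution.
PITCH_MM = 2.4               # assumed: 1.2 mm conductive + 1.2 mm gap
SHIFTS_MM = [0.0, 0.8, 1.6]  # assumed geometric shifts of the three scales

def encoder_state(position_mm: float) -> tuple[int, ...]:
    """1 while a sensing head sits over the conductive half of its scale."""
    return tuple(
        int(((position_mm - s) % PITCH_MM) < PITCH_MM / 2) for s in SHIFTS_MM
    )

# The combined state changes every 0.4 mm even though each track alone
# only toggles every 1.2 mm:
for x in [0.0, 0.3, 0.5, 0.9, 1.3, 1.7]:
    print(f"{x:.1f} mm -> {encoder_state(x)}")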
... From the recognition perspective, previous work primarily focused on the classification performance of machine and deep learning algorithms [13,47] and on proposing and improving computational models [8,44]. From the human-motion-understanding perspective, previous work explored device technology [42], device placement [2], computational algorithms [54], inertial time-series feature selection [54], and body rehabilitation [40]. As for the head movements of cyclists and motorcyclists, researchers focused on augmenting helmets with Inertial Measurement Units (IMUs) to detect head movements and improve recognition rates [3,14,16], to send rescue requests, and to prevent accidents [4], leaving the understanding of riders' perceived safety out of scope. ...
... They incorporated embedded sensors in the gloves to detect hand movements and gestures, but these gloves turned out to be very cumbersome. Technological advancements led to the development of more accurate techniques, such as accelerometers [21], infrared cameras [22], and fiber-optic bend sensors [23]. These advancements improved the precision of glove-based control interfaces. ...
Article
Full-text available
The rapidly evolving field of Virtual Reality (VR)-based Human–Computer Interaction (HCI) presents a significant demand for robust and accurate hand tracking solutions. Current technologies, predominantly based on single-sensing modalities, fall short in providing comprehensive information capture due to susceptibility to occlusions and environmental factors. In this paper, we introduce a novel sensor fusion approach combined with a Long Short-Term Memory (LSTM)-based algorithm for enhanced hand tracking in VR-based HCI. Our system employs six Leap Motion controllers, two RealSense depth cameras, and two Myo armbands to provide multi-modal data capture. This rich data set is then processed using the LSTM, ensuring accurate real-time tracking of complex hand movements. The proposed system provides a powerful tool for intuitive and immersive interactions in VR environments.
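The fusion idea can be sketched as an LSTM over concatenated per-frame features from the three modalities. This is a minimal PyTorch sketch, not the paper's model: the feature dimensions, hidden size, and the joint-position regression head are all assumptions.

# Sketch: multi-modal sensor fusion with an LSTM regressing joint positions.
import torch
import torch.nn as nn

LEAP_DIM, DEPTH_DIM, EMG_DIM = 63, 128, 16  # assumed per-frame feature sizes
N_JOINTS = 21

class FusionLSTM(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.lstm = nn.LSTM(LEAP_DIM + DEPTH_DIM + EMG_DIM, hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, N_JOINTS * 3)  # xyz per joint

    def forward(self, leap, depth, emg):
        fused = torch.cat([leap, depth, emg], dim=-1)  # (batch, time, feat)
        out, _ = self.lstm(fused)
        return self.head(out)                          # (batch, time, 63)

model = FusionLSTM()
b, t = 2, 30
pred = model(torch.randn(b, t, LEAP_DIM), torch.randn(b, t, DEPTH_DIM),
             torch.randn(b, t, EMG_DIM))
print(pred.shape)  # torch.Size([2, 30, 63])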
... Programs and systems for language recognition are employed. The computer vision community uses a variety of procedures and algorithms [22]. Efforts have already been made by researchers in the field of communication between deaf and hearing people. ...
Article
Full-text available
Communication between hearing and deaf people is one of the most difficult parts of daily life worldwide. It is hard for a hearing person to understand the words of a deaf person in their daily routine. To ease this communication, different countries have developed different sign languages. In Pakistan, the government developed Urdu Sign Language for communication with deaf people. Physical trainers and experts cannot be provided everywhere in society, so a computer- or mobile-based system is needed that converts deaf sign symbols into voice and written letters, allowing a hearing person to easily understand the intentions of the deaf person. In this paper, we provide an image processing and deep learning based model for Urdu Sign Language. The proposed model is implemented in Python 3 and uses different image processing and machine learning techniques to capture video and transform the symbols into voice and written Urdu. First, we obtain a video of the deaf person, and the model then crops the frames into pictures. Each picture is checked for a sign symbol: for example, if the signer shows the symbol for one, the model recognizes it and displays the letter they intend to convey. Image processing techniques from OpenCV are used for image recognition and classification, while TensorFlow and linear regression are used to train the model. The results show that the proposed model increased accuracy from 80% to 97% and 100%, respectively. The accuracy of previously available work was 80% when we implemented its algorithms, whereas with the proposed approach we achieved the highest accuracy using linear regression. Similarly, with the TensorFlow deep learning algorithm we achieved 97% accuracy, which is lower than that of the linear regression model.
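The capture-crop-classify pipeline described above can be outlined as follows. This is a schematic sketch only: the input file name and crop box are hypothetical, and classify_sign is a stub standing in for the paper's trained TensorFlow / regression model.

# Sketch: sample frames from a video, crop a hand region, classify each crop.
import cv2

def classify_sign(crop) -> str:
    """Placeholder for the trained recognizer (not the paper's model)."""
    return "1"  # e.g. the sign for "one"

cap = cv2.VideoCapture("signer.mp4")  # hypothetical input video
labels = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    crop = frame[100:400, 200:500]    # assumed hand region
    labels.append(classify_sign(cv2.resize(crop, (64, 64))))
cap.release()
print("recognized symbols:", labels[:10])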
... While patients perform certain tasks in a virtual reality environment, finger movement information received through flex sensors is evaluated. In their study, Rashid and Osman [11] designed a glove with flex sensors, accelerometers, and pressure sensors that can detect the improvement in patients' motor skills during hand rehabilitation. Here, flex sensors were used to measure finger joint angles, pressure sensors placed on the fingertips measured grip force, and accelerometers measured wrist velocity. ...
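A hedged sketch of those three measurements follows. The 100 Hz sampling rate, the simple rectangular integration of acceleration (which drifts in practice), and the summary statistics are assumptions for illustration, not the method of [11].

# Sketch: joint angles from flex sensors, grip force from fingertip
# pressure sensors, and wrist speed by integrating accelerometer readings.
import numpy as np

DT = 0.01  # assumed 100 Hz sampling

def wrist_speed(accel_ms2: np.ndarray) -> np.ndarray:
    """Integrate 3-axis acceleration once to estimate speed (drifts over time)."""
    velocity = np.cumsum(accel_ms2, axis=0) * DT  # (n, 3)
    return np.linalg.norm(velocity, axis=1)       # (n,)

def session_summary(angles_deg, forces_n, accel_ms2):
    return {
        "max_joint_angle_deg": float(np.max(angles_deg)),
        "mean_grip_force_n": float(np.mean(forces_n)),
        "peak_wrist_speed_ms": float(np.max(wrist_speed(accel_ms2))),
    }

n = 500
print(session_summary(np.random.rand(n, 5) * 90,
                      np.random.rand(n, 5) * 10,
                      np.random.randn(n, 3)))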
... Indeed, an important role in telerehabilitation is played by virtual, augmented, and mixed reality (VAMR) therapy and PC-based therapy [9], which have proven very helpful [10][11][12]. Several works have provided auxiliary systems for rehabilitation and/or telerehabilitation [13][14][15][16]. ...
Article
Full-text available
Telerehabilitation is important for post-stroke or post-surgery rehabilitation because the tasks it uses are reproducible. When combined with assistive technologies, such as robots, virtual reality, tracking systems, or a combination of them, it also allows the recording of a patient's progression and rehabilitation monitoring, along with an objective evaluation. In this paper, we present the structure, from actors and functionalities to software and hardware views, of a novel framework that allows cooperation between patients and therapists. The system uses a computer-vision-based system called the virtual glove for real-time hand tracking (40 fps), resulting in a light and precise system. The novelty of this work lies in the fact that it gives the therapist quantitative, not only qualitative, information about the hand's mobility, for every hand joint separately, while also providing control of the rehabilitation outcome by quantitatively monitoring the progress of hand mobility. Finally, it offers a strategy for patient-therapist interaction and therapist-therapist data sharing.
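One simple per-joint quantitative index of the kind described is range of motion over a session. The sketch below assumes a (frames, joints) array of tracked joint angles; the 40 fps rate comes from the abstract, while the 15-joint layout and the synthetic data are assumptions.

# Sketch: per-joint range of motion (ROM) from a tracked-angle session.
import numpy as np

def range_of_motion(angles_deg: np.ndarray) -> np.ndarray:
    """Per-joint ROM = max - min angle over the session; input (frames, joints)."""
    return angles_deg.max(axis=0) - angles_deg.min(axis=0)

frames = np.random.rand(40 * 60, 15) * 80  # one minute at 40 fps, 15 joints
rom = range_of_motion(frames)
for j, r in enumerate(rom[:5]):
    print(f"joint {j}: ROM = {r:.1f} deg")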
... For data acquisition and preprocessing, researchers acquire hand posture sequences in the real world in different modalities based on sensors, video images, and hand skeletons. Wearable sensors [6,7] provide accurate measurements of hand posture and movement. However, such devices not only require precise calibration but also fail to capture the natural movement of human fingers due to their bulk, and they are often very expensive [8]. ...
... Secondly, the 21 hand key points are divided into four groups, (1, 5, 9, 13, 17), (2, 6, 10, 14, 18), (3, 7, 11, 15, 19), and (4, 8, 12, 16, 20), to obtain the relative positions of adjacent points in each group. ...
... Thirdly, after the key points are grouped as (i, j, k, v), i.e., (1, 2, 3, 4), (5, 6, 7, 8), (9, 10, 11, 12), (13, 14, 15, 16), and (17, 18, 19, 20), the vector is obtained as F_c = [R_{1,2,3}, R_{2,3,4}, R_{5,6,7}, ..., R_{18,19,20}], (9) where ...
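The excerpts do not define R_{i,j,k}, so the sketch below assumes it is the angle at the middle point j formed by points i and k, computed for each per-finger group (i, j, k, v) as R_{i,j,k} and R_{j,k,v}; the 0-indexed (x, y) keypoint layout follows common 21-point hand skeletons and is an assumption.

# Sketch: grouped-keypoint relative-angle features F_c.
import numpy as np

def angle_at(points: np.ndarray, i: int, j: int, k: int) -> float:
    """Angle (radians) at joint j between segments j->i and j->k."""
    a, b = points[i] - points[j], points[k] - points[j]
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

FINGER_GROUPS = [(1, 2, 3, 4), (5, 6, 7, 8), (9, 10, 11, 12),
                 (13, 14, 15, 16), (17, 18, 19, 20)]

def feature_vector(points: np.ndarray) -> list[float]:
    """F_c: for each group (i, j, k, v), the angles R_{i,j,k} and R_{j,k,v}."""
    fc = []
    for i, j, k, v in FINGER_GROUPS:
        fc += [angle_at(points, i, j, k), angle_at(points, j, k, v)]
    return fc

pts = np.random.rand(21, 2)
print(len(feature_vector(pts)))  # 10 relative-angle features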
Article
Full-text available
Enabling a machine to recognize behaviors from video sequences is full of challenges but meaningful. This work aims to predict students' behavior in an experimental class, relying on the idea of symmetry from reality to annotated reality, centered on the feature space. A heteromorphic ensemble algorithm is proposed to make the obtained features more aggregated and to reduce the computational burden. Specifically, deep learning models are improved to obtain feature vectors representing gestures from video frames, and the classification algorithm is optimized for behavior recognition. The symmetric idea is realized by decomposing the task into three schemas: hand detection and cropping, hand-joint feature extraction, and gesture classification. Firstly, a new detector named YOLOv4-specific tiny detection (STD) is proposed by reconstituting the YOLOv4-tiny model; it produces two outputs with an attention mechanism leveraging context information. Secondly, the efficient pyramid squeeze attention (EPSA) net is integrated with EvoNorm-S0 and the spatial pyramid pooling (SPP) layer to obtain hand-joint position information. Lastly, Dempster-Shafer (D-S) theory is used to fuse two classifiers, a support vector machine (SVM) and a random forest (RF), into a mixed classifier named S-R. The synergetic effects of our algorithm are shown by experiments on self-created datasets, with a high average recognition accuracy of 89.6%.
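The S-R fusion step can be illustrated with Dempster's rule of combination applied to the two classifiers' class-probability outputs. When mass is assigned only to singleton classes, the rule reduces to a normalized elementwise product, as in this sketch; the probability vectors are made up, and this is not the paper's exact formulation.

# Sketch: Dempster's rule fusing SVM and random forest posteriors.
import numpy as np

def dempster_combine(m1: np.ndarray, m2: np.ndarray) -> np.ndarray:
    """Fuse two singleton mass assignments over the same class set."""
    joint = m1 * m2               # agreement mass per class
    conflict = 1.0 - joint.sum()  # mass lost to conflicting class pairs
    return joint / (1.0 - conflict)

svm_probs = np.array([0.6, 0.3, 0.1])  # SVM posterior over 3 gestures
rf_probs = np.array([0.5, 0.2, 0.3])   # random forest posterior
fused = dempster_combine(svm_probs, rf_probs)
print(fused, "-> class", int(np.argmax(fused)))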
... An analytical review of the literature on the advantages and disadvantages of using robots for rehabilitation can be found in [2][3][4]. There is a considerable amount of research on the use of robotics in rehabilitation. ...
... It should be noted that the so-called serial change means that the range of variation is increased gradually, while the actual value within that range is generated randomly. The ranges of change currently used are 1, 2, 3, 4, 5, 10, 15, 20, 25, 30, 35, 40, 45, and 50%. The so-called random here refers to using a random number to determine whether to change the size and order of the curvature of the fingers within the specified range of variation. ...
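Reading the excerpt literally, the protocol might be sketched as follows: the range schedule steps up through the listed percentages while each finger's curvature is perturbed by a random amount within the current range. The uniform draw, the multiplicative perturbation, and the baseline curvatures are all assumptions.

# Sketch: "serial change" schedule with random values within each range.
import random

RANGES_PCT = [1, 2, 3, 4, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50]

def perturb_curvatures(curvatures, range_pct, rng=random):
    """Scale each curvature by a random factor within +/- range_pct percent."""
    return [c * (1 + rng.uniform(-range_pct, range_pct) / 100)
            for c in curvatures]

baseline = [0.2, 0.5, 0.6, 0.55, 0.3]  # illustrative finger curvatures
for r in RANGES_PCT[:3]:
    print(r, "%:", [round(c, 3) for c in perturb_curvatures(baseline, r)])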
Article
Full-text available
The hand is deeply involved in our daily activities. When a person loses some hand function, their life can be greatly affected. Using robotic rehabilitation to assist patients in performing daily actions may help alleviate this problem. However, meeting individual needs is a major problem in the application of robotic rehabilitation. A biomimetic system (the artificial neuromolecular system, ANM), implemented on a digital machine, is proposed to deal with these problems. Two important biological features (the structure-function relationship and evolutionary friendliness) are incorporated into this system. With these two features, the ANM system can be shaped to meet the specific needs of each individual. In this study, the ANM system is used to help patients with different needs perform 8 actions similar to those people use in everyday life. The data source of this study is our previous research results (data of 30 healthy people and 4 hand patients performing 8 activities of daily life). The results show that although each patient's hand problem is different, the ANM system can successfully translate each patient's hand posture into normal human motion. In addition, the system responds to this difference smoothly rather than dramatically when the patient's hand motions vary both temporally (finger motion sequence) and spatially (finger curvature).
... Most studies use the contact method, where patients wear a device on their hands. This method provides accurate data using various sensors, including flex sensors, accelerometers, and Hall-effect sensors [4]. However, this strategy has drawbacks, such as high equipment costs and impractical use. ...
Article
Full-text available
Cyber-physical-social systems (CPSS) are expected to support telemedicine and will be the future of independent rehabilitation. However, the connection between the physical and the cognitive, as one of the social aspects, has not been adequately supported. This paper proposes a multiscopic CPSS framework to investigate this issue by developing hand-object interaction (HOI) recognition with visual attention for the block-design test (BDT). We use three vision systems to extract hand features from block interaction. First, a hand-tracking vision system is used to collect hand-skeletal data and finger-joint-angle features; we estimate physical handling postures at the microscopic level. Second, an egocentric vision system with an eye tracker is used to capture hand and eye movements at the mesoscopic level; we analyze hand-eye coordination through hand gestures and the focus of visual attention when the hand interacts with a block. Third, an evaluation vision system is used to classify the color features of each block at the macroscopic level; the system can recognize whether the design matches the given task. An eight-block design test with two scenarios demonstrates that the system can successfully measure human behavior, from the physical to the cognitive, during the BDT. We investigate the relationship between physical and cognitive evaluation through an experiment with eight healthy persons. Regression and correlation analyses between the dominant and non-dominant hands show that the evaluation indices (skewness and kurtosis of handling posture, attention to the pattern, and attention to the blocks) could indicate improvement during the BDT. We believe this study will benefit clinicians and researchers by providing information unavailable in clinical settings. Code and datasets are available online at https://github.com/anom-tmu/bdt-multiscopic.
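The skewness and kurtosis indices mentioned in the abstract are standard moment statistics over a posture time series. The sketch below computes them with scipy over a synthetic finger-joint-angle series standing in for real BDT recordings; the series parameters are assumptions.

# Sketch: skewness/kurtosis of a handling-posture (joint-angle) time series.
import numpy as np
from scipy.stats import skew, kurtosis

angles = np.random.default_rng(1).normal(45.0, 8.0, size=2000)  # fake angles
print(f"skewness = {skew(angles):.3f}, kurtosis = {kurtosis(angles):.3f}")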