Fig 2 - uploaded by Yang Yu

Contexts in source publication

Context 1
... rest (RE). Each motion was maintained for five seconds, and there was a five-second rest between two different motion classes in each trial. The same motions were performed in five limb positions that are common in daily life. In each limb position, the motions were repeated five times. The five limb positions (illustrated in Fig. 2) were as follows: P1: the arm was extended toward the ground at the side. P2: the elbow was flexed with the forearm below the horizontal line about 45 ...
Context 2
... hybrid signal recording device, the Trigno Wireless System (Delsys Inc., USA), which can record sEMG signals and tri-axis acceleration signals simultaneously, was used during the experiment. Six signal recording modules were attached to the subject's forearm with a homogeneous distribution, and another two modules were attached to the biceps and triceps, respectively. The detailed sensor distribution is illustrated in Fig. 1. Surface electromyography and acceleration signals were recorded simultaneously at sampling rates of 2000 Hz and 148 Hz, respectively. With eight hybrid sensors, eight channels of sEMG signals and twenty-four channels of acceleration signals (each accelerometer recorded three axes of ACC signals) were acquired simultaneously. The sEMG signals were band-pass filtered (20-450 Hz) in hardware, and the data were saved to the disk of a computer. Subjects were instructed to perform seven classes of hand motion: wrist flexion (WF), wrist extension (WE), wrist pronation (WP), wrist supination (WS), hand grasp (HG), hand open (HO), and rest (RE). Each motion was maintained for five seconds, and there was a five-second rest between two different motion classes in each trial. The same motions were performed in five limb positions that are common in daily life. In each limb position, the motions were repeated five times. The five limb positions (illustrated in Fig. 2) were as follows: P1: the arm was extended toward the ground at the side. P2: the elbow was flexed with the forearm below the horizontal line about 45 ...
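The acquisition pipeline in this excerpt stops at the filtered raw signals; classifiers like those in the cited works typically operate on windowed time-domain features instead. A minimal sketch of one such feature, the mean absolute value (MAV) over sliding windows; the window and increment sizes are illustrative assumptions, not values taken from the excerpt:

```python
import numpy as np

def mav_features(emg, win=400, inc=200):
    """Sliding-window mean absolute value per channel.

    emg : (samples, channels) array; win=400 samples is 200 ms at 2000 Hz.
    Returns a (windows, channels) feature matrix.
    """
    n = emg.shape[0]
    starts = range(0, n - win + 1, inc)
    return np.array([np.mean(np.abs(emg[s:s + win]), axis=0) for s in starts])

# Example: 2 s of simulated 8-channel sEMG at 2000 Hz
rng = np.random.default_rng(0)
emg = rng.standard_normal((4000, 8))
feats = mav_features(emg)
print(feats.shape)  # (19, 8)
```

A 200 ms window with 50% overlap is a common choice in the myoelectric control literature; the point of the sketch is only the shape of the feature matrix handed to the classifier.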

Citations

... This control challenge is well documented and referred to as the "limb position effect" [7]. Several pattern recognition-based control methods have been investigated to minimize the limb position effect [8][9][10][11][12][13][14][15][16][17][18][19][20][21][22]. These methods require a user to perform a training routine across multiple limb positions, prior to daily device use. ...
Article
Full-text available
Upper limb robotic (myoelectric) prostheses are technologically advanced, but challenging to use. In response, substantial research is being done to develop person-specific prosthesis controllers that can predict a user’s intended movements. Most studies that test and compare new controllers rely on simple assessment measures such as task scores (e.g., number of objects moved across a barrier) or duration-based measures (e.g., overall task completion time). These assessment measures, however, fail to capture valuable details about: the quality of device arm movements; whether these movements match users’ intentions; the timing of specific wrist and hand control functions; and users’ opinions regarding overall device reliability and controller training requirements. In this work, we present a comprehensive and novel suite of myoelectric prosthesis control evaluation metrics that better facilitates analysis of device movement details—spanning measures of task performance, control characteristics, and user experience. As a case example of their use and research viability, we applied these metrics in real-time control experimentation. Here, eight participants without upper limb impairment compared device control offered by a deep learning-based controller (recurrent convolutional neural network-based classification with transfer learning, or RCNN-TL) to that of a commonly used controller (linear discriminant analysis, or LDA). The participants wore a simulated prosthesis and performed complex functional tasks across multiple limb positions. Analysis resulting from our suite of metrics identified 16 instances of a user-facing problem known as the “limb position effect”. We determined that RCNN-TL performed the same as or significantly better than LDA in four such problem instances. We also confirmed that transfer learning can minimize user training burden. 
Overall, this study contributes a multifaceted new suite of control evaluation metrics, along with a guide to their application, for use in research and testing of myoelectric controllers today, and potentially for use in broader rehabilitation technologies of the future.
... The direction of the sensors was parallel to the direction of the muscle fibers. The placement of the EMG electrodes and the number of electrodes used in this paper were informed by previous similar studies (Liu et al., 2014; Yu et al., 2018). FIGURE 1 | Wearable multimodal serious-game rehabilitation approach developed to improve upper extremity motor function and cognitive function after stroke. ...
Article
Full-text available
Stroke often leads to hand motor dysfunction, and effective rehabilitation requires keeping patients engaged and motivated. Among existing automated rehabilitation approaches, data glove-based systems are not easy for patients to wear due to spasticity, and single-sensor approaches generally provide prohibitively limited information. We thus propose a wearable multimodal serious-games approach for hand movement training after stroke. A force myography (FMG), electromyography (EMG), and inertial measurement unit (IMU)-based multi-sensor fusion model, worn on the user's affected arm, was proposed for hand movement classification. Two movement recognition-based serious games were developed for hand movement and cognition training. Ten stroke patients with mild to moderate motor impairments (Brunnstrom Stage for Hand II-VI) performed experiments while playing interactive serious games requiring 12 activities-of-daily-living (ADL) hand movements taken from the Fugl-Meyer Assessment. Feasibility was evaluated by movement classification accuracy and qualitative patient questionnaires. The offline classification accuracy using combined FMG-EMG-IMU was 81.0% for the 12 movements, significantly higher than any single sensing modality: EMG only, FMG only, and IMU only achieved 69.6%, 63.2%, and 47.8%, respectively. Patients reported that they were more enthusiastic about hand movement training while playing the serious games than with conventional methods, and strongly agreed that the proposed training could be beneficial for improving upper limb motor function. These results show that multimodal sensor fusion improved hand gesture classification accuracy for stroke patients and demonstrate the potential of the proposed approach for upper limb movement training after stroke.
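The abstract above reports that combined FMG-EMG-IMU classification beats any single modality, but the excerpt does not spell out the fusion mechanism. One common baseline is feature-level fusion: normalize each modality's feature matrix, then concatenate. A sketch under that assumption; the feature dimensions are invented for illustration:

```python
import numpy as np

def fuse_features(emg_feat, fmg_feat, imu_feat):
    """Feature-level fusion: z-score each modality, then concatenate.

    Each input is (windows, features_per_modality). The per-modality
    normalization keeps one modality's scale from dominating the classifier.
    """
    def z(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    return np.hstack([z(emg_feat), z(fmg_feat), z(imu_feat)])

# Hypothetical dimensions: 50 windows, 8 EMG, 16 FMG, 6 IMU features
rng = np.random.default_rng(0)
emg = rng.random((50, 8))
fmg = rng.random((50, 16))
imu = rng.random((50, 6))
fused = fuse_features(emg, fmg, imu)
print(fused.shape)  # (50, 30)
```

The fused matrix then feeds any downstream classifier; more elaborate fusion (decision-level voting, learned weighting) is also possible and may be what the authors actually used.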
... Another candidate approach to setting the trigger is to use EMG, for example, estimating the user's movement intention with a classification method [11] such as a support vector machine (SVM) [12], [13] or linear discriminant analysis (LDA) [14], [15]. The classification results were used to select a pre-designed control output associated with the motion labels. ...
Article
Full-text available
Exoskeleton robots need to actively assist the user's movements at all times; otherwise, the robot becomes nothing but a heavy load for the user. However, estimating the diversified movement intentions of a user's daily life is not easy, and no algorithm so far has achieved that level of estimation. In this study, we instead focus on estimating and assisting a limited number of selected movements by using EMG-based movement classification and a newly developed lightweight exoskeleton robot. Our lightweight knee exoskeleton is composed of a carbon fiber frame and a highly backdrivable joint driven by a pneumatic artificial muscle; thus, the robot does not interfere with the user's motions even when the actuator is not activated. As the classification method, we adopted a positive-unlabeled (PU) classifier. Since precisely labeling all the selected data from large-scale daily movements is not practical, we assumed that only part of the selected data was labeled and used a PU classifier that can handle the unlabeled data. To validate our approach, we conducted experiments with five healthy subjects to selectively assist sit-to-stand movements among four possible daily motions. We compared our approach with two classification methods that assume fully labeled data. The results showed that all subjects' movements were properly assisted.
... Also, Geng et al. [29] studied the performance of CC and MPC in real time on amputees, and the performances were improved by 8.7% and 12.7%, respectively, compared to SPC. Others have suggested a mixed-LDA classifier and neural network approaches to reduce arm-position-based classification errors [24], [26]. ...
Article
Full-text available
Most stroke survivors have difficulty completing activities of daily living (ADLs) independently. However, few rehabilitation systems have focused on ADL-related training for gross and fine motor function together. We propose an ADL-based serious-game rehabilitation system for training motor function and coordination of both arm and hand movement, in which the user performs the corresponding ADL movements to interact with the target in the serious game. A multi-sensor fusion model based on electromyographic (EMG), force myographic (FMG), and inertial sensing was developed to estimate users' natural upper limb movement. Eight healthy subjects and three stroke patients were recruited in an experiment to validate the system's effectiveness. The performance of different sensor and classifier configurations on hand gesture classification against arm position variations was analyzed, and qualitative patient questionnaires were administered. Results showed that elbow extension/flexion has a more significant negative influence on EMG-based, FMG-based, and EMG+FMG-based hand gesture recognition than shoulder abduction/adduction does. In addition, there was no significant difference between the negative influences of shoulder abduction/adduction and shoulder flexion/extension on hand gesture recognition. However, there was a significant interaction between sensor configurations and algorithm configurations in both offline and real-time recognition accuracy. The EMG+FMG-combined multi-position classifier model had the best performance against arm position change. In addition, all the stroke patients reported that their ADL-related abilities could be restored by using the system. These results demonstrate that the multi-sensor fusion model can estimate hand gestures and gross movement accurately, and that the proposed training system has the potential to improve patients' ability to perform ADLs.
... Various pattern recognition approaches have been explored to address the limb position effect on end effector control [9]-[13], [15]-[23]. Broadly, pattern recognition approaches have included Statistical Models and Neural Networks (including deep learning), each of which can use either classification or regression techniques [3], [22]. ...
...
• All EMG and IMU data streams from both Myo armbands
• All EMG data streams from both Myo armbands
• All IMU data streams (quaternions, gyroscope, and accelerometer) from both Myo armbands
• Only accelerometer data streams [9], [12], [23] from both Myo armbands
Note that gyroscope and quaternion data streams were not investigated independently. Earlier pilot work revealed that accelerometer data better informed limb position in comparison to gyroscope and/or quaternion data. ...
... This may be because RCNNs offer the advantage of learning new features from complex input data. Other studies have investigated the use of engineered feature sets to address the limb position problem, and as such did not harness this advantage [17], [21], [23]. Despite yielding position-aware movement predictions using engineered features, their models did not perform quite as well as this study's RCNN classifier under S2. ...
Article
Full-text available
Objective: Persons with normal arm function can perform complex wrist and hand movements over a wide range of limb positions. However, for those with transradial amputation who use myoelectric prostheses, control across multiple limb positions can be challenging and frustrating, and can increase the likelihood of device abandonment. In response, the goal of this research was to investigate recurrent convolutional neural network (RCNN)-based position-aware myoelectric prosthesis control strategies. Methods: Surface electromyographic (EMG) and inertial measurement unit (IMU) signals, obtained from 16 non-disabled participants wearing two Myo armbands, served as inputs to RCNN classification and regression models. Such models predicted movements (wrist flexion/extension and forearm pronation/supination) based on a multi-limb-position training routine. RCNN classifiers and RCNN regressors were compared to linear discriminant analysis (LDA) classifiers and support vector regression (SVR) regressors, respectively. Outcomes were examined to determine whether RCNN-based control strategies could yield accurate movement predictions while using the fewest available Myo armband data streams. Results: An RCNN classifier (trained with forearm EMG data, and forearm and upper arm IMU data) predicted movements with 99.00% accuracy (versus the LDA's 97.67%). An RCNN regressor (trained with forearm EMG and IMU data) predicted movements with R2 values of 84.93% for wrist flexion/extension and 84.97% for forearm pronation/supination (versus the SVR's 77.26% and 60.73%, respectively). The control strategies that employed these models required fewer than all available data streams. Conclusion: RCNN-based control strategies offer novel means of mitigating limb position challenges. Significance: This research furthers the development of improved position-aware myoelectric prosthesis control.
... The results show that data augmentation can improve precision and durability under disturbances. Dantas et al. [169] developed a dataset aggregation approach named DAgger that can improve long-term performance over 150 days. On the one hand, deep learning is data-dependent, and thus more data means better performance. ...
Article
Full-text available
Electromyography (EMG) has already been broadly used in human-machine interaction (HMI) applications. Determining how to decode the information inside EMG signals robustly and accurately is a key problem for which we urgently need a solution. Recently, many EMG pattern recognition tasks have been addressed using deep learning methods. In this paper, we analyze recent papers and present a literature review describing the role that deep learning plays in EMG-based HMI. An overview of typical network structures and processing schemes is provided. Recent progress in typical tasks such as movement classification, joint angle prediction, and force/torque estimation is introduced. New issues, including multimodal sensing, inter-subject/inter-session variability, and robustness to disturbances, are discussed. We attempt to provide a comprehensive analysis of current research by discussing the advantages, challenges, and opportunities brought by deep learning. We hope that deep learning can aid in eliminating factors that hinder the development of EMG-based HMI systems. Furthermore, possible future directions are presented to pave the way for future research. Index Terms: accuracy, deep learning, electromyography (EMG), human-machine interaction (HMI), robustness.
Article
Full-text available
Background: The foot progression angle is an important measure used to help patients reduce their knee adduction moment. Current measurement systems are either lab-bound or do not function in all environments (e.g., magnetically distorted ones). This work proposes a novel approach to estimating the foot progression angle using a single foot-worn inertial sensor (accelerometer and gyroscope). Methods: The approach uses a dynamic step frame that is recalculated for the stance phase of each step; the foot trajectory is computed relative to that frame to minimize the effects of drift and to eliminate the need for a magnetometer. The foot progression angle (FPA) is then calculated as the angle between the walking direction and the dynamic step frame. The approach was validated by gait measurements with five subjects walking with three gait types (normal, toe-in, and toe-out). Results: The FPA was estimated with a maximum mean error of ~2.6° over all gait conditions. Additionally, the proposed inertial approach can significantly differentiate between the three gait types. Conclusion: The proposed approach can effectively estimate differences in FPA without requiring a heading reference (magnetometer). This work enables FPA feedback applications that function in any environment, i.e., outside a gait lab or in magnetically distorted surroundings, for patients with gait disorders.
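Per step, the FPA computation described above reduces to a signed horizontal-plane angle between the walking direction and the foot's long axis. A minimal sketch of that geometric step only; the sign convention and the construction of the two vectors (the paper's dynamic step frame) are assumptions not reproduced here:

```python
import numpy as np

def foot_progression_angle(walk_dir, foot_axis):
    """Signed angle in degrees between the walking direction and the foot's
    long axis, both given as 2D horizontal-plane vectors.

    Positive = counterclockwise rotation of the foot axis relative to the
    walking direction (which side is toe-out depends on the foot; the
    paper's convention is not given in this excerpt).
    """
    wx, wy = walk_dir / np.linalg.norm(walk_dir)
    fx, fy = foot_axis / np.linalg.norm(foot_axis)
    ang = np.arctan2(fy, fx) - np.arctan2(wy, wx)
    # wrap to [-180, 180)
    return np.degrees((ang + np.pi) % (2 * np.pi) - np.pi)

# Example: foot axis rotated 10 degrees from the walking direction
walk = np.array([1.0, 0.0])
foot = np.array([np.cos(np.radians(10.0)), np.sin(np.radians(10.0))])
print(round(float(foot_progression_angle(walk, foot)), 3))  # 10.0
```

Using `arctan2` of each vector separately (rather than the dot-product formula) preserves the sign of the angle, which matters for distinguishing toe-in from toe-out.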
... Another frequently adopted approach to estimating a user's movement intentions is classification. For example, support vector machines (SVM) [12], [13], [14], [15] and linear discriminant analysis (LDA) [16], [17], [18], [19] have been utilized to classify bio-signals detected from human users. The classification results are used to select a pre-designed control output associated with the estimated motion labels. ...
Article
Full-text available
In assistive control strategies, we must estimate the user's movement intentions. In previous studies, such intended motions were inferred by linearly converting muscle activities to the joint torques of an assistive robot, or by classifying muscle activities to identify the most likely movement from pre-designed robot motion classes. However, the assistive performance of these approaches is limited in terms of accuracy and flexibility. In this study, we propose an optimal assistive control strategy that uses estimated user movement intentions as the terminal cost function, not only to generate movements for different task goals but also to precisely enhance the motion with an exoskeleton robot. The optimal assistive policy is derived by blending pre-computed optimal control laws based on the linear Bellman combination method. The coefficients that determine how to blend the control laws are derived from low-dimensional feature values that represent the user's movement intention. To validate our proposed method, we conducted an assisted basketball-throwing experiment and showed that our subjects' performance significantly improved.
... The most commonly reported limb position experimental protocol in the literature is one that uses static limb positions [21,22,102,124-127,131,133-135,138-150]. Among the articles surveyed, these make up 50% of all experiments. ...
... The use of novel classification algorithms has achieved strong performance in the presence of limb position variability. Yu et al. [148] validated the use of a mixed-LDA classifier that used stable representations of motions, defined by taking the common mode of class-specific covariance clusters across all positions. Across five static limb positions, the mixed-LDA classifier achieved 93.6% accuracy, as opposed to 82.5% accuracy using a standard LDA classifier. ...
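The idea of a position-stable class representation can be caricatured as fitting an ordinary LDA whose class means and shared covariance are pooled over data from all limb positions. The sketch below is a simplified stand-in under that assumption, not the authors' exact common-mode covariance construction:

```python
import numpy as np

def fit_pooled_lda(X_by_pos, y_by_pos, n_classes):
    """Fit LDA with class means and one covariance pooled across positions.

    X_by_pos / y_by_pos: lists of (n_i, d) feature arrays and label arrays,
    one pair per limb position. Returns class means and pooled covariance.
    """
    X = np.vstack(X_by_pos)
    y = np.concatenate(y_by_pos)
    d = X.shape[1]
    means = np.array([X[y == c].mean(axis=0) for c in range(n_classes)])
    cov = np.zeros((d, d))
    for c in range(n_classes):
        Xc = X[y == c] - means[c]
        cov += Xc.T @ Xc
    cov /= (len(X) - n_classes)
    return means, cov

def predict_lda(X, means, cov):
    """Assign each row of X to the class with the highest linear score."""
    inv = np.linalg.inv(cov + 1e-8 * np.eye(cov.shape[0]))
    scores = X @ inv @ means.T - 0.5 * np.sum(means @ inv * means, axis=1)
    return np.argmax(scores, axis=1)

# Toy data: two motion classes recorded in two limb positions, where each
# position adds a small feature shift that the pooled model must absorb.
rng = np.random.default_rng(1)
class_means = {0: np.array([0.0, 0.0]), 1: np.array([6.0, 6.0])}
X_by_pos, y_by_pos = [], []
for shift in (np.array([0.0, 0.0]), np.array([0.5, -0.5])):
    X = np.vstack([rng.normal(class_means[c] + shift, 0.5, size=(30, 2))
                   for c in (0, 1)])
    X_by_pos.append(X)
    y_by_pos.append(np.array([0] * 30 + [1] * 30))

means, cov = fit_pooled_lda(X_by_pos, y_by_pos, 2)
preds = predict_lda(np.vstack(X_by_pos), means, cov)
acc = float(np.mean(preds == np.concatenate(y_by_pos)))
```

Training on all positions at once is the simplest multi-position strategy surveyed in this review; the mixed-LDA result above suggests a more structured treatment of the per-position covariances can do better still.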
Article
Full-text available
This manuscript presents a hybrid study: a comprehensive review and a systematic (research) analysis. Myoelectric control is the cornerstone of many assistive technologies used in clinical practice, such as prosthetics and orthoses, and in human-computer interaction, such as virtual reality control. Although the classification accuracy of such devices exceeds 90% in a controlled laboratory setting, myoelectric devices still face challenges in robustness to the variability of daily living conditions. The intrinsic physiological mechanisms limiting practical implementations of myoelectric devices were explored: the limb position effect and the contraction intensity effect. The degradation of electromyography (EMG) pattern recognition in the presence of these factors was demonstrated on six datasets, where classification performance was 13% and 20% lower than in the controlled setting for the limb position and contraction intensity effects, respectively. The experimental designs of the limb position and contraction intensity literature were surveyed. Current state-of-the-art training strategies and robust algorithms for both effects were compiled and presented. Recommendations for future limb position effect studies include: a collection protocol providing exemplars of at least six positions (four limb positions and three forearm orientations), three-dimensional-space experimental designs, transfer learning approaches, and multi-modal sensor configurations. Recommendations for future contraction intensity effect studies include: the collection of dynamic contractions, nonlinear complexity features, and proportional control.