Algorithm Performance

Source publication
Conference Paper
Full-text available
Activity recognition using mobile phones has great potential in many applications, including mobile healthcare. To let a person easily check whether he is in strict compliance with the doctor's exercise prescription, and to adjust his exercise amount accordingly, we can use a smartphone-based activity reporting system to accurately recognize a...

Similar publications

Article
Surface nanostructuring can enhance surface properties such as strength, self-cleaning, anti-fog and anti-bacterial properties. Femtosecond laser-induced periodic surface structures (LIPSS) are nanoscale structures created with a laser technique. However, their quality is significantly influenced by the complicated interrelationship between the vario...

Citations

... Training DeepLabv3 on cityscapes enables semantic segmentation of panoramic images into 19 classes, with a specific emphasis on achieving sky segmentation accuracy of 85% intersection over union (IOU) in this study. Following that, a model-based transfer learning [38,39] method is utilized to transfer the training results to BSV. ...
Article
Full-text available
The Sky View Factor (SVF) stands as a critical metric for quantitatively assessing urban spatial morphology and its estimation method based on Street View Imagery (SVI) has gained significant attention in recent years. However, most existing Street View-based methods prove inefficient and constrained in SVI dataset collection. These approaches often fall short in capturing detailed visual areas of the sky, and do not meet the requirements for handling large areas. Therefore, an online method for the rapid estimation of a large area SVF using SVI is presented in this study. The approach has been integrated into a WebGIS tool called BMapSVF, which refines the extent of the visible sky and allows for instant estimation of the SVF at observation points. In this paper, an empirical case study is carried out in the street canyons of the Qinhuai District of Nanjing to illustrate the effectiveness of the method. To validate the accuracy of the refined SVF extraction method, we employ both the SVI method based on BMapSVF and the simulation method founded on 3D urban building models. The results demonstrate an acceptable level of refinement accuracy in the test area.
... Wang et al. [44] measured the distance between the activity data distributions of multiple humans to find the best domain for transfer tasks, utilizing CNN and long short-term memory (LSTM) layers to extract time-series and spatial features, and an MMD loss to align them as closely as possible. Zhao et al. [45] proposed a transfer-learning-embedded decision tree that integrates decision trees with the k-means algorithm for personalized HAR. Methods based on domain adaptation cannot deal with multi-source domains, and their generalization ability is poor in practical applications without a suitable ensemble paradigm, especially on unbalanced data sets. ...
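The MMD loss mentioned in this snippet can be sketched numerically. Below is a minimal NumPy illustration of the squared maximum mean discrepancy under an RBF kernel, the statistic such methods minimize to align source and target feature distributions; the function name and the sample data are ours, not from the cited papers.

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy between sample sets X and Y
    under the RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def gram(A, B):
        # Pairwise squared Euclidean distances, then the RBF kernel.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 3))       # source-domain features
tgt_near = rng.normal(0.1, 1.0, size=(200, 3))  # mildly shifted target
tgt_far = rng.normal(2.0, 1.0, size=(200, 3))   # strongly shifted target

# A larger distribution shift yields a larger MMD.
assert mmd_rbf(src, tgt_far) > mmd_rbf(src, tgt_near)
```

In practice this quantity is computed on learned feature embeddings and added to the training loss, so minimizing it pulls the two domains' feature distributions together.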
Article
Full-text available
Human activity recognition (HAR) aims to collect time series through wearable devices to precisely identify specific actions. However, the traditional HAR method ignores activity variances among individuals, which causes low generalization when the model is applied to a new individual and indirectly increases the difficulty of personalized HAR services. In this paper, we fully consider activity divergence among individuals to develop an end-to-end model, the multi-source unsupervised co-transfer network (MUCT), to provide personalized activity recognition for new individuals. We denote the collected data of different individuals as multiple domains and implement deep domain adaptation to align each pair of source and target domains. In addition, we propose a consistent filter that utilizes two heterogeneous classifiers to automatically select high-confidence instances from the target domain to jointly enhance the performance on the target task. The effectiveness and performance of our model are evaluated through comprehensive experiments on two activity recognition benchmarks and a private activity recognition dataset (collected by our signal sensors), where our model outperforms traditional transfer learning methods at HAR.
... (2) Feature-based transfer learning [29,30] transforms the data of two domains into the same feature space, reducing the distance between the features of the source domain and the target domain, such as domain adversarial networks [31]. (3) Model-based transfer learning [32] usually adds new layers or integrates new base learners to optimize the original model, such as incremental learning [16]. ...
Article
Full-text available
The building damage caused by natural disasters seriously threatens human security. Applying deep learning algorithms to identify collapsed buildings from remote sensing images is crucial for rapid post-disaster emergency response. However, the diversity of buildings, limited training dataset size, and lack of ground-truth samples after sudden disasters can significantly reduce the generalization of a pre-trained model for building damage identification when applied directly to non-preset locations. To address this challenge, a self-incremental learning framework (i.e., SELF) is proposed in this paper, which can quickly improve the generalization ability of the pre-trained model in disaster areas by self-training an incremental model using automatically selected samples from post-disaster images. The effectiveness of the proposed method is verified on the 2010 Yushu earthquake, 2023 Turkey earthquake, and other disaster types. The experimental results demonstrate that our approach outperforms state-of-the-art methods in terms of collapsed building identification, with an average increase of more than 6.4% in the Kappa coefficient. Furthermore, the entire process of the self-incremental learning method, including sample selection, incremental learning, and collapsed building identification, can be completed within 6 h after obtaining the post-disaster images. Therefore, the proposed method is effective for emergency response to natural disasters, which can quickly improve the application effect of the deep learning model to provide more accurate building damage results.
... A standard solution is to generalize information from well-known training individuals to the target using techniques from transfer learning and domain adaptation [12]. For instance, Zhao et al. [13] tackled cross-individual HAR using a transfer learning approach based on decision trees and k-means clustering. Similarly, Wang et al. [14] explored intra-class knowledge transfer and proposed a transfer learning model for cross-domain activity recognition tasks, while a previous study [15] focused on the problem of cross-dataset activity recognition using an approach that extracts both spatial and temporal features. ...
Article
Full-text available
Human activity recognition (HAR) plays a central role in ubiquitous computing applications such as health monitoring. In the real world, it is difficult for a HAR model to perform reliably and consistently over time across a population of individuals, due to cross-individual variation in human behavior. Existing transfer learning algorithms suffer from the challenge of “negative transfer”. Moreover, these strategies are entirely black-box. To tackle these issues, we propose X-WRAP (eXplain, Weight and Rank Activity Prediction), a simple but effective approach for cross-individual HAR which improves the performance, transparency, and ease of control for stakeholders in HAR. X-WRAP works by wrapping transfer learning into a meta-learning loop that identifies the approximately optimal source individuals. The candidate source domains are ranked using a linear scoring function based on interpretable meta-features capturing the properties of the source domains. X-WRAP is optimized using Bayesian optimization. Experiments conducted on a publicly available dataset show that the model can consistently improve the performance of transfer learning models. In addition, X-WRAP can provide interpretable analysis based on the meta-features, making it possible for stakeholders to gain a high-level understanding of selective transfer. Finally, an extensive empirical analysis demonstrates that the approach is promising in data-sparse situations.
... Transfer learning utilises the similarity between data, tasks or models to apply the models and knowledge learned in the source domain to the target domain (Zhuang et al., 2021). Zhao et al. (2011) proposed an algorithm named transfer learning embedded decision tree (TransEMDT) to complement the problem of cross-people activity recognition. Wang et al. (2018a) proposed a cross-domain learning framework, stratified transfer learning (STL), which utilised the similarity between classes to carry out knowledge transfer within classes. ...
Article
Full-text available
The loose wearing of wrist smartwatches or wristbands usually causes sensor displacement. Sensor displacement can change the distribution of the sensor data and thus deteriorate the performance of human activity recognition models. This paper proposes a model-based transfer learning framework to compensate for the decline in recognition accuracy caused by sensor displacement on the wrist. We construct two convolutional neural network (CNN) models for feature extraction in activity recognition and design three transfer scenarios to evaluate the framework. Experimental results demonstrate that recognition accuracy drops distinctly due to sensor displacement along the wrist. Our proposed CNN-based transfer learning effectively compensates for the decreased recognition accuracy and improves the models' robustness.
... Some existing works also address CDTSC based on conventional machine learning algorithms. For example, the transfer learning embedded decision tree (TransEMDT) [57] integrates the decision tree and the k-means clustering algorithm for activity recognition model adaptation. Pan et al. [32] propose transfer component analysis (TCA) to learn a low-dimensional latent feature space in an RKHS for cross-domain WiFi localization. ...
Article
Full-text available
Time series classification on edge devices has received considerable attention in recent years, and it is often conducted on the assumption that the training and testing data are drawn from the same distribution. However, in practical IoT applications, this assumption does not hold due to variations in installation positions, precision error, and sampling frequency of edge devices. To tackle this problem, in this paper, we propose a new SVM-based domain transfer method called subspace optimization transfer support vector machine (SOTSVM) for cross-domain time series classification. SOTSVM aims to learn a domain-invariant SVM classifier in which (1) global projected distribution alignment jointly exploits the marginal distribution discrepancy, geometric structure, and distribution scatter to reduce the global distribution discrepancy between the source and target domains; (2) feature grouping divides the features into highly transferable features (HTF) and lowly transferable features (LTF), where the importance of HTF is preserved and the importance of LTF is suppressed in training the domain-invariant classifier; and (3) empirical risk minimization is constructed to improve the discrimination of the SOTSVM. In this paper, we formulate a minimization problem that integrates global projected distribution alignment, feature grouping, and empirical risk minimization into a joint SVM framework, and give an effective optimization algorithm. Furthermore, we present an extension to multiple-kernel SOTSVM. Experimental results on three sets of cross-domain time series datasets show that our method outperforms several state-of-the-art conventional transfer learning methods as well as methods without transfer learning.
... Deng et al. [57] proposed a cross-person activity recognition method using a reduced kernel extreme learning machine on the source domain, which classifies the target samples and applies the high-confidence ones to the training dataset. Zhao et al. [58] introduced a transfer learning embedded decision tree algorithm that integrates a decision tree and the k-means clustering algorithm to recognize different personalized activities on mobile phones through model adaptation. Wang et al. [59] proposed a stratified transfer learning method that adopts the pseudo-labeling concept on the unlabeled target data by measuring the MMD between the feature spaces of the source domain data and the pseudo-labeled target data. ...
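The decision-tree-plus-k-means adaptation idea attributed to Zhao et al. can be sketched as follows: seed k-means on the unlabeled target data with per-class centroids taken from a source model, so that each converged cluster inherits a class label. This is a simplified NumPy illustration under our own naming, not the TransEMDT algorithm itself.

```python
import numpy as np

def kmeans_pseudo_labels(X_tgt, class_centroids, n_iter=10):
    """Run k-means on unlabeled target data, seeded with per-class
    centroids from a source model, so each cluster inherits the class
    of the centroid it was seeded from (a model-adaptation sketch)."""
    centers = class_centroids.astype(float).copy()
    labels = np.zeros(len(X_tgt), dtype=int)
    for _ in range(n_iter):
        # Assign every target sample to its nearest centroid.
        d = np.linalg.norm(X_tgt[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned samples.
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = X_tgt[labels == k].mean(axis=0)
    return labels

X_tgt = np.array([[0.2, 0.1], [0.1, 0.3], [5.2, 4.9], [4.8, 5.1]])
class_centroids = np.array([[0.0, 0.0], [5.0, 5.0]])  # from the source model
labels = kmeans_pseudo_labels(X_tgt, class_centroids)
assert labels.tolist() == [0, 0, 1, 1]
```

The pseudo-labeled target samples can then be used to retrain or adapt the source classifier to the new person.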
Article
Wearable sensor-based human activity recognition (HAR) has emerged as a principal research area and is utilized in a variety of applications. Recently, deep learning-based methods have achieved significant improvement in the HAR field with the development of human–computer interaction applications. However, most of them are limited to operating in a local neighborhood within a standard convolutional neural network, and correlations between sensors at different body positions are ignored. In addition, even though several recent works consider the correlations between different sensor positions, they still face significant performance degradation due to large gaps between the distributions of training and test data, and behavioral differences between subjects. In this work, we propose a novel Transformer-based Adversarial learning framework for human activity recognition using wearable sensors via Self-KnowledgE Distillation (TASKED), which accounts for individual sensor orientations and spatial and temporal features. The proposed method is capable of learning cross-domain embedding feature representations from multiple subjects' datasets using adversarial learning and maximum mean discrepancy (MMD) regularization to align the data distributions over multiple domains. In the proposed method, we adopt teacher-free self-knowledge distillation to improve the stability of the training procedure and the performance of human activity recognition. Experimental results show that TASKED not only outperforms state-of-the-art methods on four real-world public HAR datasets (alone or combined) but also improves subject generalization effectively.
... Deng et al. [55] proposed a cross-person activity recognition method using a reduced kernel extreme learning machine on the source domain, which classifies the target samples and applies the high-confidence ones to the training dataset. Zhao et al. [56] introduced a transfer learning embedded decision tree algorithm that integrates a decision tree and the k-means clustering algorithm to recognize different personalized activities on mobile phones through model adaptation. Wang et al. [57] proposed a stratified transfer learning method that adopts the pseudo-labeling concept on the unlabeled target data by measuring the MMD between the feature spaces of the source domain data and the pseudo-labeled target data. ...
Preprint
Full-text available
Wearable sensor-based human activity recognition (HAR) has emerged as a principal research area and is utilized in a variety of applications. Recently, deep learning-based methods have achieved significant improvement in the HAR field with the development of human-computer interaction applications. However, they are limited to operating in a local neighborhood within a standard convolutional neural network, and correlations between sensors at different body positions are ignored. In addition, they still face significant performance degradation due to large gaps between the distributions of training and test data, and behavioral differences between subjects. In this work, we propose a novel Transformer-based Adversarial learning framework for human activity recognition using wearable sensors via Self-KnowledgE Distillation (TASKED), which accounts for individual sensor orientations and spatial and temporal features. The proposed method is capable of learning cross-domain embedding feature representations from multiple subjects' datasets using adversarial learning and maximum mean discrepancy (MMD) regularization to align the data distributions over multiple domains. In the proposed method, we adopt teacher-free self-knowledge distillation to improve the stability of the training procedure and the performance of human activity recognition. Experimental results show that TASKED not only outperforms state-of-the-art methods on four real-world public HAR datasets (alone or combined) but also improves subject generalization effectively.
... Although such feature-level UDA can usually achieve better results, its efficacy greatly depends on the choice of the representation space. As for model-level UDA [23][24][25][26], it achieves knowledge adaptation through the source model parameters. Although this kind of UDA can distill the source knowledge to the target domain, the data distribution priors are usually ignored. ...
Article
Full-text available
As an emerging research topic in the field of machine learning, unsupervised domain adaptation (UDA) aims to transfer prior knowledge from the source domain to help train the unsupervised target domain model. Although a variety of UDA works have been proposed, they mainly concentrate on scenarios from one source to one target (1S1T) or from multiple sources to one target domain (mS1T); work on UDA from one source to multiple targets (1SmT) is rare and mainly designed for ordinary problems. When confronted with ordinal 1SmT tasks, where there exists an order relationship among the data labels, the existing methods degenerate in performance since the label relationships are not preserved. In this article, we propose an ordinal 1SmT UDA model that transfers both explicit and implicit knowledge from the supervised source and unsupervised target domains, respectively, via distribution alignment and dictionary transmission. We also design an efficient algorithm to solve the model and evaluate its convergence and complexity. Finally, the effectiveness of the proposed method is evaluated with extensive experiments.
... Transfer learning aims to relax the assumption in traditional machine learning that the training data and testing data should have an identical probability distribution [27]. It has achieved great success in many areas, such as Wi-Fi localization [28], natural language processing [29], face recognition [30], and human activity recognition [31]. The enlightening works of [32,33] indicate that many factors (e.g., user habit, wearing position, and equipment fault) tend to influence the distribution of data in behavior and gesture recognition. ...
Article
Full-text available
During the last few years, significant attention has been paid to surface electromyographic (sEMG) signal-based gesture recognition. Nevertheless, the sEMG signal is sensitive to various user-dependent factors, like skin impedance and muscle strength, which makes existing gesture recognition models unsuitable for new users and causes a large drop in precision. Therefore, we propose a dual-layer transfer learning framework, named dualTL, to realize user-independent gesture recognition based on the sEMG signal. DualTL is composed of two layers. The first layer of dualTL leverages the correlations of sEMG signals among different users to label some gestures from new users with high confidence. Then, according to the consistency of sEMG signals from the same users, the remaining gestures are labeled in the second layer. We compare our method with three universal machine learning methods, seven representative transfer learning methods, and two deep learning-based sEMG gesture recognition methods. Experimental results show that the average recognition accuracy of dualTL is 80.17%. Compared with SMO, KNN, RF, PCA, TCA, STL, and CWT, performance improves by approximately 24.26%.
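The two-layer labeling scheme described above can be sketched: given the high-confidence labels produced by a first layer, a second layer labels each remaining sample of the same user by its nearest labeled neighbor. This is a simplified NumPy illustration under our own naming, not the dualTL implementation.

```python
import numpy as np

def second_layer_labels(X, labels, confident):
    """Propagate first-layer high-confidence labels to the remaining
    samples (of the same user) via nearest labeled neighbor."""
    out = labels.copy()
    anchor = np.flatnonzero(confident)          # indices labeled in layer 1
    for i in np.flatnonzero(~confident):
        # Distance from the unlabeled sample to every labeled anchor.
        d = np.linalg.norm(X[anchor] - X[i], axis=1)
        out[i] = labels[anchor[d.argmin()]]
    return out

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
labels = np.array([0, -1, 1, -1])               # -1: not yet labeled
confident = np.array([True, False, True, False])
assert second_layer_labels(X, labels, confident).tolist() == [0, 0, 1, 1]
```

Once every sample carries a label, a user-specific recognizer can be trained on the fully labeled set.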