Fig. 1. High- and low-pass Butterworth filters [6]
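As context for the filters shown in Fig. 1, here is a minimal sketch of how high- and low-pass Butterworth filters are typically applied to a smartphone accelerometer signal in HAR preprocessing; the 50 Hz sampling rate, 0.3 Hz cutoff, and filter order are common illustrative choices, not values taken from the figure or the paper.

```python
# Minimal sketch: separating the gravity (low-frequency) and body-motion
# (high-frequency) components of a 50 Hz accelerometer axis with
# Butterworth filters. Cutoff and order are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50.0      # sampling rate (Hz), typical for smartphone HAR datasets
cutoff = 0.3   # cutoff frequency (Hz) separating gravity from body motion
order = 3      # filter order

b_lo, a_lo = butter(order, cutoff, btype="low", fs=fs)    # low-pass -> gravity
b_hi, a_hi = butter(order, cutoff, btype="high", fs=fs)   # high-pass -> body acceleration

acc = np.random.randn(1000)          # placeholder for one accelerometer axis
gravity = filtfilt(b_lo, a_lo, acc)  # zero-phase filtering avoids phase lag
body = filtfilt(b_hi, a_hi, acc)
```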


Source publication
Conference Paper
Full-text available
Human Activity Recognition (HAR) is the classification of a person's activity using responsive sensors that are affected by human movement. Both the number of smartphone users and the capabilities (sensors) of smartphones are increasing, and users usually carry their smartphones with them. These facts make HAR more important and popular. This work focuses on recognition of human activity usin...

Contexts in source publication

Context 1
... is Aggregation (Bagging). This method requires the training data to be divided into subgroups and distributed to the classifiers of the ensemble structure [11]. Aggregation is usually used to obtain more decisive results from sensitive learning algorithms such as decision trees [12]. Using aggregation, a successful classification ratio of 98.1% is achieved (see Fig. 10). The third ensemble approach covered in this work is stacking. Unlike other ensemble classifiers, stacking always has two training phases. The training data is divided and distributed to the first-phase classifiers, and a classifier in the second phase is trained on the outputs of the first-phase classifiers, using them as newly generated features. ...
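As a rough illustration of the bagging idea described in this context, the following sketch trains an ensemble of decision trees on bootstrap subsets of the data; the base learner, ensemble size, and the stand-in dataset are assumptions for illustration, not the exact configuration used in the paper.

```python
# Minimal bagging sketch: bootstrap subsets of the training data are
# distributed to decision trees, and their votes are combined.
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris   # stand-in for the HAR feature matrix

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=30,
                            max_samples=0.8, bootstrap=True, random_state=0)
bagging.fit(X_tr, y_tr)
print("bagging accuracy:", bagging.score(X_te, y_te))
```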
Context 2
... have two training phases. The training data is divided and distributed to the first-phase classifiers, and a classifier in the second phase is trained on the outputs of the first-phase classifiers, using them as newly generated features. With a stacking classifier consisting of 30 k-NN classifiers, 98.6% of the tuples' classes were successfully predicted (see Fig. 12). The results are summarized in Table 1 below. While SVM is the most precise approach tested in this work, as seen in Table 1, most of the methods produce effective models. Bayat et al. achieved a 91.15% successful classification rate using accelerometer data. Anguita et al. used the dataset in this work and achieved a 96% true positive rate using a multi-class SVM. ...
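A minimal sketch of the two-phase stacking scheme described above: the context reports 30 k-NN base classifiers, while the choice of meta-learner, the varying k values, and the stand-in dataset are illustrative assumptions rather than the paper's exact setup.

```python
# Minimal stacking sketch: first-phase k-NN classifiers produce predictions
# that serve as new features for a second-phase (meta) classifier.
from sklearn.ensemble import StackingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris   # stand-in for the HAR feature matrix

X, y = load_iris(return_X_y=True)

# 30 k-NN base classifiers with varying k, loosely mirroring the text
base_learners = [(f"knn_{k}", KNeighborsClassifier(n_neighbors=k))
                 for k in range(1, 31)]

stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000),
                           cv=5)   # internal CV builds the phase-two training features
stack.fit(X, y)
print("training accuracy:", stack.score(X, y))
```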

Similar publications

Article
Full-text available
Human activity recognition is a key to a lot of applications such as healthcare and smart home. In this study, we provide a comprehensive survey on recent advances and challenges in human activity recognition (HAR) with deep learning. Although there are many surveys on HAR, they focused mainly on the taxonomy of HAR and reviewed the state-of-the-art...
Conference Paper
Full-text available
This paper describes a deep neural network -- hidden Markov model (DNN-HMM) human activity recognition system based on instrumented objects and studies compensation strategies to deal with object variability. The sensors, comprising an accelerometer, gyroscope, magnetometer and force-sensitive resistors (FSRs), are packaged in a coaster attached to...
Article
Full-text available
Continuous Human Activity Recognition (HAR) in arbitrary directions is investigated in this paper using a network of five spatially distributed pulsed Ultra‐Wideband radars. While activities performed continuously and in unconstrained trajectories provide a more realistic and natural scenario for HAR, the network of radar sensors is proposed to add...
Preprint
Full-text available
The vast proliferation of sensor devices and Internet of Things enables the applications of sensor-based activity recognition. However, there exist substantial challenges that could influence the performance of the recognition system in practical scenarios. Recently, as deep learning has demonstrated its effectiveness in many areas, plenty of deep...
Article
Full-text available
In this paper, human activity recognition (HAR) attempts to recognize activities of an object in a multistory building from data retrieved via smartphone-based sensors (SBS). Most publications based on machine learning (ML) report the development of a suitable architecture to improve the classification accuracy by increasing the parameters of the a...

Citations

... Notably, Noor et al. (2017) [21] differentiated between transitional and non-transitional activities. In contrast, Bulbul et al. (2018) [22] recognized six activities: walking, climbing up and down stairs, sitting, standing, and lying down. Zainudin et al. (2018) [23] combined one-versus-all (OVA) models with a self-adaptive algorithm to select features for sports practice analysis. ...
Preprint
Artificial Intelligence (AI) has found application in Human Activity Recognition (HAR) in competitive sports. To date, most Machine Learning (ML) approaches for HAR have relied on offline (batch) training, imposing higher computational and tagging burdens compared to online processing unsupervised approaches. Additionally, the decisions behind traditional ML predictors are opaque and require human interpretation. In this work, we apply an online processing unsupervised clustering approach based on low-cost wearable Inertial Measurement Units (IMUs). The outcomes generated by the system allow for the automatic expansion of limited tagging available (e.g., by referees) within those clusters, producing pertinent information for the explainable classification stage. Specifically, our work focuses on achieving automatic explainability for predictions related to athletes' activities, distinguishing between correct, incorrect, and cheating practices in Nordic Walking. The proposed solution achieved performance metrics of close to 100 % on average.
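The snippet above does not spell out the clustering algorithm used; as a generic, hedged illustration of online (incremental) unsupervised clustering of IMU feature windows, one could use something like scikit-learn's MiniBatchKMeans, whose partial_fit consumes data in streaming batches. The feature dimension, number of clusters, and data source below are assumptions, and this is not the preprint's method.

```python
# Hedged sketch of online clustering of streaming IMU feature windows.
# Generic incremental-clustering example, not the cited preprint's algorithm.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

n_features = 12                  # assumed size of a per-window IMU feature vector
clusterer = MiniBatchKMeans(n_clusters=4, random_state=0)

for _ in range(100):             # pretend stream of feature batches
    batch = np.random.randn(32, n_features)   # placeholder for real IMU features
    clusterer.partial_fit(batch)               # update cluster centers incrementally

# A few referee-provided labels could then be propagated within each cluster.
labels = clusterer.predict(np.random.randn(5, n_features))
print(labels)
```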
... The UCI dataset was collected from the open-source data repository of UCI and is described in [36]. The dataset contains samples from 30 individuals whose daily living tasks were recorded using a waist-mounted smartphone with embedded inertial sensors. ...
Article
Full-text available
Human activity recognition (HAR) is an essential part of many applications, including smart surroundings, sports analysis, and healthcare. Accurately categorizing intricate actions from sensor data is still difficult, though. This method infers an individual’s actions using a variety of sensor data, including magnetometers, gyroscopes, and accelerometers. This work suggests a unique method for classifying human activities by combining a non-linear multi-task least squares twin support vector machine (NMtLSSVM) with particle swarm optimization (PSO) for feature optimization. Utilizing the advantages of both approaches, the suggested strategy achieves excellent resilience and accuracy in activity identification. The suggested method reduces dimensionality and computing costs while maintaining pertinent information by using PSO to extract the most important features from the raw sensor data. On the other hand, NMtLSSVM is used to construct a multi-task learning framework that learns several related tasks at the same time and shares information among them. As a result, generalization and resilience can be enhanced beyond single-task models. Two publicly available datasets, WISDM and UCI-HAR, were used to assess the suggested method. The testing findings show that the suggested approach works noticeably better than the most advanced techniques already in use. Across all activities, the method achieves an average classification accuracy of 97.8% (UCI-HAR dataset) and 98.5% (WISDM dataset). Furthermore, the PSO-based feature optimization decreased the feature count by 60% without sacrificing efficiency.
... 1) DT: With six activity classes, the resulting decision tree should yield six distinct leaf nodes. In the evaluation of the classifier's efficacy, the Maximum Number of Splits (MNOS) and the split criterion are of significant importance, as emphasized in a prior study [15]. The classifier was evaluated using three different split criteria: Gini's diversity index, maximum deviance reduction, and the twoing rule. ...
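As a rough scikit-learn analogue of the split-criterion comparison described in this context: scikit-learn offers Gini impurity and entropy (the latter corresponding to maximum deviance reduction), while the twoing rule has no direct scikit-learn equivalent; the leaf-node cap, stand-in data, and every other setting below are assumptions.

```python
# Hedged sketch: comparing decision-tree split criteria, roughly mirroring
# the Gini vs. deviance-reduction comparison described above.
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_iris   # stand-in for the six-class HAR features

X, y = load_iris(return_X_y=True)

for criterion in ("gini", "entropy"):    # 'entropy' ~ maximum deviance reduction
    tree = DecisionTreeClassifier(criterion=criterion,
                                  max_leaf_nodes=6,   # cap the splits, cf. the MNOS setting
                                  random_state=0)
    scores = cross_val_score(tree, X, y, cv=5)
    print(criterion, scores.mean())
```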
... Finally, data on three-axial acceleration and three-axial angular velocity were recorded at a fixed rate of 50 Hz. These recordings are further processed to produce the 561-dimensional feature vectors in the benchmark dataset [50]. Statistics indicate that there are 10,299 samples in this collection. ...
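The fixed-width sliding-window segmentation behind such benchmark samples can be sketched as follows; the 2.56 s window with 50% overlap (128 samples at 50 Hz) follows the commonly reported UCI-HAR preprocessing and is stated here as an assumption rather than taken from the context above.

```python
# Sketch of sliding-window segmentation: 50 Hz signals cut into
# 128-sample windows with 50% overlap before feature extraction.
import numpy as np

def sliding_windows(signal, window=128, step=64):
    """Return an array of overlapping windows from a 1-D signal."""
    n = (len(signal) - window) // step + 1
    return np.stack([signal[i * step: i * step + window] for i in range(n)])

acc_x = np.random.randn(50 * 60)      # placeholder: one minute of 50 Hz data
windows = sliding_windows(acc_x)
print(windows.shape)                   # (number_of_windows, 128)
```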
Article
Full-text available
Human activity recognition is the process of identifying a person’s activities accurately. It is made possible by placing sensors on the subject’s body and obtaining data from a variety of high-dimensional physiological signals. Recently, sensors like accelerometers and gyroscopes have been incorporated directly into wearable devices, making activity identification fairly straightforward. High dimensionality makes it necessary to employ an optimization method that can reduce the number of features used in the dataset while taking less time, in order to make activity recognition successful in wearable devices with limited battery life. In this study, we propose the autoencoder reduction method (AERed), a dimensionality reduction technique based on the symmetric design of a typical autoencoder. With the new structure, there are fewer weights that must be tweaked, which lowers the computational cost. This work makes use of a public-domain dataset from the UCI repository. The dataset’s features were reduced from 561 to 256 using the AERed method. The reduced features are classified using a random Bayesian filter Support Vector Machine classifier, achieving a 95.95% F1-score. This method also consumed far less time than baseline methods at the dimensionality reduction stage. The proposed method is validated by performing parameter sensitivity analysis, complexity analysis, and visualization performance analysis.
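As one hedged reading of the "symmetric design with fewer weights" idea, a tied-weight autoencoder that reuses the encoder matrix in the decoder can be sketched as below; this is an interpretation, not the AERed architecture, and only the 561 → 256 reduction follows the abstract.

```python
# Hedged sketch of a tied-weight autoencoder for 561 -> 256 dimensionality
# reduction; the decoder reuses the transposed encoder weights, so roughly
# half the usual weights need training. Not the paper's exact AERed model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedAutoencoder(nn.Module):
    def __init__(self, n_in=561, n_latent=256):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_latent)
        self.decoder_bias = nn.Parameter(torch.zeros(n_in))

    def forward(self, x):
        z = torch.relu(self.encoder(x))                                   # 561 -> 256
        x_hat = F.linear(z, self.encoder.weight.t(), self.decoder_bias)   # 256 -> 561, tied weights
        return x_hat, z

model = TiedAutoencoder()
x = torch.randn(8, 561)              # placeholder batch of HAR feature vectors
x_hat, z = model(x)
loss = F.mse_loss(x_hat, x)          # reconstruction objective
print(z.shape, loss.item())
```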
... Three publicly available datasets, frequently used in sensor-based HAR research, are utilized for the experiments on the NCD task: WISDM [48], UCI-HAR [49], and USC-HAD [50]. ...
... The UCI-HAR dataset [49] comprises data records derived from nine sensor channels of the accelerometer and gyroscope sensors. These sensors were sampled at a frequency of 50 Hz. ...
Article
Full-text available
Human Activity Recognition (HAR) systems have made significant progress in recognizing and classifying human activities using sensor data from a variety of sensors. Nevertheless, they have struggled to automatically discover novel activity classes within massive amounts of unlabeled sensor data without external supervision. This restricts their ability to classify new activities in unlabeled sensor data in real-world deployments where fully supervised settings are not applicable. To address this limitation, this paper presents the Novel Class Discovery (NCD) problem, which aims to classify new class activities in unlabeled sensor data by fully utilizing existing activities in labeled data. To address this problem, we propose a new end-to-end framework called More Reliable Neighborhood Contrastive Learning (MRNCL), which is a variant of the Neighborhood Contrastive Learning (NCL) framework commonly used in the visual domain. Compared to NCL, our proposed MRNCL framework is more lightweight and introduces an effective similarity measure that can find more reliable k-nearest neighbors of an unlabeled query sample in the embedding space. These neighbors contribute to contrastive learning to facilitate the model. Extensive experiments on three public sensor datasets demonstrate that the proposed model outperforms existing methods on the NCD task in sensor-based HAR; in particular, our model achieves better clustering performance on instances of new activity classes.
... Over the last few years, several smartphone sensor-based HAR models [8,13,54,67,73] have been developed. These models can be classified into two types: shallow and deep learning methods. ...
Article
Full-text available
Human activity recognition by means of smartphone-equipped sensors has gotten a lot of interest in recent times because of its large variety of applications. In this regard, this study provides a comprehensive comparative analysis of shallow and deep learning models for smartphone-based HAR over highly granular daily human activities. Moreover, a robust architecture for smartphone-based HAR is also provided, with stages ranging from data collection to data modelling. A total of seven best-performing HAR models, namely Decision Tree (DT), Random Forest (RF), Deep Neural Networks (DNN), Support Vector Machines (SVM), K-Nearest Neighbors (KNN), Gradient Boosting (GB), and Convolutional Neural Networks (CNN), are investigated. This research work is based on a real-world dataset of 95,690 data samples collected from the smartphone sensors of 18 different subjects. The comparative study reveals that three models, namely DNN, RF, and GB, mostly dominated the other models in terms of five performance metrics, namely accuracy, recall, precision, F1-score, and AUC value.
... al. presented a novel convolutional neural network that has dense connections between inception-like modules [2]. They evaluated the model on the SMART-PHONE dataset [25], the WISDM Activity Prediction dataset [26], and the WISDM Actitracker dataset [27]. ...
Conference Paper
Full-text available
Human Activity Recognition (HAR) is a well-known area of study in the Internet of Medical and Health Things. The goal of HAR is to monitor complex, subtle, and postural human behaviours in the domains of Ambient Assisted Living (AAL), injury prevention, well-being management, medical diagnostics, and, in particular, geriatric care. The use of inertial sensors in smart devices for HAR is becoming more common, as it eliminates the constraints of traditional computer vision techniques. The use of artificial neural networks improves classification, but their greater complexity makes them harder to deploy near the edge, where latency is reduced. This article presents a deep learning model that is lightweight in terms of trainable parameters and hence enables Edge-AI. The model, named EdgeHARNet, was assessed on the WISDM Activity Prediction and WISDM Actitracker datasets. The presented model has only 2031 trainable parameters. It achieved 94.036% average accuracy on the WISDM Activity Prediction dataset and 74.06% average accuracy on the WISDM Actitracker dataset. F1-scores have also been reported for all individual classes. A performance comparison is carried out with existing models. We have also performed inference on a Raspberry Pi 4, which is a single-board computer.
... Furthermore, the capabilities and type of the device have a significant impact on categorization times. The most accurate methodology used in the paper [6] to which we refer is SVM, which is also the method used in our work. The dataset used in the paper [6] consists of data generated by the smartphone's accelerometer and gyroscope sensors. ...
... The most accurate methodology used in the paper [6] to which we refer is SVM, which is also the method used in our work. The dataset used in the paper [6] consists of data generated by the smartphone's accelerometer and gyroscope sensors. ...
Article
Full-text available
The main focus here is to generate a model to visualize the activities of a human in a way that helps safeguard human life. Machine learning techniques are used in these applications to classify signals collected by various types of sensors. Indeed, this sector frequently necessitates dealing with high-dimensional, multimodal streams of data that are characterized by large variability. Activity recognition is a method of identifying a person’s activities based on observations of that individual and his or her surroundings. Data obtained from many sources, such as ambient or body-worn sensors, can be used to perform recognition. Six categories, namely sitting, standing, walking, climbing up, climbing down, and lying, are used to group the actions into a dataset (Bulbul in Mach Learn Comput Sci, 2018). We offer a study of a method for identifying activities, such as walking up stairs or standing, using data from a gyroscope and accelerometer. A depiction of the data informs the analysis. The differences in error rates across different classification systems are investigated.
... Boosting classifiers such as AdaBoost and GBM were reported by Walse et al. (2016). Three ensemble classifiers based on bagging, boosting, and stacking were examined for recognizing human activity using smartphones (Bulbul et al., 2018). ...
... There are always two training phases for the stacking ensemble classifier. The training data is divided and distributed to the first-phase classifiers, and the output of the first-phase classifiers is used to train a second-phase classifier, which uses it as newly generated features (Bulbul et al., 2018). With the stacking method, any machine learning model can be used to learn how to optimally combine the predictions of the contributing members. ...
Book
Full-text available
The proceedings of the International Conference on Social and Applied Sciences (ICSAS2022), "Sustainable Development with Ethical Practices and Smart Technologies"
... Thirty percent of the data was used for testing, and seventy percent for training. TP, FP, TN, FN, precision, recall, F1-score, and classification accuracy are among the metrics used to evaluate the various methods [15]. ...
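A minimal sketch of this evaluation protocol (70/30 train/test split plus the standard classification metrics); the classifier and data below are placeholders rather than those of the cited work.

```python
# Sketch: 70/30 train/test split and the usual classification metrics;
# the confusion matrix yields the per-class TP/FP/TN/FN counts.
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_recall_fscore_support,
                             confusion_matrix)
from sklearn.svm import SVC
from sklearn.datasets import load_iris   # placeholder data

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                          stratify=y)

clf = SVC().fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

print("confusion matrix:\n", confusion_matrix(y_te, y_pred))
precision, recall, f1, _ = precision_recall_fscore_support(y_te, y_pred,
                                                           average="macro")
print("accuracy:", accuracy_score(y_te, y_pred),
      "precision:", precision, "recall:", recall, "F1:", f1)
```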
Article
Full-text available
The idea of ambient assisted living has gained support and adoption owing to the rapid advancement of wireless sensor networks and ongoing improvements in scientific solutions based on artificial intelligence. This is a result of its broad use in healthcare and smart homes. Because it enhances quality of life, the idea of human activity recognition (HAR) and classification has caught the interest of many studies in this respect. Before putting this idea into practice, though, it must first be tested using benchmark data sets to analyse how well it performs in real-world circumstances. Activity classification techniques are used in this continuation of earlier work to increase its accuracy. These algorithms can serve as a reference point for evaluating the effectiveness of others. Recent developments in sophisticated technologies have simplified the routine collection and storage of IoT sensor data that may be used to support decision-making. However, in the majority of nations there is an urgent need to gather and arrange patient data in electronic form. The collected data can then be examined for diagnosis, prognosis, and potential therapies depending on the patient's eligibility. In this study, human activity is predicted using the WISDM Smartphone and Smartwatch Activity and Biometrics Dataset. The study offers pre-trained machine learning models for various human activities. Q-SVM (Quadratic Support Vector Machine), L-SVM (Linear Support Vector Machine), LDA (Linear Discriminant Analysis), and PNN (Probabilistic Neural Network) classification algorithms are then used to classify human activities such as sitting, standing, walking, sitting down, and standing up. Furthermore, we compare the obtained results with counterpart algorithms in order to demonstrate their effectiveness.