A Framework for Daily Living Activity Recognition
using Fusion of Smartphone Inertial Sensors Data
Sheharyar Khan1, Syed M. Adnan Shah1, Sadam Hussain Noorani1, Aamir Arsalan2, M. Ehatisham-ul-Haq3,
Aasim Raheel1, Wakeel Ahmed1
1University of Engineering and Technology, Taxila, Pakistan
2Fatima Jinnah Women University, Rawalpindi
3Air University, Islamabad
Abstract—Recent years have seen rapid advancements in the
human activity recognition field using data from smart sensor
devices. A wide variety of real-world applications can be found in
different domains, particularly health and security. Smartphones
are common devices that let people do a wide range of everyday
tasks anytime, anywhere. The sensors and networking capabilities
found in modern smartphones enable context awareness for a
wide range of applications. This research mainly focuses on
recognizing human activities in the wild for which we selected
an in-the-wild extra-sensory dataset. Six human activities i.e.,
lying down, sitting, standing, running, walking, and bicycling
are selected. Time domain features are extracted and human
activity recognition is performed using three different machine
learning classifiers i.e., random forest, k-nearest neighbors, and
decision trees. The proposed human activity recognition scheme
resulted in the highest classification accuracy of 89.98%, using
the random forest classifier. Our proposed scheme outperforms
the state-of-the-art human activity recognition schemes in the
wild.
Index Terms—Human Activity, Machine Learning, Smartphone Sensor, Out-of-lab, Random Forest
I. INTRODUCTION
Recognizing human activities and analyzing population behavior are basic elements of modern civilization. Some of
the most prevalent applications requiring the recognition of hu-
man activities include public place security [1], mass surveil-
lance [2], medical aid [3], and lifestyle [4]. In recent years,
a substantial amount of focus has been devoted to developing
methods for recognizing daily living activities from inertial
sensors [5]. This is mostly due to two factors: the reduced price
of hardware and the widespread availability of mobile devices
with inertial sensors. According to statistics, smartphones are
rapidly overtaking other communication tools as the most
widely used platforms for daily interaction [6], thus people
tend to store sensitive, critical, private, and confidential data
on their phones [7]. On the other hand, data leaks and stolen
devices have emerged as critical concerns for smartphone users
[8] and human activity recognition systems can play a vital
role in the prevention of such device theft. HAR can also be used to monitor patient activities through sensors attached to their clothing. Moreover, sedentary behavior associated with
cardiovascular risk can be measured quantitatively by human
activity recognition [9]. Similarly, monitoring driver activity
for safe traveling is one of the important applications of human
activity recognition [10].
Different kinds of wearable (placed on the human body) and
non-wearable (smartphone and ambient) sensors have been
employed for human activity recognition [11, 12]. Ambient
sensors based on the internet of things, such as vibration,
pressure, and infrared sensors can monitor human activity in
indoor spaces, such as smart houses for assisted living [13].
However, these sensors are incapable of monitoring human
activity outside of the designated area. In contrast to this,
motion sensors, both wearable body and smartphone sensors,
allow ubiquitous monitoring of human activities for a variety
of applications [14]. On-body sensors are especially advantageous since they may be worn or positioned at numerous body locations for robustness. However, they also become a
source of discomfort for the user and may induce irrational
behavior, obstructing the goal of recognizing the individual’s
human activities.
Human activity recognition in laboratory settings has been
the focus of a wide range of studies available in the literature
[15, 16] but recognition of human activities in the out-of-
lab environment poses serious challenges and is still under-
examined as compared to the controlled environment settings.
Herein, a novel framework is presented in which the fusion of data from two sensors, i.e., the accelerometer and gyroscope, is used for recognizing human activities in an in-the-wild setting. ExtraSensory, a public-domain in-the-wild dataset (containing data from heterogeneous sensors on smartphones), is used in
the proposed scheme [17]. Six daily living activities are se-
lected from the dataset for primary human activity recognition,
which include sitting, standing, lying down, running, bicycling, and walking, as shown in Fig. 1. The proposed framework
presents the following noteworthy contributions.
• Identification of daily life activities by blending the features obtained from smartphone inertial sensors in the out-of-lab environment.
• Performance comparison of the proposed human activity recognition framework with state-of-the-art human activity recognition schemes available in the recent literature.
II. RELATED WORK
A wide range of literature is available on human activity
recognition, which can be divided into two categories, i.e., recognition of human activities in a controlled/lab environment and in in-the-wild settings.

Fig. 1. Primary human activities used in the study

In the majority of available studies, data from
the subjects are collected in a confined or controlled setting,
via scripted tasks, either by attaching sensors to the body or by
using smartphone inertial sensors. Attaching the sensor to body
parts creates discomfort and changes the individual’s behavior
during data collection. Hence, smartphones are now widely utilized for data acquisition because of their ubiquitous nature and the fact that they have become an integral part of daily life.
A study presented in [18] proposed a technique for activity
recognition of Parkinson’s disease patients. The convolutional
neural network (CNN) model was trained on the inertial sensor
data of the Physical Activity Monitoring (PAMAP2) dataset of healthy subjects and tested on the MHealth dataset
and Parkinson’s disease patient data. The model achieved an
accuracy of 84.43% by using data augmentation. The authors of [19] proposed a real-time human activity recognition framework by combining manual feature extraction with a CNN architecture, achieving an accuracy of 94.18%. Another study
presented in [20] proposed a hybrid deep learning model
by combining a CNN with long short-term memory (LSTM)
architecture. The suggested model achieved an accuracy of 99.93% on the H-Activity dataset, 98.76% on the MHEALTH dataset, and 93.11% on the UCI-HAR dataset. Authors in
[21] present a deep residual learning strategy that combines an LSTM-CNN architecture with residual modeling. The proposed model was
evaluated using the widely available UCI-HAR dataset with
an achieved accuracy of 99.09%. A deep learning-based
scheme for recognizing six patient activities using smartphone
accelerometer data in a synthetic hospital environment is
presented in [22] and an accuracy of 94.52% is reported.
Another human activity recognition study using an accelerom-
eter sensor, wavelet energy spectrum features, and an ensemble feature selection method is presented in [23]. Motion data
obtained from the inertial sensors for the six physical activities
performed in the controlled environment are recognized using
LSTM architecture in a study conducted in [24] with a reported
accuracy of 97.89%. All the above-mentioned studies have used data acquired in a controlled/lab environment with a scripted acquisition procedure, hence the higher classification accuracy values.
A human activity recognition study conducted in [25] used
smartphone and wristband wearable inertial sensors to acquire
data from 10 subjects. The ExtraSensory dataset was employed for training, whereas the validation of the proposed scheme was performed on this newly acquired dataset, achieving an accuracy of 71.73%. Another study conducted
in [26] presents the results for human activity recognition in-
the-wild settings using data from a smartphone accelerome-
ter. Six different activities are classified and an accuracy of
87.1% is achieved using the random forest (RF) classifier.
Another study to segregate different daily life activities in an
unrestricted environment is presented in [27] with a reported
accuracy of 88.4% for five activities using RF classifier.
Authors in [28] proposed a hybrid LSTM and CNN-based
human activity recognition model in which smartwatch data
from the extrasensory dataset is utilized to train the model on
young individuals and recognize the activities of older adults.
The proposed scheme achieved an accuracy of 51% for older
adults. In another study proposed in [29], CNN architecture
is used to train the model and the performance is evaluated
on the UCI-HAR, ExtraSensory, and DCase datasets, achieving accuracies of 91.98%, 67.51%, and 92.30%, respectively.
Another daily life activity recognition scheme using data from
the inertial sensors in an out-of-lab environment is presented
in [30]. An accuracy of 82.90% is achieved for six daily life activities using data from 60 subjects. A real-time human
activity recognition framework using heterogeneous inertial
sensors is presented in [31]. An accuracy of 83.00% is
achieved for the classification of six different human activities.
It can be observed from these results that the UCI-HAR and DCase datasets, where the subjects perform a scripted set of activities, yield higher classification accuracies than the in-the-wild ExtraSensory dataset, on which accuracy declines significantly.
The rest of the paper is structured as follows. Section III
describes the salient phases involved in the proposed archi-
tecture for recognizing primary human physical activities in-
the-wild settings. Section IV presents the experimental results
and performance analyses of the proposed scheme. Section V
presents a comparative analysis of the obtained results with
the existing state-of-the-art schemes available in the literature
followed by a conclusion in Section VI.
Fig. 2. The proposed framework for daily life activity recognition using smartphone inertial sensors
III. THE PROPOSED METHODOLOGY
The proposed methodology for primary human activity
recognition utilizing smartphone inertial sensors in the wild
is depicted as a block diagram in Fig. 2. The raw data from
the inertial sensors are pre-processed before passing to the
feature extraction and feature selection stages, followed by the classification phase to recognize the human activity. The
details for each stage in the proposed framework are provided
in the following sub-sections.
A. Data Acquisition
The human activity recognition framework proposed in
this study utilizes the ’ExtraSensory’ dataset [32] (publicly available at http://extrasensory.ucsd.edu), which consists of data on six daily life activities, i.e., lying down, standing, sitting, running, bicycling, and walking, where each example has a duration of 20 seconds and data are collected from 60 subjects. In
the ExtraSensory dataset, the sensors used for the acquisition
of data include the gyroscope and accelerometer of smartphone
and smartwatch. However, in the current study, we have uti-
lized data from the smartphone’s accelerometer and gyroscope
sensors. The sampling rate of the inertial sensors used for
data acquisition is 40 Hz. To mitigate noise, the raw inertial sensor data are pre-processed before moving to the feature extraction and selection stages. We passed the raw data through a third-order smoothing filter to truncate the noisy peaks on each axis of the inertial sensor data.
Moreover, we have used a 10-second overlapping window
for data segmentation, which is further passed to the feature
extraction phase.
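To make the pre-processing concrete, a minimal sketch is given below. The paper does not specify the exact filter family, filter window length, or window overlap, so the Savitzky-Golay interpretation of the third-order smoothing filter, the 11-sample filter window, and the 50% overlap are our assumptions:

```python
# Sketch of the pre-processing stage: third-order smoothing and 10-second
# overlapping windows at the 40 Hz sampling rate. Filter family, filter
# window length, and 50% overlap are assumptions (not stated in the paper).
import numpy as np
from scipy.signal import savgol_filter

FS = 40          # inertial sensor sampling rate (Hz)
WIN = 10 * FS    # 10-second segment -> 400 samples
STEP = WIN // 2  # hypothetical 50% overlap between consecutive windows

def smooth(raw: np.ndarray) -> np.ndarray:
    """Apply a third-order (polyorder=3) smoothing filter to each of the
    six axes of a (n_samples, 6) accelerometer + gyroscope array."""
    return savgol_filter(raw, window_length=11, polyorder=3, axis=0)

def segment(signal: np.ndarray) -> np.ndarray:
    """Slice the smoothed signal into overlapping 10-second windows."""
    starts = range(0, len(signal) - WIN + 1, STEP)
    return np.stack([signal[s:s + WIN] for s in starts])
```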
B. Feature Extraction and Selection
Feature extraction is performed using a sliding window on
10-second overlapping data segments to recognize human ac-
tivities. Twenty time-domain features are retrieved from the pre-processed data segments. These features are extracted from each axis of the accelerometer and gyroscope (20 features × 6 axes), resulting in a 1×120 feature vector. The mathematical formulation of the extracted features
can be obtained from the study conducted in [33].
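As a sketch of this stage, the snippet below assembles per-axis time-domain statistics into a single vector. The statistics shown are a representative subset only; the exact 20 features are those formulated in [33], and extending the list to 20 per axis yields the 1×120 vector used in the paper:

```python
# Representative per-axis time-domain features; the paper's exact set of 20
# features per axis (giving 20 x 6 = 120) is defined in [33].
import numpy as np
from scipy.stats import kurtosis, skew

def axis_features(x: np.ndarray) -> list:
    return [
        x.mean(), x.std(), x.min(), x.max(),   # amplitude statistics
        np.median(x),                          # median
        np.sqrt(np.mean(x ** 2)),              # root mean square
        skew(x), kurtosis(x),                  # distribution shape
        np.mean(np.abs(x - x.mean())),         # mean absolute deviation
        np.sum(x ** 2) / len(x),               # average signal energy
    ]

def window_features(window: np.ndarray) -> np.ndarray:
    """window: (400, 6) segment -> concatenated feature vector."""
    return np.concatenate([axis_features(window[:, a]) for a in range(6)])
```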
After feature extraction, the obtained feature vector is sub-
jected to a feature selection process to pick the subset of features with the most discriminative power to differentiate the human activities. The attribute selection mechanism used
in the current study is the InfogainAttributeEval method which
ranks the extracted features in descending order of their
importance. In this study, we have used the top 30 features
in the ranking for recognizing human activities.
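InfogainAttributeEval is a WEKA attribute evaluator; since information gain IG(C; A) = H(C) − H(C|A) is the mutual information between class C and attribute A, a rough scikit-learn analogue of the ranking-and-truncation step can be sketched as follows (an approximation, not the exact tool used in the paper):

```python
# Approximate information-gain ranking via mutual information; keep the
# top-30 features, as described in the paper.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_top_features(X: np.ndarray, y: np.ndarray, k: int = 30):
    gain = mutual_info_classif(X, y, random_state=0)  # IG estimate per feature
    top = np.argsort(gain)[::-1][:k]                  # descending-order ranking
    return X[:, top], top
```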
C. Classification
In the classification stage, three different machine learning
classifiers are applied to the selected subset of features. The
classifiers used in this study include random forest (RF),
decision trees (DT), and k-nearest neighbors (kNN). Random
forest is an ensemble classification technique where decision
trees are used as the base machine learning models. The
number of decision trees used in this study is 100. KNN is a non-parametric supervised machine learning technique that is also termed a lazy learner because it does not perform any training; rather, it only stores all the training examples. In our study, K = 5, i.e., five nearest neighbors are chosen. The decision
tree is a supervised machine-learning classification technique
based on the parameters of entropy and information gain.
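A minimal sketch of the three classifiers with the hyperparameters stated above (100 trees for RF, K = 5 for kNN, and entropy/information-gain splits for DT); all unstated settings fall back to scikit-learn defaults, which may differ from the authors' configuration:

```python
# The three classifiers used in the study, configured as described above.
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

classifiers = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),  # lazy learner, K = 5
    "DT": DecisionTreeClassifier(criterion="entropy", random_state=0),
}
```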
IV. EXPERIMENTAL RESULTS
This section presents the experimental results for human
activity recognition using smartphone inertial sensors in the
out-of-lab environment. Six different activities are performed
by 60 unique subjects in an unconstrained environment. Three
different machine learning classifiers which include RF, KNN,
and DT are utilized to train the model. Moreover, in order to validate the findings of the proposed scheme, a 10-fold cross-validation approach is adopted. In 10-fold cross-validation, the complete dataset is segmented into 10 equal-sized folds; nine folds are used for training, whereas the remaining fold is used for validation. This process is repeated 10 times and the average accuracy is reported.
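A sketch of this evaluation protocol is shown below; the macro-averaging of precision, recall, and F-measure is our assumption, as the paper does not state how per-class values are averaged:

```python
# 10-fold cross-validation with the five metrics reported in Table I.
from sklearn.metrics import cohen_kappa_score, make_scorer
from sklearn.model_selection import cross_validate

scoring = {
    "accuracy": "accuracy",
    "precision": "precision_macro",  # averaging scheme assumed
    "recall": "recall_macro",
    "f_measure": "f1_macro",
    "kappa": make_scorer(cohen_kappa_score),
}

def evaluate(clf, X, y):
    """Return each metric averaged over the 10 folds."""
    scores = cross_validate(clf, X, y, cv=10, scoring=scoring)
    return {name: scores[f"test_{name}"].mean() for name in scoring}
```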
The evaluation metrics used to illustrate the performance of the proposed scheme are classification accuracy, precision, recall, F-measure, and the kappa statistic.

TABLE I
AVERAGE PERFORMANCE VALUES OF RF, DT, AND KNN CLASSIFIERS FOR HUMAN ACTIVITY RECOGNITION IN THE WILD

Classifier Accuracy Precision Recall F-Measure Kappa
RF 0.898 0.902 0.900 0.900 0.861
DT 0.846 0.846 0.846 0.846 0.788
KNN 0.871 0.870 0.871 0.871 0.822

Fig. 3. Performance measures of RF, DT, and KNN classifiers for human activity recognition in the wild

Table I presents the experimental results for the activity recognition experiment. It
can be observed from the table that the random forest classifier
gives the highest average classification accuracy of 89.98% as
compared to the DT and KNN algorithms. Moreover, it is worth mentioning that the RF classifier has better precision (0.90), recall (0.90), F-measure (0.90), and kappa statistic (0.86) values as compared to the KNN and DT classifiers. Fig. 3 shows
a graphical comparison of the classification accuracy for RF,
DT, and KNN classifiers. It can be observed that the RF classifier
recognizes human activities more accurately as compared to
DT and KNN.
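For reference, the kappa statistic reported in Table I corrects the observed agreement p_o (the classification accuracy) for the agreement p_e expected by chance:

κ = (p_o − p_e) / (1 − p_e)

Rearranging with the RF values from Table I (p_o = 0.898, κ = 0.861) gives p_e = (0.898 − 0.861)/(1 − 0.861) ≈ 0.27, i.e., the reported kappa reflects agreement well beyond what the class distribution alone would produce.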
In addition to this, the confusion matrices for the RF, DT, and KNN classifiers are shown in Figs. 4, 5, and 6, respectively.
A comparative analysis of the presented confusion matrices
illustrates that random forest has the highest classification
accuracy for human activity recognition as opposed to DT
and KNN classifiers. In the case of the RF classifier, lying down has 95%, sitting has 93.25%, standing has 79.28%, walking has 78.19%, running has 84.80%, and bicycling has 83.71% correctly recognized instances, which are higher than the correctly classified instances for the KNN and DT classifiers.
V. DISCUSSION
Human activity recognition in the lab environment has been
extensively explored in the available literature. However, the
recognition of human activities in the unconstrained out-of-lab
environment has been discussed in a limited manner. Table
II exhibits the comparison of the proposed human activity
recognition scheme with the state-of-the-art frameworks for
Fig. 4. Confusion matrix using RF classifier
Fig. 5. Confusion matrix using DT classifier
human activity recognition in-the-wild settings in terms of
the number of participants, activities, and the classification
accuracy achieved. All the schemes adopted for comparison with our proposed model, except the study in [29], have used the “ExtraSensory” dataset, which consists of inertial sensor data acquired in the out-of-lab environment.
It can be observed from the table that the number of participants in all the studies except [29] and [34] is 60, which is equal to the number of participants in our proposed scheme. For the experiment conducted in [34], the authors acquired data from only 10 users, which is considerably lower than the ExtraSensory dataset employed in our experiment. Similarly, the scheme proposed in [29] used data of 48 subjects from the ExtraSensory dataset. Keeping in view the count of recognized activities, only the study presented in [34] has recognized a higher number of activities (9) than our proposed scheme (6), but the number of subjects for the mentioned study
Fig. 6. Confusion matrix using KNN classifier
TABLE II
COMPARISON OF EXISTING STUDIES FOR HUMAN ACTIVITY
RECOGNITION USING SMARTPHONE SENSORS IN THE WILD SETTINGS
Ref, Year       No. of Participants   No. of Activities   Accuracy
[30], 2022      60                    06                  89.43%
[28], 2021      60                    05                  51.00%
[34], 2020      10                    09                  71.73%
[26], 2020      60                    06                  78.40%
[31], 2020      60                    06                  83.00%
[27], 2020      60                    05                  88.40%
[29], 2019      48                    06                  67.51%
Proposed, 2023  60                    06                  89.98%
is limited to 10, whereas, in our proposed scheme, we trained our model on a higher number of subjects, i.e., 60, to recognize the activities. Moreover, our proposed scheme achieves the highest classification accuracy of 89.98% when compared with all the state-of-the-art schemes available in the literature.
VI. CONCLUSION
The extensive increase in the use of smart devices enables us to utilize their smart sensors for recognizing daily living activities.
This paper presented a human activity recognition model
using smartphone inertial sensor data, i.e., accelerometer and gyroscope, in the out-of-lab environment. Inertial sensor data corresponding to six physical activities are used from the ExtraSensory dataset. Twenty time-domain features were extracted from each axis of the inertial sensors and fed to the feature selection stage to select the most discriminative subset of features.
The selected features are given to the classification stage
using three different classifiers i.e., KNN, DT, and RF to
recognize human activities. The results show that the RF classifier offers the best performance compared to the KNN and DT classifiers when blending the features extracted from the accelerometer and gyroscope data. In the future, human activity recognition can be extended towards context and user identification to develop a framework for securing personal handheld devices.
REFERENCES
[1] R. K. Tripathi, A. S. Jalal, and S. C. Agrawal, “Suspicious human activity recognition: a review,” Artificial Intelligence Review, vol. 50, pp. 283–339, 2018.
[2] A. Taha, H. H. Zayed, M. Khalifa, and E.-S. M. El-
Horbaty, “Human activity recognition for surveillance
applications,” in Proceedings of the 7th International
Conference on Information Technology, pp. 577–586,
2015.
[3] M. Z. Uddin and A. Soylu, “Human activity recognition
using wearable sensors, discriminant analysis, and long
short-term memory-based neural structured learning,”
Scientific Reports, vol. 11, no. 1, p. 16455, 2021.
[4] A. Ferrari, D. Micucci, M. Mobilio, and P. Napole-
tano, “Trends in human activity recognition using smart-
phones,” Journal of Reliable Intelligent Environments,
vol. 7, no. 3, pp. 189–213, 2021.
[5] C.-T. Yen, J.-X. Liao, and Y.-K. Huang, “Human daily activity recognition performed using wearable inertial sensors combined with deep learning algorithms,” IEEE Access, vol. 8, pp. 174105–174114, 2020.
[6] K. Wei, M. Dong, K. Ota, and K. Xu, “Camf: Context-aware message forwarding in mobile social networks,” IEEE Transactions on Parallel and Distributed Systems, vol. 26, no. 8, pp. 2178–2187, 2015.
[7] S. Mekruksavanich, “Supermarket shopping system using
rfid as the iot application,” in 2020 Joint International
Conference on Digital Arts, Media and Technology with
ECTI Northern Section Conference on Electrical, Elec-
tronics, Computer and Telecommunications Engineering
(ECTI DAMT NCON), pp. 83–86, 2020.
[8] T. O. Y. Kim and J. Kim, “Analyzing user awareness of privacy data leak in mobile applications,” 2015.
[9] E. Kańtoch, “Recognition of sedentary behavior by machine learning analysis of wearable sensors during activities of daily living for telemedical assessment of cardiovascular risk,” Sensors, vol. 18, no. 10, p. 3219, 2018.
[10] C. Braunagel, E. Kasneci, W. Stolzmann, and W. Rosenstiel, “Driver-activity recognition in the context of conditionally autonomous driving,” in 2015 IEEE 18th International Conference on Intelligent Transportation Systems, pp. 1652–1657, IEEE, 2015.
[11] C. Han, L. Zhang, Y. Tang, W. Huang, F. Min, and J. He, “Human activity recognition using wearable sensors by heterogeneous convolutional neural networks,” Expert Systems with Applications, vol. 198, p. 116764, 2022.
[12] R. G. Ramos, J. D. Domingo, E. Zalama, and J. Gómez-García-Bermejo, “Daily human activity recognition using non-intrusive sensors,” Sensors, vol. 21, no. 16, p. 5270, 2021.
[13] Š. Aida and J. Kevrić, “Human activity recognition using ambient sensor data,” IFAC-PapersOnLine, vol. 55, no. 4, pp. 97–102, 2022.
[14] S. Zhang, Y. Li, S. Zhang, F. Shahabi, S. Xia, Y. Deng, and N. Alshurafa, “Deep learning in human activity recognition with wearable sensors: A review on advances,” Sensors, vol. 22, no. 4, p. 1476, 2022.
[15] C. Fan and F. Gao, “Enhanced human activity recognition
using wearable sensors via a hybrid feature selection
method,” Sensors, vol. 21, no. 19, p. 6434, 2021.
[16] I. A. Lawal and S. Bano, “Deep human activity recogni-
tion with localisation of wearable sensors,” IEEE Access,
vol. 8, pp. 155060–155070, 2020.
[17] Y. Vaizman, K. Ellis, G. Lanckriet, and N. Weibel, “Extrasensory app: Data collection in-the-wild with rich user interface to self-report behavior,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1–12, 2018.
[18] S. Davidashvilly, M. Hssayeni, C. Chi, J. Jimenez-Shahed, and B. Ghoraani, “Activity recognition in Parkinson’s patients from motion data using a CNN model trained by healthy subjects,” in 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 3199–3202, IEEE, 2022.
[19] K. Peppas, A. C. Tsolakis, S. Krinidis, and D. Tzovaras, “Real-time physical activity recognition on smart mobile devices using convolutional neural networks,” Applied Sciences, vol. 10, no. 23, p. 8482, 2020.
[20] M. A. Khatun, M. A. Yousuf, S. Ahmed, M. Z. Uddin, S. A. Alyami, S. Al-Ashhab, H. F. Akhdar, A. Khan, A. Azad, and M. A. Moni, “Deep CNN-LSTM with self-attention model for human activity recognition using wearable sensor,” IEEE Journal of Translational Engineering in Health and Medicine, vol. 10, pp. 1–16, 2022.
[21] S. Mekruksavanich, P. Jantawong, N. Hnoohom, and A. Jitpattanakul, “A novel deep BiGRU-ResNet model for human activity recognition using smartphone sensors,” in 2022 19th International Joint Conference on Computer Science and Software Engineering (JCSSE), pp. 1–5, 2022.
[22] E. Fridriksdottir and A. G. Bonomi, “Accelerometer-based human activity recognition for patient monitoring using a deep neural network,” Sensors, vol. 20, no. 22, p. 6424, 2020.
[23] Y. Tian, J. Zhang, J. Wang, Y. Geng, and X. Wang,
“Robust human activity recognition using single ac-
celerometer via wavelet energy spectrum features and
ensemble feature selection,” Systems Science & Control
Engineering, vol. 8, no. 1, pp. 83–96, 2020.
[24] S. Rani, H. Babbar, S. Coleman, A. Singh, and H. M. Aljahdali, “An efficient and lightweight deep learning model for human activity recognition using smartphones,” Sensors, vol. 21, no. 11, p. 3845, 2021.
[25] F. Cruciani, C. Sun, S. Zhang, C. Nugent, C. Li, S. Song, C. Cheng, I. Cleland, and P. McCullagh, “A public domain dataset for human activity recognition in free-living conditions,” in 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), pp. 166–171, IEEE, 2019.
[26] Y. Asim, M. A. Azam, M. Ehatisham-ul-Haq, U. Naeem, and A. Khalid, “Context-aware human activity recognition (CAHAR) in-the-wild using smartphone accelerometer,” IEEE Sensors Journal, vol. 20, no. 8, pp. 4361–4371, 2020.
[27] M. Ehatisham-ul-Haq and M. A. Azam, “Opportunistic sensing for inferring in-the-wild human contexts based on activity pattern recognition using smart computing,” Future Generation Computer Systems, vol. 106, pp. 374–392, 2020.
[28] S. Fatima, “Activity recognition in older adults with training data from younger adults: Preliminary results on in vivo smartwatch sensor data,” pp. 1–4, 2021.
[29] F. Cruciani, A. Vafeiadis, C. Nugent, I. Cleland, P. McCullagh, K. Votis, D. Giakoumis, D. Tzovaras, L. Chen, and R. Hamzaoui, “Feature learning for human activity recognition using convolutional neural networks,” CCF Transactions on Pervasive Computing and Interaction, vol. 2, no. 1, pp. 18–32, 2020.
[30] M. Ehatisham-ul-Haq, F. Murtaza, M. A. Azam, and Y. Amin, “Daily living activity recognition in-the-wild: Modeling and inferring activity-aware human contexts,” Electronics, vol. 11, no. 2, p. 226, 2022.
[31] M. Ehatisham-Ul-Haq, M. A. Azam, Y. Amin, and
U. Naeem, “C2fhar: Coarse-to-fine human activity recog-
nition with behavioral context modeling using smart
inertial sensors,” IEEE Access, vol. 8, pp. 7731–7747,
2020.
[32] Y. Vaizman, K. Ellis, and G. Lanckriet, “Recognizing detailed human context in the wild from smartphones and smartwatches,” IEEE Pervasive Computing, vol. 16, no. 4, pp. 62–74, 2017.
[33] S. Tahir, A. Raheel, M. Ehatisham-ul Haq, and A. Ar-
salan, “Object based human-object interaction (hoi)
recognition using wrist-mounted sensors,” in 2020
IEEE 23rd International Multitopic Conference (INMIC),
pp. 1–6, IEEE, 2020.
[34] F. Cruciani, C. Sun, S. Zhang, C. Nugent, C. Li, S. Song, C. Cheng, I. Cleland, and P. McCullagh, “A public domain dataset for human activity recognition in free-living conditions,” in 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), pp. 166–171, IEEE, 2019.