A Framework for Daily Living Activity Recognition
using Fusion of Smartphone Inertial Sensors Data
Sheharyar Khan1, Syed M. Adnan Shah1, Sadam Hussain Noorani1, Aamir Arsalan2, M. Ehatisham-ul-Haq3,
Aasim Raheel1, Wakeel Ahmed1
1University of Engineering and Technology, Taxila, Pakistan
2Fatima Jinnah Women University, Rawalpindi
3Air University, Islamabad
Abstract—Recent years have seen rapid advancements in the
human activity recognition field using data from smart sensor
devices. A wide variety of real-world applications can be found in
different domains, particularly health and security. Smartphones
are common devices that let people do a wide range of everyday
tasks anytime, anywhere. The sensors and networking capabilities
found in modern smartphones enable context awareness for a
wide range of applications. This research mainly focuses on
recognizing human activities in the wild, for which we selected
the in-the-wild ExtraSensory dataset. Six human activities, i.e.,
lying down, sitting, standing, running, walking, and bicycling,
are selected. Time domain features are extracted and human
activity recognition is performed using three different machine
learning classifiers i.e., random forest, k-nearest neighbors, and
decision trees. The proposed human activity recognition scheme
resulted in the highest classification accuracy of 89.98%, using
the random forest classifier. Our proposed scheme outperforms
the state-of-the-art human activity recognition schemes in the
wild.
Index Terms—Human Activity, Machine Learning, Smartphone Sensor, Out-of-lab, Random Forest
I. INTRODUCTION
Recognizing human activities and analyzing the behavior of
populations are basic elements of modern civilization. Some of
the most prevalent applications requiring the recognition of hu-
man activities include public place security [1], mass surveil-
lance [2], medical aid [3], and lifestyle [4]. In recent years,
a substantial amount of focus has been devoted to developing
methods for recognizing daily living activities from inertial
sensors [5]. This is mostly due to two factors: the reduced price
of hardware and the widespread availability of mobile devices
with inertial sensors. According to statistics, smartphones are
rapidly overtaking other communication tools as the most
widely used platforms for daily interaction [6], thus people
tend to store sensitive, critical, private, and confidential data
on their phones [7]. On the other hand, data leaks and stolen
devices have emerged as critical concerns for smartphone users
[8] and human activity recognition systems can play a vital
role in the prevention of this kind of device theft. Human activity recognition (HAR) can
be used to monitor patient activities by attaching the sensors
to their shirts. Moreover, sedentary behavior associated with
cardiovascular risk can be measured quantitatively by human
activity recognition [9]. Similarly, monitoring driver activity
for safe traveling is one of the important applications of human
activity recognition [10].
Different kinds of wearable (placed on the human body) and
non-wearable (smartphone and ambient) sensors have been
employed for human activity recognition [11, 12]. Ambient
sensors based on the internet of things, such as vibration,
pressure, and infrared sensors can monitor human activity in
indoor spaces, such as smart houses for assisted living [13].
However, these sensors are incapable of monitoring human
activity outside of the designated area. In contrast to this,
motion sensors, both wearable body and smartphone sensors,
allow ubiquitous monitoring of human activities for a variety
of applications [14]. On-body sensors are especially advan-
tageous since they may be worn or positioned at numerous
body locations for robustness. However, they also become a
source of discomfort for the user and may induce unnatural
behavior, obstructing the goal of recognizing the individual’s
human activities.
Human activity recognition in laboratory settings has been
the focus of a wide range of studies available in the literature
[15, 16] but recognition of human activities in the out-of-
lab environment poses serious challenges and is still under-
examined as compared to the controlled environment settings.
Herein, a novel framework is presented in which the fusion
of data from two sensors, i.e., the accelerometer and gyroscope,
is used for recognizing human activities in an in-the-wild setting.
ExtraSensory, a public domain in-the-wild dataset (containing
data from heterogeneous sensors on smartphones), is used in
the proposed scheme [17]. Six daily living activities are se-
lected from the dataset for primary human activity recognition,
which include sitting, standing, lying down, running, bicycling,
and walking, as shown in Fig. 1. The proposed framework
presents the following noteworthy contributions.
• Identification of daily life activities by blending the
features obtained from smartphone inertial sensors in the
out-of-lab environment.
• Performance comparison of the proposed human activity
recognition framework with the state-of-the-art human
activity recognition schemes available in the recent lit-
erature.
II. RELATED WORK
A wide range of literature is available on human activity
recognition, which is divided into two parts, i.e., recognition of
Fig. 1. Primary human activities used in the study
human activities in a controlled/lab environment and in-the-
wild settings. In the majority of available studies, data from
the subjects are collected in a confined or controlled setting,
via scripted tasks, either by attaching sensors to the body or by
using smartphone inertial sensors. Attaching sensors to body
parts creates discomfort and changes the individual’s behavior
during data collection. Hence, smartphones are now widely
utilized for data acquisition because of their ubiquitous nature
and the fact that they are now an integral part of daily human
life.
A study presented in [18] proposed a technique for activity
recognition of Parkinson’s disease patients. The convolutional
neural network (CNN) model was trained on the inertial sensor
data of the Physical Activity Monitoring (PAMAP2) dataset
of the healthy subjects and tested on the MHealth dataset
and Parkinson’s disease patient data. The model achieved an
accuracy of 84.43% by using data augmentation. The authors
of [19] proposed a real-time human activity recognition framework
by combining manual feature extraction and a CNN architecture,
achieving an accuracy of 94.18%. Another study
presented in [20] proposed a hybrid deep learning model
by combining a CNN with long short-term memory (LSTM)
architecture. The suggested model achieved an accuracy of
99.93% on the H-Activity dataset, 98.76% on the MHEALTH
dataset, and 93.11% on the UCI-HAR dataset. Authors in
[21] present a deep residual learning strategy using LSTM-
CNN and deep residual modeling. The proposed model was
evaluated using the widely available UCI-HAR dataset with
an achieved accuracy of 99.09%. A deep learning-based
scheme for recognizing six patient activities using smartphone
accelerometer data in a synthetic hospital environment is
presented in [22] and an accuracy of 94.52% is reported.
Another human activity recognition study using an accelerom-
eter sensor, wavelet energy spectrum features, and ensemble
feature selection method is presented in [23]. Motion data
obtained from the inertial sensors for the six physical activities
performed in the controlled environment are recognized using
LSTM architecture in a study conducted in [24] with a reported
accuracy of 97.89%. All the above-mentioned studies used
data acquired in the controlled/lab environment with a scripted
procedure for data acquisition, hence resulting in higher
classification accuracy values.
A human activity recognition study conducted in [25] used
smartphone and wristband wearable inertial sensors to acquire
data from 10 subjects. The ExtraSensory dataset was employed
for training, whereas the validation of the proposed scheme
was performed using this newly acquired dataset, achieving an
accuracy of 71.73%. Another study conducted
in [26] presents the results for human activity recognition in
in-the-wild settings using data from a smartphone accelerometer.
Six different activities are classified and an accuracy of
87.1% is achieved using the random forest (RF) classifier.
Another study to segregate different daily life activities in an
unrestricted environment is presented in [27] with a reported
accuracy of 88.4% for five activities using RF classifier.
Authors in [28] proposed a hybrid LSTM and CNN-based
human activity recognition model in which smartwatch data
from the extrasensory dataset is utilized to train the model on
young individuals and recognize the activities of older adults.
The proposed scheme achieved an accuracy of 51% for older
adults. In another study proposed in [29], CNN architecture
is used to train the model and the performance is evaluated
on the UCI-HAR, ExtraSensory, and DCase datasets, achieving
accuracies of 91.98%, 67.51%, and 92.30%, respectively.
Another daily life activity recognition scheme using data from
the inertial sensors in an out-of-lab environment is presented
in [30]. An accuracy of 82.90% is achieved for six daily
life activities using data from 60 subjects. A real-time human
activity recognition framework using heterogeneous inertial
sensors is presented in [31], where an accuracy of 83.00% is
achieved for the classification of six different human activities.
It can be observed from these results that the UCI-HAR and
DCase datasets, where the subjects have to perform a scripted
set of activities, produced higher classification accuracies for
recognizing human activities in comparison to the in-the-wild
ExtraSensory dataset, where the accuracy declined significantly.
The rest of the paper is structured as follows. Section III
describes the salient phases involved in the proposed archi-
tecture for recognizing primary human physical activities in-
the-wild settings. Section IV presents the experimental results
and performance analyses of the proposed scheme. Section V
presents a comparative analysis of the obtained results with
the existing state-of-the-art schemes available in the literature
followed by a conclusion in Section VI.
Fig. 2. The proposed framework for daily life activity recognition using smartphone inertial sensors
III. THE PROPOSED METHODOLOGY
The proposed methodology for primary human activity
recognition utilizing smartphone inertial sensors in the wild
is depicted as a block diagram in Fig. 2. The raw data from
the inertial sensors are pre-processed before passing to the
feature extraction and then feature selection stage followed
by classification phase to recognize the human activity. The
details for each stage in the proposed framework are provided
in the following sub-sections.
A. Data Acquisition
The human activity recognition framework proposed in
this study utilizes the ’ExtraSensory’ dataset (publicly
available at http://extrasensory.ucsd.edu) [32], which consists
of data on six daily life activities, i.e., lying down, standing,
sitting, running, bicycling, and walking, with each activity
sample having a duration of 20 seconds, collected from 60 subjects. In
the ExtraSensory dataset, the sensors used for the acquisition
of data include the gyroscope and accelerometer of smartphone
and smartwatch. However, in the current study, we have uti-
lized data from the smartphone’s accelerometer and gyroscope
sensors. The sampling rate of the inertial sensors used for
data acquisition is 40 Hz. Raw data from the inertial sensors
are pre-processed to mitigate noise before moving to the
feature extraction and selection stages: the raw data are passed
through a third-order smoothing filter to truncate the noisy
peaks from each axis of the inertial sensor data.
Moreover, we have used a 10-second overlapping window
for data segmentation, which is further passed to the feature
extraction phase.
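As a concrete illustration of this stage, the following Python sketch applies the third-order smoothing and windowing described above. The paper does not specify the smoothing filter family or the window overlap ratio, so the Savitzky-Golay filter, its 11-sample filter window, and the 50% overlap used here are assumptions.

import numpy as np
from scipy.signal import savgol_filter

FS = 40                 # sampling rate (Hz), as stated above
WIN = 10 * FS           # 10-second window -> 400 samples
STEP = WIN // 2         # assumed 50% overlap between consecutive windows

def preprocess(raw):
    # raw: (n_samples, 6) array of accelerometer + gyroscope axes;
    # polyorder=3 mirrors the third-order smoothing filter, while the
    # 11-sample filter window is an assumed choice
    return savgol_filter(raw, window_length=11, polyorder=3, axis=0)

def segment(signal):
    # slice the smoothed signal into overlapping 10-second windows
    return np.stack([signal[s:s + WIN]
                     for s in range(0, len(signal) - WIN + 1, STEP)])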
B. Feature Extraction and Selection
Feature extraction is performed using a sliding window on
the 10-second overlapping data segments to recognize human
activities. Twenty time-domain features are retrieved from the
pre-processed data segments; these features are extracted for
each axis of the accelerometer and gyroscope, resulting in a
1×120 feature vector. The mathematical formulation of the
extracted features
can be obtained from the study conducted in [33].
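Since the full list of twenty features is formulated in [33], only an illustrative sketch is given here: a representative, assumed set of time-domain statistics computed per axis and concatenated over the six inertial axes, mirroring how the 1×120 window-level vector is formed.

import numpy as np
from scipy.stats import skew, kurtosis

def axis_features(x):
    # example time-domain statistics for one axis of one window; the
    # actual twenty features follow the formulations given in [33]
    return np.array([
        x.mean(), x.std(), x.min(), x.max(),
        np.sqrt(np.mean(x ** 2)),         # root mean square
        np.median(x), np.ptp(x),          # median, peak-to-peak range
        skew(x), kurtosis(x),
        np.mean(np.abs(x - x.mean())),    # mean absolute deviation
    ])

def window_features(window):
    # concatenate the per-axis features of a (WIN, 6) window
    return np.concatenate([axis_features(window[:, a])
                           for a in range(window.shape[1])])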
After feature extraction, the obtained feature vector is subjected
to a feature selection process to pick the subset of features
with the most discriminative power for differentiating the
human activities. The attribute selection mechanism used
in the current study is the InfoGainAttributeEval method,
which ranks the extracted features in descending order of
importance. In this study, we have used the top 30 ranked
features for recognizing human activities.
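InfoGainAttributeEval is a Weka attribute evaluator; a rough Python analogue, sketched below under that assumption, ranks the features by mutual information with the activity labels and keeps the 30 best.

import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_top_k(X, y, k=30):
    # rank features by an information-gain-style score and keep the k best
    scores = mutual_info_classif(X, y, random_state=0)
    top = np.argsort(scores)[::-1][:k]    # indices in descending score order
    return X[:, top], top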
C. Classification
In the classification stage, three different machine learning
classifiers are applied to the selected subset of features. The
classifiers used in this study include random forest (RF),
decision trees (DT), and k-nearest neighbors (kNN). Random
forest is an ensemble classification technique where decision
trees are used as the base machine learning models. The
number of decision trees used in this study is 100. kNN is a
non-parametric supervised machine learning technique, also
termed a lazy learner because it does not build an explicit
model during training; rather, it stores all the training examples.
In our study, k = 5, i.e., the five nearest neighbors are
considered. The decision tree is a supervised machine learning
classification technique based on entropy and information gain.
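A minimal scikit-learn configuration of the three classifiers with the settings reported above (100 trees, k = 5, entropy-based splitting) might look as follows; all remaining hyperparameters are library defaults and therefore assumptions.

from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

classifiers = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "DT": DecisionTreeClassifier(criterion="entropy", random_state=0),
}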
IV. EXPERIMENTAL RESULTS
This section presents the experimental results for human
activity recognition using smartphone inertial sensors in the
out-of-lab environment. Six different activities are performed
by 60 unique subjects in an unconstrained environment. Three
different machine learning classifiers, i.e., RF, kNN, and DT,
are utilized to train the model. Moreover, in order to validate
the findings of the proposed scheme, a 10-fold cross-validation
approach is adopted. In 10-fold cross-validation, the complete
data is segmented into 10 equal-sized chunks: nine chunks are
used for training, whereas the remaining chunk is used for
validating the results. This process is repeated 10 times and
the average accuracy is reported. The evaluation metrics used
to illustrate the performance of the proposed scheme are
classification accuracy, precision, recall, F-measure, and the
kappa statistic.
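A sketch of this evaluation protocol is shown below, where classifiers is the dictionary from the previous sketch and X_sel and y denote the selected feature matrix and activity labels. The paper does not state whether the folds are stratified or how precision, recall, and F-measure are averaged across classes, so the stratified splits and weighted averaging here are assumptions.

from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.metrics import cohen_kappa_score, make_scorer

scoring = {
    "accuracy": "accuracy",
    "precision": "precision_weighted",
    "recall": "recall_weighted",
    "f1": "f1_weighted",
    "kappa": make_scorer(cohen_kappa_score),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    res = cross_validate(clf, X_sel, y, cv=cv, scoring=scoring)
    print(name, {m: round(float(res["test_" + m].mean()), 3) for m in scoring})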
TABLE I
AVERAGE PERFORMANCE VALUES OF RF, DT, AND kNN CLASSIFIERS
FOR HUMAN ACTIVITY RECOGNITION IN THE WILD

Classifier   Accuracy   Precision   Recall   F-Measure   Kappa
RF           0.898      0.902       0.900    0.900       0.861
DT           0.846      0.846       0.846    0.846       0.788
kNN          0.871      0.870       0.871    0.871       0.822
Fig. 3. Performance measures of RF, DT, and KNN classifiers for human
activity recognition in the wild
Table I presents the experimental results for the activity
recognition experiment. It can be observed from the table
that the random forest classifier gives the highest average
classification accuracy of 89.98% as compared to the DT and
kNN algorithms. Moreover, it is worth mentioning that the RF
classifier also has better precision (0.90), recall (0.90),
F-measure (0.90), and kappa statistic (0.86) values than the
kNN and DT classifiers. Fig. 3 shows a graphical comparison
of the classification accuracy of the RF, DT, and kNN
classifiers; it can be observed that the RF classifier recognizes
human activities more accurately than the DT and kNN classifiers.
In addition to this, the confusion matrices for the RF, DT, and
kNN classifiers are shown in Figs. 4, 5, and 6, respectively.
A comparative analysis of the presented confusion matrices
illustrates that random forest has the highest classification
accuracy for human activity recognition as opposed to the DT
and kNN classifiers. In the case of the RF classifier, 95% of
the lying down, 93.25% of the sitting, 79.28% of the standing,
78.19% of the walking, 84.80% of the running, and 83.71% of
the bicycling instances are correctly recognized, which is higher
than the corresponding correctly classified instances for the
kNN and DT classifiers.
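The per-activity percentages quoted above correspond to the row-normalized diagonal of the confusion matrix, i.e., per-class recall. A brief sketch of how such values can be derived, reusing the names from the earlier sketches, is:

import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict

# pool the RF predictions over the 10 folds, then normalize each row
y_pred = cross_val_predict(classifiers["RF"], X_sel, y, cv=cv)
cm = confusion_matrix(y, y_pred)
per_class = 100 * cm.diagonal() / cm.sum(axis=1)   # % correct per activity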
V. DISCUSSION
Human activity recognition in the lab environment has been
extensively explored in the available literature. However, the
recognition of human activities in the unconstrained out-of-lab
environment has been discussed in a limited manner. Table
II exhibits the comparison of the proposed human activity
recognition scheme with the state-of-the-art frameworks for
Fig. 4. Confusion matrix using RF classifier
Fig. 5. Confusion matrix using DT classifier
human activity recognition in-the-wild settings in terms of
the number of participants, activities, and the classification
accuracy achieved. All the schemes adopted for the comparison
with our proposed model have used the ”ExtraSensory” dataset,
which consists of inertial sensor data acquired in the
out-of-lab environment, except the study in [34], which reports
its accuracy on a separately collected validation dataset.
It can be observed from the table that the number of participants
in all the studies except [34] and [29] is 60, which is equal
to the number of participants in our proposed scheme. For the
experiment conducted in [34], the authors acquired the data
from only 10 users, which is considerably fewer than in the
ExtraSensory dataset employed in our experiment. Similarly,
the scheme proposed in [29] used data of 48 subjects from
the ExtraSensory dataset. Keeping in view the count of
recognized activities, only the study presented in [34]
recognized a higher number of activities (9) than our proposed
scheme (6).
Fig. 6. Confusion matrix using KNN classifier
TABLE II
COMPARISON OF EXISTING STUDIES FOR HUMAN ACTIVITY
RECOGNITION USING SMARTPHONE SENSORS IN THE WILD SETTINGS

Ref, Year        No. of Participants   No. of Activities   Accuracy
[30], 2022       60                    06                  89.43%
[28], 2021       60                    05                  51.00%
[34], 2019       10                    09                  71.73%
[26], 2020       60                    06                  78.40%
[31], 2020       60                    06                  83.00%
[27], 2020       60                    05                  88.40%
[29], 2020       48                    06                  67.51%
Proposed, 2023   60                    06                  89.98%
However, the number of subjects for the mentioned study is
limited to 10, whereas in our proposed scheme we trained our
model on a higher number of subjects, i.e., 60, to recognize
the activities. Moreover, in terms of classification accuracy,
our proposed scheme achieves the highest classification accuracy
of 89.98% when compared with all the state-of-the-art schemes
available in the literature.
VI. CONCLUSION
The extensive increase in the use of smart devices enables
us to utilize smart sensors for recognizing daily activities.
This paper presented a human activity recognition model
using smartphone inertial sensor data, i.e., accelerometer
and gyroscope data, in the out-of-lab environment. Inertial
sensor data corresponding to six physical activities are used
from the ExtraSensory dataset. Twenty time-domain features
were extracted from the inertial sensor data and fed to the
feature selection stage to select the most discriminative subset
of features. The selected features are given to the classification
stage, using three different classifiers, i.e., kNN, DT, and RF,
to recognize human activities. The results depict that the RF
classifier offers the best performance in contrast to the kNN
and DT classifiers by blending the features extracted from the
accelerometer and gyroscope data. In the future, human activity
recognition can be extended towards context and user
identification to develop a framework for securing personal
handheld devices.
REFERENCES
[1] R. K. Tripathi, A. S. Jalal, and S. C. Agrawal, “Sus-
picious human activity recognition: a review,” Artificial
Intelligence Review, vol. 50, pp. 283–339, 2018.
[2] A. Taha, H. H. Zayed, M. Khalifa, and E.-S. M. El-
Horbaty, “Human activity recognition for surveillance
applications,” in Proceedings of the 7th International
Conference on Information Technology, pp. 577–586,
2015.
[3] M. Z. Uddin and A. Soylu, “Human activity recognition
using wearable sensors, discriminant analysis, and long
short-term memory-based neural structured learning,”
Scientific Reports, vol. 11, no. 1, p. 16455, 2021.
[4] A. Ferrari, D. Micucci, M. Mobilio, and P. Napole-
tano, “Trends in human activity recognition using smart-
phones,” Journal of Reliable Intelligent Environments,
vol. 7, no. 3, pp. 189–213, 2021.
[5] C.-T. Yen, J.-X. Liao, and Y.-K. Huang, “Human daily
activity recognition performed using wearable inertial
sensors combined with deep learning algorithms,” IEEE
Access, vol. 8, pp. 174105–174114, 2020.
[6] K. Wei, M. Dong, K. Ota, and K. Xu, “Camf: Context-
aware message forwarding in mobile social networks,”
IEEE Transactions on Parallel and Distributed Systems,
vol. 26, no. 8, pp. 2178–2187, 2015.
[7] S. Mekruksavanich, “Supermarket shopping system using
rfid as the iot application,” in 2020 Joint International
Conference on Digital Arts, Media and Technology with
ECTI Northern Section Conference on Electrical, Elec-
tronics, Computer and Telecommunications Engineering
(ECTI DAMT NCON), pp. 83–86, 2020.
[8] T. O. Y. Kim and J. Kim, “Analyzing user awareness of
privacy data leak in mobile applications,” 2015.
[9] E. Kańtoch, “Recognition of sedentary behavior by
machine learning analysis of wearable sensors during
activities of daily living for telemedical assessment of
cardiovascular risk,” Sensors, vol. 18, no. 10, p. 3219,
2018.
[10] C. Braunagel, E. Kasneci, W. Stolzmann, and W. Rosen-
stiel, “Driver-activity recognition in the context of con-
ditionally autonomous driving,” in 2015 IEEE 18th In-
ternational Conference on Intelligent Transportation Sys-
tems, pp. 1652–1657, IEEE, 2015.
[11] C. Han, L. Zhang, Y. Tang, W. Huang, F. Min, and J. He,
“Human activity recognition using wearable sensors by
heterogeneous convolutional neural networks,” Expert
Systems with Applications, vol. 198, p. 116764, 2022.
[12] R. G. Ramos, J. D. Domingo, E. Zalama, and J. Gómez-
García-Bermejo, “Daily human activity recognition using
non-intrusive sensors,” Sensors, vol. 21, no. 16, p. 5270,
2021.
[13] Š. Aida and J. Kevrić, “Human activity recognition using
ambient sensor data,” IFAC-PapersOnLine, vol. 55, no. 4,
pp. 97–102, 2022.
[14] S. Zhang, Y. Li, S. Zhang, F. Shahabi, S. Xia, Y. Deng,
and N. Alshurafa, “Deep learning in human activity
recognition with wearable sensors: A review on ad-
vances,” Sensors, vol. 22, no. 4, p. 1476, 2022.
[15] C. Fan and F. Gao, “Enhanced human activity recognition
using wearable sensors via a hybrid feature selection
method,” Sensors, vol. 21, no. 19, p. 6434, 2021.
[16] I. A. Lawal and S. Bano, “Deep human activity recogni-
tion with localisation of wearable sensors,” IEEE Access,
vol. 8, pp. 155060–155070, 2020.
[17] Y. Vaizman, K. Ellis, G. Lanckriet, and N. Weibel,
“Extrasensory app: Data collection in-the-wild with rich
user interface to self-report behavior,” in Proceedings of
the 2018 CHI conference on human factors in computing
systems, pp. 1–12, 2018.
[18] S. Davidashvilly, M. Hssayeni, C. Chi, J. Jimenez-
Shahed, and B. Ghoraani, “Activity recognition in parkin-
son’s patients from motion data using a cnn model trained
by healthy subjects,” in 2022 44th Annual International
Conference of the IEEE Engineering in Medicine &
Biology Society (EMBC), pp. 3199–3202, IEEE, 2022.
[19] K. Peppas, A. C. Tsolakis, S. Krinidis, and D. Tzovaras,
“Real-time physical activity recognition on smart mobile
devices using convolutional neural networks,” Applied
Sciences, vol. 10, no. 23, p. 8482, 2020.
[20] M. A. Khatun, M. A. Yousuf, S. Ahmed, M. Z. Uddin,
S. A. Alyami, S. Al-Ashhab, H. F. Akhdar, A. Khan,
A. Azad, and M. A. Moni, “Deep cnn-lstm with self-
attention model for human activity recognition using
wearable sensor,” IEEE Journal of Translational Engi-
neering in Health and Medicine, vol. 10, pp. 1–16, 2022.
[21] S. Mekruksavanich, P. Jantawong, N. Hnoohom, and
A. Jitpattanakul, “A novel deep bigru-resnet model for
human activity recognition using smartphone sensors,” in
2022 19th International Joint Conference on Computer
Science and Software Engineering (JCSSE), pp. 1–5,
2022.
[22] E. Fridriksdottir and A. G. Bonomi, “Accelerometer-
based human activity recognition for patient monitoring
using a deep neural network,” Sensors, vol. 20, no. 22,
p. 6424, 2020.
[23] Y. Tian, J. Zhang, J. Wang, Y. Geng, and X. Wang,
“Robust human activity recognition using single ac-
celerometer via wavelet energy spectrum features and
ensemble feature selection,” Systems Science & Control
Engineering, vol. 8, no. 1, pp. 83–96, 2020.
[24] S. Rani, H. Babbar, S. Coleman, A. Singh, and H. M.
Aljahdali, “An efficient and lightweight deep learn-
ing model for human activity recognition using smart-
phones,” Sensors, vol. 21, no. 11, p. 3845, 2021.
[25] F. Cruciani, C. Sun, S. Zhang, C. Nugent, C. Li, S. Song,
C. Cheng, I. Cleland, and P. Mccullagh, “A public
domain dataset for human activity recognition in free-
living conditions,” in 2019 IEEE SmartWorld, Ubiquitous
Intelligence & Computing (SmartWorld/SCALCOM/UIC/
ATC/CBDCom/IOP/SCI), pp. 166–171, IEEE, 2019.
[26] Y. Asim, M. A. Azam, M. Ehatisham-ul Haq, U. Naeem,
and A. Khalid, “Context-aware human activity recogni-
tion (cahar) in-the-wild using smartphone accelerometer,”
IEEE Sensors Journal, vol. 20, no. 8, pp. 4361–4371,
2020.
[27] M. Ehatisham-ul Haq and M. A. Azam, “Opportunistic
sensing for inferring in-the-wild human contexts based
on activity pattern recognition using smart computing,”
Future Generation Computer Systems, vol. 106, pp. 374–
392, 2020.
[28] S. Fatima, “Activity recognition in older adults with
training data from younger adults: Preliminary results on
in vivo smartwatch sensor data,” pp. 1–4, 2021.
[29] F. Cruciani, A. Vafeiadis, C. Nugent, I. Cleland, P. Mc-
Cullagh, K. Votis, D. Giakoumis, D. Tzovaras, L. Chen,
and R. Hamzaoui, “Feature learning for human activity
recognition using convolutional neural networks,” CCF
Transactions on Pervasive Computing and Interaction,
vol. 2, no. 1, pp. 18–32, 2020.
[30] M. Ehatisham-ul Haq, F. Murtaza, M. A. Azam, and
Y. Amin, “Daily living activity recognition in-the-wild:
Modeling and inferring activity-aware human contexts,”
Electronics, vol. 11, no. 2, p. 226, 2022.
[31] M. Ehatisham-Ul-Haq, M. A. Azam, Y. Amin, and
U. Naeem, “C2fhar: Coarse-to-fine human activity recog-
nition with behavioral context modeling using smart
inertial sensors,” IEEE Access, vol. 8, pp. 7731–7747,
2020.
[32] Y. Vaizman, K. Ellis, and G. Lanckriet, “Recognizing
detailed human context in the wild from smartphones
and smartwatches,” IEEE pervasive computing, vol. 16,
no. 4, pp. 62–74, 2017.
[33] S. Tahir, A. Raheel, M. Ehatisham-ul Haq, and A. Ar-
salan, “Object based human-object interaction (hoi)
recognition using wrist-mounted sensors,” in 2020
IEEE 23rd International Multitopic Conference (INMIC),
pp. 1–6, IEEE, 2020.
[34] F. Cruciani, C. Sun, S. Zhang, C. Nugent, C. Li,
S. Song, C. Cheng, I. Cleland, and P. Mccullagh, “A
public domain dataset for human activity recognition
in free-living conditions,” in 2019 IEEE SmartWorld,
Ubiquitous Intelligence & Computing, Advanced
& Trusted Computing, Scalable Computing &
Communications, Cloud & Big Data Computing,
Internet of People and Smart City Innovation
(SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI),
pp. 166–171, IEEE, 2019.