Tracking People Using Ankle-Level 2D LiDAR
for Gait Analysis
Mahmudul Hasan (1,2), Junichi Hanawa (1), Riku Goto (1), Hisato Fukuda (1), Yoshinori Kuno (1), and Yoshinori Kobayashi (1)
(1) Saitama University, Saitama, Japan
{hasan,hanawa0801,r.goto,fukuda,kuno,yosinori}@hci.ics.saitama-u.ac.jp
(2) Comilla University, Comilla, Bangladesh
Abstract. People tracking is one of the fundamental goals of human behavior recognition. Advances in cameras, tracking algorithms, and efficient computation have made it practical. However, when privacy and secrecy are at stake, cameras carry a heavy burden. The fundamental goal of this research is to replace the video camera with a device (a 2D LiDAR) that significantly preserves the privacy of the user, solves the issue of a narrow field of view, and keeps the system functional at the same time. We consider the individual movements of every moving object on the plane and recognize an object as a person based on ankle orientation and movement. Our approach counts the frames of every moving object and finally creates a video based on those frames.
Keywords: People tracking · 2D LiDAR · Kalman filter · Ankle-level tracking
1 Introduction
Person Tracking (PT) by machine is a salient field in Human-Computer Interaction (HCI). Research on person tracking has reached a remarkable level of accuracy in recent years. It involves mapping the surface, locating persons' positions, following their subsequent movements, distinguishing them from other objects by their properties, and finally projecting the results onto the desired surface. Time series of individual position data enable us to analyze trajectories for many purposes (e.g., marketing). PT with 2D and 3D cameras plays a significant role in various practical applications. Real-time PT from live video makes it robust and usable in different scenarios, and statistical models and their efficacy have made PT widely accepted. Here the video camera plays the role of data acquisition, and some systems can also enhance the captured data. Recently, along with the development of deep learning-based image processing, the performance of people detection and tracking using cameras has improved dramatically. However, when we consider using cameras everywhere in daily life, privacy issues cannot be ignored. In addition, some phenomena, such as smoke or fog, make it difficult to use cameras. Furthermore, although low-cost cameras (not only RGB but also RGB-D cameras) are widely available, the computational cost of image processing is far from negligible, and it can become prohibitive when deep learning techniques are applied to many cameras for wide-area surveillance.
The focus of this research is to use a sensor that does not compromise privacy but enhances the efficiency of tracking. To cope with these problems, we propose a new people tracking technique using 2D LiDAR; low cost and real-time computational capability were the key motivations behind this choice. To minimize occlusions between pedestrians, we place the 2D LiDAR at ankle level and scan the target area horizontally. The main issue of tracking people in this sensor setup is how to discriminate individuals from the isolated observations of multiple ankles of multiple persons. We propose a novel method that uses the time series of ranging data to classify individuals. Individual ankle trajectories are considered for movement detection. Distances between ankles are calculated with the well-known Euclidean nearest-neighbor technique, which helps to determine each cluster of a pair of ankles as one person. We clearly identified walking and running paths even when the motion is very fast. The approach counts the frames of every moving object and finally creates a video based on those frames, which can be further used for surveillance or other purposes. Our method provides accurate and robust tracking when the target is walking or running. Experimental results show the effectiveness of the proposed method.
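To illustrate the final step of turning tracked frames into a video, the short sketch below renders per-frame ankle points into images and writes them out with OpenCV. It is a minimal sketch: the frame size, the pixels-per-metre scale, and the point format are assumptions for illustration, not the exact settings of our pipeline.

    import cv2
    import numpy as np

    def frames_to_video(point_frames, out_path="tracking.mp4",
                        size=600, scale=50.0, fps=10):
        """Render per-frame 2D ankle points (metres) into a video file.

        point_frames: list of (N, 2) arrays, one array per LiDAR scan.
        scale: pixels per metre (assumed value for illustration).
        """
        fourcc = cv2.VideoWriter_fourcc(*"mp4v")
        writer = cv2.VideoWriter(out_path, fourcc, fps, (size, size))
        for points in point_frames:
            img = np.zeros((size, size, 3), dtype=np.uint8)
            for x, y in points:
                # Map metric coordinates to pixels, sensor at the image centre.
                u = int(size / 2 + x * scale)
                v = int(size / 2 - y * scale)
                if 0 <= u < size and 0 <= v < size:
                    cv2.circle(img, (u, v), 3, (0, 255, 0), -1)
            writer.write(img)
        writer.release()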
2 Related Works
LiDAR (Light Detection and Ranging) is a sensing device that measures distances on a plane. Many researchers have contributed to this tracking field [1] using cameras or LiDARs. Trajectory-based human behavior analysis [2] using LiDAR was our initial motivation for making better use of this sensor, and the museum guide robot [3, 4] showed how LiDAR works in a practical scenario.
Tracking a specific person is a vital task in daily applications. Misu, K. et al. [5] illustrate an approach for identifying and tracking a specific individual outdoors with a mobile robot; their robot used 3D LiDARs for person recognition and identification, and a directivity-variable antenna (the ESPAR antenna) for finding a certain person when he is occluded and/or goes out of view. Yan, Zhi et al. [6] presented a framework that permits a robot to incrementally learn a 3D LiDAR-based person classifier from other sensors and to benefit from a multi-sensor tracking system. Koide, Kenji et al. [7] described a human identification system for a mobile service robot using a smartphone and laser range finders (LRFs). All these approaches used 3D LiDAR, which is not cost-effective; in contrast, our system is designed to track people with a 2D LiDAR both outdoors and indoors.
Álvarez-Aparicio, C. et al. [8] reported experimental results using a single LiDAR sensor to provide continuous recognition of an individual over time and space. Their system was based on the People Tracker package (PeTra), which uses a convolutional neural network (CNN) to detect person legs in cluttered scenarios. That system tracks only one individual at a time, whereas our system can track multiple people on the surface simultaneously. Dimitrievski, M. et al. [9] introduced a 2D-3D pedestrian tracker created for applications in autonomous vehicles, using multiple sensors for the task. Sualeh, M. et al. [10] proposed a robust Multiple Object Detection and Tracking (MODT) procedure using several 3D LiDARs for perception; the combined LiDAR data is processed by an efficient MODT framework that accounts for the limitations of the vehicle-embedded computing environment. Bence Gálai et al. [11, 12] presented a performance analysis of numerous descriptors suitable for gait analysis in Rotating Multi-Beam (RMB) Lidar measurement systems. All these methods rely on multiple sensors and require extensive computation to be robust. Qing Li et al. [13] presented a deep convolutional network pipeline, LO-Net, for real-time lidar odometry estimation. Jiaxiong Qiu et al. [14] proposed a deep learning architecture that produces accurate dense depth for outdoor scenes from a single color image and sparse depth. These applications are designed mainly for autonomous driving based on LiDAR and depth cameras.
Research on person tracking and positioning is broad and has been conducted for a long time. Much of the work in this area uses cameras, 2D/3D LiDARs, ultrasonic sensors, etc. However, person tracking and gait analysis based only on a 2D LiDAR is a challenging and new concept. We concentrate on this topic and show how it can be achieved. Our aim is to develop a low-cost, 2D LiDAR-based tracking system that can be used in any tracking application without compromising human privacy.
3 Proposed Method
We introduce a tracking method based on a LiDAR sensor that works in different environments. We use a 2D LiDAR sensor for its low price and computational effectiveness. For our experiments we used a HOKUYO UTM-30LX LiDAR sensor on a flat ground surface. We placed the LiDAR at ankle level and collected data over a 270-degree field of view. When people walk within the range of the LiDAR sensor, it captures the actual positions of the moving objects and their distances from the sensor. We plot persons' ankle positions on the plane and track their movements. As shown in Fig. 1, the LiDAR is placed at the ankle level of a person and provides the range information to the corresponding computer. In the first frame, white lines indicate the boundaries of the surface, i.e., the walls. We then remove these boundaries from the frame by background subtraction, so that only the ankle positions remain on the surface.
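The preprocessing just described can be summarised by the following minimal sketch, which converts a 270-degree range scan to Cartesian points and removes the static background (walls) by comparing against a reference scan of the empty room. The angular layout, the reference-scan handling, and the 10 cm margin are assumptions for illustration, not the exact parameters of our system.

    import numpy as np

    def scan_to_points(ranges, fov_deg=270.0):
        """Convert one LiDAR scan (range readings in metres) to 2D points."""
        angles = np.linspace(-np.radians(fov_deg) / 2,
                             np.radians(fov_deg) / 2, len(ranges))
        return np.stack([ranges * np.cos(angles),
                         ranges * np.sin(angles)], axis=1)

    def foreground_points(ranges, background_ranges, margin=0.10):
        """Keep only beams returning noticeably closer than the empty-room scan."""
        ranges = np.asarray(ranges, dtype=float)
        mask = ranges < (np.asarray(background_ranges, dtype=float) - margin)
        return scan_to_points(ranges)[mask]

The surviving foreground points are then grouped into ankle candidates and processed frame by frame as described below.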
The green marked lines indicate the ankle positions of a person. If more than one person appears in front of the sensor, the system clearly separates them. The distances between the points of a single ankle and between two ankles are measured and used to decide the number of people moving in front of the sensor, applying an empirically determined threshold for each circumstance. We use the Euclidean distance for tracking persons:
\mathrm{dist}\big((x_a, y_a), (x_b, y_b)\big) = \sqrt{(x_a - x_b)^2 + (y_a - y_b)^2} \qquad (1)
where dist calculates the distance between the points (x_a, y_a) and (x_b, y_b) of an ankle's position, and subsequently between all ankles that appear on the plane.
Depending on the positional relationship with the sensor, one ankle may occlude the other. This situation does not influence the decision: because the clustering is distance-based, the system does not assume that both feet are visible. It only calculates the distances between the ankles that appear in the LiDAR field of view. If no other ankle is found, the system concludes that only one person is walking on the floor. If the distance to the nearest ankle is larger than the threshold, our system tracks that ankle as a separate person, even though only one ankle position is found. Thus the system overcomes the problem of feet being obscured or disappearing on the floor. Figure 2 shows that our system correctly identifies the ankles even with disappearance: two frames earlier, the calculated ankle distance exceeded the maximum threshold, so two persons were counted; one frame earlier, the distance fell within the single-person range, so one person was counted; finally, in the current frame, one ankle is occluded by the other, and the system still reports one person without misclassification.
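The per-frame decision can be expressed as in the sketch below, which pairs ankle candidates whose mutual distance (Eq. 1) falls below the threshold and keeps a lone ankle as a person on its own; the 0.5 m threshold is an assumed value for illustration rather than the empirically tuned one.

    import numpy as np

    def group_ankles_into_persons(ankles, max_step=0.5):
        """Greedily pair ankle centroids into persons using Eq. (1).

        ankles: (N, 2) array of ankle cluster centroids in metres.
        max_step: assumed maximum ankle-to-ankle distance within one person.
        Returns a list of persons, each a list of one or two ankle indices.
        """
        unused = set(range(len(ankles)))
        persons = []
        while unused:
            i = unused.pop()
            # Euclidean distance from ankle i to every remaining ankle.
            best_j, best_d = None, np.inf
            for j in unused:
                d = np.hypot(*(ankles[i] - ankles[j]))
                if d < best_d:
                    best_j, best_d = j, d
            if best_j is not None and best_d <= max_step:
                unused.remove(best_j)
                persons.append([i, best_j])  # two visible ankles -> one person
            else:
                persons.append([i])          # lone (possibly occluded) ankle
        return persons

For instance, two ankles 0.3 m apart are merged into one person, while a third ankle 1.2 m away starts a new person of its own.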
Fig. 1. (a) LiDAR sensor placed at ankle level and acquiring data; (b) persons' ankle movements and standing positions; (c) tracking people, with a marker sign indicating each person and his/her moving direction.
Fig. 2. (a) Ankles at different positions moving into an overlapping position; (b) frame-by-frame detection.
4 Experimental Results
There is no well-known data set for 2D LiDAR-based person tracking and gait analysis. For the experiments we therefore prepared our own data set and evaluated our method on this benchmark, which has 35 samples from 27 female and 8 male participants. The benchmark contains two scenarios: normal and highly crowded. We evaluated the performance of our proposals on the validation data set. We used a Kalman filter to predict positions for tracking individuals; the system can track a person even if only one ankle appears in the frame.
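For the prediction step we have in mind a standard constant-velocity Kalman filter per tracked person; the sketch below is a minimal version, with the time step and the process and measurement noise values chosen only for illustration.

    import numpy as np

    class ConstantVelocityKF:
        """Track one person centroid (x, y, vx, vy) at a fixed scan rate."""

        def __init__(self, xy, dt=0.1, q=0.05, r=0.05):
            self.x = np.array([xy[0], xy[1], 0.0, 0.0])            # state
            self.P = np.eye(4)                                      # covariance
            self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                               [0, 0, 1, 0], [0, 0, 0, 1]], float)  # motion model
            self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)  # observe x, y
            self.Q = q * np.eye(4)                                  # process noise
            self.R = r * np.eye(2)                                  # measurement noise

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:2]

        def update(self, z):
            # z: measured person centroid; when only one ankle is visible we
            # still update with that ankle's position as a noisier measurement.
            y = np.asarray(z, float) - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
            return self.x[:2]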
In Fig. 3 we can see that, under different conditions, our system predicts the movements of different persons with high accuracy. The first upper frame shows an individual walking, and the corresponding lower frame shows the person moving away from the LiDAR. In the 2nd frame one person is running in different directions, and the 3rd, 4th, and 5th frames show other scenarios; the corresponding lower frames show the positions tracked by the LiDAR using the Kalman filter.
Fig. 3. Kalman filter-based movements in different circumstances (panels: Individual Walking, Individual Running, Only Ankle Movement, Combined Walking, Combined Running); upper row: ankle positions in the frame; lower row: direction of the movements.
Table 1. Experimental data and its performance

Scenario            | Persons / Frames | Gesture correctly identified | Percentage
Individual Walking  | 4 / 48 Frms      | 4 / 48 Frms                  | 100%
Individual Running  | 4 / 51 Frms      | 4 / 50 Frms                  | 98.04%
Only Ankle Movement | 4 / 43 Frms      | 4 / 40 Frms                  | 93.02%
Combined Walking    | 2 / 42 Frms      | 2 / 41 Frms                  | 97.62%
Combined Walking    | 3 / 47 Frms      | 3 / 44 Frms                  | 93.62%
Combined Walking    | 4 / 43 Frms      | 4 / 39 Frms                  | 90.70%
Combined Running    | 2 / 47 Frms      | 2 / 36 Frms                  | 76.60%
Combined Running    | 3 / 47 Frms      | 3 / 33 Frms                  | 70.21%
Combined Running    | 4 / 42 Frms      | 3 / 29 Frms                  | 69.04%
Table 1 summarizes the recorded videos used for our experimental evaluation with different gestures. We categorized the experiments into five types. Individual walking, individual running, only ankle movement, and combined walking can be tracked with high confidence, while we have some reservations about the combined running scenario. We considered 4 people and their captured LiDAR recordings for the performance evaluation. For validation we used about 4 s of video per person and gesture, each containing 42 to 51 frames; the percentage is the ratio of correctly identified frames to total frames (e.g., 50 of 51 correctly identified frames in individual running gives 98.04%). The table shows that performance drops in the combined running situations, but compared with other camera-based systems the results are still impressive. Gait analysis and person height estimation based on ankle movements were also performed on this data set, and we found interesting differences between walking and running patterns.
5 Conclusion
In the cyber world a person is tracked all the time, everywhere. But when privacy is at stake, people want some space away from the eyes around them, whether in the real or the virtual world. On the other hand, the need for surveillance cannot be ignored. LiDAR sits at the core of this trade-off: without disclosing a person's identity, our approach tracks the person and identifies his/her movements, and our system is now ready for commercial use. We will extend our study to estimate properties of human activities using LiDAR, and we are working on gait analysis using ankle-level 2D LiDAR. In the future we will integrate tracking and analysis into one system.
References
1. Chen, L., Ai, H., Zhuang, Z., Shang, C.: Multiple people tracking with deeply learned candidate selection and person re-identification. In: Proceedings of Multimedia and Expo (ICME) (2018)
2. Rashed, M.G., Suzuki, R., Yonezawa, T., Lam, A., Kobayashi, Y., Kuno, Y.: Robustly tracking people with LIDARs in a crowded museum for behavioral analysis. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. E100-A, 2458 (2017)
3. Oyama, T., Yoshida, E., Kobayashi, Y., Kuno, Y.: Tracking visitors with sensor poles for robot's museum guide tour. In: Proceedings of Human System Interactions (HSI), Sopot, pp. 645–650 (2013)
4. Oyama, T., Yoshida, E., Kobayashi, Y., Kuno, Y.: Tracking a robot and visitors in a museum using sensor poles. In: Proceedings of Frontiers of Computer Vision, pp. 36–41 (2013)
5. Misu, K., Miura, J.: Specific person detection and tracking by a mobile robot using 3D LIDAR and ESPAR antenna. In: Proceedings of Intelligent Autonomous Systems (IAS) (2014)
6. Yan, Z., Sun, L., Duckett, T., Bellotto, N.: Multisensor online transfer learning for 3D LiDAR-based human detection with a mobile robot. In: Proceedings of Intelligent Robots and Systems (IROS), pp. 7635–7640 (2018)
7. Koide, K., Miura, J.: Person identification based on the matching of foot strike timings obtained by LRFs and a smartphone. In: Proceedings of Intelligent Robots and Systems (IROS), pp. 4187–4192 (2016)
8. Álvarez-Aparicio, C., Guerrero-Higueras, Á.M., Rodríguez-Lera, F.J., Clavero, J.G., Rico, F.M., Matellán, V.: People detection and tracking using LIDAR sensors. Robotics 8, 75 (2019)
9. Dimitrievski, M., Veelaert, P., Philips, W.: Behavioral pedestrian tracking using a camera and LiDAR sensors on a moving vehicle. Sensors 19, 391 (2019)
10. Sualeh, M., Kim, G.-W.: Dynamic multi-LiDAR based multiple object detection and tracking. Sensors 19(6), 1474 (2019)
11. Gálai, B., Benedek, C.: Feature selection for Lidar-based gait recognition. In: Proceedings of Computational Intelligence for Multimedia Understanding (IWCIM), Prague, pp. 1–5 (2015)
12. Benedek, C., Nagy, B., Gálai, B., Jankó, Z.: Lidar-based gait analysis in people tracking and 4D visualization. In: Proceedings of European Signal Processing Conference (EUSIPCO), Nice, pp. 1138–1142 (2015)
13. Li, Q., Chen, S., Wang, C., Li, X., Wen, C., Cheng, M., Li, J.: LO-Net: deep real-time lidar odometry. In: Proceedings of Computer Vision and Pattern Recognition (CVPR), pp. 8473–8482 (2019)
14. Qiu, J., Cui, Z., Zhang, Y., Zhang, X., Liu, S., Zeng, B., Pollefeys, M.: DeepLiDAR: deep surface normal guided depth prediction for outdoor scene from sparse LiDAR data and single color image. In: Proceedings of Computer Vision and Pattern Recognition (CVPR), pp. 3313–3322 (2019)
Conference Paper
In this paper we introduce a new approach on gait analysis based on data streams of a Rotating Multi Beam (RMB) Lidar sensor. The gait descriptors for training and recognition are observed and extracted in realistic outdoor surveillance scenarios, where multiple pedestrians walk concurrently in the field of interest, while occlusions or background noise may affects the observation. The proposed algorithms are embedded into an integrated 4D vision and visualization system. Gait features are exploited in two different components of the workflow. First, in the tracking step the collected characteristic gait parameters support as biometric descriptors the re-identification of people, who temporarily leave the field of interest, and re-appear later. Second, in the visualization module, we display moving avatar models which follow in real time the trajectories of the observed pedestrians with synchronized leg movements. The proposed approach is experimentally demonstrated in eight multi-target scenes.