Illustration of how an FMCW radar acquires range and Doppler information, taking a sawtooth wave as an example. f_d is the Doppler shift and τ is the time delay. Adapted from [73].
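For reference, the standard sawtooth-FMCW relations behind this figure can be written out explicitly. The sketch below assumes a chirp of bandwidth B and duration T, carrier wavelength λ, and speed of light c; these symbols are introduced here for illustration and are not defined in the caption itself.

```latex
% Beat frequency caused by the round-trip delay \tau of a chirp with
% slope S = B/T; solving for range R:
f_b \;=\; \frac{B}{T}\,\tau \;=\; \frac{B}{T}\cdot\frac{2R}{c}
\qquad\Longrightarrow\qquad
R \;=\; \frac{c\,T\,f_b}{2B}

% Doppler shift f_d of a target with radial velocity v, and the
% velocity recovered from it:
f_d \;=\; \frac{2v}{\lambda}
\qquad\Longrightarrow\qquad
v \;=\; \frac{\lambda\,f_d}{2}
```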

Source publication
Article
Radar, as one of the sensors for human activity recognition (HAR), has unique characteristics such as privacy protection and contactless sensing. Radar-based HAR has been applied in many fields such as human–computer interaction, smart surveillance and health assessment. Conventional machine learning approaches rely on heuristic hand-crafted featur...

Similar publications

Article
In recent years, many researchers have studied HAR (Human Activity Recognition) systems. HAR using smart home sensors is based on computing in smart environments, and intelligent surveillance systems conduct intensive research on supporting daily life. Previously studied systems cover only a fixed set of motions, and the methodology...

Citations

... The following sections will provide an in-depth exploration of these prominent data acquisition techniques [50]-[58] (e.g., Tw-see [59], ABAR [60]), elucidating their underlying principles, practical applications, and future research directions in the domain of sports science. ...
... In addition to the well-established motion capture technologies such as optical, wearable, and computer vision-based systems, researchers are exploring innovative methods to capture and analyze human motion. These emerging technologies, including audio-based [56], radar-based [57,58], and WiFi-based motion capture [59], offer unique advantages and have the potential to overcome certain limitations of traditional systems. ...
Article
This paper presents a comprehensive review of state-of-the-art motion capture techniques for digital human modeling in sports, including traditional optical motion capture systems, wearable sensor capture systems, computer vision capture systems, and fusion motion capture systems. The review explores the strengths, limitations, and applications of each technique in the context of sports science, such as performance analysis, technique optimization, injury prevention, and interactive training. The paper highlights the significance of accurate and comprehensive motion data acquisition for creating high-fidelity digital human models that can replicate an athlete’s movements and biomechanics. However, several challenges and limitations are identified, such as limited capture volume, marker occlusion, accuracy limitations, lack of diverse datasets, and computational complexity. To address these challenges, the paper emphasizes the need for collaborative efforts from researchers and practitioners across various disciplines. By bridging theory and practice and identifying application-specific challenges and solutions, this review aims to facilitate cross-disciplinary collaboration and guide future research and development efforts in harnessing the power of digital human technology for sports science advancement, ultimately unlocking new possibilities for athlete performance optimization and health.
... • Human identification and recognition [124], [125] • Radar- and WiFi-based sensing [95], [126]-[128] • Improvement of the spatial resolution of wireless sensing • Multi-object multi-task sensing. In order to clearly indicate in which kinds of ISAC scenarios the above questions may arise, we categorize them into two generic types, e.g., ⊙: monostatic deployment and ♣: ...
... In this paper, we critically appraise the recent advances and formulate ten open challenges in ISAC systems, some of which have already had some initial progress, while others are still in the exploratory phase. Following the narrative of "Theory-System-Network-Application", we summarize the structure of this paper as well as the aforementioned open questions and the relevant ISAC scenarios in Fig. 2. As a benefit of concerted community effort, the ISAC philosophy has evolved from a compelling theoretical concept to a practical engineering challenge [23], [52], [66], [67], [76]- [79], [92], [95], [121], [124], [131], [136]- [139]. To further pave the way for its successful evaluation, we critically appraise the recent advances and summarize the above ten questions in TABLE I. ...
... • Feature Extraction: Radio features are obtained through manual feature engineering or automatic machine learning (ML) algorithms [18]. Manual feature engineering typically relies on the amplitude and phase of received signals, as they are influenced by human activity [18], [124]. Time-varying Doppler/micro-Doppler frequency shifts are effective for single-person activity recognition, representing the radial velocities of various body parts [183]. ...
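As a concrete illustration of the Doppler/micro-Doppler features mentioned in this excerpt, the sketch below computes a micro-Doppler spectrogram from a simulated complex slow-time radar signal using SciPy's STFT. The sampling rate, the toy torso/limb signal model, and the window parameters are illustrative assumptions, not values taken from the cited works.

```python
import numpy as np
from scipy import signal

# Illustrative parameters (assumptions, not taken from the cited papers)
fs = 1000          # slow-time sampling rate in Hz (pulse repetition frequency)
t = np.arange(0, 4, 1 / fs)

# Toy complex return: a 60 Hz "torso" Doppler line plus a sinusoidally
# swinging "limb" component that produces micro-Doppler sidebands.
torso = np.exp(2j * np.pi * 60 * t)
limb = 0.5 * np.exp(2j * np.pi * 40 * np.sin(2 * np.pi * 1.5 * t))
x = torso + limb + 0.05 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

# Short-time Fourier transform: time-varying Doppler (velocity) content.
f, frames, Zxx = signal.stft(x, fs=fs, nperseg=128, noverlap=96,
                             return_onesided=False)
spectrogram_db = 20 * np.log10(np.abs(np.fft.fftshift(Zxx, axes=0)) + 1e-12)

# spectrogram_db (Doppler bins x time frames) is the kind of 2-D map that
# hand-crafted features or a classifier would consume for activity recognition.
print(spectrogram_db.shape)
```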
Article
It is anticipated that integrated sensing and communications (ISAC) would be one of the key enablers of next-generation wireless networks (such as beyond 5G (B5G) and 6G) for supporting a variety of emerging applications. In this paper, we provide a comprehensive review of the recent advances in ISAC systems, with a particular focus on their foundations, physical-layer system design, networking aspects and ISAC applications. Furthermore, we discuss the corresponding open questions of the above that emerged in each issue. Hence, we commence with the information theory of sensing and communications (S&C), followed by the information-theoretic limits of ISAC systems by shedding light on the fundamental performance metrics. Next, we discuss their clock synchronization and phase offset problems, the associated Pareto-optimal signaling strategies, as well as the associated super-resolution physical-layer ISAC system design. Moreover, we envision that ISAC ushers in a paradigm shift for the future cellular networks relying on network sensing, transforming the classic cellular architecture, cross-layer resource management methods, and transmission protocols. In ISAC applications, we further highlight the security and privacy issues of wireless sensing. Finally, we close by studying the recent advances in a representative ISAC use case, namely the multi-object multi-task (MOMT) recognition problem using wireless signals.
... Compared to video-based motion capture systems, radar-based motion capture not only offers the same advantages but also enables data collection while ensuring privacy protection. This unique technological advantage makes radar-based motion capture highly promising with great technical potential [65]. ...
Article
Motion capture technology plays a crucial role in optimizing athletes’ skills, techniques, and strategies by providing detailed feedback on motion data. This article presents a comprehensive survey aimed at guiding researchers in selecting the most suitable motion capture technology for sports science investigations. By comparing and analyzing the characters and applications of different motion capture technologies in sports scenarios, it is observed that cinematography motion capture technology remains the gold standard in biomechanical analysis and continues to dominate sports research applications. Wearable sensor-based motion capture technology has gained significant traction in specialized areas such as winter sports, owing to its reliable system performance. Computer vision-based motion capture technology has made significant advancements in recognition accuracy and system reliability, enabling its application in various sports scenarios, from single-person technique analysis to multi-person tactical analysis. Moreover, the emerging field of multimodal motion capture technology, which harmonizes data from various sources with the integration of artificial intelligence, has proven to be a robust research method for complex scenarios. A comprehensive review of the literature from the past 10 years underscores the increasing significance of motion capture technology in sports, with a notable shift from laboratory research to practical training applications on sports fields. Future developments in this field should prioritize research and technological advancements that cater to practical sports scenarios, addressing challenges such as occlusion, outdoor capture, and real-time feedback.
... Depth cameras struggle with background interference, while RGB cameras excel in recognition but risk user privacy and light interference. Wearable sensors collect data via inertial measurement units (IMUs) but can be uncomfortable; radar sensors, in contrast, offer privacy and are unaffected by light or obstructions, providing greater versatility [6]. Due to their noncontact nature and ability to protect privacy, radar sensors are increasingly used in fields such as human-computer interaction, smart living, and health management for HAR [7,8]. ...
Article
Activity recognition is one of the significant technologies accompanying the development of the Internet of Things (IoT). It can help in recording daily life activities or reporting emergencies, thus improving the user's quality of life and safety, and even easing the workload of caregivers. This study proposes a human activity recognition (HAR) system based on activity data obtained via the micro-Doppler effect, combining a two-stream one-dimensional convolutional neural network (1D-CNN) with a bidirectional gated recurrent unit (BiGRU). Initially, radar sensor data are used to generate information related to time and frequency responses using the short-time Fourier transform (STFT). Subsequently, the magnitudes and phase values are calculated and fed into the 1D-CNN and BiGRU models to extract spatial and temporal features for subsequent model training and activity recognition. Additionally, we propose a simple cross-channel operation (CCO) to facilitate the exchange of magnitude and phase features between parallel convolutional layers. An open dataset collected through radar, named Rad-HAR, is employed for model training and performance evaluation. Experimental results show that the proposed 1D-CNN+CCO-BiGRU model achieved superior performance, with an impressive accuracy of 98.2%. This improvement over existing radar-based systems underscores the proposed model's potential applicability in real-world scenarios, marking a significant advancement in the field of HAR within the IoT framework.
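The general shape of the pipeline described above can be sketched in PyTorch: two parallel 1-D convolutional streams for the STFT magnitude and phase, a simple channel-swapping stand-in for the cross-channel operation (CCO), and a bidirectional GRU over the resulting sequence. Layer widths, the exchange rule, and the input length are assumptions made for this sketch rather than the configuration reported in the paper.

```python
import torch
import torch.nn as nn

class TwoStream1DCNNBiGRU(nn.Module):
    """Illustrative two-stream 1D-CNN + BiGRU for radar HAR.

    Inputs are magnitude and phase sequences of shape (batch, 1, T); all
    layer sizes are assumptions for the sketch, not the cited configuration.
    """

    def __init__(self, num_classes: int = 6, hidden: int = 64):
        super().__init__()
        self.mag_conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.phase_conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(input_size=64, hidden_size=hidden,
                          batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, mag: torch.Tensor, phase: torch.Tensor) -> torch.Tensor:
        m = self.mag_conv(mag)        # (batch, 32, T/2)
        p = self.phase_conv(phase)    # (batch, 32, T/2)

        # Simple cross-channel exchange: each stream receives half of the
        # other stream's channels (an illustrative stand-in for the CCO).
        m_mix = torch.cat([m[:, :16], p[:, 16:]], dim=1)
        p_mix = torch.cat([p[:, :16], m[:, 16:]], dim=1)

        feats = torch.cat([m_mix, p_mix], dim=1).transpose(1, 2)  # (batch, T/2, 64)
        _, h = self.gru(feats)                                    # (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=1)                        # (batch, 2*hidden)
        return self.classifier(h)

# Example: a batch of 8 sequences, 256 slow-time samples each.
model = TwoStream1DCNNBiGRU()
mag, phase = torch.randn(8, 1, 256), torch.randn(8, 1, 256)
print(model(mag, phase).shape)   # torch.Size([8, 6])
```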
... Millimeter wave radar has been widely used in human activity recognition due to its robustness to variations in illumination and weather conditions and its protection of privacy [2,4,5]. By analyzing the echo signals from the millimeter wave radar, it is possible to measure changes in the distance to the body and thus analyze information about its structure and movement. ...
... Such mapping has reduced radar-based HAR to a computer vision problem of image classification and has triggered the development of several machine learning/deep learning classification approaches. Research in HAR has advanced from traditional machine learning-based classification using hand-crafted features to automatic abstract feature extraction through deep learning [15]. Commonly used deep learning architectures for HAR are convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and autoencoders [16], [17]. ...
Article
Recently, several deep-learning (DL) techniques using different types of 2-D representations of radar returns have been developed for radar-based human activity recognition (HAR). Most of these DL techniques involve a fusion approach (either at the feature level (intermediate) or at the decision level (late)), as the information obtained from one 2-D radar representation supplements the information obtained from another 2-D radar representation for enhanced HAR. The inputs to these fusion-based DL techniques are RGB (red, green, blue) images of 2-D representations of radar returns. The information contained in the 2-D representations is completely mapped to color (RGB) domains. However, none of the DL techniques exploit this color information explicitly for HAR. This work proposes a novel lightweight multi-domain multi-level fused patch-based learning model that exploits the individual color domain information of RGB images of 2-D representations, namely, range-time, range-Doppler, and spectrograms, for enhanced HAR using radars. This work proposes a novel domain-level (early) fusion of 2-D representations of radar returns based on the color domain information. Individual color planes (R, G, B) of the 2-D representations are fused together to form consolidated 3-channel images that serve as input to an isotropic patch-based learning model called the convolutional mixer (convMixer). The early (domain-level) fused 3-channel images are then used as inputs to attentional feature-level (intermediate) fusion-based convMixer models. The performance of the proposed model is evaluated using a publicly available radar signatures dataset of human activities. The proposed model outperforms the state-of-the-art significantly using a location-wise testing strategy, which eliminates the possibility of data leakage.
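The domain-level (early) fusion step described above can be illustrated with a short NumPy sketch that, for each colour plane, stacks the corresponding planes of the range-time, range-Doppler, and spectrogram images into one consolidated 3-channel image. The exact plane-to-representation pairing and image size are assumptions made for this sketch, not details from the paper.

```python
import numpy as np

def domain_level_fusion(range_time_rgb, range_doppler_rgb, spectrogram_rgb):
    """Build one consolidated 3-channel image per colour plane.

    For each colour plane c in (R, G, B), the c-planes of the range-time,
    range-Doppler, and spectrogram images are stacked into a single
    3-channel image. Interpretation and ordering are assumptions for this
    sketch; inputs are (H, W, 3) arrays of identical size.
    """
    reps = (range_time_rgb, range_doppler_rgb, spectrogram_rgb)
    fused = []
    for c in range(3):  # 0 = R, 1 = G, 2 = B
        fused.append(np.stack([rep[..., c] for rep in reps], axis=-1))
    return fused  # [R-fused, G-fused, B-fused], each (H, W, 3)

# Example with random stand-in "images".
rt, rd, sp = (np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
              for _ in range(3))
fused_images = domain_level_fusion(rt, rd, sp)
print([img.shape for img in fused_images])   # [(224, 224, 3)] * 3
```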
... Human activity recognition is an active research area where machine learning and transfer learning techniques are applied to various sensor modalities, including radar. The papers [12] and [13] explore these techniques, with the former focusing on deep learning for radar-based activity recognition and the latter providing a broader view of the application of both deep learning and transfer learning in the field. ...
Preprint
Due to their large bandwidth, relatively low cost, and robust performance, Ultra-Wideband (UWB) radio chips can be used for a wide variety of applications, including localization, communication, and radar. This article offers an exhaustive survey of recent progress in UWB radar technology. The goal of this survey is to provide a comprehensive view of the technical fundamentals and emerging trends in UWB radar. Our analysis is categorized into multiple parts. Firstly, we explore the fundamental concepts of UWB radar technology from a technology and standardization point of view. Secondly, we examine the most relevant UWB applications and use cases, such as device-free localization, activity recognition, presence detection, and vital sign monitoring, discussing in each case the bandwidth requirements, processing techniques, algorithms, latest developments, relevant example papers, and trends. Next, we steer readers toward relevant datasets and available radio chipsets. Finally, we discuss ongoing challenges and potential future research avenues. As such, this overview paper is designed to be a cornerstone reference for researchers charting the course of UWB radar technology over the last decade.
... Movement is visualized by the interruptions seen on the plots, where there is a break in the data homogeneity. Automatic detection of human presence can be implemented using machine learning techniques, which compare several samples taken in a variety of scenarios [21], [22]. However, the model must be trained for different types of areas, and some of the samples require the area to be empty, which is not always possible. ...
... Comparing the proposed system to other reports, machine-learning-based methods [5], [21], [22], for instance, require large training datasets and therefore do not allow quick deployment or general scenario visualization. The standard deviation used here is also easier to implement and faster than analytical methods such as EMD [6] and SVD [23]. ...
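The standard-deviation criterion contrasted with machine-learning methods in this excerpt can be sketched in a few lines of NumPy: compute the standard deviation of each range bin over slow time and flag bins whose variation stands out. The thresholding rule and data shapes below are illustrative assumptions, not the cited implementation.

```python
import numpy as np

def detect_motion(range_profiles: np.ndarray, threshold: float = 3.0) -> np.ndarray:
    """Flag range bins whose slow-time variation indicates movement.

    range_profiles: (num_sweeps, num_range_bins) magnitude data.
    A bin is marked as containing motion when its standard deviation over
    slow time exceeds `threshold` times the median bin deviation. The
    scaling rule and threshold are assumptions made for this sketch.
    """
    std_per_bin = range_profiles.std(axis=0)
    return std_per_bin > threshold * np.median(std_per_bin)

# Example: static clutter plus a "moving person" around range bin 40.
rng = np.random.default_rng(0)
sweeps = np.abs(rng.normal(1.0, 0.05, size=(200, 128)))          # static scene
sweeps[:, 40] += 0.5 * np.sin(np.linspace(0, 20 * np.pi, 200))   # added motion
print(np.where(detect_motion(sweeps))[0])                        # -> [40]
```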
... Various technologies have been developed for human activity recognition (HAR) and fall detection [15]. However, non-contact mmwave-based radar technology has garnered considerable attention in recent years due to its numerous advantages [16], such as its portability, low cost, and ability to operate in different ambient and temperature conditions. Furthermore, it provides more privacy compared to traditional cameras and is more convenient than wearable devices [17,18]. ...
Article
Telemedicine has the potential to improve access and delivery of healthcare to diverse and aging populations. Recent advances in technology allow for remote monitoring of physiological measures such as heart rate, oxygen saturation, blood glucose, and blood pressure. However, the ability to accurately detect falls and monitor physical activity remotely without invading privacy or remembering to wear a costly device remains an ongoing concern. Our proposed system utilizes a millimeter-wave (mmwave) radar sensor (IWR6843ISK-ODS) connected to an NVIDIA Jetson Nano board for continuous monitoring of human activity. We developed a PointNet neural network for real-time human activity monitoring that can provide activity data reports, tracking maps, and fall alerts. Using radar helps to safeguard patients’ privacy by abstaining from recording camera images. We evaluated our system for real-time operation and achieved an inference accuracy of 99.5% when recognizing five types of activities: standing, walking, sitting, lying, and falling. Our system would facilitate the ability to detect falls and monitor physical activity in home and institutional settings to improve telemedicine by providing objective data for more timely and targeted interventions. This work demonstrates the potential of artificial intelligence algorithms and mmwave sensors for HAR.
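The PointNet component the abstract relies on can be sketched in PyTorch as a shared per-point MLP followed by symmetric max-pooling and a small classification head over the five activities. Layer widths and the omission of the input/feature transform networks are simplifications made for this sketch, not the authors' exact network.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Minimal PointNet-style classifier for radar point clouds.

    Input: (batch, num_points, 3) x/y/z coordinates from an mmwave sensor.
    The shared per-point MLP and max-pooling follow the PointNet recipe,
    but layer widths and the missing T-Net transforms are simplifications.
    """

    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        per_point = self.point_mlp(points)          # (batch, N, 256)
        global_feat = per_point.max(dim=1).values   # symmetric pooling over points
        return self.head(global_feat)               # (batch, num_classes)

# Example: a batch of 4 frames, 128 radar points each, five activities.
model = TinyPointNet()
cloud = torch.randn(4, 128, 3)
print(model(cloud).shape)   # torch.Size([4, 5])
```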
... As the demand for radar-based systems continues to grow, the exploration of deep learning techniques has led to the emergence of several applications [10]- [13] leveraging micro-Doppler signatures. One notable application of deep learning with micro-Doppler signatures is in motion recognition and classification. ...
Article
A micro-Doppler (MD) signature includes unique characteristics produced by different-sized body parts such as the arms, legs, and torso. Existing radar identification systems have sought to identify humans using the characteristics present in MD signatures, achieving remarkable classification performance. However, we argue that radar identification systems should also be extended to perform more fine-grained tasks to achieve more flexible identification. In this paper, we introduce a radar human localization (RHL) task, which involves temporally localizing human identities within untrimmed MD signatures. To enable RHL, we have constructed a micro-Doppler dataset referred to as IDRad-TBA. Furthermore, we propose the Causal Localization Network (CLNet) as the RHL baseline system built upon the IDRad-TBA dataset. CLNet employs a novel temporal causal prediction approach for MD signature localization. Experimental results validate the effectiveness of CLNet in performing the RHL task. Our project is available at: https://github.com/dbstjswo505/CLNet.