Fig. 1. Camera configuration.

Source publication
Article
Full-text available
Faced with the growing population of seniors, developed countries need to establish new healthcare systems to ensure the safety of elderly people at home. Computer vision provides a promising solution to analyze personal behavior and detect certain unusual events such as falls. In this paper, a new method is proposed to detect falls by analyzing hu...

Contexts in source publication

Context 1
... this section, we first introduce the data set to illustrate the main difficulties of realistic video sequences. To better test our system, each action was taken from several viewpoints. Fig. 1 shows the configuration of the cameras in the ...
Context 2
... ROC curves show that our recognition results are good for each camera independently. We also tried to improve these results by combining the outputs of all cameras. This was done with an ensemble classifier as shown in Fig. ...
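A minimal sketch of how per-camera decisions might be fused, assuming each camera's GMM classifier outputs log-likelihoods for the "fall" and "normal activity" classes. The function name and the two fusion rules shown (majority vote, summed log-likelihood ratio) are illustrative assumptions, not necessarily the ensemble rule used in the paper:

```python
import numpy as np

def combine_cameras(loglik_fall, loglik_normal, vote_threshold=0.5):
    """Fuse per-camera GMM scores into a single fall/no-fall decision.

    loglik_fall / loglik_normal: arrays of shape (n_cameras,) holding, for the
    current event, each camera's log-likelihood under the 'fall' and 'normal
    activity' models.
    """
    loglik_fall = np.asarray(loglik_fall, dtype=float)
    loglik_normal = np.asarray(loglik_normal, dtype=float)
    # Rule 1: majority vote over independent per-camera decisions.
    votes = (loglik_fall > loglik_normal).mean()
    majority_decision = votes >= vote_threshold
    # Rule 2: sum of log-likelihood ratios (treats cameras as independent views).
    sum_llr_decision = (loglik_fall - loglik_normal).sum() > 0.0
    return majority_decision, sum_llr_decision
```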
Context 3
... to detect falls [8], [9] because of its simplicity. The bounding box should change from a vertical to a horizontal orientation after the fall. The 2-D vertical velocity has also been used, but it depends on the person's distance to the camera; a solution to this inconsistency problem could be to normalize the 2-D velocity by the person's 2-D size. To estimate this size, we compute the best-fitting ellipse of the silhouette using moments [14] as shown in Fig. 11. The normalized 2-D vertical velocity is then obtained by dividing it by the length of the ellipse major axis. It should be high during the fall and low just after the fall. For a fair comparison, our 3-component GMM classifier is used with the new features representing the fall and the same inactivity period after the fall. Table II(a) ...
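The moment-based ellipse fit lends itself to a short sketch. The following example (NumPy only; function names and the centroid-velocity definition are illustrative assumptions) estimates the major-axis length of the best-fitting ellipse from the second-order central moments of a binary silhouette and uses it to normalize the vertical velocity:

```python
import numpy as np

def ellipse_major_axis(mask: np.ndarray) -> float:
    """Approximate the silhouette by its best-fitting ellipse (via image
    moments) and return the length of the ellipse major axis in pixels."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        raise ValueError("empty silhouette")
    # Centroid (first-order moments normalized by the area m00).
    cx, cy = xs.mean(), ys.mean()
    # Normalized second-order central moments (covariance of the pixel cloud).
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    cov = np.array([[mu20, mu11], [mu11, mu02]])
    # The largest eigenvalue is the variance along the ellipse major axis;
    # for a filled ellipse the semi-major axis a satisfies lambda_max = a**2 / 4.
    lam_max = np.linalg.eigvalsh(cov)[-1]
    return 4.0 * np.sqrt(lam_max)          # full major-axis length (2a)

def normalized_vertical_velocity(cy_prev, cy_curr, dt, major_axis):
    """Centroid 2-D vertical velocity divided by the person's 2-D size."""
    return abs(cy_curr - cy_prev) / dt / major_axis
```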
Context 4
... ratio of the bounding box gave poor results. Indeed, segmentation errors can affect the bounding box, because of shadows, highlights, occlusions, carried objects or simply if the person extends his arms as shown in Fig. 11. The 2-D vertical velocity is sensitive to the camera viewpoint: the velocity is higher when the person is near the camera. Normalizing the 2-D vertical velocity does not improve the recognition results, mainly because the size of the person is unreliable, as explained for the bounding box aspect ratio. However, with a better assessment ...
Context 5
... mean matching cost C̄ can be sensitive in the case of a moving object. Fig. 12 shows an example where C̄ is incorrect when the person moved an armchair. Since C̄ is based on shape context, and the context is not the same when an object moves, the matching cost increases similarly to a fall. The full Procrustes distance D_f is more robust in this case since it measures the shape deformation of reliable matched ...
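For reference, here is a small sketch of the full Procrustes distance between two already-matched 2-D point sets, following the standard pre-shape formulation; the function name is illustrative and the paper's exact implementation details are not reproduced here:

```python
import numpy as np

def full_procrustes_distance(X: np.ndarray, Y: np.ndarray) -> float:
    """Full Procrustes distance between matched 2-D point sets X, Y of shape
    (n, 2): translation and scale are removed, rotation is minimised over;
    the result lies in [0, 1], with 0 meaning identical shapes."""
    # Represent each configuration as a centred, unit-norm complex vector (pre-shape).
    z = X[:, 0] + 1j * X[:, 1]
    w = Y[:, 0] + 1j * Y[:, 1]
    z = z - z.mean(); z = z / np.linalg.norm(z)
    w = w - w.mean(); w = w / np.linalg.norm(w)
    # d_F = sqrt(1 - |<w, z>|^2) for unit pre-shapes (Dryden & Mardia formulation).
    return float(np.sqrt(max(0.0, 1.0 - np.abs(np.vdot(w, z)) ** 2)))
```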
Context 6
... simulated a large object occlusion in a video sequence to analyze the effect of occlusions on the full Procrustes distance and the mean matching cost curves. The curves, shown in Fig. 13, were only slightly disturbed by this major occlusion and did not generate high peaks similar to falls. Notice that in the case of a severe occlusion (not enough edge points for silhouette matching), the algorithm stops and restarts when the silhouette ...

Citations

... The emergence of visual sensors introduced a new dimension to this research field as well. Rougier et al. (2011) [20] implemented sophisticated techniques to track and analyze human silhouettes through video sequences. In a similar vein, Diraco et al. (2010) [21] investigated the use of a ToF camera for fall detection. ...
Article
Full-text available
Fall detection approaches struggle with both Big Data scalability and upholding individual privacy. This research work proposes a novel approach for posture recognition followed by fall detection, taking advantage of the synergy between Random Forests and uniform Local Binary Pattern (uLBP) histograms for accurate and fast posture identification while respecting privacy. Additionally, it relies on deep recurrent neural networks distributed on a Hadoop and Spark platform for time-series analysis in fall detection. This combination of methods allowed us to achieve acceptable real-time monitoring precision. This study therefore addresses two objectives simultaneously: efficiency and scalability in posture recognition using Random Forests and uLBP, and fall detection relying on a recurrent neural network (RNN) for time-series processing. The suggested solution is designed for home telemonitoring, where scalability and effective data management are supported through Hadoop/Spark. The integration of these technologies promotes reliable detection without any privacy violation, paving the way for wider adoption of home monitoring systems for an increasing population of dependent individuals.
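A hedged sketch of the posture-recognition idea described in this abstract, pairing uniform LBP histograms with a Random Forest. The parameters (8 neighbours, radius 1, 200 trees) and variable names such as X_train are illustrative assumptions, not the study's actual settings:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

P, R = 8, 1                       # 8 neighbours on a radius-1 circle
N_BINS = P + 2                    # the 'uniform' LBP variant yields P + 2 codes

def ulbp_histogram(gray_image: np.ndarray) -> np.ndarray:
    """Uniform LBP histogram of a grayscale frame (or person ROI)."""
    codes = local_binary_pattern(gray_image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=N_BINS, range=(0, N_BINS), density=True)
    return hist

# Hypothetical training data: X_train is a list of grayscale images and
# y_train the corresponding posture labels (standing, sitting, lying, ...).
# clf = RandomForestClassifier(n_estimators=200, random_state=0)
# clf.fit(np.stack([ulbp_histogram(img) for img in X_train]), y_train)
# posture = clf.predict(ulbp_histogram(new_frame)[None, :])
```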
... These approaches utilize computer vision techniques to extract significant features, such as silhouettes [12], bounding boxes [13], deflection angles [3] or aspect ratio [14], from the frames. Researchers have employed various techniques to identify falls, including shape matching [15] or head tracking [16]. Conventional video-based approaches often require subject extraction, which can be affected by image noise. ...
Article
Full-text available
This paper presents a vision-based model for detecting and classifying human falls in video sequences. We used BlazePose to detect and extract the 33 body landmarks of a human body, then selected 4 points to represent the upper body. We then drew a straight line "r" to calculate the angle of the upper body, the linear velocity, and the angular velocity, which help determine whether the detected person has fallen. These data are similar to those obtained from gyroscope and accelerometer sensors. We then used the capabilities of CNN and LSTM to construct a model for fall detection. In addition, we used DeepSORT to track people in the video and identify who fell. We conducted experiments on three datasets, and our model achieved a high accuracy of 96.66%, recall of 89.95%, precision of 96.72% and F1-score of 93.08%.
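A minimal sketch of the torso-angle features described above, assuming the MediaPipe BlazePose landmark indices 11/12 for the shoulders and 23/24 for the hips; the function names and the choice of the four points are illustrative assumptions:

```python
import numpy as np

# Assumed BlazePose landmark indices: 11/12 = shoulders, 23/24 = hips.
L_SHOULDER, R_SHOULDER, L_HIP, R_HIP = 11, 12, 23, 24

def upper_body_angle(landmarks: np.ndarray) -> float:
    """Angle (degrees) between the torso line r (hip midpoint -> shoulder
    midpoint) and the vertical image axis; landmarks is (33, 2) in pixels."""
    shoulder_mid = (landmarks[L_SHOULDER] + landmarks[R_SHOULDER]) / 2.0
    hip_mid = (landmarks[L_HIP] + landmarks[R_HIP]) / 2.0
    dx, dy = shoulder_mid - hip_mid          # image y grows downwards
    return float(np.degrees(np.arctan2(abs(dx), abs(dy))))

def angular_velocity(angle_prev: float, angle_curr: float, dt: float) -> float:
    """Finite-difference angular velocity (deg/s) between two frames."""
    return (angle_curr - angle_prev) / dt
```

A time series of these angle and velocity values per tracked person could then be fed to a CNN-LSTM classifier of the kind the abstract describes.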
... Before depth cameras became widely available and affordable, several groups analyzed RGB camera images to detect falls. (5)(6)(7) However, using a single camera has numerous limitations, such as blind spots and a restricted angle; multicamera systems were thus proposed to improve fall detection. (8,9) Microsoft's introduction of Kinect marked a significant shift in the field. ...
... Videos are randomly selected as training and testing sets in a 2-fold cross-validation setup. Table 3 shows the experimental results of our method and the results reported in [4][5][6][7][8][9]. Our method achieves one of the highest sensitivities among the existing methods. ...
... Our method achieves one of the highest sensitivities among the existing methods. [5] and [8] have better sensitivity, but their specificities are lower than our method's. We also achieve the second-best specificity after [7], but [7] has a very low sensitivity of only 80.60%. ...
Preprint
Full-text available
This paper introduces the AltumView Sentinare smart activity sensor for senior care and patient monitoring. The sensor uses an AI chip and deep learning algorithms to monitor people's activity, collect activity statistics, and notify caregivers when emergencies such as falls are detected. To protect privacy, only skeleton (stick figure) animations are transmitted instead of videos. The sensor is highly affordable, accessible, and versatile. It was a CES 2021 Innovation Award Honoree, has been selected by Amazon as one of only three fall detection devices integrated into its Alexa Together urgent response service, and has received very positive reviews from Amazon customers. It has also been used in different senior care settings in about ten countries. The paper presents the main features of the system, the evidence and lessons learned from its practical applications, and future directions.
... Existing works on resident identification usually employ intrusive sensors such as cameras and microphones. For instance, vision-based solutions have been proposed for recognizing residents' identities [6,46,49]. These solutions process images with computer vision algorithms. ...
Article
Full-text available
We propose a novel resident identification framework to identify residents in a multi-occupant smart environment. The proposed framework employs a feature extraction model based on the concepts of positional encoding. The feature extraction model considers the locations of homes as a graph. We design a novel algorithm to build such graphs from layout maps of smart environments. The Node2Vec algorithm is used to transform the graph into high-dimensional node embeddings. A Long Short-Term Memory model is introduced to predict the identities of residents using temporal sequences of sensor events with the node embeddings. Extensive experiments show that our proposed scheme effectively identifies residents in a multi-occupant environment. Evaluation results on two real-world datasets demonstrate that our proposed approach achieves 94.5% and 87.9% accuracy, respectively.
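A hedged sketch of the sequence-classification stage described in this abstract, assuming the Node2Vec embedding of each triggered sensor's graph node is already available; the class name, layer sizes, and number of residents are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class ResidentLSTM(nn.Module):
    """LSTM that maps a temporal sequence of sensor-location embeddings
    (e.g. Node2Vec vectors of the nodes where sensor events fired) to a
    resident identity."""
    def __init__(self, embed_dim=64, hidden_dim=128, n_residents=4):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_residents)

    def forward(self, seq):                 # seq: (batch, time, embed_dim)
        _, (h_n, _) = self.lstm(seq)
        return self.head(h_n[-1])           # logits over resident identities

# Hypothetical usage: 8 sequences of 20 sensor events, each event represented
# by a 64-dimensional node embedding looked up from the layout graph.
# logits = ResidentLSTM()(torch.randn(8, 20, 64))
```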
... These techniques enable the identification and detection of moving objects, capturing the dynamics of visual stimuli and enhancing the fidelity of the generated spike train. In addition, the extraction of ROIs from the visual input allows for focused analysis of specific moving objects or areas, thereby enhancing the interpretability and efficiency of the dataset [3]. This targeted approach enables a deeper understanding of the visual processing mechanisms employed by biological systems and provides valuable insights for developing more effective SNN models. ...
Conference Paper
Full-text available
In this paper, we propose a novel system that combines computer vision techniques with SNNs for spike-based multi-object detection and tracking. Our system integrates computer vision techniques for robust and accurate detection and tracking, extracts regions of interest (ROIs) for focused analysis, and simulates spiking neurons for a biologically inspired representation. Our approach advances the understanding of visual processing and supports the development of efficient SNN models, achieving state-of-the-art results on visual processing tasks. Extensive experiments and evaluations demonstrate the effectiveness and superiority of the proposed architecture and algorithm, and the reported results establish it as a promising solution in the field of SNNs.
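As a rough illustration of turning an extracted ROI into a spike train for an SNN, the following sketch uses simple Poisson rate coding; this is a generic encoding scheme chosen for illustration, not necessarily the one used in the cited system:

```python
import numpy as np

def roi_to_poisson_spikes(roi: np.ndarray, n_steps: int = 100,
                          max_rate: float = 0.5,
                          rng: np.random.Generator | None = None) -> np.ndarray:
    """Rate-code a grayscale ROI (values 0..255) into a binary spike train of
    shape (n_steps, H, W): brighter pixels spike more often."""
    rng = rng or np.random.default_rng(0)
    rate = (roi.astype(np.float64) / 255.0) * max_rate   # spike prob. per step
    return (rng.random((n_steps,) + roi.shape) < rate).astype(np.uint8)
```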
... • Fall detection: Fall detection [4,5] refers to methods that detect the occurrence of a fall. These systems operate on the principle of pattern recognition. ...
... Multiple cameras [4], a single camera, 2D cameras [17,18], 3D time-of-flight cameras, and three-dimensional images with depth data are all subcategories of camera-based systems [19,20]. The multi-camera system reconstructs a 3D image, evaluates the person's volume distribution along the vertical axis, and raises an alert when the majority of the volume is close to the ground for a predetermined amount of time. ...
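A minimal sketch of the vertical-volume criterion described in this excerpt, assuming a multi-camera 3-D reconstruction already provides the occupied voxel centres; the threshold values and function name are illustrative assumptions:

```python
import numpy as np

def vertical_volume_ratio(occupied_voxels: np.ndarray, ground_z: float,
                          height_threshold: float = 0.4) -> float:
    """Fraction of the person's reconstructed volume lying within
    `height_threshold` metres of the ground plane.

    occupied_voxels: (N, 3) array of voxel centres (x, y, z) in metres,
    obtained from the multi-camera 3-D reconstruction (assumed available).
    """
    heights = occupied_voxels[:, 2] - ground_z
    return float(np.mean(heights < height_threshold))

# A fall alarm could then require this ratio to stay above, say, 0.8 for a
# predetermined number of consecutive frames.
```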
... In contrast to wearable and ambient-based detection systems, camera-based systems are still widely utilized because they provide various advantages in terms of robustness and the absence of human involvement after installation. These devices are typically powered from wall outlets, possibly with a backup power source (battery pack) [4]. ...
Article
Full-text available
Ensuring healthy lives and promoting well-being for all at all ages is one of the goals of the United Nations. In particular, the health of elderly people is an important factor in the productivity and prosperity of any country. According to reports, there will be over two billion elderly people worldwide by 2050. Most elderly people live independently and need some system to protect them from falls. As older people are highly susceptible to falls due to weak body structure as well as external conditions, researchers from academia and industry are developing fall detection systems (FDS) or devices to protect them. Hence, this paper aims to review papers on fall detection systems (FDS) that protect elderly people from falls. The papers selected for this study span 2017-2023. FDS will be helpful in sustaining the health of elderly persons. To strengthen research in this domain, this study gives an integrated and critical review of work done in this area for wearable, non-wearable, and hybrid systems, with research directions, as the advent of new technologies like deep learning, computer vision, the Internet of Things (IoT), and big data may improve existing approaches and systems.
... A number of techniques for fall detection and activity recognition using one or more cameras have been proposed. Although Auvinet et al. [3] found that multiview cameras enhanced accuracy, Rougier et al. [4] found that this improvement came at the expense of higher complexity and duplication costs. Recently, low-cost depth sensors like the Kinect have been deployed to overcome the aforementioned issues. ...
... The proposed framework is compared with various state-of-the-art methods in terms of accuracy, precision, sensitivity and specificity, as shown in Table 1. Some works, such as [19][20][21][22], do not consider accuracy and precision in their performance evaluation. We use accuracy, precision, sensitivity and specificity as our performance evaluation parameters. ...
Chapter
A fall can be a significant threat to life if not attended to immediately. Elderly patients have problems with their motor activities, and patients with Alzheimer's, dementia, and autism often suffer from falls. This paper presents a novel video-based fall detection approach using a deep neural network. We have designed an attention-guided CNN-LSTM neural network that detects falls in a video sequence. The convolutional neural network (CNN) is used to extract the spatial features of the input data, while long short-term memory (LSTM) is used to learn the temporal relations between the frames. A multiplicative attention mechanism is employed after the CNN layer to extract enhanced and focused features of the input data. An attention map is then created based on the output of the context vector and fed to the LSTM layer for temporal feature learning. Experiments are carried out on the UR Fall Detection dataset to validate the efficacy of the proposed algorithm.
Keywords: CNN, LSTM, Attention mechanism, Fall detection, Elderly patient monitoring
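A hedged PyTorch sketch of an attention-guided CNN-LSTM of the kind this chapter describes: a small CNN extracts per-frame spatial features, a multiplicative (dot-product) attention re-weights spatial locations, and an LSTM models the temporal relations between frames. The layer sizes and class name are illustrative assumptions, not the authors' exact network:

```python
import torch
import torch.nn as nn

class AttnCNNLSTM(nn.Module):
    """Attention-guided CNN-LSTM sketch for clip-level fall classification."""
    def __init__(self, hidden=128, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),                   # -> (64, 8, 8) per frame
        )
        self.query = nn.Parameter(torch.randn(64))     # learned attention query
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clip):                           # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1))           # (B*T, 64, 8, 8)
        feats = feats.flatten(2).transpose(1, 2)       # (B*T, 64 locations, 64 ch)
        scores = feats @ self.query                    # multiplicative attention scores
        attn = torch.softmax(scores, dim=1).unsqueeze(-1)  # attention map over locations
        context = (attn * feats).sum(dim=1)            # (B*T, 64) context vectors
        _, (h_n, _) = self.lstm(context.view(b, t, -1))
        return self.head(h_n[-1])                      # fall / no-fall logits
```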
... Vision-based approaches for identifying falls generally use data from various cameras, including single RGB cameras, infrared cameras, depth cameras, and 3D methods using camera arrays. Early works on fall detection tracked silhouettes of the person from surveillance cameras [3], [4]. The deformation of the human shape was then quantified from the silhouettes based on shape analysis. ...