Figure - available from: EURASIP Journal on Image and Video Processing
Difference of object coordinates in fall situation

Source publication
Article
Full-text available
Conventional surveillance systems for preventing accidents and incidents fail to identify 95% of them after 22 min when a single person monitors multiple closed-circuit televisions (CCTV). To address this issue, while computer-based intelligent video surveillance systems have been studied to notify users of abnormal situations when they happen, it...

Citations

... These techniques have broad applications in various industries and everyday life. For instance, they are used for fire detection, such as in intelligent video surveillance systems [11], as well as for non-fire-related tasks in intelligent video surveillance systems [12]. ...
... For instance, in surveillance scenarios, the system can effectively track and transmit live video feed to a central command center where security personnel can monitor the situation and take immediate actions. Moreover, the system can be utilized in robotics to enable real-time tracking of moving objects or people, providing the robot with the ability to follow them or avoid obstacles in their path [24]. Furthermore, the system can be augmented with additional features, such as depth estimation or 3D mapping, to enable more complex tasks in augmented or virtual reality [25,24]. ...
... Moreover, the system can be utilized in robotics to enable real-time tracking of moving objects or people, providing the robot with the ability to follow them or avoid obstacles in their path [24]. Furthermore, the system can be augmented with additional features, such as depth estimation or 3D mapping, to enable more complex tasks in augmented or virtual reality [25,24]. Depth estimation provides distance information that can be useful for obstacle avoidance or navigation tasks. ...
... Several previous studies related to loitering detection used two approaches. The first approach uses handcrafted features [3][4][5][6][7][8], while the second uses non-handcrafted features (deep learning) [9,10]. In the handcrafted feature approach, the steps can be divided into three parts: (1) the person-detection process, (2) the feature extraction used to distinguish normal videos and videos that contain people loitering based on the detected person's movements, and (3) the video classification process. ...
... Another feature that can be used is the angle formed by the movement of a person across sequential frames. The angle refers to the change in the center of gravity (CoG) of the detected person between a particular frame and the next frame for the same person [5]. The larger the angle formed, the greater the chance the person is loitering, and vice versa. ...
... Figure 17. Distance and angle measurement used in method [5]. ...
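Based on the angle-of-movement feature described in the excerpts above ([5]), the following is a minimal sketch of how a turning angle could be computed from tracked centre-of-gravity (CoG) positions; the function name, sample path, and any interpretation thresholds are illustrative assumptions, not the cited method's implementation.

```python
import numpy as np

def turning_angles(cog_points):
    """Compute the turning angle (degrees) at each interior point of a
    tracked centre-of-gravity (CoG) trajectory.

    cog_points: list of (x, y) CoG coordinates, one per frame.
    Returns the angles between consecutive displacement vectors.
    """
    pts = np.asarray(cog_points, dtype=float)
    angles = []
    for i in range(1, len(pts) - 1):
        v1 = pts[i] - pts[i - 1]       # displacement in the previous step
        v2 = pts[i + 1] - pts[i]       # displacement in the next step
        n1, n2 = np.linalg.norm(v1), np.linalg.norm(v2)
        if n1 == 0 or n2 == 0:         # person did not move; skip this frame
            continue
        cos_a = np.clip(np.dot(v1, v2) / (n1 * n2), -1.0, 1.0)
        angles.append(np.degrees(np.arccos(cos_a)))
    return angles

# An erratic, back-and-forth path produces large turning angles, which
# (following the cited intuition) suggests loitering behaviour.
path = [(10, 10), (12, 11), (11, 14), (14, 13), (12, 16)]
print(turning_angles(path))
```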
Article
Full-text available
As one of the essential modules in intelligent surveillance systems, loitering detection plays an important role in reducing theft incidents by analyzing human behavior. This paper introduces a novel strategy for detecting the loitering activities of humans in the monitoring area for an intelligent surveillance system based on a vision sensor. The proposed approach combines spatial and temporal information in the feature extraction stage to decide whether the human movement can be regarded as loitering. This movement has been previously tracked using human detectors and particle filter tracking. The proposed method has been evaluated using our dataset consisting of 20 videos. The experimental results show that the proposed method could achieve a relatively good accuracy of 85% when utilizing the random forest classifier in the decision stage. Thus, it could be integrated as one of the modules in an intelligent surveillance system.
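As a rough illustration of the decision stage mentioned in this abstract, the sketch below trains a random forest on hypothetical hand-crafted spatio-temporal features (mean speed, summed turning angle, dwell time); the feature choices and the synthetic data are assumptions for illustration only, not the authors' dataset or code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic spatio-temporal features per tracked person:
# [mean speed, summed turning angle (deg), dwell time (s)]
normal = np.column_stack([rng.normal(1.5, 0.3, 200),
                          rng.normal(40, 15, 200),
                          rng.normal(8, 3, 200)])
loiter = np.column_stack([rng.normal(0.4, 0.2, 200),
                          rng.normal(160, 40, 200),
                          rng.normal(45, 10, 200)])
X = np.vstack([normal, loiter])
y = np.array([0] * 200 + [1] * 200)   # 0 = normal, 1 = loitering

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```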
... As mentioned above, there are many applications that face this problem. Such applications include: updating automobile traffic information [7]; when to repeat a measurement of atmospheric data for weather forecasting [8]; determining the optimal time to update routing tables in data networks [9]; when to update medical testing (such as cancer screenings, mammograms, infectious disease testing such as COVID-19 tests, etc.) [10]; updating stock prices [11]; updating a transaction database [12]; updating data regarding a security or surveillance system [13]; updating cached pages [14]; refreshing website data [15]; updating the relative position of drones in a swarm [16]; updating measures of reputation [17]; status update policies under an energy harvesting setting [4][5][6]; and many more. These applications range over a wide variety of systems. ...
Article
Full-text available
In this paper we examine the general problem of determining when to update information that can go out-of-date. Not updating frequently enough results in poor decision making based on stale information. Updating too often results in excessive update costs. We study the tradeoff between having stale information and the cost of updating that information. We use a general model, some versions of which match an idealized version of the Age of Information (AoI) model. We first present the assumptions, and a novel methodology for solving problems of this sort. Then we solve the case where the update cost is fixed and the time-value of the information is well understood. Our results provide simple and powerful insights regarding optimal update times. We further look at cases where there are delays associated with sending a request for an update and receiving the update, cases where the update source may be stale, cases where the information cannot be used during the update process, and cases where update costs can change randomly.
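To make the staleness-versus-update-cost tradeoff concrete, consider a simplified model (an assumption for illustration, not the paper's exact formulation) with a fixed update cost c_u and a staleness cost that accrues linearly at rate a. The long-run average cost for a fixed update interval T is then c_u/T + aT/2, minimized at T* = sqrt(2·c_u/a). A small sketch:

```python
import math

def average_cost(T, c_u, a):
    """Long-run average cost per unit time when updating every T time units:
    amortized update cost plus average linear staleness cost."""
    return c_u / T + a * T / 2.0

def optimal_interval(c_u, a):
    """Closed-form minimizer of average_cost: T* = sqrt(2 * c_u / a)."""
    return math.sqrt(2.0 * c_u / a)

c_u, a = 5.0, 0.2                      # illustrative update cost and staleness rate
T_star = optimal_interval(c_u, a)
print(f"T* = {T_star:.2f}, cost = {average_cost(T_star, c_u, a):.3f}")

# A small sweep over candidate intervals confirms the closed form numerically.
best = min((average_cost(t / 10, c_u, a), t / 10) for t in range(1, 400))
print("numeric check:", best)
```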
... Surveillance data, which includes images and videos, is provided by surveillance systems to support investigators in the investigation of crimes [1]. Moreover, the widespread use of surveillance systems has lessened people's fear of crime and improved the overall safety of the public [2]. However, processing and analyzing these images and videos to track and monitor a person across non-overlapping cameras is a challenging and time-consuming task [3]; in addition, several other factors such as changes in illumination and posture, overlapping, occlusion, and complex backgrounds negatively influence the performance of person re-identification systems in real-life scenarios [4], giving the same person a different appearance. ...
... By using a similarity function for person matching, the query image is compared with each image in the searched library; this process returns the person image with the highest similarity as the final recognition result [5]. In general, three critical stages are involved in a person re-identification system, namely (1) automatic person detection, (2) person feature extraction, and (3) classification. Several researchers applying deep learning to person re-identification have recommended extracting and learning effective feature representations from the detected person's body to mitigate unwanted background objects and obtain a robust person re-identification system [6]-[8]. ...
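The similarity-based matching step described in this excerpt can be sketched as a nearest-neighbour search over gallery features; the cosine-similarity choice and the 128-dimensional embeddings below are illustrative assumptions rather than any specific system's design.

```python
import numpy as np

def rank_gallery(query_feat, gallery_feats):
    """Return gallery indices sorted by cosine similarity to the query,
    highest first; the top-ranked image is the re-identification result."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q                          # cosine similarity per gallery image
    order = np.argsort(-sims)
    return order, sims[order]

rng = np.random.default_rng(1)
gallery = rng.normal(size=(1000, 128))            # features for 1000 gallery images
query = gallery[42] + 0.05 * rng.normal(size=128) # noisy view of gallery person 42
order, sims = rank_gallery(query, gallery)
print("best match index:", order[0], "similarity:", round(float(sims[0]), 3))
```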
... The proposed SOLO approach consists of the following steps: (1) reformulating person instance segmentation as (a) category prediction and (b) mask generation for each person instance, (2) dividing the input person image into a uniform grid of G×G cells such that a grid cell predicts the semantic category and mask of a person instance whose center falls into that cell, and (3) conducting person segmentation. Discriminating features of individual persons are extracted using convolutional neural networks. ...
Article
Full-text available
Analyzing and judging captured and retrieved images of targets from surveillance video cameras for person re-identification has been a herculean task for computer vision that is worth further research. Hence, re-identification of single persons by location based on the single objects by locations (SOLO) model is proposed in this paper. To achieve the re-identification goal, we based the training of the re-identification model on synchronized stochastic gradient descent (SGD). SOLO is capable of exploiting contextual cues and segmenting individual persons by their motions. The proposed approach consists of the following steps: (1) reformulating person instance segmentation as (a) category prediction and (b) mask generation for each person instance, (2) dividing the input person image into a uniform grid of G×G cells such that a grid cell predicts the semantic category and mask of a person instance whose center falls into that cell, and (3) conducting person segmentation. Discriminating features of individual persons are extracted using convolutional neural networks. On the Market-1501 person re-identification dataset, the SOLO model achieved an mAP of 84.1% and a rank-1 identification rate of 93.8%, higher than comparative algorithms such as PL-Net, SegHAN, Siamese, GoogLeNet, and M3L (IBN-Net50). On the CUHK03 person re-identification dataset, the SOLO model achieved an mAP of 82.1% and a rank-1 identification rate of 90.1%, again higher than PL-Net, SegHAN, Siamese, GoogLeNet, and M3L (IBN-Net50). These results show that the SOLO model achieves the best results for person re-identification, indicating the high effectiveness of the model. The research contributions are: (1) the application of synchronized stochastic gradient descent (SGD) to SOLO training for person re-identification and (2) single objects by locations using a semantic category branch and an instance mask branch instead of a detect-then-segment method, thereby converting person instance segmentation into a solvable single-shot classification problem.
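A minimal sketch of the grid-cell assignment idea used in the SOLO formulation above: each person instance is assigned to the G×G cell containing its center, and that cell becomes responsible for predicting the instance's category (and, in the full model, its mask). This shows only the assignment logic under assumed inputs, not the SOLO network itself.

```python
def assign_to_grid(instances, image_w, image_h, G=12):
    """Map each instance to the grid cell that contains its center.

    instances: list of dicts with 'center' = (x, y) in pixels and 'category'.
    Returns {(row, col): category} for the cells responsible for a prediction.
    """
    responsible = {}
    for inst in instances:
        cx, cy = inst["center"]
        col = min(int(cx / image_w * G), G - 1)   # which of the G columns
        row = min(int(cy / image_h * G), G - 1)   # which of the G rows
        responsible[(row, col)] = inst["category"]
    return responsible

people = [{"center": (320, 240), "category": "person"},
          {"center": (610, 100), "category": "person"}]
print(assign_to_grid(people, image_w=640, image_h=480, G=12))
```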
Article
Full-text available
Recent years have seen a dramatic increase in the use of artificial intelligence (AI) in suspicious activity recognition (SAR). To better understand the research work and recent trends in AI-based SAR, the paper carries out a bibliometric study to analyze the publications based on the recent developments and contributions of authors, publication source, country, and institutions, identifying the most productive items, and the partnership among each. The search on the Scopus database retrieved 1713 documents related to AI-based SAR. In this study, all document types from Scopus were included in the analysis. VOSviewer was used to perform coupling, cluster, and co-citation network analysis to identify research hotspots, while bibliometrix was used to generate keyword analysis, including word clouds, word dynamics, theme trends, and Sankey diagrams, to understand the evolution and future direction of the research field. This paper contributes valuable insights for researchers and audiences worldwide regarding emerging research areas.
Chapter
Progress in medicine, Industry 4.0, and training requires more user interaction with real-world data. Extended reality (XR) can be viewed as an intelligent technology and a capable data-collection tool suitable for remote experimentation in image processing. The technology involves head-mounted devices (HMDs) with built-in functionalities such as data collection, portability, and reproducibility. This article helps the reader understand the different methodologies used for three-dimensional (3D) interaction with particular data sets in industry, in order to refine systems and uncover faults in machinery. In medicine, such technology can help clinicians identify critical issues and reach a diagnosis quickly. The use of video animation by educators is also a rising trend. Important methods used for these applications include the improved Scale-Invariant Feature Transform (SIFT), Block Orthogonal Matching Pursuit (BOMP), the Oriented FAST and Rotated BRIEF (ORB) feature descriptor, Kanade-Lucas-Tomasi (KLT) tracking, and the Semi-Global Block Matching (SGBM) algorithm. With high-speed real-time cameras, the position-recognition accuracy for a moving object averages less than 65.2% because of noise interference in depth consistency, so further optimization of the depth-estimation algorithm is needed. The processing time of a target-tracking system is high (10.670 s) and must be reduced to improve real-time motion tracking of an object. XR is a key innovation that will bring about a paradigm shift in the way users interact with information and has only recently been recognized as a feasible solution to many basic requirements.
Keywords: Extended reality (XR), Machine learning algorithms, 3D reconstruction, Motion tracking, Image depth analysis, Virtual reality (VR), Augmented reality (AR), Mixed reality (MR)
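Several of the algorithms named in this chapter (ORB features, SGBM stereo matching) have widely used OpenCV implementations; the snippet below is a generic usage sketch with placeholder image file names, not the chapter's own code.

```python
import cv2

# Load a rectified stereo pair (file names are placeholders / assumptions).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
assert left is not None and right is not None, "replace with real image paths"

# Semi-Global Block Matching (SGBM) disparity, from which depth can be
# recovered given the camera baseline and focal length.
sgbm = cv2.StereoSGBM_create(minDisparity=0,
                             numDisparities=64,   # must be divisible by 16
                             blockSize=5)
disparity = sgbm.compute(left, right).astype("float32") / 16.0

# ORB keypoints on the left image, usable as input to KLT-style motion tracking.
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(left, None)
print("disparity range:", disparity.min(), disparity.max(),
      "| ORB keypoints:", len(keypoints))
```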