Block diagram of motion detection. 

Source publication
Article
Full-text available
The present work is an in-depth study of detecting flames in video by processing the data captured by an ordinary camera. Previous vision-based methods were based on color difference, motion detection of flame pixels, and flame edge detection. This paper focuses on optimizing flame detection by identifying gray-cycle pixels near the flame, which is ge...

Context in source publication

Context 1
... Motion detection is used to detect any occurrence of movement in a sample video. The block diagram of the motion detection system is shown in Figure 5. Using MATLAB/Simulink, a motion detector model is built based on this block diagram. Two sequential images are taken from the video frames. After applying the two basic methods, edge detection and color detection, we obtain the probable fire-pixel area; we then compare the RGB values of frame 1 to those of frame 2 for each corresponding pixel, and if the pixel values differ, the motion detector reports motion and passes the resultant output to the operator. In Fig. 6(a) and 6(b), frame 1 and frame 2 are sequential images; after mapping the corresponding pixels in both frames, the motion detector compares the R, G, and B values of corresponding pixels and passes the resultant output to the operator combination.

Gray-cycle detection is used to detect occurrences of smoke pixels in the selected area, which is the half above the area detected by the color detection method. The block diagram of the gray-cycle detection system is shown in Fig. 7. Using MATLAB/Simulink, a gray-cycle detector model is built based on this block diagram. The method is applied to the area (PQRS) and to the fire-pixel area obtained from the edge and color detection methods. Gray-cycle pixels have characteristic properties in terms of RGB; the method checks these properties inside the area (PQRS) and then provides the result to the operator.

Area detection is used to detect dispersion of the fire-pixel area across sequential frames. The block diagram of the area detection system is shown in Figure 9. Using MATLAB/Simulink, an area detector model is built based on this block diagram. In this method, two sequential images produced by the color detector are taken, and the dispersion of the minimum and maximum coordinates along the X and Y axes, acquired from the color detector, is checked. The fire-pixel areas of two sequential frames, as shown in Fig. 10(a) and Fig. 10(b), are compared on the basis of the minimum and maximum values of x and y. In the case of fire, if any extreme value along the x or y axis increases in the next frame (frame 2), area dispersion has taken place and the system provides output to the operator. The operator then performs an operation based on the logic combination selected by the system.

We collected a number of sequential image frames from two originally created videos containing both fire and non-fire images. All the fire images were detected by our fire detection system as well as by an existing fire detection system. We observed a difference in false alarm detection, where non-fire images had been detected as fire ...
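The excerpt describes the motion check (per-pixel RGB comparison between two sequential frames within the candidate fire area) and the area-dispersion check (growth of the extreme x and y coordinates of the fire region) only in prose. The following Python/NumPy sketch illustrates those two comparisons; it is not the authors' MATLAB/Simulink model, and the array layout, mask names, and the diff_threshold value are assumptions for illustration only.

```python
import numpy as np

def motion_detected(frame1, frame2, fire_mask, diff_threshold=10):
    """Report motion if the RGB values of corresponding candidate fire pixels
    differ between two sequential frames (threshold is an assumed value)."""
    # Compare only pixels already flagged as probable fire by color/edge detection.
    diff = np.abs(frame1.astype(int) - frame2.astype(int)).sum(axis=2)
    return bool(np.any(diff[fire_mask] > diff_threshold))

def area_dispersion(fire_mask1, fire_mask2):
    """Report dispersion if any extreme x or y coordinate of the fire-pixel
    region grows from frame 1 to frame 2."""
    ys1, xs1 = np.nonzero(fire_mask1)
    ys2, xs2 = np.nonzero(fire_mask2)
    if xs1.size == 0 or xs2.size == 0:
        return False
    grew_max = xs2.max() > xs1.max() or ys2.max() > ys1.max()
    grew_min = xs2.min() < xs1.min() or ys2.min() < ys1.min()
    return grew_max or grew_min
```

In the described pipeline, these boolean outputs, together with the color, edge, and gray-cycle detectors, feed the logic combination selected by the system and presented to the operator.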

Similar publications

Article
Full-text available
Image segmentation and recognition through edge detection techniques are addressed intensively as the number of applications that involve the image area rises. Edge detection techniques have emerged as subtle methods for the image recognition paradigm. The aim of this paper is to evaluate and compare the performance of four well-known edge detector m...
Article
Full-text available
In this paper a novel color image real edge detection algorithm is proposed using simple circular shifting that manipulates the entire image at a time rather than being template based. In this approach, as a pre-processing step, the real complement of each channel is taken and the circular shift operations are applied in all directions to identify the edge pixels...
Article
Full-text available
With the rapid development of IoT technology, it is a new trend to combine edge computing with smart medicine in order to better develop modern medicine, avoid the crisis of information silos, and meet the requirements of timeliness and computational performance of the massive data generated by edge devices. However, edge computing is somewhat...
Preprint
Full-text available
Small-scale ocean fronts play a significant role in absorbing the excess heat and CO2 generated by climate change, yet their dynamics are not well understood. Existing in-situ and remote sensing measurements of the ocean have inadequate spatial and temporal coverage to map small-scale ocean fronts globally. Additionally, conventional algorithms t...
Article
Full-text available
Knowledge is hidden in images in form of objects, structures, patterns and their relationships, which are acquired through devices associated with various artifacts including blurring and noise. This paper presents a model-independent method for local blur-scale estimation based on a novel hypothesis that gradients inside a blur-scale region follow...

Citations

... Due to rapid developments in digital camera technology and video processing techniques, conventional fire detection methods are going to be replaced by computer vision-based systems [4]. Computer vision-based fire detection systems overcome this limitation in that they do not identify flammability on a product-by-product basis. ...
... Yadav [4] focuses on optimizing flame detection by identifying gray-cycle pixels near the flame, which are generated by smoke, along with the spread of fire pixels and the area spread of the flame. These techniques can be used alongside fire detection methods to reduce false alarms. ...
... Yadav [4] performs several methods of fire detection. Results show that the system performance for fire detection comprising only color detection is 81.74% ...
Article
Computer vision-based fire detection systems overcome this limitation in that they do not identify flammability on a product-by-product basis. In this study, fire detection was carried out using the YCbCr, RGB, and HSV map approach. The offered system uses color segmentation as a component of fire detection analysis. These three color space segments are then extracted to determine the presence of fire in the image used. A rule set consisting of five rules based on color space conditions was constructed for classifying a pixel as fire; if a pixel satisfies all five rules, the pixel belongs to the fire class. The work consists of six steps: image acquisition, image pre-processing, image segmentation, feature extraction, image classification, and GUI creation. The GUI provides a visual interface that is intuitive and easy for the user to understand the proposed system; by using buttons and other visual elements, users can interact with the system efficiently. Based on the tests carried out, the proposed system can detect images of fire in dark and light conditions. Performance testing is done by collecting a set of fire images from the internet. Performance is judged based on how many errors are generated when detecting fire, and is categorized into five types: very good, good, fair, poor, and very poor.
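The abstract refers to five color-space rules without listing them, so the sketch below is not the paper's rule set; it only illustrates the general pattern of pixel-wise rule checking across RGB and YCbCr, with thresholds and rule forms chosen as assumptions (the paper's HSV conditions are omitted).

```python
import numpy as np
import cv2  # OpenCV, used only for the BGR -> YCrCb conversion

def fire_mask(bgr_uint8, r_threshold=180):
    """Rough fire-pixel mask from generic color rules (illustrative only)."""
    rgb = bgr_uint8[..., ::-1].astype(np.int32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    ycrcb = cv2.cvtColor(bgr_uint8, cv2.COLOR_BGR2YCrCb).astype(np.int32)
    y, cr, cb = ycrcb[..., 0], ycrcb[..., 1], ycrcb[..., 2]

    rules = (
        (r > r_threshold)        # red channel bright enough
        & (r >= g) & (g > b)     # typical flame ordering R >= G > B
        & (y > cb) & (cr > cb)   # luma and red chroma exceed the blue chroma
    )
    return (rules * 255).astype(np.uint8)
```

A pixel that passes every rule is assigned to the fire class, mirroring the all-rules-must-hold classification described in the abstract.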
... This new system simulates an existing fire detection technique by adding optimization to reduce false alarms and improve accuracy. The reported system performance is 92.31%, with a false alarm rate of 7.69% [14]. ...
Article
This study aimed to help visually impaired people detect fire and estimate its distance. Blind people have difficulty knowing that a fire exists while still at a safe distance, so burns can occur. Color models and blob analysis methods were used to detect the presence of fire in the blind person's path. Before the fire detection stage, a cascade of the HSV and RGB color models was applied to segment the reddish fire color. The size and shape of the dynamic fire were the parameters used in this paper to distinguish fire from non-fire objects. Changes in the area of the fire object obtained at the blob analysis stage every 10 frames are the main contribution and novelty of this paper. After the fire is detected, the distance from the fire to the blind person is calculated using a pinhole model. This research used 35 videos with a resolution of 480x640 pixels. The results showed that the fire detection system achieved an accuracy of 88.86% and the distance estimation an MSE of 0.0358.
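The abstract names a pinhole camera model for the distance estimate but does not give its parameters, so the sketch below only shows the standard pinhole relation; the focal length, flame height, and projected size used in the example are assumed values.

```python
def pinhole_distance(object_height_m, focal_length_px, projected_height_px):
    """Standard pinhole relation: distance = real height * focal length / projected height."""
    return object_height_m * focal_length_px / projected_height_px

# Example with assumed values: a 0.5 m flame, a 600 px focal length,
# and an 80 px projection give a distance of roughly 3.75 m.
print(pinhole_distance(0.5, 600, 80))
```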
... When Faster R-CNN is used to detect smoke in real scenes, smoke images acquired from forest fire scenes can be used to build a rich training data set. To improve efficiency, [8][9][10][11][12] combine the technique with conventional sensors and other algorithms. ...
Article
Full-text available
Fire hazards are common while handling combustible fluids and frequently cause human deaths and loss of property. Research on efficient fire identification and suppression systems has become a hot topic in industry. However, current systems are either PLC-, microcontroller-, or computer-vision-based, which leads either to a higher false alarm rate or to a purely rule-based system. In this paper, we propose an AI approach combined with conventional sensor-based methods to detect fire in indoor and outdoor environments. Various topologies of data from the video acquired by the cameras and from the sensors are combined and analyzed by the proposed framework to increase the overall reliability of the approach and reduce the false positives detected by the system.
... Yadav et al. [6] put forward an optimisation scheme based on the grey cycle. Grey-cycle/motion detection and area dispersion constitute the new detection method, which finally achieves a detection accuracy of 92.31%. ...
Article
Full-text available
This study investigated the potential for using principal component analysis (PCA) to improve real-time forest fire detection with popular algorithms, such as YOLOv3 and SSD. Before YOLOv3/SSD training, the authors utilised PCA to extract features. Results showed that PCA with YOLOv3 increased the mean average precision (mAP) by 7.3%, and PCA with SSD increased the mAP by 4.6%. These results suggest that PCA is a robust tool for improving different object detection networks.
... The confusion matrix for a ratio threshold value of 14% is given below. At a 14% ratio threshold, an accuracy of 93.89% is obtained from testing on 180 videos. This is better than [17], which had an accuracy of 93.75%, and Gaurav Yadav et al. [18], with an accuracy of 92.31%. ...
... In addition to color and dynamic features, features associated with the geometric shape of a segmented region are also used in the recognition of regions that may contain flames [16][17][18][19][20] . For example, in [16] , the color, texture and contour features of a suspected region are obtained based on the HSI color model, and a neural network based classifier analyzes the features and identifies regions that contain flames. ...
... To simplify the notation, we use firep(x, y) in the rest of the paper to characterize whether the pixel at (x, y) is a flame pixel or not. firep(x, y) is 1 if the pixel is a flame pixel, otherwise it is 0. To evaluate the recognition accuracy of this approach, we compare the recognition result of this approach with those of the approaches developed in [2,10] , and [17][18][19] . Most of these approaches recognize flame pixels with RGB and HSI based models. ...
... These interference sources may cause a high false alarm rate if we only consider the color features in flame detection. Therefore, other types of flame features also need to be extracted and analyzed to improve the recognition accuracy. For example, many researchers have focused on the study of features related to the geometric shapes of flame regions, such as roundness and cusp number [17][18][19]. However, similar to the color features, these are also static features and do not contain information on the motion of flames. ...
Article
Full-text available
Recently, video based flame detection has become an important approach for early detection of fire under complex circumstances. However, the detection accuracy of most existing methods remains unsatisfactory. In this paper, we develop a new algorithm that can significantly improve the accuracy of flame detection in video images. The algorithm segments a video image and obtains areas that may contain flames by combining a two-step clustering based approach with the RGB color model. A few new dynamic and hierarchical features associated with the suspected regions, including the flicker frequency of flames, are then extracted and analyzed. The algorithm determines whether a suspected region contains flames or not by processing the color and dynamic features of the area altogether with a BP neural network. Testing results show that this algorithm is robust and efficient, and is able to significantly reduce the probability of false alarms.
... In this case there is also a chance of false fire detection. To address that, Gaurav Yadav et al. [21] developed fire detection using an image processing technique, in which they detect the flame by identifying gray-cycle pixels when smoke has spread over the area. ...
... Y is the luma component and Cb and Cr are the blue-difference and red-difference chroma components. Y is referred to as luminance, meaning that light intensity is nonlinearly encoded based on gamma-corrected RGB primaries [12]. YCbCr is not an absolute color space. Most images are in the RGB color space, so the RGB image needs to be converted into YCbCr, because the flame colors of such pictures are spread more uniformly in the YCbCr color space [13]. The conversion formula from RGB to YCbCr is given in formula (1). ...
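The excerpt cites a conversion formula ("formula (1)") that is not reproduced here. As an assumption about what it refers to, the sketch below uses the standard ITU-R BT.601 full-range RGB-to-YCbCr conversion, which is the form commonly used in flame detection work.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> YCbCr (assumed stand-in for the excerpt's formula (1))."""
    r, g, b = [rgb[..., i].astype(np.float64) for i in range(3)]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)
```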
... The first rule says that pixels whose red component is greater than the red threshold value will be extracted from the image. The second rule concentrates on the red intensity relative to the intensities of all other colors, since R is the major color in fire and has more brightness [7][12]. Sometimes a background with high illumination may influence the saturation of flames; to reduce false alarms, only flame intensities greater than a specified threshold are extracted. The RTH value can be chosen experimentally; in this experiment, 180 is chosen as RTH ...
... Consequently, the color of a flame can provide useful information for deducing the flame boundary. In terms of RGB values, this fact corresponds to the following interrelation between the R, G and B color channels: R > G > B (Yadav et al., 2012). However, the criterion is not a general one, which can lead to unavoidable errors in specifying the flame boundary. ...
Article
Full-text available
This work describes an RGB digital image processing approach for emulsified jet fuel flames, which allows the characterization of the combustion phenomenon in the case of new fuels through color chemiluminescence measurements. By applying RGB techniques, the image processing of the flame reveals useful parameters in an effective and cost-efficient technique for the determination of relevant chemical species, such as CH* and C2*, equivalence ratio, and temperature estimation. Second-generation emulsified aviation fuels containing water-in-jet-fuel emulsions have been a challenge for simultaneous thrust augmentation and pollution diminution, with subsequent cost reduction and lower fossil fuel dependence. Testing new fuels would normally require expensive equipment and reliable investigation techniques, while image processing has proved to be a reliable method for the estimation of combustion chemical species and temperature in the case of classic fuels. For the combustion behavior of emulsified jet fuel, a co-annular spray burner was used, allowing complex investigation with a UV-VIS spectrometer and flame photography. RGB image processing techniques showed good agreement with more complicated diagnostic tools, such as spectrometers.
... Using the R and G elements, a correlation between the G/R ratio and the temperature distribution can be found: as temperature increases, the G/R ratio also increases. Because of this, the color of a flame can provide useful information for estimating the temperature of a fire and also the fire phase [2]. The algorithm uses both the RGB and YCbCr color spaces for the detection of fire pixel intensity. ...
... The color red shows dominance in a fire image and so it should be stressed more than the other components, because R is the significant color channel in an RGB image of flames. This imposes another condition: R must be above some predetermined threshold, RT [2]. The input image is then converted to the YCbCr model before detection is carried out. ...
... Where Imax is the maximum intensity value in the set defined by the combination of the Y, Cb, and Cr channels. The equation in (2) normalizes all the samples to the interval [0, 1], so that their differences are in the range [-1, 1], which is used in the membership function definitions shown in Figure 2. Given a set of inputs from Y(x, y) - Cb(x, y) and Cr(x, y) - Cb(x, y), the crisp output of the fuzzy system is computed as follows: first, the inputs are fuzzified based on the membership functions shown in Figure 2a and Figure 2b. ...
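The normalization in equation (2) and the two fuzzy-system inputs are described only in prose; a minimal sketch of that step, assuming Imax is taken over all three channels of the given image, is shown below. The membership functions of Figure 2 are not reproduced.

```python
import numpy as np

def fuzzy_inputs(y, cb, cr):
    """Normalize Y, Cb, Cr by the maximum intensity over the three channels (Imax)
    so that the differences Y - Cb and Cr - Cb fall in the range [-1, 1]."""
    imax = max(y.max(), cb.max(), cr.max())
    yn, cbn, crn = y / imax, cb / imax, cr / imax
    return yn - cbn, crn - cbn  # inputs to the membership functions of Figure 2a and 2b
```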