Stages of the automatic eye region occlusion. Given a face image (a), first the eye landmarks are found by means of MTCNN (b). Afterwards, the most external points of the eyes and the ratios (c) are used to calculate the corners of the occlusion mask (d).

Source publication
Article
Full-text available
Recently, research communities in Computer Vision and biometrics have shown great interest in face verification and classification methods. Fighting against Child Sexual Exploitation Material (CSEM) is one of the applications that might benefit most from these advances. In CSEM, discriminative parts of the face, i.e. mostly the eyes, are often o...

Context in source publication

Context 1
... $(P_{1x}, P_{1y})$ and $(P_{2x}, P_{2y})$ be the coordinates of $P_1$ and $P_2$, respectively. Then, the corners of the occlusion rectangle are defined as: ... Figure 3 depicts the occluded face creation process. We want to remark that during the occlusion process, several faces were not detected by MTCNN, e.g., non-frontal faces in the CFPW dataset. ...
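The excerpt omits the corner equation itself, but the overall procedure it describes (MTCNN landmarks, external eye points and ratios, occlusion rectangle) can be illustrated with a short, hedged sketch. The `mtcnn` PyPI detector API and the 25%-height / 95%-width mask ratios (reported by the citing work [28] further down this page) are assumptions, not taken from this excerpt.

```python
# Hedged sketch: eye-region occlusion driven by MTCNN landmarks.
# Assumptions not taken from the excerpt: the `mtcnn` PyPI detector API and the
# 25%-height / 95%-width mask ratios reported by the citing work [28] below.
import cv2
from mtcnn import MTCNN

def occlude_eyes(img_rgb, h_ratio=0.25, w_ratio=0.95):
    """Draw a filled black rectangle over the eye region of the most confident face."""
    detector = MTCNN()
    detections = detector.detect_faces(img_rgb)
    if not detections:
        return img_rgb  # MTCNN may miss non-frontal faces (e.g., in CFPW), as noted above
    det = max(detections, key=lambda d: d['confidence'])
    x, y, w, h = det['box']
    left_eye = det['keypoints']['left_eye']
    right_eye = det['keypoints']['right_eye']

    # Center the mask vertically on the eye line; size it from the face box.
    eye_cy = (left_eye[1] + right_eye[1]) // 2
    mask_h, mask_w = int(h_ratio * h), int(w_ratio * w)
    cx = x + w // 2
    p1 = (cx - mask_w // 2, eye_cy - mask_h // 2)   # top-left corner
    p2 = (cx + mask_w // 2, eye_cy + mask_h // 2)   # bottom-right corner

    occluded = img_rgb.copy()
    cv2.rectangle(occluded, p1, p2, color=(0, 0, 0), thickness=-1)
    return occluded
```

The early return mirrors the remark in the excerpt that several faces (e.g., non-frontal faces in CFPW) were not detected by MTCNN during the occlusion process.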

Similar publications

Article
Full-text available
Image forgeries created using easy-to-use image editing tools like Adobe Photoshop have become prime sources of fake news, and are often used in malevolent ways in politics, courtrooms, and in scientific publishing as well. To reduce the problems caused by manipulated images in these domains, it is important to have reliable and fast mechanisms in...

Citations

... Progress has been achieved in detecting and recognizing still images and videos using frontal face views, lateral face views, or facial emotions, including dissatisfaction, happiness, and gloominess [9], [10]. Recently, FR systems have drawn more research aimed at improving the recognition process, which has two primary tasks: identification and verification [11], [12]. In biometrics for human authentication, FR plays a crucial role as it is one of the biometric techniques used in banks, laptops, and industries for security purposes [13], [14]. ...
Article
Full-text available
Accurate automatic face recognition (FR) has only become a practical goal of biometrics research in recent years. Detection and recognition are the primary steps for identifying faces in this research, and the Viola-Jones algorithm is implemented to discover faces in images. This paper presents a neural network solution called modified bidirectional associative memory (MBAM). The basic idea is to recognize the image of a human face: extract the face image, enter it into the MBAM, and identify it. The output ID for the face image from the network should be similar to the ID for the image entered previously in the training phase. The tests were conducted using the suggested model on 100 images. Results show that FR accuracy is 100% for all images used, and the accuracy after adding noise varies between the images according to the noise ratio. Recognition results for the mobile camera images were more satisfactory than those for the Face94 dataset.
... The rapid development of deep learning has allowed deep-learning-based object detection algorithms to outperform traditional feature extraction algorithms by large margins. Several studies have been performed on action detection in the ambient assisted living (AAL) environment [6]. Traditional approaches to object detection are generally based on hand-crafted features for locating objects in each image. ...
Article
Full-text available
Facial features play a vital role in real-time cloud-based applications. However, most conventional models struggle to detect heterogeneous facial features because of the high computational memory and time required by internet of things (IoT) based video surveillance mechanisms. Video-based facial feature identification and extraction involve a large number of candidate features, which makes it difficult to detect the contextual similarity of the facial key points due to noise and computational memory constraints. To resolve these issues, hybrid multiple feature extraction measures are implemented on a real-time video dataset to extract key points using a cloud-based classifier. In this work, a hybrid classifier is used to classify the key facial points in the cloud computing environment. Experimental results show that the proposed hybrid multiple feature extraction-based framework has better computational efficiency in terms of error rate, recall, precision, and accuracy than the conventional models.
... A few literature surveys based on face detection are detailed below. Rubel et al. [21] introduced the one-shot frequency dominant neighborhood framework to find occluded faces. It addresses two scenarios: occluded face verification and occluded face classification. ...
Article
Full-text available
Generally, face detection (or prediction) and tracking technology is a critical research direction for target tracking and for identifying criminal activities. However, crime detection in a surveillance system is complex to apply, and the preprocessing layer takes considerable time and requires high-quality data. This research designed a novel Crow Search-based Recurrent Neural Scheme to enhance the prediction of occluded faces and improve classification results. The developed model was implemented in Python, and the online COFW dataset was collected and used to train the system. Prediction accuracy is enhanced and persons are classified accurately by using the Crow Search fitness; the designed optimization technique tracks and searches a person's location and predicts occluded faces using labels. Experimental outcomes show that the developed model performs better in predicting occluded faces, and the attained results are validated against prevailing models. The designed model attained 98.75% accuracy, 99% recall, and 98.56% precision for predicting occluded faces, which shows the efficiency of the developed model compared with other models.
... MTArcFace [73]: 99.78%; MTCNN + FaceNet [74]: 64.23%; MaskNet [75]: 93.80%; HSNet-61 [76]: 91.20%; OSF-DNS [77]: 99.46%; Attention-based [78]: 95.00%; Cropping-based [79]: 92.61%; FaceNet [52]: 97.25%; LPD [80]: 97.94%; MTCNN [81]: 98.50%; Convolutional Neural Networks [82]: 90.40% ...
Article
Full-text available
The paper presents an evaluation of a Pareto-optimized FaceNet model with data preprocessing techniques to improve the accuracy of face recognition in the era of mask-wearing. The COVID-19 pandemic has led to an increase in mask-wearing, which poses a challenge for face recognition systems. The proposed model uses Pareto optimization to balance accuracy and computation time, and data preprocessing techniques to address the issue of masked faces. The evaluation results demonstrate that the model achieves high accuracy on both masked and unmasked faces, outperforming existing models in the literature. The findings of this study have implications for improving the performance of face recognition systems in real-world scenarios where mask-wearing is prevalent. The results show that the Pareto optimization allowed the overall accuracy to be improved beyond the 94% achieved by the original FaceNet variant, which also performed similarly to the ArcFace model during testing. Furthermore, the Pareto-optimized model no longer has the original model-size limitation: it is a much smaller and more efficient version than the original FaceNet and its derivatives, which reduces its inference time and makes it more practical for use in real-life applications.
... Hamid et al. [15] proposed a perceptual hash algorithm, based on the difference of Laplacian pyramids, which was able to detect minute-level tampering. Biswas et al. [16] proposed a perceptual hashing method for face verification that protects against the AERO (Adversarial Eye Region Occlusion) attack. Wang et al. [17] proposed a perceptual hash method for image tampering detection and localization that uses hybrid features to generate the hash sequence. ...
Article
Full-text available
The implicit prerequisite for using HRRS images is that the images can be trusted. Otherwise, their value would be greatly reduced. As a new data security technology, subject-sensitive hashing overcomes the shortcomings of existing integrity authentication methods and could realize subject-sensitive authentication of HRRS images. However, shortcomings of the existing algorithm, in terms of robustness, limit its application. For example, the lack of robustness against JPEG compression makes existing algorithms more passive in some applications. To enhance the robustness, we proposed a Transformer-based subject-sensitive hashing algorithm. In this paper, first, we designed a Transformer-based HRRS image feature extraction network by improving Swin-Unet. Next, subject-sensitive features of HRRS images were extracted by this improved Swin-Unet. Then, the hash sequence was generated through a feature coding method that combined mapping mechanisms with principal component analysis (PCA). Our experimental results showed that the robustness of the proposed algorithm was greatly improved in comparison with existing algorithms, especially the robustness against JPEG compression.
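The abstract above describes a feature-coding step that combines mapping mechanisms with PCA to produce the hash sequence, without giving details. The following minimal sketch only illustrates the general PCA-then-encode idea; the random matrix standing in for the Swin-Unet features and the simple sign binarization standing in for the mapping mechanism are illustrative assumptions, not the paper's method.

```python
# Hedged sketch of a PCA-based feature-coding step (illustrative only).
import numpy as np
from sklearn.decomposition import PCA

features = np.random.rand(64, 256)        # stand-in for subject-sensitive feature maps
pca = PCA(n_components=8)
compressed = pca.fit_transform(features)  # keep the principal components per feature row
# Toy binarization: threshold against the mean to obtain a bit sequence.
hash_bits = (compressed.flatten() > compressed.mean()).astype(np.uint8)
print(''.join(map(str, hash_bits[:64])))  # first 64 bits of the resulting hash sequence
```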
... The dimensions of the rectangle covering the eyes are 25% of the height and 95% of the width of the bounding box containing the face, while the dimensions of the rectangle covering the mouth correspond to 25% of the height and 55% of the width of the bounding box, as illustrated in Fig. 2. See [28] for more details. ...
Chapter
Full-text available
Age estimation is a valuable forensic tool for criminal investigators since it helps to identify minors or possible offenders in Child Sexual Exploitation Materials (CSEM). Nowadays, Deep Learning methods are considered state-of-the-art for general age estimation. However, they have low performance in predicting the age of minors and older adults because of the few examples of these age groups in the existing datasets. Moreover, facial occlusion is used by offenders in certain CSEM, trying to hide the identity of the victims, which may also affect the performance of age estimators. In this work, we assess the performance of six deep-learning-based age estimators on non-occluded and occluded facial images. We selected FG-Net and APPA-REAL datasets to evaluate the models under non-occluded conditions. To assess the models under occluded conditions, we created synthetically occluded versions of the non-occluded datasets by drawing eye and mouth black masks to simulate the conditions observed in some CSEM images. Experimental results showed that the evaluated age estimators are affected more by eye occlusion than by mouth occlusion. Also, facial occlusion affects more the accuracy of the age estimation of minors and the elderly compared to other age groups. We expect that this study could become an initial benchmark for age estimation under non-occluded and occluded conditions, especially for forensic applications like victim profiling on CSEM where age estimation is essential.
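As a rough geometric companion to the ratios quoted in the excerpt above and the synthetic black eye/mouth masks described in this chapter, the following minimal sketch computes both rectangles from a face bounding box. Only the 25%/95% (eyes) and 25%/55% (mouth) percentages come from the excerpt; the vertical anchor points (eye line at about 40% of the box height, mouth at about 75%) are illustrative assumptions.

```python
# Hedged sketch: eye and mouth occlusion rectangles from a face bounding box.
# Only the width/height ratios come from the excerpt above; the vertical anchors
# (0.40 and 0.75 of the box height) are illustrative guesses.
def occlusion_masks(face_box, eye_anchor=0.40, mouth_anchor=0.75):
    x, y, w, h = face_box

    def centered_rect(cx, cy, rw, rh):
        return (int(cx - rw / 2), int(cy - rh / 2),
                int(cx + rw / 2), int(cy + rh / 2))

    cx = x + w / 2
    eye_rect = centered_rect(cx, y + eye_anchor * h, 0.95 * w, 0.25 * h)
    mouth_rect = centered_rect(cx, y + mouth_anchor * h, 0.55 * w, 0.25 * h)
    return eye_rect, mouth_rect  # (x1, y1, x2, y2) corners to be filled with black


print(occlusion_masks((0, 0, 200, 240)))
```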
... The downsampled DWT of I(p,q) with the low-pass filter (scaling function) of row s(p) and column s(q) and high-pass filter (wavelet function) of row t(p) and column t(q) can be observed in Equations (21) and (22) [56]. The process for the first decomposition level can be seen in connector C of Figure 1: ...
Article
Full-text available
This work aimed to find the most discriminative facial regions between the eyes and eyebrows for periocular biometric features in a partial face recognition system. We propose multiscale analysis methods combined with curvature-based methods. The goal of this combination was to capture the details of these features at finer scales and offer in-depth characteristics using curvature. The eye and eyebrow images, cropped from four 2D face image datasets, were evaluated. The recognition performance was calculated using nearest neighbor and support vector machine classifiers. Our proposed method successfully produced richer details at finer scales, yielding high recognition performance. The highest accuracy results were 76.04% and 98.61% for the limited dataset and 96.88% and 93.22% for the larger dataset for the eye and eyebrow images, respectively. Moreover, we compared our proposed methods with other works and achieved similarly high accuracy using only eye and eyebrow images.
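The citing excerpt above refers to a downsampled 2D DWT built from row/column low-pass (scaling) and high-pass (wavelet) filters; its Equations (21) and (22) are not reproduced here. A minimal first-level decomposition, assuming the PyWavelets library and a Haar wavelet as stand-ins, could look like this:

```python
# Minimal sketch of a first-level 2D DWT (PyWavelets and the Haar wavelet are
# assumptions; the cited work's exact filters are not given in the excerpt).
import numpy as np
import pywt

image = np.random.rand(128, 128)            # stand-in for I(p, q)
# dwt2 applies the scaling/wavelet filter pair along rows and columns with
# downsampling, producing approximation and detail subbands.
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
print(cA.shape)                             # (64, 64): half-size in each dimension
```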
... Due to the current situation, in which citizens are subjected to social isolation because of the appearance of a deadly virus this year, work has been managed virtually, which means meetings and work are carried out by video call; for the judicial sector it is essential to verify identities (Niu & Chen, 2018), which is why a way was devised to identify a person during the course of a video call. Python is a very flexible and broad programming language that is currently being widely adopted by developers and data scientists; it is also a simple language suitable for performing complex logical operations in order to create optimal applications that solve computational problems and support the evolution of software-focused technology (Biswas et al., 2021). For this reason, the facial recognition tool for video calls was programmed by applying artificial intelligence and computer vision algorithms compatible with the Python platform. ...
Conference Paper
Full-text available
This article describes the design and development of an application for real-time facial recognition, documenting the entire development of the tool and the tests performed. The goal is to provide a solution to the problem of identity theft by creating a software tool that authenticates identities during video calls. The tool was built with DLIB, a library that helps to detect objects, in this case a person's face. A specialized biometric facial recognition software was implemented, which provides a basis for further research related to biometric facial recognition.
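The abstract above names DLIB as the face detection component but does not include code. A minimal, hedged sketch of frontal face detection with DLIB's HOG detector (the frame file name and upsampling setting are illustrative, not taken from the paper) could look like this:

```python
# Hedged sketch: detecting faces in a single frame with DLIB's frontal face detector.
import dlib
import cv2

detector = dlib.get_frontal_face_detector()    # HOG-based frontal face detector
frame = cv2.imread('frame.jpg')                 # hypothetical video-call frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = detector(gray, 1)                       # upsample once to find smaller faces
for rect in faces:
    # Bounding box of each detected face
    print(rect.left(), rect.top(), rect.right(), rect.bottom())
```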
... However, it is worth noting that it presents a higher complexity than most known models for face recognition. With a more direct application, Biswas et al. [33] proposed a method to tackle Child Sexual Exploitation Material. They state that abusers have their eyes covered in the majority of the content and that traditional face recognition systems fail at recognizing their identity. ...
Article
Full-text available
Over the years, the evolution of face recognition (FR) algorithms has been steep and accelerated by a myriad of factors. Motivated by the unexpected elements found in real-world scenarios, researchers have investigated and developed a number of methods for occluded face recognition (OFR). However, due to the SARS-CoV-2 pandemic, masked face recognition (MFR) research branched from OFR and became a hot and urgent research challenge. Due to time and data constraints, these models followed different and novel approaches to handle lower face occlusions, i.e., face masks. Hence, this study aims to evaluate the different approaches followed for both MFR and OFR, find linked details about the two conceptually similar research directions, and understand future directions for both topics. For this analysis, several occluded and face recognition algorithms from the literature are studied. First, they are evaluated on the task that they were trained on, but also on the other. These methods were picked according to the novelty of their approach, proven state-of-the-art results, and publicly available source code. We present quantitative results on 4 occluded and 5 masked FR datasets, and a qualitative analysis of several MFR and OFR models on the Occ-LFW dataset. The analysis presented supports the interoperable deployment of MFR methods on OFR datasets when the occlusions are of a reasonable size. Thus, solutions proposed for MFR can be effectively deployed for general OFR.