Contexts in source publication

Context 1
... approach for face detection is designed to be robust to the variations that can occur in face illumination, shape, color, pose, and orientation. An overview of the presented face detection algorithm is depicted in Fig. 1; it contains three major stages: 1) face localization to find face candidates; 2) extraction of the expected face features from the resulting candidates; 3) confirmation of the detected face using a neural network technique. The algorithm first uniformly distributes the brightness using a light-control technique; the benefit of using ...
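The three stages above can be sketched as a simple pipeline. This is a minimal illustration only: all function names (`light_control`, `localize`, `extract_features`, `confirm_with_nn`) are hypothetical placeholders for the paper's stages, and the brightness step here is a trivial mean-shift stand-in for the paper's light-control technique, not its actual method.

```python
def light_control(image):
    """Placeholder brightness equalization: shift a grayscale image so
    its mean gray level sits at mid-range (128)."""
    flat = [p for row in image for p in row]
    mean = sum(flat) / len(flat)
    return [[p - mean + 128 for p in row] for row in image]

def detect_faces(image, localize, extract_features, confirm_with_nn):
    """Run the three major stages in order on a grayscale image."""
    image = light_control(image)                          # brightness control
    candidates = localize(image)                          # 1) face localization
    features = [extract_features(c) for c in candidates]  # 2) feature extraction
    return [f for f in features if confirm_with_nn(f)]    # 3) NN confirmation
```

Each stage is passed in as a callable, which mirrors how the paper treats the stages as independent modules.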
Context 2
... we can locate possible areas for the eye and mouth regions based on feature maps derived from the chrominance and luminance components. Our approach considers only the area covered by a mask built by filling the inside region of the outer layout; this is done by applying large-size erosion and dilation to the segmented areas. Fig. 10 shows an example of the face ...
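The fill-by-erosion-and-dilation step can be sketched as a morphological closing on the binary skin mask. The pure-NumPy implementation and the structuring-element size below are assumptions for illustration; the paper does not specify either.

```python
import numpy as np

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element (pure NumPy):
    a pixel is set if any pixel in its k x k neighborhood is set."""
    pad = k // 2
    padded = np.pad(mask, pad)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def erode(mask, k=3):
    """Binary erosion, expressed as the complement of dilating the complement."""
    return ~dilate(~mask, k)

def build_face_mask(skin_mask, k=7):
    """Fill small holes inside the segmented skin region: a large dilation
    followed by an erosion of the same size (morphological closing)."""
    return erode(dilate(skin_mask, k), k)
```

A large `k` closes the holes left where eyes and mouth were rejected by the skin detector, giving one solid mask over the face interior.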
Context 3
... the color of the mouth region contains a stronger red component and a weaker blue component than any other facial region. Hence the mouth region's chrominance components, "a*" from CIE L*a*b* and "V" from the YUV color space, differ from those of any other facial region and can be adopted as a key for mouth extraction. Fig. 11 shows an example of the "a*" and "V" channels for the selected face. The skin detector passes only skin tones, concentrating the color intensity in narrow ranges of its histogram. Fig. 12 (a and b) shows this concentration for the "a*" and "V" histograms. ...
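The V plane weights red positively and blue negatively, which is why lip pixels stand out in it. A minimal sketch of computing V from RGB follows; the BT.601 analog coefficients used here are an assumption, since the paper does not state which YUV variant it uses (the a* plane would additionally require an RGB-to-L*a*b* conversion, omitted here).

```python
import numpy as np

def v_channel(rgb):
    """Compute the V chrominance plane of an H x W x 3 uint8 RGB image
    using the standard analog BT.601 weights (assumed)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return 0.615 * r - 0.515 * g - 0.100 * b
```

Strongly red (lip-like) pixels score high, skin tones score moderately, and bluish pixels score low, which is the separation the mouth extractor relies on.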
Context 4
... from "CIE L*a*b*" and "V" from YUV color space is different than any other facial region color that we can adopt as a key for mouth extraction. Fig. 11 shows an example of "a*" and "V" channel for the selected face. Skin detector passes only the skin tones, resulting in concentration of the color intensity in strict ranges inside its histogram. Fig. 12 (a, and b) shows this concentration for "a*" and "V" histograms. ...
Context 5
... stretching causes the colors to expand and cover the full histogram range. The stretching process enhances the image contrast by expanding the high-intensity colors (mouth colors) toward the high intensity values, while the low-intensity colors (skin colors) are expanded toward the low intensity values, as shown in Fig. 12c and Fig. ...
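The stretching step can be sketched as a linear contrast stretch that maps the occupied intensity range onto the full [0, 255] scale. The percentile-based limits below are an assumption for robustness to outliers; the paper does not give the exact stretch limits it uses.

```python
import numpy as np

def stretch(channel, lo_pct=1, hi_pct=99):
    """Linear contrast stretch: map the [lo_pct, hi_pct] percentile range
    of the channel onto [0, 255], clipping values outside that range.
    Pixels near the low end (skin) are pushed toward 0 and pixels near
    the high end (mouth) toward 255, widening the gap between them."""
    lo, hi = np.percentile(channel, [lo_pct, hi_pct])
    scaled = (channel.astype(float) - lo) / max(hi - lo, 1e-9) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)
```

After this step a single global threshold is enough to separate the mouth cluster from the skin cluster in the histogram.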
Context 6
... in the last step, binary filters (median, erosion, and dilation) are applied to filter out the small segments. Fig. 13 shows the expected mouth region extraction ...
Context 7
... as with mouth detection, the color spaces that best separate the eye colors from the rest of the face region are the cyan channel of the CMY space and the U channel of the YUV color space. Fig. 14 shows an example of the "C" and "U" channels for the selected ...
Context 8
... the detection algorithm is briefly shown in Fig. 15; the areas resulting from each color space are combined by an OR operation, where the C channel is used to detect the eyes in small faces and the U channel for the large face ...
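The OR combination of the two channel maps can be sketched as below. The C plane of CMY is simply the inverse of R, and the U plane uses the standard analog BT.601 weights (assumed, as with V). The two thresholds are illustrative placeholders; the paper does not state numeric values.

```python
import numpy as np

def cyan_channel(rgb):
    """C of CMY is the inverse of the red channel."""
    return 255 - rgb[..., 0].astype(float)

def u_channel(rgb):
    """U of YUV (BT.601 analog weights, assumed) emphasizes blue over red/green."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    return -0.147 * r - 0.289 * g + 0.436 * b

def eye_map(rgb, c_thresh=180, u_thresh=20):
    """Threshold each channel and OR the binary maps, as in Fig. 15.
    Both thresholds are hypothetical values chosen for illustration."""
    return (cyan_channel(rgb) > c_thresh) | (u_channel(rgb) > u_thresh)
```

Dark, low-red eye pixels pass the C test while skin tones fail both, so the OR keeps eye candidates across face scales.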
Context 9
... generated by the face feature extraction stage is preprocessed before applying the neural test, in the following steps: 1. Rotate the face features to obtain a frontal face view, based on the mouth and eye positions. 2. Apply a geometrical face-feature test (the mouth, or any part of it, must lie inside the selected rectangle, as shown in Fig. 16; if this test fails, the rectangular region is considered a non-face region). 3. Resize the features to 40x40 pixels. 4. Enhance their lighting using the light-control technique. Fig. 17 shows the preprocessing applied to the neural network input image. If the output of the face-shape neural network is one and the output of the non-face-shape neural network ...
Context 10
... depends on the mouth and eye positions. 2. Apply a geometrical face-feature test (the mouth, or any part of it, must lie inside the selected rectangle, as shown in Fig. 16; if this test fails, the rectangular region is considered a non-face region). 3. Resize the features to 40x40 pixels. 4. Enhance their lighting using the light-control technique. Fig. 17 shows the preprocessing applied to the neural network input image. If the output of the face-shape neural network is one and the output of the non-face-shape neural network is zero, then the object is a face; otherwise, it is not a face. The decision rule is: The training was done using 1600 face and 3656 non-face images. The mean value of the sum of ...
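The two-network decision rule can be written as a one-liner. The 0.5 rounding margin (`eps`) is an assumption for real-valued network outputs; the paper states the rule only for exact 0/1 outputs.

```python
def is_face(face_net_out, nonface_net_out, eps=0.5):
    """Accept a region as a face only when the face-shape network fires
    (output near 1) AND the non-face-shape network does not (output near 0).
    eps is a hypothetical rounding margin, not from the paper."""
    return face_net_out >= 1 - eps and nonface_net_out <= eps
```

Requiring agreement from both networks rejects regions where only one of the two classifiers is confident.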
Context 11
... photo collections usually contain color images taken under varying lighting conditions and with complex backgrounds. Further, these images may vary in quality and contain multiple faces with variations in color, position, scale, rotation, orientation, pose, and facial expression. We present detection results in Fig. 19 on the Computational Vision Group Face Dataset, the IMM Face Database (Nordström et al., 2004), and our own database, which contains many images with multiple faces of different sizes and a wide variety of facial variations. The algorithm can detect both dark skin-tone and bright skin- tone ...
Context 12
... and our own database, which contains many images with multiple faces of different sizes and a wide variety of facial variations. The algorithm can detect both dark and bright skin tones because of the illumination-invariant skin detection. Varying lighting conditions do not affect our algorithm because of the enhancement produced by the light control. Fig. 19 shows that our algorithm can detect multiple faces of different sizes with a wide variety of facial variations. Further, the algorithm can detect both dark and bright skin tones, since it depends on the chrominance of multiple color spaces, and handles different rotations thanks to the geometrical face correction performed in the preprocessing stage to ...
