Fig. 3. An example of human skin color segmentation.

Source publication
Conference Paper
Full-text available
In this paper, we introduce a novel approach for automatic detection of human faces embedded in dissimilar lighting. The proposed system consists of two primary parts. The first part is to convert the input RGB color images to a binary image directly using color segmentation. Because the absolute values of r, g, and b are totally different with the...

Contexts in source publication

Contexts 1–5
... Shown in Fig. 3(a) are some examples (13 kinds of human skin colors); Fig. 3(a) depicts the r, g, and b values of the colors of human skin. Fig. 3(b) shows the original 32-bit color map and the human skin color map. Shown in Fig. 3(c) is the result generated by applying the above three rules to Fig. 3(b). The pixels with skin color are assigned pure white (R = G = B = 255), and the other pixels are assigned pure black (R = G = B = 0). Then, we use the result of the human skin color segmentation as the input image and transform it into a binary image with pure white (value = 1) and pure black (value = 0). Fig. ...
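The binarization described above can be sketched in Python as follows. This is a minimal illustration only: the skin_rule argument stands in for the paper's three rules, which are not reproduced in this excerpt, and the example_rule shown at the end is a generic RGB heuristic, not the authors' thresholds.

import numpy as np

def segment_skin(rgb_image, skin_rule):
    # rgb_image: H x W x 3 uint8 array; skin_rule: function of (r, g, b)
    # arrays returning a boolean mask of skin pixels.
    r = rgb_image[..., 0].astype(int)
    g = rgb_image[..., 1].astype(int)
    b = rgb_image[..., 2].astype(int)
    skin = skin_rule(r, g, b)

    # Skin pixels become pure white (R = G = B = 255), all others pure black.
    segmented = np.where(skin, 255, 0).astype(np.uint8)

    # Binary image used in the subsequent steps: white = 1, black = 0.
    binary = (segmented == 255).astype(np.uint8)
    return segmented, binary

# Illustrative placeholder rule only; NOT the paper's three rules.
example_rule = lambda r, g, b: (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)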

Citations

... The integral image is used to identify objects within a sub-window, whereas AdaBoost is used to check the suitability/correctness of the image identified in the rectangle. The cascade classifier works sequentially until the best match for the input image is found [9][10]. The objects detected by the cascade classifier are stored in a specific database. ...
Conference Paper
Object detection techniques have been used extensively in a variety of real-time applications such as robotics, crime investigation, transportation, etc. Object classification is an important task in computer vision and is the process of tagging objects into predefined and semantically significant classes using trained datasets. A framework for object detection using the Viola-Jones algorithm for object classification is proposed in this paper. The architecture of the framework encompasses image acquisition, image pre-processing, classification and extraction of objects, and computation of related measures. The experiment is performed on a dataset containing 120 images of human faces with different angles, poses, and light conditions. It is worth mentioning that the faces as objects are recognized successfully from the set of input images with a rate of 96.67%. Moreover, the objects on faces such as eyes, nose, and mouth are detected successfully with an average accuracy of 93.10%, 93.10%, and 90.80% respectively. The attributes/measures of these objects are vital for computer recognition and hence the properties/measures of the respective objects are computed at the end. This framework will be useful in several real-life applications, especially in criminal investigation applications for identification of persons/criminals, computer portrait designing, etc.
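The sub-window scanning with a boosted cascade described in the citing context above can be illustrated with OpenCV's pretrained Haar cascade; this is a generic sketch, not the cited framework, and the input file name and detection parameters are assumptions.

import cv2

# Pretrained frontal-face Haar cascade shipped with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("input.jpg")  # hypothetical input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detectMultiScale slides a sub-window over the image at several scales;
# each window is evaluated by the boosted cascade of Haar-like features.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)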
... One of the most well-known supervised learning ANNs is the multilayer feedforward neural network (MLFFNN), which has previously proven its potential as a universal tool for approximation and prediction of data [51,52], and therefore, was employed in the current work. In MLFFNNs, neurons in different layers are connected to each other and transmit the signals stimulated by the data only in the forward direction, i.e., from the input to the output layer, such that there is no recurrent or backward connection in the architecture of the network. ...
Article
This paper presents an artificial intelligence (AI) framework proposed to predict the optimum composition of the NiTi shape memory alloy (SMA) to be used in dental applications. A multilayer feedforward neural network (MLFFNN) was adopted as the machine learning (ML) model and trained on experimental data readily available in the literature on Ni ion release from a variety of NiTi compositions into artificial saliva (AS) solutions, in order to predict the NiTi SMA composition that exhibits the lowest amount of Ni ion release into the oral cavity. As a result, a 51.5 at.% Ni – balance Ti composition was predicted to be the optimum NiTi SMA composition releasing the lowest amount of Ni ions into the oral cavity, which was supported by validation experiments utilizing static immersion tests carried out in AS and post-mortem inductively coupled plasma mass spectrometry (ICP-MS) analysis of the immersion fluids. The findings of the work presented herein not only demonstrate that the proposed AI framework successfully predicts the most biocompatible NiTi SMA for dental applications, but also open an avenue for the utility of the current AI framework in the design of other medical alloys and SMAs for a variety of applications.
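The forward-only signal flow of an MLFFNN described in the citing context can be sketched as below; the layer sizes, activation function, and random weights are assumptions for illustration and are not taken from the cited work.

import numpy as np

def mlffnn_forward(x, weights, biases):
    # Signals propagate strictly from input toward output; no recurrent links.
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(a @ W + b)           # hidden layers
    return a @ weights[-1] + biases[-1]  # linear output layer

# Illustrative shapes only: one input feature, one hidden layer of 8 neurons,
# one output (e.g. a predicted Ni ion release value).
rng = np.random.default_rng(0)
weights = [rng.normal(size=(1, 8)), rng.normal(size=(8, 1))]
biases = [np.zeros(8), np.zeros(1)]
y_hat = mlffnn_forward(np.array([[51.5]]), weights, biases)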
... Equations (3) and (4) represent the odd-position rows, which take values from different columns (columns 1, 4, 7, ... as one set; 2, 5, 8, ... as another set; and 3, 6, 9, ... as another set) as c1, c2, and c3 ...
... In this process the face is detected and the background noise is removed from the image. Researchers have experimented with many methodologies for detecting faces, such as methods based on facial features [2,3], on skin color [4][5][6], on neural networks [7,8], and on the AdaBoost algorithm [9,10]. In this paper face detection is not required because the database contains face images only. ...
Conference Paper
This paper aims at an experimental evaluation of different methodologies for recognizing human faces based on different facial expressions. The face and facial-expression images were captured locally, as the experiment targets the Indian domain. Features were extracted using two techniques, viz., the Discrete Wavelet Transform (DWT) and the Local Binary Pattern (LBP). The numbers of extracted features are 150, 300, 600, 1200, and 2400. Further, the mean and standard deviation are computed for feature vector generation. A Support Vector Machine (SVM) is used for classification/recognition. The experiment was carried out on different ranges of features with 160×15 samples. The results vary from 72% to 100% for the various feature ranges. The performance of the proposed system is found to be satisfactory compared to existing systems.
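A hedged sketch of the DWT/LBP feature pipeline described in this abstract is given below; the wavelet type, LBP parameters, and the way the mean/standard-deviation summaries are stacked are assumptions, since the paper's exact configuration is not reproduced here.

import numpy as np
import pywt
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def extract_features(gray_face):
    # DWT sub-bands and an LBP map, each summarized by mean and std.
    cA, (cH, cV, cD) = pywt.dwt2(gray_face.astype(float), "haar")
    lbp = local_binary_pattern(gray_face, P=8, R=1, method="uniform")
    parts = [cA, cH, cV, cD, lbp]
    return np.array([f(p) for p in parts for f in (np.mean, np.std)])

# Hypothetical usage with grayscale face images and expression labels:
# X = np.stack([extract_features(img) for img in face_images])
# clf = SVC(kernel="rbf").fit(X, labels)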
... As a third step, face recognition takes the face images from the output of the detection part. The final step is person identification as the result of the recognition part [3]. An illustration of the steps of the face recognition system is given in Figure 1.1. Acquiring images from a camera into the computer and the computational medium (environment) via a frame grabber is the first step in face recognition system applications. ...
... The input image, in the form of digital data, is sent to the face detection part of the software to extract each face in the image. Many methods for detecting faces in images are available in the literature [1][2][3][4][5][6][7][8][9][10]. The available methods can be classified into two main groups: knowledge-based and appearance-based methods. ...
... (a) Security (access control to buildings, airports/seaports, ATM machines and border checkpoints [2]; computer/network security; email authentication on multimedia workstations). (b) Surveillance (a large number of CCTVs can be monitored to look for known criminals, drug offenders, etc., and authorities can be notified when one is located; for example, this procedure was used at the Super Bowl 2001 game in Tampa, Florida [3]; in another instance, according to a CNN report, two cameras linked to state and national databases of sex offenders, missing children and alleged abductors were recently installed at Royal Palm Middle School in Phoenix, Arizona [4]). (c) General identity verification (electoral registration, banking, electronic commerce, identifying newborns, national IDs, passports, drivers' licenses, employee IDs). ...
Article
Full-text available
In this paper, feature extraction and facial recognition are studied in order to resolve problems that exist in facial recognition technology, such as high dimensionality, small sample sizes, and non-linearly separable data. In the feature extraction part we use an HGPP algorithm to extract the input features for building a face recognition system. A neural network, which shows excellent performance on small training sets and on non-linearly separable, high-dimensional pattern recognition problems, is used for pattern classification in the recognition stage. The proposed approach is validated on the ORL database. Experimental results demonstrate the effectiveness of this method for face recognition.
... The process is time-consuming, and it is difficult to detect faces at different image sizes. In appearance-based methods, face characteristics are learned from a set of representative face and non-face images using statistical analysis and machine learning techniques and are then used to perform face detection [1,7,9,10,11,12]. Such methods require rigorous training and intensive computational power. ...
Conference Paper
Full-text available
Proposed here is a new face detection technique based on a binary image. In this method, a color input image is converted to a gray image and then denoised using a low-pass filter. The denoised image is transformed with the local window standard deviation and then binarized using an adaptive thresholding method. The generated binary image has prominent boundaries and facial features such as eyebrow, eye, nose, and mouth regions, which can be easily detected using morphological operations. The binary image is scanned vertically to find the probable region containing the face. The actual face location is found by detecting the location of the eyebrows in the probable face region while scanning horizontally. The method has been tested on several single-shot database images and gives a good performance of about 93%. It is also fast compared to existing face detection schemes that require complex processing and training databases.
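The pipeline described in this abstract (gray conversion, low-pass denoising, local-window standard deviation, adaptive thresholding, morphological clean-up) can be sketched with OpenCV as follows; the window and kernel sizes are assumptions, not the paper's values.

import cv2
import numpy as np

def binarize_face_image(color_image, win=7):
    gray = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)   # low-pass filter

    # Local standard deviation: sqrt(E[x^2] - E[x]^2) over a win x win window.
    f = denoised.astype(np.float32)
    mean = cv2.blur(f, (win, win))
    mean_sq = cv2.blur(f * f, (win, win))
    local_std = np.sqrt(np.maximum(mean_sq - mean * mean, 0))

    std_img = cv2.normalize(local_std, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    binary = cv2.adaptiveThreshold(std_img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 11, 2)

    # Morphological opening removes isolated noise pixels.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)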
... A comprehensive introduction to MLPs can be found in [49]. Many researchers, including [18,42,50,51,52], have used MLPs for skin segmentation. In this study we use a network of five layers: the input layer, which receives the input data from the three color components of the color space; three hidden layers; and the output layer, which designates the skin and non-skin classes. ...
Article
Full-text available
Color is one of the most prominent features of an image and used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite the substantial research efforts in this area, choosing a proper color space in terms of skin and face classification performance which can address issues like illumination variations, various camera characteristics and diversity in skin color tones has remained an open issue. This research proposes a new three-dimensional hybrid color space termed SKN by employing the Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color in over seventeen existing color spaces. Genetic Algorithm heuristic is used to find the optimal color component combination setup in terms of skin detection accuracy while the Principal Component Analysis projects the optimal Genetic Algorithm solution to a less complex dimension. Pixel wise skin detection was used to evaluate the performance of the proposed color space. We have employed four classifiers including Random Forest, Naïve Bayes, Support Vector Machine and Multilayer Perceptron in order to generate the human skin color predictive model. The proposed color space was compared to some existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that by using Random Forest classifier, the proposed SKN color space obtained an average F-score and True Positive Rate of 0.953 and False Positive Rate of 0.0482 which outperformed the existing color spaces in terms of pixel wise skin detection accuracy. The results also indicate that among the classifiers used in this study, Random Forest is the most suitable classifier for pixel wise skin detection applications.
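The five-layer MLP mentioned in the citing context above (input layer fed by three color components, three hidden layers, and an output layer for the skin/non-skin classes) can be sketched with scikit-learn; the hidden-layer sizes and the random placeholder data below are assumptions, used only to show the architecture and the fitting call.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder data: N pixels as rows of three color components,
# labeled 1 = skin, 0 = non-skin.
rng = np.random.default_rng(0)
X_train = rng.random((1000, 3))
y_train = rng.integers(0, 2, size=1000)

# Three hidden layers between the 3-component input and the binary output.
mlp = MLPClassifier(hidden_layer_sizes=(32, 16, 8), max_iter=500)
mlp.fit(X_train, y_train)

skin_probability = mlp.predict_proba(np.array([[0.8, 0.5, 0.4]]))[:, 1]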
... If some of these features are found in the candidate image, then the candidate image is considered a face. The two eyes and the mouth form an isosceles triangle: the eye-to-eye distance and the distance from the midpoint between the eyes to the mouth are equal [6]. Some filtering operations are applied to extract feature candidates; the steps are listed below: ...
Conference Paper
Full-text available
A face recognition system is one of the biometric information processes; its applicability is easier and its working range is wider than that of other systems such as fingerprint, iris scanning, signature, etc. The detection methods are designed to extract features of the face region from a digital image. The output face image of the detection algorithm should be similar to the recognition input image. Face detection is performed on live acquired images without any specific application field in mind. The developed system uses white balance correction, skin-like region segmentation, facial feature extraction, and face image extraction on a face candidate. The system is also capable of detecting multiple faces in live acquired images.
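The isosceles-triangle criterion quoted in the citing context above (eye-to-eye distance equal to the distance from the midpoint between the eyes to the mouth) can be checked with a few lines of Python; the tolerance value is an assumption.

import math

def is_isosceles_face_triangle(left_eye, right_eye, mouth, tol=0.15):
    # (x, y) pixel coordinates of the eye centers and the mouth center.
    eye_dist = math.dist(left_eye, right_eye)
    mid_eyes = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)
    mouth_dist = math.dist(mid_eyes, mouth)
    return abs(eye_dist - mouth_dist) <= tol * eye_dist

print(is_isosceles_face_triangle((100, 120), (160, 120), (130, 180)))  # True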
... Several methods have been proposed for facial expression recognition systems: eigenfaces with a neural network [5]; eigenvector spaces computed independently for each class, where the similarity between images is obtained by measuring the mean-square error of the reconstructed image for each class [6]; classification of emotions using a Support Vector Machine model [7]; and a Gabor filter with a neural network used to classify five different facial expressions [8]. Skin pixel detection using RGB color images is presented in [9]. The RGB, YCbCr, CIELAB, and HSV color models are used to segment the skin color, and the skin region is then tested for whether it is a face or not [10]. ...
... Facial features are the eyebrows, eyes, mouth, nose, nose tip, cheeks, etc. The property used to extract the eyes and mouth is that the two eyes and the mouth form an isosceles triangle, with the eye-to-eye distance equal to the distance from the midpoint between the eyes to the mouth [2]. A Laplacian of Gaussian (LoG) filter and some other filtering operations are performed to extract the facial features of a face candidate [19]. (Figure: (a) face candidate image; (b) face image after filtering; result of the filtering operations on the face candidate.) ...
Conference Paper
Full-text available
A face recognition system is one of the biometric information processes; its applicability is easier and its working range is larger than that of others, e.g., fingerprint, iris scanning, signature, etc. A face recognition system was designed, implemented, and tested at Atılım University, Mechatronics Engineering Department. The system uses a combination of techniques in two areas: face detection and face recognition. Face detection is performed on live acquired images without any specific application field in mind. Processes utilized in the system are white balance correction, skin-like region segmentation, facial feature extraction, and face image extraction on a face candidate. Then a face classification method that uses a feed-forward neural network is integrated into the system. The system was tested with a database generated in the laboratory with 26 people. The tested system shows acceptable performance in recognizing faces within the intended limits. The system is also capable of detecting and recognizing multiple faces in live acquired images.