Fig 4 - uploaded by Athanasios Nikolaidis
Examples of erroneous results for facial features.

Source publication
Article
Full-text available
The present paper describes a method for the extraction of facial features.

Contexts in source publication

Context 1
... have been encountered in cheek detection when the false symmetry of the ellipse leads to bad definition of the relevant subimage, and, thus, to an erroneous extraction of some other features considered as predominant. An example where hair is extracted instead of cheeks is shown in Fig. 4(a). Similar problems cause the AHT to fail in some cases of chin detection. The inability of the edge operator to detect weak edges caused by bad luminance is more obvious here. An example of wrong chin extraction is given in Fig. 4(b). The correct extraction of the eyebrows depends on the detection of the position of the eyes, as one ...
Context 2
... of some other features considered as predominant. An example where hair is extracted instead of cheeks is shown in Fig. 4(a). Similar problems cause the AHT to fail in some cases of chin detection. The inability of the edge operator to detect weak edges caused by bad luminance is more obvious here. An example of wrong chin extraction is given in Fig. 4(b). The correct extraction of the eyebrows depends on the detection of the position of the eyes, as one would expect. In our experiments the prototype was chosen to be of height equal to 0.125 of the distance between the eyes and of width equal to 0.5 of the same distance. If at least one eye is detected at a wrong position, and ...
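The eyebrow prototype sizing described in Context 2 is a simple computation from the detected eye positions. A minimal sketch of that sizing rule follows; the function name and return format are illustrative, not from the paper:

```python
import math

def eyebrow_prototype_size(left_eye, right_eye):
    """Compute the eyebrow prototype block size from detected eye positions.

    Per the paper, the prototype height is 0.125 of the distance between
    the eyes and its width 0.5 of the same distance. Eye positions are
    (x, y) pixel coordinates; this helper is a hypothetical illustration.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    eye_distance = math.hypot(dx, dy)
    width = 0.5 * eye_distance
    height = 0.125 * eye_distance
    return width, height
```

As the context notes, if an eye is detected at a wrong position the inter-eye distance, and hence this block size, becomes incorrect, which can derail the subsequent template matching.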
Context 3
... position, and especially when the distance between the eyes is bigger than the actual one, the size of the block to be matched may be incorrect. This may lead to the extraction of some feature other than the desired one. Even when eyes are correctly extracted, a problem may arise if hair covers the forehead and the eyebrows, as is shown in Fig. 4(c). The percentage of correct detection of the chin in Table 1 is rather low because of the lack of sufficient edge information in that region of the ...

Similar publications

Conference Paper
Full-text available
Dimension reduction methods are often applied in machine learning and data mining problems. Linear subspace methods are the most commonly used, such as principal component analysis (PCA) and Fisher's linear discriminant analysis (FDA). In this paper, we describe a novel feature extraction method for binary classification problems. Instead of fi...
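The PCA baseline mentioned in this abstract reduces to a short linear-algebra routine. A minimal sketch (not code from the paper), assuming data rows are samples and columns are features:

```python
import numpy as np

def pca_reduce(X, k):
    """Project data onto its top-k principal components.

    Standard PCA: center the data, take the eigenvectors of the
    covariance matrix with the largest eigenvalues, and project
    the centered data onto them.
    """
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # k largest-eigenvalue directions
    return Xc @ top
```

Supervised methods such as FDA differ in that they use class labels to choose the projection, whereas PCA is unsupervised and maximizes retained variance.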
Article
Full-text available
The growing demand for industrialized products drives an increase in the extraction of natural resources, especially minerals. Mining activity generates environmental impacts whose effects can persist for many years. Discarded residues and tailings are important sources of mining impact. Factors of an economic, social and environmen...
Article
Full-text available
Distinct feature extraction methods are used simultaneously to describe single-channel electroencephalography (EEG) based biometrics. This study proposes a new strategy for extracting features from EEG signals. Statistical features are obtained from the EEGs based on their time and frequency information. For the dichotomization process, the support vecto...

Citations

Article
People often direct their attention toward the objects they interact with. A first step that computer systems must take to adapt to users and improve their interactions with them is to locate their position, and in particular the position of their head in the image. The next step is to track their focus of attention. This is why we are interested in techniques for estimating and tracking users' gaze, and in particular the orientation of their head. This thesis presents a fully automatic, identity-independent approach for estimating the pose of a face from low-resolution images under unconstrained conditions. The method developed here is evaluated and validated on a sampled image database. We propose a new two-level approach that uses global and local appearances to estimate head orientation. This method is simple, easy to implement, and robust to partial occlusion. Face images are size-normalized into low-resolution images using a face-tracking algorithm. These thumbnails are then projected into autoassociative memories trained with the Widrow-Hoff learning rule. Autoassociative memories require few parameters and avoid the use of hidden layers, which allows prototypes of human face poses to be saved and loaded. We obtain a first estimate of head orientation on known and unknown subjects. We then search the image for the salient facial features relevant to each pose. These features are described by Gaussian receptive fields normalized at the intrinsic scale. These descriptors have interesting properties and are less costly than Gabor wavelets.
The salient facial features detected by the Gaussian receptive fields motivate the construction of a graph model for each pose. Each node of the graph can be moved locally according to the saliency of the facial point it represents. Among the poses neighboring the one found by the autoassociative memories, we search for the graph that best matches the test image. The corresponding pose is selected as the pose of the person's face in the image. This method uses no heuristics, manual annotation, or prior knowledge of the face, and can be adapted to estimate the pose of other deformable objects.
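The Widrow-Hoff (LMS) learning rule used for the autoassociative memories above has a compact form: for each stored pattern x, the weight matrix is nudged toward reconstructing x from itself, W += lr * (x - Wx) x^T. A minimal sketch, not the thesis implementation:

```python
import numpy as np

def train_autoassociative(patterns, lr=0.1, epochs=100):
    """Train a linear autoassociative memory with the Widrow-Hoff rule.

    patterns: array of shape (n_patterns, d), one stored pattern per row.
    Each update moves W so that W @ x better reconstructs x itself;
    no hidden layers are involved, only the d x d weight matrix W.
    """
    d = patterns.shape[1]
    W = np.zeros((d, d))
    for _ in range(epochs):
        for x in patterns:
            err = x - W @ x          # reconstruction error for this pattern
            W += lr * np.outer(err, x)
    return W
```

At test time, the pose prototype whose memory best reconstructs the input thumbnail (smallest reconstruction error) gives the orientation estimate.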
Article
In this dissertation, we focus on two related parts of a 3D face recognition system with wireless transportation. In the first part, the core components of the system, namely the feature extraction and classification components, are introduced. In the feature extraction component, range images are taken as inputs and processed in order to extract features. The classification component uses the extracted features as inputs and makes classification decisions based on trained classifiers. In the second part, we consider the wireless transportation problem of range images, which are captured by scattered sensor nodes from target objects and are forwarded to the core components (i.e., the feature extraction and classification components) of the face recognition system. Contrary to the conventional definition of a sensor as a transducer, a sensor node can be a person, a vehicle, etc. The wireless transportation component not only brings flexibility to the system but also makes "proactive" face recognition possible. For the feature extraction component, we first introduce the 3D Morphable Model. Then a 3D feature extraction algorithm based on the 3D Morphable Model is presented. The algorithm is insensitive to facial expression. Experimental results