Figure 2 - uploaded by Abdelalim Sadiq
Facial Features Points [4]

Source publication
Conference Paper
Full-text available
Face detection and tracking is a challenging problem in image processing and computer vision. In the last few years, it has received great attention because of its many applications, based on various methods, in fields such as law enforcement, security, and so on. Face detection and tracking are two processes done using various approa...

Citations

Conference Paper
Full-text available
In this paper, a novel tool for facial and gesture analysis is proposed, aiming to quantify subjective measures employed in speech-language pathology. From an input video of a person's face (captured with a simple monocular camera), the tool tracks facial movements and expressions to extract morphological and gestural parameters of interest in fields such as speech-language pathology and neurology. A modified version of the Candide-3 3D face model is employed in the tracking stage: since the original model cannot handle asymmetrical facial movements, a new set of animation units was implemented to track asymmetrical gestures effectively. To enhance tracking accuracy, a fusion scheme is proposed in the facial gesture tracking stage, combining the 3D face model described above with facial landmarks detected by deep learning models. The tool will be made open source, both as a software application (oriented to health professionals, with no programming knowledge required) and as source code for the computer vision community. Several perceptual experiments were carried out, achieving promising results.
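The fusion of model-based and detector-based landmarks described above could be sketched as a confidence-weighted blend. The function name, weighting scheme, and array shapes below are assumptions for illustration, not the paper's actual method:

```python
import numpy as np

def fuse_landmarks(model_pts, detector_pts, detector_conf):
    """Blend landmarks projected from a 3D face model with landmarks
    from a deep-learning detector, weighted per point by the detector's
    confidence in [0, 1]. Hypothetical fusion scheme, not the paper's.

    model_pts, detector_pts: (N, 2) arrays of image coordinates.
    detector_conf: (N,) array of per-landmark confidences.
    """
    w = np.clip(detector_conf, 0.0, 1.0)[:, None]  # broadcast over x, y
    return w * detector_pts + (1.0 - w) * model_pts
```

With full detector confidence the detected landmarks dominate; with zero confidence the scheme falls back to the model's prediction, which is the usual motivation for such a fusion.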
Article
Full-text available
People with reduced upper-limb mobility depend mainly on facial gestures to communicate with the world; nonetheless, current facial gesture-based interfaces do not account for the reduction in mobility that most people with motor limitations experience during recovery periods. This study presents an alternative: a human-computer interface based on computer vision techniques applied to two types of images, images of the user’s face captured by a webcam and screenshots of a desktop application running in the foreground. The first type is used to detect and track facial patterns and estimate gestures in order to move the cursor and execute commands, while the second ensures that the cursor moves to specific interaction areas of the desktop application. The interface was programmed entirely in Python 3.6 using open-source libraries and runs in the background on Windows operating systems. Its performance was evaluated with videos of people using four interaction commands in WhatsApp Desktop. We conclude that the interface can operate under various lighting conditions, backgrounds, camera distances, body postures, and movement speeds, and that the location and size of the WhatsApp window do not affect its effectiveness. The interface operates at 1 Hz and uses 35 % of the capacity of a desktop computer with an Intel Core i5 processor and 1.5 GB of RAM; therefore, this solution can be implemented on ordinary, low-end personal computers.
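As a rough illustration of how a tracked facial point might drive the cursor in such an interface, the sketch below maps a normalized face-landmark position (e.g. the nose tip) to a cursor velocity, with a dead zone so small head movements are ignored. The function name, dead-zone threshold, and gain are hypothetical; the paper's actual mapping is not specified in the abstract:

```python
def nose_to_cursor(nose_xy, center_xy, dead_zone=0.05, gain=200):
    """Map a normalized landmark position to a cursor velocity.

    nose_xy, center_xy: (x, y) in normalized image coordinates [0, 1].
    dead_zone: offsets smaller than this are ignored (no jitter).
    gain: pixels per update per unit of normalized offset.
    Returns (vx, vy) in pixels per update. Hypothetical mapping.
    """
    dx = nose_xy[0] - center_xy[0]
    dy = nose_xy[1] - center_xy[1]
    vx = 0 if abs(dx) < dead_zone else gain * dx
    vy = 0 if abs(dy) < dead_zone else gain * dy
    return vx, vy
```

In a real interface the returned velocity would be fed to a cursor-control library each frame; at the 1 Hz rate reported above, each update moves the cursor by one velocity step.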