Figure 7
The texture (800 × 800 texels), extracted from a 3D scan or an image using the 3D morphable model, is parameterized in cylindrical coordinates. Additional eye textures are inserted in the corners. In the face, the skin color is interpolated automatically into the eye region and blended with the eyeball textures along the eyelids.


Source publication
Article
This paper describes a model for example-based, photo-realistic rendering of eye movements in 3D facial animation. Based on 3D scans of a face with different gaze directions, the model captures the motion of the eyeball along with the deformation of the eyelids and the surrounding skin. These deformations are represented in a 3D morphable model. Un...

Contexts in source publication

Context 1
... each vertex k and each gaze direction i, located in ... with a scaling factor s that guarantees appropriate texture resolution, and a translation t_{u,i}, t_{v,i} that will be discussed below. In our implementation, we embed the eyeball textures in the corners of the overall face texture (Figure 7). ...
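As a rough illustration of the mapping described in this excerpt, the following Python sketch computes per-gaze-direction texture coordinates with a scaling factor s and a translation (t_u_i, t_v_i) that shifts each eyeball patch into a corner of the 800 × 800 face texture. All function names and the exact cylindrical projection are assumptions for illustration, not taken from the paper.

```python
import numpy as np

TEX_SIZE = 800  # face texture resolution in texels (from the caption)

def cylindrical_uv(vertices):
    """Map 3D face vertices to cylindrical (u, v) coordinates in [0, 1]^2.

    Assumes the head is roughly centered on the vertical (y) axis; the
    paper's exact projection is not given in the excerpt.
    """
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    u = np.arctan2(x, z) / (2.0 * np.pi) + 0.5   # angle around the head
    v = (y - y.min()) / (y.max() - y.min())      # normalized height
    return np.stack([u, v], axis=1)

def eyeball_uv(uv_local, s, t_u_i, t_v_i):
    """Texture coordinates of an eyeball patch for gaze direction i.

    s is a scaling factor guaranteeing adequate texture resolution;
    (t_u_i, t_v_i) translates the patch into an unused corner of the
    face texture, as in Figure 7.
    """
    return uv_local * s + np.array([t_u_i, t_v_i])

# Hypothetical usage: put the patch for one gaze direction in the
# top-left corner, occupying 10% of the texture in each dimension.
patch_uv = eyeball_uv(np.random.rand(16, 2), s=0.1, t_u_i=0.0, t_v_i=0.9)
patch_texels = np.clip((patch_uv * TEX_SIZE).astype(int), 0, TEX_SIZE - 1)
```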
Context 2
... may be a scan of a face with the eyes wide open, but also a 3D reconstruction of a face from a single image, computed with the algorithm of [BV99] using high-resolution textures. The texture generation is implemented as a rasterization from the 3D model into the texture space, using ũ, ṽ (corners in Figure 7). ...
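The texture generation is described as a rasterization from the 3D model into (ũ, ṽ) texture space. Below is a minimal, illustrative texture-space rasterizer using barycentric interpolation; the paper's actual implementation details (sampling, filtering, blending along the eyelids) are not given in the excerpt, so everything here is an assumed simplification.

```python
import numpy as np

def rasterize_triangle_to_texture(tri_uv, tri_colors, tex):
    """Rasterize one mesh triangle into texture space.

    tri_uv:     (3, 2) texture coordinates (u~, v~) in texel units
    tri_colors: (3, 3) per-vertex RGB sampled from the scan or image
    tex:        (H, W, 3) texture being generated, modified in place
    """
    H, W = tex.shape[:2]
    (u0, v0), (u1, v1), (u2, v2) = tri_uv
    den = (v1 - v2) * (u0 - u2) + (u2 - u1) * (v0 - v2)
    if abs(den) < 1e-12:
        return  # degenerate triangle
    # Texel bounding box, clamped to the texture.
    us = range(max(int(min(u0, u1, u2)), 0), min(int(max(u0, u1, u2)) + 1, W))
    vs = range(max(int(min(v0, v1, v2)), 0), min(int(max(v0, v1, v2)) + 1, H))
    for v in vs:
        for u in us:
            # Barycentric coordinates of texel (u, v).
            w0 = ((v1 - v2) * (u - u2) + (u2 - u1) * (v - v2)) / den
            w1 = ((v2 - v0) * (u - u2) + (u0 - u2) * (v - v2)) / den
            w2 = 1.0 - w0 - w1
            if min(w0, w1, w2) >= 0.0:  # texel lies inside the triangle
                tex[v, u] = w0 * tri_colors[0] + w1 * tri_colors[1] + w2 * tri_colors[2]
```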

Citations

... The second class of approaches typically renders the eye region based on a fitted 3D model, which replaces the original eyes with synthetic eyeballs. Banf et al. [47] use an example-based approach, deforming the eyelids and sliding the iris across the model surface with texture-coordinate interpolation. To fix the limitations caused by the use of a mesh in which the face and eyes are mixed, GazeDirector [12] handles the face and eyeballs separately, synthesizing higher-quality images, especially for large redirection angles. ...
Preprint
This paper proposes a gaze correction and animation method for high-resolution, unconstrained portrait images that can be trained without gaze angle and head pose annotations. Common gaze-correction methods usually require annotating training data with precise gaze and head pose information. Solving this problem with an unsupervised method remains an open problem, especially for high-resolution face images in the wild, which are not easy to annotate with gaze and head pose labels. To address this issue, we first create two new portrait datasets: CelebGaze and the high-resolution CelebHQGaze. Second, we formulate the gaze correction task as an image inpainting problem, addressed using a Gaze Correction Module (GCM) and a Gaze Animation Module (GAM). Moreover, we propose an unsupervised training strategy, Synthesis-As-Training, to learn the correlation between the eye region features and the gaze angle. As a result, we can use the learned latent space for gaze animation via semantic interpolation in this space. To alleviate both the memory and the computational costs in the training and inference stages, we propose a Coarse-to-Fine Module (CFM) integrated with GCM and GAM. Extensive experiments validate the effectiveness of our method for both gaze correction and gaze animation on low- and high-resolution face datasets in the wild, and demonstrate its superiority with respect to the state of the art. Code is available at https://github.com/zhangqianhui/GazeAnimationV2
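The gaze-animation step rests on semantic interpolation in the learned latent space. A minimal sketch of that idea, assuming hypothetical encoder/decoder callables (the actual GCM/GAM architecture is not reproduced here):

```python
import numpy as np

def animate_gaze(z_src, z_tgt, decoder, n_frames=10):
    """Generate intermediate eye regions by interpolating latent codes.

    z_src, z_tgt: latent codes of the same eye region under two gazes
    decoder:      callable mapping a latent code to an eye-region image
    """
    frames = []
    for alpha in np.linspace(0.0, 1.0, n_frames):
        z = (1.0 - alpha) * z_src + alpha * z_tgt  # semantic interpolation
        frames.append(decoder(z))
    return frames
```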
... To enable gaze redirection, traditional approaches are typically based on 3D modeling [6,81], fitting eye texture and shape with 3D morphable models, but they are not ideal for handling images with eyeglasses or high variance in facial details. Others [13,39,85,90] render a scene containing the face of a subject from a given viewpoint to mimic gazing at the camera. ...
... Gaze Redirection. Some traditional approaches are based on 3D modeling [2,36]. They use 3D morphable models to fit both texture and shape of the eye, and re-render the synthesized eyeballs superimposed on the source image. ...
Conference Paper
In this work, we present interpGaze, a novel framework for controllable gaze redirection that achieves both precise redirection and continuous interpolation. Given two gaze images with different attributes, our goal is to redirect the eye gaze of one person into any gaze direction depicted in the reference image or to generate continuous intermediate results. To accomplish this, we design a model including three cooperative components: an encoder, a controller and a decoder. The encoder maps images into a well-disentangled and hierarchically-organized latent space. The controller adjusts the magnitudes of latent vectors to the desired strength of corresponding attributes by altering a control vector. The decoder converts the desired representations from the attribute space to the image space. To facilitate covering the full space of gaze directions, we introduce a high-quality gaze image dataset with a large range of directions, which also benefits researchers in related areas. Extensive experimental validation and comparisons to several baseline methods show that the proposed interpGaze outperforms state-of-the-art methods in terms of image quality and redirection precision.
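Schematically, the pipeline reads as decoder(controller(encoder(x))). The sketch below illustrates one plausible controller formulation, moving each latent attribute toward the reference by a per-attribute control strength; the names and the linear form are assumptions, not interpGaze's actual design.

```python
def redirect_gaze(x, x_ref, encoder, decoder, control):
    """Redirect the gaze in x toward the gaze depicted in x_ref.

    encoder: image -> latent code in a disentangled attribute space
    decoder: latent code -> image
    control: per-attribute strengths in [0, 1]; 0 keeps the source
             attribute, 1 adopts the reference's, values in between
             yield continuous intermediate results
    """
    z_src = encoder(x)
    z_ref = encoder(x_ref)
    # Move each latent attribute toward the reference by the requested
    # strength (an assumed linear formulation of the controller).
    z_out = z_src + control * (z_ref - z_src)
    return decoder(z_out)
```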
... Traditional methods are based on a 3D model and re-render the entire input region [1,29]. These methods suffer from two major problems: rendering the entire input region is difficult, and they require heavy instrumentation. ...
... Traditional methods are based on a 3D model and re-render the entire input region. [1] uses an example-based approach to deform the eyelids and slides the iris across the model surface with texture-coordinate interpolation. GazeDirector [29] models the eye region in 3D to recover the shape, pose, and appearance of the eye, then applies a dense flow field corresponding to eyelid motion to the input image to warp the eyelids. ...
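The eyelid-warping step amounts to backward warping the input image with a dense flow field. A minimal numpy sketch of such a warp (nearest-neighbor sampling for brevity; a real implementation would presumably use bilinear interpolation):

```python
import numpy as np

def warp_with_flow(image, flow):
    """Backward-warp an image with a dense per-pixel flow field.

    image: (H, W, 3) input frame
    flow:  (H, W, 2) displacement (dy, dx) telling each output pixel
           where to sample from in the input
    """
    H, W = image.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    return image[src_y, src_x]
```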
Preprint
Gaze redirection aims at manipulating a given eye gaze toward a desired direction according to a reference angle, and it can be applied to many real-life scenarios, such as video-conferencing or taking group photos. However, previous works suffer from two limitations: (1) low-quality generation and (2) low redirection precision. To this end, we propose an innovative MultiModal-Guided Gaze Redirection (MGGR) framework that fully exploits eye-map images and target angles to adjust a given eye appearance through coarse-to-fine learning. Our contribution is to combine flow learning and adversarial learning for coarse-to-fine generation. More specifically, the proposed coarse branch with a flow field rapidly learns the spatial transformation that attains the warped result with the desired gaze. The proposed fine-grained branch consists of a generator network with conditional residual-image learning and a multi-task discriminator that reduce the gap between the warped image and the ground-truth image, recovering finer texture details. Moreover, we propose leveraging the gazemap for the desired angles as an extra guide to further improve the precision of gaze redirection. Extensive experiments on a benchmark dataset show that the proposed method outperforms state-of-the-art methods in terms of image quality and redirection precision. Further evaluations demonstrate the effectiveness of the proposed coarse-to-fine and gazemap modules.
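The coarse-to-fine structure can be summarized in a few lines: the flow branch produces a warped coarse result, and the fine-grained branch adds a conditional residual image. All module names below are hypothetical stand-ins for the networks described in the abstract:

```python
def coarse_to_fine_redirect(eye, target_angle, gazemap,
                            flow_net, refine_net, warp):
    """Two-stage gaze redirection: warp coarsely, then refine residually.

    flow_net:   predicts a dense flow field toward the target gaze
    refine_net: predicts a conditional residual image with fine texture
    warp:       a backward-warping operator (e.g. warp_with_flow above)
    """
    flow = flow_net(eye, target_angle, gazemap)   # coarse branch
    coarse = warp(eye, flow)                      # warped toward target
    residual = refine_net(coarse, gazemap)        # fine-grained branch
    return coarse + residual
```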
... Like us, Banf and Blanz [2] used morphable models to redirect gaze. They fit a single-part face model to an image, then redirect gaze by deforming the eyelids with an example-based approach and sliding the iris across the model surface using texture-coordinate interpolation. ...
Article
We present GazeDirector, a new approach for eye gaze redirection that uses model-fitting. Our method first tracks the eyes by fitting a multi-part eye region model to video frames using analysis-by-synthesis, thereby recovering eye region shape, texture, pose, and gaze simultaneously. It then redirects gaze by 1) warping the eyelids from the original image using a model-derived flow field, and 2) rendering and compositing synthesized 3D eyeballs onto the output image in a photorealistic manner. GazeDirector allows us to change where people are looking without person-specific training data, and with full articulation, i.e. we can precisely specify new gaze directions in 3D. Quantitatively, we evaluate both model-fitting and gaze synthesis, with experiments for gaze estimation and redirection on the Columbia gaze dataset. Qualitatively, we compare GazeDirector against recent work on gaze redirection, showing better results especially for large redirection angles. Finally, we demonstrate gaze redirection on YouTube videos by introducing new 3D gaze targets and by manipulating visual behavior.
... Also in contrast to the standard procedure of modelling the eyeball as a sphere, Banf and Blanz [BB09] model the visible area of the eye as part of a continuous face mesh. A texture mapping approach is used to capture eye movements and occlusions by the eyelids. ...
Article
A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: ‘The face is the portrait of the mind; the eyes, its informers’. This presents a significant challenge for Computer Graphics researchers who generate artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human–human interactions. This review article provides an overview of the efforts made on tackling this demanding task. As with many topics in computer graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We begin with a discussion of the movement of the eyeballs, eyelids and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Furthermore, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye gaze, during the expression of emotion or during conversation. We discuss how these findings are synthesized in computer graphics and can be utilized in the domains of Human–Robot Interaction and Human–Computer Interaction for allowing humans to interact with virtual agents and other artificial entities. We conclude with a summary of guidelines for animating the eye and head from the perspective of a character animator.
... This performance data can then be composited or learned to subsequently animate a character's eyelids. Similar methods have also been successful when used in combination with gaze motion data, building on the high correlation of movement between the two [7,22]. However, such data is often closely coupled to a particular character geometry, and thus is not easily parameterized to create a reusable procedural model. ...
Article
When compared to gaze, animation of the eyelids has been largely overlooked in the computer graphics literature. Eyelid movement plays an important part both in conveying accurate gaze direction and in improving the visual appearance of virtual characters. Eyelids have two major motion components: lid saccades that follow the vertical rotation of the eyes, and blinking. Derived from literature in ophthalmology and psychology, this paper presents parametric models for both motion types, and emphasizes their dynamic temporal behaviour. Experimental validation classifies model-generated animation as similar to that encoded from expensive motion-captured data, and significantly exceeding linearly interpolated animation. Copyright © 2010 John Wiley & Sons, Ltd.
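The two motion components lend themselves to a compact parametric sketch: lid saccades track the vertical rotation of the eye, and a blink follows a fast-closing, slower-opening temporal profile. The coupling and timing constants below are illustrative placeholders, not the values derived in the paper:

```python
def lid_saccade(eye_pitch_deg, gain=0.8, rest_aperture_deg=10.0):
    """Upper-lid position tracking the vertical rotation of the eye.

    Assumes (for illustration) a linear coupling between eye pitch and
    lid elevation; the paper fits its parameters to data from the
    ophthalmology and psychology literature.
    """
    return rest_aperture_deg + gain * eye_pitch_deg

def blink_closure(t, close_time=0.1, open_time=0.2):
    """Normalized lid closure during one blink: fast close, slower open.

    t is the time in seconds since blink onset; returns 0 (fully open)
    to 1 (fully closed). Piecewise-linear for brevity; the paper
    emphasizes a more careful dynamic temporal profile.
    """
    if t < 0.0:
        return 0.0
    if t < close_time:
        return t / close_time                       # rapid closing phase
    if t < close_time + open_time:
        return 1.0 - (t - close_time) / open_time   # slower reopening
    return 0.0
```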
Article
This paper proposes a gaze correction and animation method for high-resolution, unconstrained portrait images that can be trained without gaze angle and head pose annotations. Common gaze-correction methods usually require annotating training data with precise gaze and head pose information. Solving this problem with an unsupervised method remains an open problem, especially for high-resolution face images in the wild, which are not easy to annotate with gaze and head pose labels. To address this issue, we first create two new portrait datasets: CelebGaze (256 × 256) and the high-resolution CelebHQGaze (512 × 512). Second, we formulate the gaze correction task as an image inpainting problem, addressed using a Gaze Correction Module (GCM) and a Gaze Animation Module (GAM). Moreover, we propose an unsupervised training strategy, Synthesis-As-Training, to learn the correlation between the eye region features and the gaze angle. As a result, we can use the learned latent space for gaze animation via semantic interpolation in this space. To alleviate both the memory and the computational costs in the training and inference stages, we propose a Coarse-to-Fine Module (CFM) integrated with GCM and GAM. Extensive experiments validate the effectiveness of our method for both gaze correction and gaze animation on low- and high-resolution face datasets in the wild, and demonstrate its superiority with respect to the state of the art.