Figure - available from: The Visual Computer
Comparisons against different texture transfer approaches. a Input meme image. b Input user image. c, d The two textured images after applying geometric transfer. e Result of NST [16]. f Result of Glow [25]. g Result of deep image analogy [29]. h Our method, which transfers the expression textures better than e while preserving the user's identity more faithfully than f or g.


Source publication
Article
Full-text available
Memes, usually represented by an image of an exaggeratedly expressive face captioned with short text, are increasingly produced and used online to express people's strong or subtle emotions. Meanwhile, meme-mimicking apps continue to appear, such as the meme filming feature in the WeChat app, which allows users to imitate meme expressions. Motivated by such scenar...

Similar publications

Preprint
Full-text available
In image processing, the problems of separating and reconstructing missing pixels from incomplete digital images have advanced considerably in recent decades. Many empirical algorithms have produced very good results; however, providing a theoretical analysis of their success is not an easy task, especially for inpainting and separating mu...

Citations

... Lample et al. [44] presented a new approach to generating variations of images by changing attribute values, which produces realistic high-resolution images without needing to apply a GAN to the decoder output. Tang et al. [45] proposed a method for expressive style transfer. Liu et al. [46] applied GANs to video-to-video translation. ...
Article
Full-text available
Multi-view face generation from a single image is an essential and challenging problem. Most existing methods require paired images for training. However, collecting and labeling large-scale paired face images incurs high labor and time costs. To address this problem, multi-view face generation from unpaired images is proposed in this paper. To avoid using paired data, the encoder and discriminator are trained so that the encoder learns high-level abstract features of the identity and view of the input image; these low-dimensional data are then fed into the generator, so that a realistic face image can be reconstructed by training the generator and discriminator together. During testing, one-hot vectors representing different views are appended to the identity representation, and the generator maps each of them to high-dimensional data, generating multi-view images while preserving identity features. Furthermore, to reduce the number of labels required, semi-supervised learning is used in the model. Experimental results show that our method produces photo-realistic multi-view face images with a small number of view labels, and offers a useful exploration of face image synthesis from unpaired data and very few labels.
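The view-conditioning step described in the abstract, combining a low-dimensional identity code with a one-hot view label before decoding, can be sketched as below. This is a minimal illustration, not the authors' actual model: the names `one_hot` and `condition_on_view`, the number of views, and the embedding size are all assumed placeholders.

```python
import numpy as np

N_VIEWS = 5    # assumed number of discrete view labels
ID_DIM = 128   # assumed identity-embedding size

def one_hot(view_index: int, n_views: int = N_VIEWS) -> np.ndarray:
    """Encode a discrete view label as a one-hot vector."""
    v = np.zeros(n_views)
    v[view_index] = 1.0
    return v

def condition_on_view(identity_code: np.ndarray, view_index: int) -> np.ndarray:
    """Append the one-hot view label to the identity code, producing the
    low-dimensional input that a generator would map back to an image.
    The generator itself is omitted here."""
    return np.concatenate([identity_code, one_hot(view_index)])

identity_code = np.random.randn(ID_DIM)  # stand-in for the encoder output
z = condition_on_view(identity_code, view_index=2)
print(z.shape)  # (133,) = ID_DIM + N_VIEWS
```

At test time, the same identity code would be paired with each of the `N_VIEWS` one-hot vectors in turn, yielding one generator input per desired view while the identity part stays fixed.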
... However, they focus on correcting the shape of the whole face rather than the nose specifically, and their sparse landmarks and dense pixels are not semantically informative enough to represent varied nose shapes. Recently, Tang et al. [6] introduced dense semantic curve constraints for 3D face reconstruction and correction, which makes the reconstructed mesh better match the face contours in the input image. However, their method mainly works for expressive face regions, such as eyebrows, eyes, and mouth, where the curve features are simple and salient, as shown in the middle row of Fig. 1. ...
... There are three problems to be solved for effectively updating dense 3D-2D curve correspondences: (i) how to determine the 3D nose contour, due to self-occlusion and variations in nose shape and pose, (ii) how to extract a precise 2D nose contour using the non-salient curve features of the boundary of the nose region, and (iii) how to establish accurate correspondences between the 3D and 2D nose contours. To extract 3D contours, Tang et al. [6] used predefined vertex indices on a template mesh as a fixed 3D nose contour, but this method is not flexible for varied nose shapes and poses. Instead, we render the sparsely corrected nose into a depth map, which can naturally form self-occlusion edges. ...
... We heuristically use this edge as the 3D nose contour to update. For 2D contour extraction, Tang et al. [6] applied snakes [7] on a feature map, but the curve features here are not distinctive enough. We produce an enhanced feature map using an RGB-D foreground enhancement method [8], where we render a depth map using the sparsely corrected 3D face mesh. ...
Article
Full-text available
There is a steadily growing range of applications that can benefit from facial reconstruction techniques, leading to increasing demand for high-quality 3D face models. Although the nose is an important expressive part of the human face, it has received less attention than other expressive regions in the face reconstruction literature. When applying existing reconstruction methods to facial images, the reconstructed nose models are often inconsistent with the desired shape and expression. In this paper, we propose a coarse-to-fine 3D nose reconstruction and correction pipeline that builds a nose model from a single image, where 3D-2D nose curve correspondences are adaptively updated and refined. We first correct the reconstruction coarsely using constraints from sparse 3D-2D landmark correspondences, then heuristically update a dense 3D-2D curve correspondence based on the coarsely corrected result. A final refinement step corrects the shape based on the updated dense 3D-2D curve constraints. Experimental results show the advantages of our method for 3D nose reconstruction over existing methods.
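The core of the correspondence-update step above, matching each projected 3D contour point to its nearest detected 2D contour point, can be sketched as follows. This is a hedged illustration under assumptions, not the paper's implementation: the projection and 2D contour extraction are taken as given, and only a simple nearest-neighbor matching is shown.

```python
import numpy as np

def update_correspondences(projected_3d: np.ndarray,
                           contour_2d: np.ndarray) -> np.ndarray:
    """For each projected 3D contour point (shape (M, 2)), return the index
    of its nearest 2D contour point (shape (N, 2))."""
    # Pairwise squared distances via broadcasting, shape (M, N)
    d2 = ((projected_3d[:, None, :] - contour_2d[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

proj = np.array([[0.0, 0.0], [5.0, 5.0]])          # projected 3D contour
curve = np.array([[0.1, -0.1], [4.8, 5.2], [10.0, 0.0]])  # detected 2D contour
print(update_correspondences(proj, curve))  # [0 1]
```

In a coarse-to-fine pipeline of this kind, such correspondences would typically be recomputed after each correction pass, since the projected contour moves as the mesh is refined.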
Chapter
This research uses dispersion curves to propose a novel approach to frame-division problems in computer vision. The framework combines boundary- and region-based division components with a curve-based optimization problem to find the smallest collection of curves. A training algorithm is used to minimize the stated optimization problem. Following the computed equation of motion, the collection of initial curves is propagated toward the best division under the influence of boundary- and region-based pressures, and is also constrained by a regularizing pressure. Changes in topology are handled naturally by the stage-setting architecture. In addition, a connected multi-stage propagation scheme is developed, which enforces the principle of independently excluded propagating curves while improving robustness and convergence speed. The proposed framework is evaluated on three main examples: machine learning, image and guided pattern recognition under poor visibility, and motion detection and control.

Keywords: Shortest possible lines, Computer vision, Novel curve energy, Topology