Figure - available from: Machine Vision and Applications
Comparison of age progression and regression results achieved by GFA-GAN and PFA-GAN for (a) close transitions (i.e., at most two age classes between the input and target age class) and (b) distant transitions (i.e., more than two age classes between the input and target age class). The results are also compared to the ground-truth FG-NET images

Source publication
Article
Full-text available
We propose a novel approach that addresses face aging as an unsupervised image-to-image translation problem. The proposed approach achieves age progression (i.e., future looks) and regression (i.e., previous looks) of face images that belong to a specific age class by translating them to other (subsequent or precedent) age classes. It learns pairwi...

Similar publications

Article
Full-text available
Human skin aging is affected by various biological signaling pathways, microenvironment factors and epigenetic regulations. With the increasing demand for cosmetics and pharmaceuticals to prevent or reverse skin aging year by year, designing multiple-molecule drugs for mitigating skin aging is indispensable. In this study, we developed strategies f...

Citations

... For the CelebFaces Attributes (CelebA) dataset, the SSIM reaches 78.82% and for the CUFS dataset 81.45%, which ensures accurate face synthesis and editing compared with existing methods such as GAN, SuperstarGAN and identity-sensitive GAN (IsGAN) models. Generative adversarial networks learn, extract and superimpose features [12]. Improving the architecture and methodology of training and using GANs serves various tasks such as extracting and classifying distinct details, and superimposing required characteristics on a given image [13], [14]. ...
Article
Full-text available
Face synthesis and editing have received growing attention with the improvement of generative adversarial networks (GANs). The proposed attentional GAN-deep attentional multimodal similarity model (AttnGAN-DAMSM) focuses on generating high-resolution images by removing discriminator components and generating realistic images from textual descriptions. The attention model creates an attention map on the image and automatically retrieves the features to produce the various sub-areas of the image. The DAMSM delivers a fine-grained image-text matching loss to the generative networks. This study first describes text phrases, from which the model generates a photorealistic high-resolution image composed of features with high accuracy. Next, the model fine-tunes the selected features of face images, which is left to the control of the user. The results show that the proposed AttnGAN-DAMSM model delivers performance metrics such as the structural similarity index measure (SSIM), feature similarity index measure (FSIM) and Fréchet inception distance (FID) on the CelebA and CUHK face sketch (CUFS) datasets. For the CelebFaces Attributes (CelebA) dataset, the SSIM reaches 78.82%, and for the CUFS dataset, 81.45%, which ensures accurate face synthesis and editing compared with existing methods such as GAN, SuperstarGAN and identity-sensitive GAN (IsGAN) models.
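As a reference point for the SSIM figures reported above, the metric can be sketched in a few lines. This is a simplified *global* SSIM (one statistic over the whole image, no sliding window), intended only to illustrate the formula; the standard windowed implementation is `skimage.metrics.structural_similarity`, and the function name here is our own:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Simplified global SSIM: luminance, contrast and structure terms
    computed once over the whole image. Standard SSIM averages the same
    statistic over local sliding windows instead."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1.0, and any luminance shift or structural difference pulls the score below 1, which is what the percentage scores in the abstract summarize over a test set.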
... [57] is another lightweight architecture that was designed for benchmarking purposes. This network was also based on the methodology of the aforementioned RetinaNet, but it additionally incorporated Deformable Convolutions [79], SyncBN [59] and iBN [72] to make it more computationally efficient. ...
Article
Full-text available
A study conducted by the World Bank indicated that global annual economic losses from water leakage are estimated at US$14.6 billion. For this reason, locating and repairing water leaks, as well as maintaining water pipelines, is extremely important for the optimization and rationalization of water resources. The basic technique for inspecting water delivery infrastructure is the water audit, but this technique does not provide any information about the location of the water leakage. This paper focuses on this gap, aiming to provide information not only on the location of the water leakage but also on the level of water pipe material degradation due to corrosion before a leak appears. Here, the extent and severity of the evolving defects of water pipes are identified through deep learning models using simulated and real Ground Penetrating Radar (GPR) data. Synthetic GPR images of underground water pipes, with or without leakage and at various stages of corrosion, are generated using the gprMax software. Specifically, the YOLOv5 algorithm is employed for the automatic detection of water pipes and leaks in the underground space, and a conditional Generative Adversarial Network (cGAN) for the investigation of water pipe material degradation. The results reveal that the YOLOv5 algorithm distinguishes the regions of pipes in GPR data and correctly classifies the pipes that present leakage, outperforming the corresponding literature baseline methods. In addition, as shown through extensive simulations on generated GPR data, the proposed cGAN produces high-quality results that help reveal the extent and severity of evolving pipeline defects due to corrosion.
... Thus, the aging pattern between age groups is learned. [Flattened survey-table excerpt:] [104] (2017) CACD, FG-NET, MORPH: TNVP (Temporal Non-Volume Preserving) conversion; Heljakka et al. [102] (2018) CACD, IMDb-Wiki: 2-generator, 2-discriminator reversible image converter; Fang et al. [105] (2020) MORPH, CACD: Triple-GAN model; Pantraki et al. [106] (2021) CACD, UTKFace: GFA-GAN and PFA-GAN (pyramidal face aging GAN); Huang et al. [101] (2021) IMDb-Wiki: PFA-GAN (progressive face aging GAN). Fig. 10. A sequence-based aging GAN model [101]. ...
... Pantraki et al. [106] proposed a new framework for facial age progression and regression, treating aging as a progressive procedure. They evaluated two variants of the proposed framework: GFA-GAN and PFA-GAN (pyramidal face aging GAN). ...
... Moreover, aging transformations between distant age classes are likely more drastic and intense than those between nearby age classes. In [25], face aging was addressed as an unsupervised image-to-image translation problem. In particular, the Pyramid Face Aging GAN (PFA-GAN), which contains a pyramid weight-sharing scheme, was proposed. ...
... The proposed xAI-CAAE framework is trained on a set of images collected from the CACD [15] and UTKFace [7] datasets. This set of images was collected and used to train the face aging approach in [25]. It includes 21,267 face images distributed across seven age classes: 0-10, 11-18, 19-29, 30-39, 40-49, 50-59, and 60+ years old (the oldest person is 80 years old). ...
... It includes 21,267 face images distributed across seven age classes: 0-10, 11-18, 19-29, 30-39, 40-49, 50-59, and 60+ years old (the oldest person is 80 years old). The same split into age classes has been considered in many face aging approaches [25,43]. Approximately the same number of images belongs to each age class. ...
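The seven-class split described in these excerpts is simple enough to sketch directly. The function name below is our own; only the class boundaries come from the text:

```python
def age_class(age: int) -> int:
    """Map an age in years to one of the seven age classes
    (0-10, 11-18, 19-29, 30-39, 40-49, 50-59, 60+) used in the
    excerpts above. Boundaries are inclusive upper bounds."""
    bounds = [10, 18, 29, 39, 49, 59]  # upper bounds of classes 0..5
    for cls, upper in enumerate(bounds):
        if age <= upper:
            return cls
    return 6  # 60+ (the oldest person in the described set is 80)
```

For example, `age_class(18)` falls in class 1 (11-18) while `age_class(19)` falls in class 2 (19-29), matching the boundaries listed in the excerpt.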
Article
Full-text available
This paper deals with Generative Adversarial Networks (GANs) applied to face aging. An explainable face aging framework is proposed that builds on a well-known face aging approach, namely the Conditional Adversarial Autoencoder (CAAE). The proposed framework, namely, xAI-CAAE, couples CAAE with explainable Artificial Intelligence (xAI) methods, such as Saliency maps or Shapley additive explanations, to provide corrective feedback from the discriminator to the generator. xAI-guided training aims to supplement this feedback with explanations that provide a “reason” for the discriminator’s decision. Moreover, Local Interpretable Model-agnostic Explanations (LIME) are leveraged to provide explanations for the face areas that most influence the decision of a pre-trained age classifier. To the best of our knowledge, xAI methods are utilized in the context of face aging for the first time. A thorough qualitative and quantitative evaluation demonstrates that the incorporation of the xAI systems contributed significantly to the generation of more realistic age-progressed and regressed images.
... Age estimation is used to help implement the generation of aging face images [15]. The subsequent Pyramid Face Aging GAN [22] incorporates a pyramid weight-sharing scheme to ensure that face aging changes slightly between adjacent age groups and dramatically between distant age groups. Most existing GAN-based methods [18,23] usually use a pixel-level loss to train the model to preserve identity consistency and background information. ...
Article
Full-text available
Face aging is of great importance for the information forensics and security fields, as well as entertainment-related applications. Although significant progress has been made in this field, the authenticity, age specificity, and identity preservation of generated face images still need further discussion. To better address these issues, a Feature-Guide Conditional Generative Adversarial Network (FG-CGAN) is proposed in this paper, which contains an extra feature-guide module and an age-classifier module. To preserve the identity of the input facial image during generation, the feature-guide module introduces a perceptual loss to minimize the identity difference between the input and output face images of the generator, and an L2 loss to constrain the size of the generated feature map. To make the generated image fall into the target age group, the age-classifier module constructs an age-estimation loss, combined with an L-Softmax loss to make the boundaries between samples of different categories more distinct. Extensive experiments are conducted on the widely used face aging datasets CACD and MORPH. The results show that target aged face images generated by FG-CGAN achieve promising validation confidence for identity preservation. Specifically, the validation confidence levels for age groups 20-30, 30-40, and 40-50 are 95.79%, 95.42%, and 90.77%, respectively, which verifies the effectiveness of the proposed method.
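The identity-preservation objective described in this abstract, a perceptual (feature-space) term plus a pixel-space L2 term, can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: `features` represents a pretrained feature extractor (FG-CGAN uses a trained network), and the weights `w_perc` / `w_l2` are hypothetical, not values from the paper:

```python
import numpy as np

def identity_loss(x, y, features, w_perc=1.0, w_l2=0.01):
    """Sketch of an FG-CGAN-style identity-preservation objective:
    perceptual term = mean squared distance between feature maps,
    pixel term     = mean squared distance between raw images.
    `features` stands in for a pretrained feature extractor; the
    weights are illustrative placeholders."""
    perc = np.mean((features(x) - features(y)) ** 2)  # feature-space distance
    pix = np.mean((x - y) ** 2)                       # pixel-space distance
    return w_perc * perc + w_l2 * pix
```

During training, minimizing the perceptual term pulls the generator's output toward the input's identity features, while the small pixel-level weight keeps background and low-level detail from drifting.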
Article
Face age progression aims to change an individual's face from a provided face image to forecast how that face will look in the future. Face aging is gaining much attention in today's environment, which needs better security and touchless unique identification mechanisms. Over the past decades, researchers have focused on creating face processing algorithms to address the difficulty of producing realistic aged faces for smart system applications. In the literature, the two basic needs of face age progression, aging accuracy and identity preservation, are not thoroughly addressed. Owing to the extraordinary gains in image synthesis made by deep generative methods and their significant influence on a wide variety of practical applications, such as identifying missing persons from childhood images and entertainment, face age progression/regression has reawakened attention. The majority of present techniques concentrate on face age progression and are productive in learning the transition across age groups utilizing paired data, i.e., face images of the same individual at various ages. Motivated by the important success attained by Generative Adversarial Networks (GANs), this paper implements an improved CycleGAN-based intelligent face age progression model. Initially, the standard datasets for face progression are gathered, and the face is detected using the Viola-Jones object detection algorithm. Then, the facial image is pre-processed by median filtering and contrast enhancement techniques. Once the image is pre-processed, the Hyperparameter Tuning CycleGAN (HT-CycleGAN) is adopted for face age progression. As an improvement, the hyperparameters of the CycleGAN are optimized by a modified Galactic Swarm Optimization (GSO), known as Best Fitness-based Galactic Swarm Optimization (BF-GSO).
From the statistical analysis, the similarity score of BF-GSO-HT-CycleGAN is 0.80%, 3.33%, and 2.86% higher than that of cGAN, CycleGAN, and Dubbed FaceGAN, respectively; here, Dubbed FaceGAN is the second-best network. Furthermore, compared to traditional models on distinct standard datasets, the experimental findings show that the suggested technique attains efficiency, accuracy, and flexibility.
Article
High fidelity and controllable manipulation are critical to facial video reconstruction in human digital twins. Current generative adversarial networks (GANs) have achieved impressive performance in realistic high-resolution face generation, motivating several recent works to perform face editing via pretrained GANs. However, existing works suffer from identity loss and semantic entanglement while editing real faces. To tackle these limitations, we propose a framework for controllable facial editing in video reconstruction. First, we train a semantic inversion network to embed the target attribute change into the latent space of GANs. Disentangled semantic manipulation is performed during the semantic inversion by changing only the target attribute while keeping the other, unrelated attributes unchanged. Furthermore, we propose a novel personalized GAN inversion for the real face cropped from videos via retraining the generator of the GAN, which can embed the real face into the latent space and preserve its identity details. Finally, the realistically edited face is fused back into the original video. We use the identity preservation rate and disentanglement rate to evaluate the performance of our controllable face editing. Both qualitative and quantitative evaluations show that our method achieves prominent identity preservation and semantic disentanglement in controllable face editing, outperforming recent state-of-the-art methods.