Figure - available from: Frontiers in Pediatrics
Facial appearance of WBS patients (104 cases). The black bar is used to protect privacy.


Source publication
Article
Full-text available
Background: Williams-Beuren syndrome (WBS) is a rare genetic syndrome with a characteristic “elfin” facial gestalt. The “elfin” facial characteristics include a broad forehead, periorbital puffiness, flat nasal bridge, short upturned nose, wide mouth, thick lips, and pointed chin. Recently, deep convolutional neural networks (CNNs) have been succes...

Similar publications

Article
Full-text available
Williams-Beuren syndrome (WBS) is a genetic disorder associated with the hemizygous deletion of several genes on chromosome 7, encoding 26 proteins. Malfunction of these proteins induces multisystemic failure in an organism. While the biological functions of most of these proteins are more or less established, that of the methyltransferase WBSCR27 remains elusive...

Citations

... A few recent studies have focused on facial recognition for the diagnosis of WBS [10][11][12][13][14][15]. In the past few years, our team has investigated facial dysmorphism in children suffering from WBS and has developed several models of facial recognition based on deep convolutional neural networks (CNNs) to diagnose it [16]. Nevertheless, their accuracy, recall, and F1 score need to be further improved, while their effectiveness in clinical practice needs to be verified. ...
... In 2021, our team developed five models of facial recognition based on deep CNN architectures to diagnose WBS. Of them, the VGG-19 model achieved the best performance, with an accuracy of 92.7% ± 1.3%, precision of 94% ± 5.6%, recall of 81.7% ± 3.6%, F1 score of 87.2% ± 2%, and area under the curve (AUC) of 89.6% ± 1.3% [16]. To further improve the effectiveness of these models, we developed six models of facial recognition to identify patients with WBS in this study. ...
... The ResNet architecture is computationally efficient, as it can obtain accurate results while using relatively few parameters. It was developed in 2015 for image recognition and won the ImageNet Large Scale Visual Recognition Challenge that year [16]. The VGG models achieved the best performance, followed by the ResNet models. ...
Article
Full-text available
Williams-Beuren syndrome (WBS) is a rare genetic disorder characterized by a characteristic facial gestalt, delayed development, and supravalvular aortic stenosis and/or stenosis of the branches of the pulmonary artery. We aim to develop and optimize accurate models of facial recognition to assist in the diagnosis of WBS, and to evaluate their effectiveness by using both five-fold cross-validation and an external test set. We used a total of 954 images from 135 patients with WBS, 124 patients suffering from other genetic disorders, and 183 healthy children. The training set comprised 852 images of 104 WBS cases, 91 cases of other genetic disorders, and 145 healthy children from September 2017 to December 2021 at the Guangdong Provincial People’s Hospital. We constructed six binary classification models of facial recognition for WBS by using EfficientNet-b3, ResNet-50, VGG-16, VGG-16BN, VGG-19, and VGG-19BN. Transfer learning was used to pre-train the models, and each model was modified with a variable cosine learning rate. Each model was first evaluated by using five-fold cross-validation and then assessed on the external test set. The latter contained 102 images of 31 children suffering from WBS, 33 children with other genetic disorders, and 38 healthy children. To compare the capabilities of these models of recognition with those of human experts in terms of identifying cases of WBS, we recruited two pediatricians, a pediatric cardiologist, and a pediatric geneticist to identify the WBS patients based solely on their facial images. We constructed six models of facial recognition for diagnosing WBS using EfficientNet-b3, ResNet-50, VGG-16, VGG-16BN, VGG-19, and VGG-19BN. The model based on VGG-19BN achieved the best performance in terms of five-fold cross-validation, with an accuracy of 93.74% ± 3.18%, precision of 94.93% ± 4.53%, specificity of 96.10% ± 4.30%, and F1 score of 91.65% ± 4.28%, while the VGG-16BN model achieved the highest recall value of 91.63% ± 5.96%.
The VGG-19BN model also achieved the best performance on the external test set, with an accuracy of 95.10%, precision of 100%, recall of 83.87%, specificity of 93.42%, and F1 score of 91.23%. The best performance by human experts on the external test set yielded values of accuracy, precision, recall, specificity, and F1 scores of 77.45%, 60.53%, 77.42%, 83.10%, and 66.67%, respectively. The F1 score of each human expert was lower than those of the EfficientNet-b3 (84.21%), ResNet-50 (74.51%), VGG-16 (85.71%), VGG-16BN (85.71%), VGG-19 (83.02%), and VGG-19BN (91.23%) models. Conclusion: The results showed that facial recognition technology can be used to accurately diagnose patients with WBS. Facial recognition models based on VGG-19BN can play a crucial role in its clinical diagnosis. Their performance can be improved by expanding the size of the training dataset, optimizing the CNN architectures applied, and modifying them with a variable cosine learning rate. What Is Known: • The facial gestalt of WBS, often described as “elfin,” includes a broad forehead, periorbital puffiness, a flat nasal bridge, full cheeks, and a small chin. • Recent studies have demonstrated the potential of deep convolutional neural networks for facial recognition as a diagnostic tool for WBS. What Is New: • This study develops six models of facial recognition, EfficientNet-b3, ResNet-50, VGG-16, VGG-16BN, VGG-19, and VGG-19BN, to improve WBS diagnosis. • The VGG-19BN model achieved the best performance, with an accuracy of 95.10% and specificity of 93.42%. The facial recognition model based on VGG-19BN can play a crucial role in the clinical diagnosis of WBS.
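The "variable cosine learning rate" used to train these models refers to cosine annealing of the learning rate over training. A minimal, framework-independent sketch of that schedule in plain Python (the eta_max, eta_min, and step-count values are illustrative, not taken from the study):

```python
import math

def cosine_lr(step: int, total_steps: int,
              eta_max: float = 1e-3, eta_min: float = 1e-6) -> float:
    """Cosine-annealed learning rate: decays smoothly from eta_max to eta_min."""
    cos_term = 1 + math.cos(math.pi * step / total_steps)
    return eta_min + 0.5 * (eta_max - eta_min) * cos_term

# The rate starts at eta_max at step 0 and reaches eta_min at the final step,
# falling fastest in the middle of training.
schedule = [cosine_lr(t, 100) for t in range(101)]
```

In practice a deep learning framework's built-in scheduler (e.g. a cosine-annealing scheduler) would be attached to the optimizer instead of computing this by hand.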
... The best achieved accuracy was 92.7% ± 1.3%, with an AUROC of 0.896 ± 0.013. All of the DL models in this study performed better than expert human operators in diagnosing WBS (worst DL model accuracy 85.6% vs. best human accuracy 82.1%) [107]. ...
Article
Full-text available
Artificial intelligence (AI), encompassing machine learning (ML) and deep learning (DL), has revolutionized medical research, facilitating advancements in drug discovery and cancer diagnosis. ML identifies patterns in data, while DL employs neural networks for intricate processing. Predictive modeling challenges, such as data labeling, are addressed by transfer learning (TL), which leverages pre-existing models for faster training. TL shows potential in genetic research, improving tasks like gene expression analysis, mutation detection, genetic syndrome recognition, and genotype-phenotype association. This review explores the role of TL in overcoming challenges in mutation detection, genetic syndrome detection, gene expression analysis, and phenotype-genotype association. TL has shown effectiveness in various aspects of genetic research: it enhances the accuracy and efficiency of mutation detection, aiding in the identification of genetic abnormalities, and can improve the diagnostic accuracy of syndrome-related genetic patterns. Moreover, TL plays a crucial role in gene expression analysis, accurately predicting gene expression levels and their interactions, and it enhances phenotype-genotype association studies by leveraging pre-trained models. In conclusion, TL enhances AI efficiency by improving mutation prediction, gene expression analysis, and genetic syndrome detection. Future studies should focus on increasing domain similarities, expanding databases, and incorporating clinical data for better predictions.
... The VGG network achieved the best performance with an accuracy of 79.78%. In 2021, Liu et al. [37] developed five models of facial recognition for Williams syndrome patients by using the VGG16, VGG19, ResNet18, ResNet34, and MobileNet-V2 architectures, respectively. The VGG19 model achieved the best performance, followed by the VGG16 model. ...
Article
Full-text available
Background Noonan syndrome (NS) is a rare genetic disease, and patients who suffer from it exhibit a facial morphology that is characterized by a high forehead, hypertelorism, ptosis, inner epicanthal folds, down-slanting palpebral fissures, a highly arched palate, a round nasal tip, and posteriorly rotated ears. Facial analysis technology has recently been applied to identify many genetic syndromes (GSs). However, few studies have investigated the identification of NS based on the facial features of the subjects. Objectives This study develops advanced models to enhance the accuracy of diagnosis of NS. Methods A total of 1,892 people were enrolled in this study, including 233 patients with NS, 863 patients with other GSs, and 796 healthy children. We took one to 10 frontal photos of each subject to build a dataset, and then applied the multi-task convolutional neural network (MTCNN) for data pre-processing to generate standardized outputs with five crucial facial landmarks. The ImageNet dataset was used to pre-train the network so that it could capture generalizable features and minimize data wastage. We subsequently constructed seven models for facial identification based on the VGG16, VGG19, VGG16-BN, VGG19-BN, ResNet50, MobileNet-V2, and squeeze-and-excitation network (SENet) architectures. The identification performance of seven models was evaluated and compared with that of six physicians. Results All models exhibited a high accuracy, precision, and specificity in recognizing NS patients. The VGG19-BN model delivered the best overall performance, with an accuracy of 93.76%, precision of 91.40%, specificity of 98.73%, and F1 score of 78.34%. The VGG16-BN model achieved the highest AUC value of 0.9787, while all models based on VGG architectures were superior to the others on the whole. The highest scores of six physicians in terms of accuracy, precision, specificity, and the F1 score were 74.00%, 75.00%, 88.33%, and 61.76%, respectively. 
The performance of each model of facial recognition was superior to that of the best physician on all metrics. Conclusion Models of computer-assisted facial recognition can improve the rate of diagnosis of NS. The models based on VGG19-BN and VGG16-BN can play an important role in diagnosing NS in clinical practice.
... In addition, many rare diseases often present a characteristic pattern of facial features called "facial gestalt". With the recent advances in computer vision, the next-generation phenotyping (NGP) approaches that analyze a patient's frontal image have proven capable of diagnosing patients with rare disorders [17][18][19][20][21][22][23][24][25][26]. The Prioritization of Exome Data by Image Analysis (PEDIA) study has demonstrated that integrating facial and clinical feature analysis into variant prioritization significantly improves performance [27]. ...
Article
Full-text available
Genomic variant prioritization is crucial for identifying disease-associated genetic variations. Integrating facial and clinical feature analyses into this process enhances performance. This study demonstrates the integration of facial analysis (GestaltMatcher) and Human Phenotype Ontology analysis (CADA) within VarFish, an open-source variant analysis framework. Challenges related to non-open-source components were addressed by providing an open-source version of GestaltMatcher, facilitating on-premise facial analysis to address data privacy concerns. Performance evaluation on 163 patients recruited from a German multi-center study of rare diseases showed PEDIA’s superior accuracy in variant prioritization compared to individual scores. This study highlights the importance of further benchmarking and future integration of advanced facial analysis approaches aligned with ACMG guidelines to enhance variant classification.
... Doctors can make preliminary judgments on specific diseases by identifying the facial features of patients, which can prompt the subsequent diagnosis and treatment of these diseases. Hereditary diseases, such as Down syndrome [3], Turner syndrome [4], Noonan syndrome [5], Williams-Beuren syndrome [6], Angelman syndrome [7], and Cornelia de Lange syndrome [8], are the main causes of facial change or deformity. Meanwhile, there are some non-genetic diseases with facial manifestations, such as acromegaly [9], autism [10], and Alzheimer's disease [11]. ...
Article
Objective: This study aimed to systematically review the literature on facial recognition technology based on deep learning networks in disease diagnosis over the past ten years to identify the objective basis of this application. Methods: This study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for the literature search and retrieved relevant literature from multiple databases, including PubMed, on November 13, 2023. The search keywords included deep learning convolutional neural networks, facial recognition, and disease recognition. A total of 208 articles on facial recognition technology based on deep learning networks in disease diagnosis over the past ten years were screened, and 22 were selected for analysis. The meta-analysis was conducted using Stata 14.0 software. Results: The study collected 22 articles with a total sample size of 57,539 cases, of which 43,301 were samples with various diseases. The meta-analysis results indicated that the accuracy of deep learning in facial recognition for disease diagnosis was 91.0% [95% CI (87.0%, 95.0%)]. Conclusion: The study results suggested that facial recognition technology based on deep learning networks has high accuracy in disease diagnosis, providing a reference for further development and application of this technology.
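The review pooled per-study accuracies in Stata 14.0. As a rough illustration of the idea behind such pooling, here is a simplified fixed-effect inverse-variance sketch in Python; the function name and the three study values are hypothetical, not the reviewed data:

```python
import math

def pool_proportions(props_and_ns):
    """Fixed-effect inverse-variance pooling of proportions (simplified sketch).

    Each item is (proportion, sample_size). Returns (pooled, lo95, hi95).
    """
    weights, weighted = [], []
    for p, n in props_and_ns:
        var = p * (1 - p) / n          # binomial variance of a proportion
        w = 1.0 / var                  # inverse-variance weight
        weights.append(w)
        weighted.append(w * p)
    pooled = sum(weighted) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical accuracies from three studies (NOT the reviewed data):
pooled, lo, hi = pool_proportions([(0.90, 200), (0.93, 150), (0.88, 300)])
```

A real meta-analysis of proportions would typically transform the proportions (e.g. logit or Freeman-Tukey) and use a random-effects model to account for between-study heterogeneity.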
... To validate our new mouse model, we first characterized its anatomical and behavioral phenotypes and compared these to symptoms observed in patients with Williams syndrome as well as to the previously published mouse models (Table S1). Patients with Williams syndrome display characteristic facial features (Williams et al., 1961; Hammond et al., 2005; Liu et al., 2021), and the previously published complete deletion (CD) mouse model has smaller skulls with reduced mandibles (Segura-Puimedon et al., 2014). Micro-computed tomography (µCT) revealed that our mice also show cranial aberrations, in particular a shorter nasal bone and a flattened parietal bone (Fig. 1C, S2). ...
Preprint
Full-text available
Williams syndrome is a developmental disorder caused by a microdeletion entailing loss of a single copy of 25-27 genes on chromosome 7q11.23. Patients with Williams syndrome suffer from cardiovascular and neuropsychological symptoms. So far, the structural abnormalities of the cardiovascular system in Williams syndrome have been attributed to the loss of a copy of the elastin (ELN) gene. In contrast, the neuropsychological consequences of Williams syndrome, including motor deficits, hypersociability, and cognitive impairments, have been mainly attributed to altered expression of transcription factors like LIMK1, GTF2I and GTF2IRD1, while the potential secondary impact of altered cerebrovascular function has been largely ignored. To study the relation between the mutation underlying Williams syndrome and vascularization of not only the heart but also the brain, we generated a mouse model with a relatively long microdeletion, including the Ncf1 gene to reduce the confounding impact of hypertension. The affected mice had an elongated and tortuous aorta, but unlike in Eln haploinsufficient mice, there were no signs of structural cardiac hypertrophy. Our Williams syndrome mice had similar structural abnormalities in their coronary and brain vessels, showing disorganized extracellular matrices of the vessel walls. Moreover, our mouse model faithfully replicated both cardiovascular and neurological symptoms of Williams syndrome, highlighting that accurate non-invasive evaluation of complex vascular abnormalities is feasible. Altogether, we present evidence for vascular malformations that are similar in heart and brain, suggesting that cardiovascular and neurological symptoms can both be impacted by changes in the vascular structure in patients with Williams syndrome.
... Liu et al. [58] tested five different deep CNN architectures (VGG-16, VGG-19, ResNet-18, ResNet-34, and MobileNet-V2) using transfer learning from ImageNet and fine-tuning the neural networks with 340 images of Williams-Beuren syndrome and control individuals (healthy and with other syndromes). The accuracy obtained by these architectures increased with their number of parameters. ...
Article
Full-text available
Neurodevelopment disorders can result in facial dysmorphisms. Therefore, the analysis of facial images using image processing and machine learning techniques can help construct systems for diagnosing genetic syndromes and neurodevelopmental disorders. The systems offer faster and cost-effective alternatives for genotyping tests, particularly when dealing with large-scale applications. However, there are still challenges to overcome to ensure the accuracy and reliability of computer-aided diagnosis systems. This article presents a systematic review of such initiatives, including 55 articles. The main aspects used to develop these diagnostic systems were discussed, namely datasets - availability, type of image, size, ethnicities and syndromes - types of facial features, techniques used for normalization, dimensionality reduction and classification, deep learning, as well as a discussion related to the main gaps, challenges and opportunities.
... This examination takes a long time, typically ranging from 2 to 3 weeks [6], but in some cases it can take up to 8 weeks, and the costs incurred are significant, so it is difficult for all groups to access it [7]. In addition, doctors sometimes perform a physical examination of specific physical characteristics such as facial features, and geneticists mostly perform such physical examinations at an early stage. ...
... Therefore, computer-assisted development using artificial intelligence can be helpful in the accurate diagnosis of genetic diseases. Such artificial intelligence can increase the efficiency of early diagnosis work and provide valuable information to doctors and patients [7]. ...
... According to several previous studies, research has been carried out on detecting genetic diseases using facial recognition [7], [19], [20]. In a study conducted by Liu et al. in 2021, examining the faces of patients with WBS using a convolutional neural network (CNN) showed a promising accuracy of 92.7% [7]. ...
Article
Full-text available
Genetic diseases vary widely, and practitioners often face the complexity of determining them. Distinguishing one genetic disease from another is difficult without a thorough test on the patient, also known as genetic testing. However, some previous studies have shown that genetic diseases produce unique physical characteristics in sufferers. This makes it possible to detect differences in these physical characteristics to assist doctors in diagnosing people with genetic diseases. In recent years, facial recognition research has been quite active. Researchers continue to develop it across various methods, algorithms, approaches, and databases, with applications in many fields, one of which is medical imagery. Face recognition is one of the options for identifying disease: the condition of a person's face can be said to be a representation of that person's health. Since the accuracy of early detection can be quite good, face recognition is also one of the solutions that can be used to identify various genetic diseases in collaboration with artificial intelligence. This article review focuses on the development of facial recognition in 2-dimensional images, showing that different methods can produce different results and that face recognition can also handle complex genetic disease variations.
... Machine learning is one kind of computer algorithm, but its recent high-speed development has led it to be considered a driver of the fourth industrial revolution (24). Examples include the widely used face recognition technology, which is based on convolutional neural network (CNN) algorithms (25, 26). The reality of self-driving cars is also highly dependent on deep learning algorithms (27). ...
Article
Full-text available
Background Interbody cage subsidence is a common complication after instrumented posterior lumbar fusion surgery, and several previous studies have shown that cage subsidence is related to multiple factors. However, current research has not combined these factors to predict subsidence; there is a lack of an individualized and comprehensive evaluation of the risk of cage subsidence following the surgery. We therefore attempt to identify potential risk factors and develop a risk prediction model that can predict the possibility of subsidence by providing a Cage Subsidence Score (CSS) after surgery, and to evaluate whether machine learning-related techniques can effectively predict the subsidence. Methods This study reviewed 59 patients who underwent posterior lumbar fusion in our hospital from 2014 to 2019. They were divided into a subsidence group and a non-subsidence group according to whether interbody fusion cage subsidence occurred during follow-up. Data were collected on each patient, including age, sex, cage segment, number of fusion segments, preoperative space height, postoperative space height, preoperative L4 lordosis angle, postoperative L4 lordosis angle, preoperative L5 lordosis angle, postoperative PT, postoperative SS, and postoperative PI. Conventional statistical analysis was used to find potential risk factors that can lead to subsidence; the results were then incorporated into stepwise regression and machine learning algorithms, respectively, to build a model that could predict the subsidence. Finally, the diagnostic efficiency of the prediction was verified. Results Univariate analysis showed significant differences in pre−/postoperative intervertebral disc height, postoperative L4 segment lordosis, postoperative PT, and postoperative SS between the subsidence group and the non-subsidence group (p < 0.05).
The CSS was trained by stepwise regression: 2 points for postoperative disc height > 14.68 mm, 3 points for a postoperative L4 segment lordosis angle > 16.91°, and 4 points for postoperative PT > 22.69°. A total score larger than 0.5 indicates the high-risk subsidence group, while a lower score indicates low risk. The score achieved an area under the curve (AUC) of 0.857 and 0.806 in the development and validation sets, respectively. The AUC of the GBM model based on the machine learning algorithm was 0.971 in the training set and 0.889 in the validation set. The AUC of the avNNet model reached 0.931 in the training set and 0.868 in the validation set. Conclusion The machine learning algorithms have advantages on some indicators, and we have preliminarily established a CSS that can predict the risk of postoperative subsidence after lumbar fusion and confirmed the important application prospects of machine learning in solving practical clinical problems.
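The reported CSS thresholds translate directly into a scoring rule. A minimal sketch in Python; the function names and argument format are illustrative, and only the cut-offs and point values come from the abstract:

```python
def cage_subsidence_score(disc_height_mm: float,
                          l4_lordosis_deg: float,
                          pt_deg: float) -> int:
    """Cage Subsidence Score as reported: points accrue per exceeded threshold."""
    score = 0
    if disc_height_mm > 14.68:    # postoperative intervertebral disc height
        score += 2
    if l4_lordosis_deg > 16.91:   # postoperative L4 segment lordosis angle
        score += 3
    if pt_deg > 22.69:            # postoperative pelvic tilt (PT)
        score += 4
    return score

def is_high_risk(score: int) -> bool:
    # As stated, a total score above 0.5 (i.e. any points at all)
    # places the patient in the high-risk subsidence group.
    return score > 0.5
```

Note that with integer point values the 0.5 cut-off effectively separates "no threshold exceeded" from "at least one exceeded".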
... The PSD (power spectral density) features were extracted using the short-time Fourier transform (STFT). The PSD features are correlated with emotions in different bands, such as the theta band (4-7 Hz), alpha band (8-12 Hz), beta band (13-30 Hz), and gamma band (30-47 Hz). The 56 features (14 channels × 4 frequency bands) were used to represent the EEG signals. ...
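The per-band power features described in this snippet can be illustrated with a simplified single-window periodogram rather than a full STFT. A sketch in Python assuming NumPy; the band edges follow the snippet, while the test signal and sampling rate are made up:

```python
import numpy as np

def band_powers(signal, fs, bands=None):
    """Mean spectral power per EEG band from a 1-D signal (simplified sketch)."""
    if bands is None:
        bands = {"theta": (4, 7), "alpha": (8, 12),
                 "beta": (13, 30), "gamma": (30, 47)}
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)  # periodogram estimate
    return {name: psd[(freqs >= lo) & (freqs <= hi)].mean()
            for name, (lo, hi) in bands.items()}

# A pure 10 Hz sine should concentrate its power in the alpha band.
fs = 128                              # hypothetical sampling rate (Hz)
t = np.arange(fs * 4) / fs            # 4 seconds of samples
powers = band_powers(np.sin(2 * np.pi * 10 * t), fs)
```

Applying this per channel and per band to 14 EEG channels yields the 14 × 4 = 56 features mentioned in the snippet; an actual STFT would additionally window the signal over time and average the resulting short-window spectra.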
Article
Full-text available
Computer recognition of human activity is an important area of research in computer vision. Human activity recognition (HAR) involves identifying human activities in real-life contexts and plays an important role in interpersonal interaction. Artificial intelligence usually identifies activities by analyzing data collected using different sources. These can be wearable sensors, MEMS devices embedded in smartphones, cameras, or CCTV systems. As part of HAR, computer vision technology can be applied to the recognition of the emotional state through facial expressions using facial positions such as the nose, eyes, and lips. Human facial expressions change with different health states. Our application is oriented toward the detection of the emotional health of subjects using a self-normalizing neural network (SNN) in cascade with an ensemble layer. We identify the subjects’ emotional states through which the medical staff can derive useful indications of the patient’s state of health.