- Jun-Jun Shen, Qin-Chang Chen, Yu-Lu Huang, Kai Wu, Liu-Cheng Yang, and Shu-Shui Wang.
- Department of Pediatric Surgery, Zhujiang Hospital, Southern Medical University, No. 253, Industrial Avenue Middle, Guangzhou 510282, Guangdong, China.
- Postgrad Med J. 2024 Jul 29.
Background
Williams-Beuren syndrome, Noonan syndrome, and Alagille syndrome are common genetic syndromes (GSs) characterized by distinct facial features, pulmonary stenosis, and delayed growth. In clinical practice, differentiating these three GSs remains a challenge. Facial gestalt serves as a diagnostic clue for recognizing Williams-Beuren syndrome, Noonan syndrome, and Alagille syndrome. Pretrained foundation models (PFMs) can serve as a starting point for small-scale tasks. By pretraining with a foundation model, we propose facial recognition models for identifying these syndromes.

Methods
A total of 3297 facial photos were obtained from 1666 children: those diagnosed with Williams-Beuren syndrome (n = 174), Noonan syndrome (n = 235), or Alagille syndrome (n = 51), and children without GSs (n = 1206). The photos were randomly divided into five subsets, with each syndrome and the non-GS group equally and randomly distributed across the subsets; the ratio of the training set to the test set was 4:1. The ResNet-100 architecture was employed as the backbone model. By pretraining with a foundation model, we constructed two face recognition models: one using the ArcFace loss function and the other the CosFace loss function. We also developed two models with the same architecture and loss functions but without pretraining. The accuracy, precision, recall, and F1 score of each model were evaluated. Finally, we compared the performance of the facial recognition models with that of five pediatricians.

Results
Among the four models, ResNet-100 with a PFM and the CosFace loss function achieved the best accuracy (84.8%). For a given loss function, pretraining with the foundation model significantly improved performance (from 78.5% to 84.5% for ArcFace, and from 79.8% to 84.8% for CosFace). Both with and without the PFM, the CosFace models performed similarly to the ArcFace models (79.8% vs 78.5% without the PFM; 84.8% vs 84.5% with it). Among the five pediatricians, the highest accuracy (70.0%) was achieved by the most senior pediatrician, who had genetics training. The accuracy and F1 scores of the pediatricians were generally lower than those of the models.

Conclusions
A facial recognition-based model has the potential to improve the identification of three common GSs associated with pulmonary stenosis. PFMs might be valuable for building facial recognition screening models.

Key messages
What is already known on this topic:
- Early identification of genetic syndromes (GSs) is crucial for the management and prognosis of children with pulmonary stenosis (PS).
- Facial phenotyping with convolutional neural networks (CNNs) often requires large-scale training data, limiting its usefulness for GSs.
What this study adds:
- We built multi-class face recognition models based on a CNN that accurately identify three common PS-associated GSs.
- ResNet-100 with a pretrained foundation model (PFM) and the CosFace loss function achieved the best accuracy (84.8%).
- Pretraining with the foundation model significantly improved performance, whereas the choice of loss function had minimal impact.
How this study might affect research, practice, or policy:
- A facial recognition-based model has the potential to improve the identification of GSs in children with PS.
- The PFM might be valuable for building facial recognition-based identification models.
© The Author(s) 2024. Published by Oxford University Press on behalf of Fellowship of Postgraduate Medicine. All rights reserved.
Notes
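For reference, the two margin-based softmax losses compared in the Methods are conventionally written as below. These are the standard ArcFace and CosFace formulations (not reproduced from the article); s is the feature scale, m the margin, θ_j the angle between a face embedding and the weight vector of class j, and y_i the true class of sample i.

```latex
% ArcFace: additive angular margin
\mathcal{L}_{\mathrm{ArcFace}} = -\frac{1}{N}\sum_{i=1}^{N}
  \log\frac{e^{s\cos(\theta_{y_i}+m)}}
           {e^{s\cos(\theta_{y_i}+m)}+\sum_{j\neq y_i} e^{s\cos\theta_j}}

% CosFace: additive cosine margin
\mathcal{L}_{\mathrm{CosFace}} = -\frac{1}{N}\sum_{i=1}^{N}
  \log\frac{e^{s(\cos\theta_{y_i}-m)}}
           {e^{s(\cos\theta_{y_i}-m)}+\sum_{j\neq y_i} e^{s\cos\theta_j}}
```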
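A minimal PyTorch-style sketch of how a pretrained backbone can be combined with a CosFace-style margin head for the four-class task (Williams-Beuren, Noonan, Alagille, non-GS). The helper `load_pretrained_resnet100`, the embedding size, and the scale/margin values are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosFaceHead(nn.Module):
    """Large-margin cosine (CosFace) classification head.

    Illustrative re-implementation; the scale s and margin m are common
    defaults, not values reported in the paper.
    """
    def __init__(self, embedding_dim: int, num_classes: int,
                 s: float = 30.0, m: float = 0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embedding_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m = s, m

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between L2-normalised embeddings and class weights.
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        # CosFace subtracts the margin m from the target-class cosine only;
        # ArcFace would instead add m to the target-class angle.
        one_hot = F.one_hot(labels, num_classes=cosine.size(1)).float()
        logits = self.s * (cosine - self.m * one_hot)
        return F.cross_entropy(logits, labels)

# Usage sketch: `backbone` stands in for a ResNet-100 face-embedding network,
# optionally initialised from foundation-model (large-scale face) weights.
# backbone = load_pretrained_resnet100()   # assumed helper, not a real API
# head = CosFaceHead(embedding_dim=512, num_classes=4)
# loss = head(backbone(images), labels)
```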
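The data partitioning described in the Methods (five subsets with the classes equally represented, giving a 4:1 train/test split) reads like stratified five-fold partitioning. Below is one plausible way to reproduce that kind of split with scikit-learn; the toy `photos` and `labels` arrays and the class proportions are illustrative only, and the procedure is an assumption about, not a reproduction of, the authors' split.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Toy stand-ins: a file path and a class name per photo (illustrative only).
photos = np.array([f"photo_{i:04d}.jpg" for i in range(3297)])
labels = np.random.default_rng(0).choice(
    ["WBS", "NS", "ALGS", "non-GS"], size=3297, p=[0.05, 0.07, 0.02, 0.86]
)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(photos, labels)):
    # Each fold preserves the overall class proportions, so every
    # train/test split has the 4:1 ratio described in the Methods.
    train_photos, test_photos = photos[train_idx], photos[test_idx]
    train_labels, test_labels = labels[train_idx], labels[test_idx]
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test photos")
```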
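Finally, the reported evaluation metrics (accuracy, precision, recall, F1) can be computed per model as below. The macro averaging over the four classes is an assumption; the abstract does not state how the per-class metrics were aggregated.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# y_true / y_pred: per-photo ground-truth and predicted class labels (toy data).
y_true = ["WBS", "NS", "non-GS", "ALGS", "non-GS"]
y_pred = ["WBS", "non-GS", "non-GS", "ALGS", "NS"]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} F1={f1:.3f}")
```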