- Yu Hsu, Cheng-Ying Chou, Yu-Cheng Huang, Yu-Chieh Liu, Yong-Long Lin, Zi-Ping Zhong, Jun-Kai Liao, Jun-Ching Lee, Hsin-Yu Chen, Jang-Jaer Lee, and Shyh-Jye Chen.
- Department of Medical Imaging, National Taiwan University Hospital, Taipei, Taiwan; Graduate Institute of Clinical Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan.
- J Formos Med Assoc. 2024 Jul 12.
Background/Purpose: The global incidence of lip and oral cavity cancer continues to rise, necessitating improved early detection methods. This study leverages computer vision and deep learning to enhance the early detection and classification of oral mucosal lesions.

Methods: A dataset initially consisting of 6903 white-light macroscopic images collected from 2006 to 2013 was expanded to over 50,000 images to train the YOLOv7 deep learning model. Lesions were categorized into three referral grades: benign (green), potentially malignant (yellow), and malignant (red), facilitating efficient triage.

Results: The YOLOv7 models, particularly YOLOv7-E6, demonstrated high precision and recall across all lesion categories. The YOLOv7-D6 model excelled at identifying malignant lesions, with notable precision, recall, and F1 scores. Enhancements, including the integration of coordinate attention in the YOLOv7-D6-CA model, significantly improved the accuracy of lesion classification.

Conclusion: The study provides a robust comparison of YOLOv7 model configurations for classifying and triaging oral lesions. The overall results highlight the potential of deep learning models to contribute to the early detection of oral cancers, offering valuable tools for both clinical settings and remote screening applications.

Copyright © 2024 Formosan Medical Association. Published by Elsevier B.V. All rights reserved.
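As a rough illustration of the triage logic described in the Methods, the sketch below maps detector outputs to the three referral grades, with the worst confident finding driving the referral decision. The class names, confidence threshold, and `Detection` structure are hypothetical placeholders, not details taken from the paper.

```python
from dataclasses import dataclass

# Referral grades as described in the abstract: benign (green),
# potentially malignant (yellow), malignant (red).
GRADE_BY_CLASS = {"benign": "green", "potentially_malignant": "yellow", "malignant": "red"}
PRIORITY = {"green": 0, "yellow": 1, "red": 2}

@dataclass
class Detection:
    class_name: str    # hypothetical label emitted by the detector
    confidence: float  # detector confidence score
    bbox: tuple        # (x1, y1, x2, y2) in image coordinates

def triage(detections, min_conf=0.25):
    """Return the highest-priority referral grade among confident detections."""
    grade = None
    for det in detections:
        if det.confidence < min_conf:
            continue
        candidate = GRADE_BY_CLASS.get(det.class_name)
        if candidate and (grade is None or PRIORITY[candidate] > PRIORITY[grade]):
            grade = candidate
    return grade or "green"  # no confident lesion detected -> lowest referral grade

# Example: two lesions found; the more worrisome one determines the grade.
dets = [Detection("benign", 0.80, (10, 10, 50, 50)),
        Detection("potentially_malignant", 0.62, (60, 40, 120, 110))]
print(triage(dets))  # -> "yellow"
```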
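The Results also mention a YOLOv7-D6-CA variant that integrates coordinate attention. Below is a minimal PyTorch sketch of a generic coordinate attention block in the style of Hou et al. (CVPR 2021); it is illustrative only, and the reduction ratio, activation, and placement inside the YOLOv7-D6 backbone are assumptions rather than details reported in the paper.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate attention: factorizes global pooling into two 1-D pools
    so the attention map retains positional information along height and width."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)          # assumed reduction ratio
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # pool over width  -> (N, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # pool over height -> (N, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = self.pool_h(x)                      # (N, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)  # (N, C, W, 1)
        y = torch.cat([x_h, x_w], dim=2)          # (N, C, H+W, 1)
        y = self.act(self.bn1(self.conv1(y)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        y_w = y_w.permute(0, 1, 3, 2)             # back to (N, mid, 1, W)
        a_h = torch.sigmoid(self.conv_h(y_h))     # attention along height
        a_w = torch.sigmoid(self.conv_w(y_w))     # attention along width
        return x * a_h * a_w

# Quick shape check on a dummy feature map.
feat = torch.randn(1, 64, 80, 80)
print(CoordinateAttention(64)(feat).shape)  # torch.Size([1, 64, 80, 80])
```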