- Shahryar Rajai Firouzabadi, Roozbeh Tavanaei, Ida Mohammadi, Alireza Alikhani, Ali Ansari, Mohammadhosein Akhlaghpasand, Bardia Hajikarimloo, Raymund L Yong, and Konstantinos Margetis.
- Functional Neurosurgery Research Center, Shohada Tajrish Comprehensive Neurosurgical Center of Excellence, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
- World Neurosurg. 2025 Feb 4:123742.
Background
Understanding BRAF alterations preoperatively could substantially assist in predicting tumor behavior, enabling more precise prognostication and management strategies. Recent advances in artificial intelligence (AI) have produced effective predictive models. Therefore, for the first time, this study aimed to review the performance of machine learning (ML) and deep learning (DL) models in predicting BRAF alterations in low-grade gliomas (LGGs) using imaging data.

Methods
PubMed/MEDLINE, Embase, and the Cochrane Library were systematically searched for studies published up to June 1, 2024, that evaluated the performance of AI models in predicting BRAF alterations in LGGs using imaging data. Pooled sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) were meta-analyzed.

Results
A total of 6 studies with 951 patients were included in this systematic review. The pooled AUROC of internal validation cohorts for the best-performing models was 84.44%, with models detecting BRAF mutation, BRAF fusion, BRAF fusion versus mutation, and BRAF wild type producing similar AUROCs of 90.75%, 84.59%, 82.33%, and 82%, respectively. The best-performing models had pooled sensitivities of 80.3%, 87.51%, and 74.14% and pooled specificities of 88.57%, 70.41%, and 83.98% for detection of BRAF fusion versus mutation, BRAF fusion, and BRAF mutation, respectively.

Conclusions
AI models may perform relatively well in predicting BRAF alterations in LGGs using imaging data and appear capable of high sensitivities and specificities. However, future studies with larger sample sizes implementing different ML or DL algorithms are required to reduce imprecision.

Copyright © 2025 The Author(s). Published by Elsevier Inc. All rights reserved.