- Han Yang, Hongjian He, and Jianhui Zhong.
- Center for Brain Imaging Science and Technology, Zhejiang University, Hangzhou, China.
- Lancet. 2016 Oct 1; 388 Suppl 1: S36.
Background
Recent studies on MRI-based discrimination of schizophrenia have shown promise. Considering that, in general, pathological changes should display characteristic markers in both brain function and anatomy, machine-learning methods using multimodal MRI features of brain networks might provide a deeper insight into the pathological mechanism of schizophrenia. We aimed to assess the discriminative power of multimodal MRI features and to extract these features of high discriminative information for classification of patients with schizophrenia.

Methods
MRI datasets were collected and distributed by the Mind Research Network (MRN). Multimodal data, including structural and resting-state functional scans, were obtained from 86 volunteers (46 healthy controls and 40 patients with schizophrenia) by the University of New Mexico, USA. During the preprocessing done by the MRN, 28 resting-state networks were first identified by group independent component analysis. Functional network connectivity was then calculated as the Pearson's correlation coefficient between any pair of resting-state network time courses, and 378 between-network connectivities were obtained. Additionally, 32 anatomical features were identified from anatomical data through independent component analysis. In a further analysis, the overall functional and structural features were combined in conjunction with a support vector machine to discriminate patients with schizophrenia from healthy controls. Maximum-uncertainty linear discriminant analysis was introduced to extract highly discriminative features. Informed consent was obtained and research ethics was approved according to institutional guidelines at the University of New Mexico, USA.

Findings
Our results showed that the classifier with combined features of structural and functional MRI data achieved higher accuracy than the single-modal features (accuracy 77·91% vs 72·09%). Furthermore, 74 (86%) of 86 participants were correctly classified with 10% of features selected through feature selection, which indicated that a large proportion of functional connectivity features were redundant for classification. Further discriminative analyses showed the important roles that several brain networks (eg, precuneus, cerebellum) have in discrimination of schizophrenia from multiple perspectives, which might reveal underlying pathological mechanisms.

Interpretation
Here we applied a data-driven method based on multimodal characterisation to develop a classifier and find discriminative features of schizophrenia. The study not only provides further evidence for the so-called disconnection hypothesis of schizophrenia, but also shows that this method is able to extract useful information from neuroimaging data, suggesting its potential ability to identify important biomarkers and improve current diagnosis of schizophrenia.

Funding
National Natural Science Foundation of China (81401473). Data collection by the Mind Research Network was funded by a Center of Biomedical Research Excellence (COBRE) grant (5P20RR021938/P20GM103472) from the National Institutes of Health.

Copyright © 2016 Elsevier Ltd. All rights reserved.
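As a rough illustration of the pipeline described in the Methods (group-ICA network time courses, functional network connectivity computed as pairwise Pearson correlations, concatenation with anatomical ICA features, and a support vector machine classifier), the sketch below shows how such features could be assembled and classified. This is not the authors' code: the placeholder random data, the number of time points, the scikit-learn pipeline, and the 10-fold cross-validation are all assumptions made for illustration only.

```python
# Minimal sketch of a multimodal FNC + structural-ICA + SVM pipeline.
# Placeholder data and library choices (scikit-learn) are assumptions,
# not the authors' implementation.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_networks, n_timepoints, n_anat = 86, 28, 150, 32  # 150 time points is assumed

# Stand-ins for the MRN-derived inputs: subject-wise resting-state network
# time courses from group ICA, and anatomical ICA loadings.
timecourses = rng.standard_normal((n_subjects, n_timepoints, n_networks))
anatomical = rng.standard_normal((n_subjects, n_anat))
labels = np.array([0] * 46 + [1] * 40)  # 0 = healthy control, 1 = schizophrenia

def fnc_features(tc):
    """Upper-triangular Pearson correlations between network time courses."""
    corr = np.corrcoef(tc, rowvar=False)      # (28, 28) correlation matrix
    iu = np.triu_indices(corr.shape[0], k=1)  # indices above the diagonal
    return corr[iu]                           # 28 * 27 / 2 = 378 connectivities

functional = np.vstack([fnc_features(tc) for tc in timecourses])  # (86, 378)
combined = np.hstack([functional, anatomical])                     # (86, 410)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, combined, labels, cv=10).mean()
print(f"Cross-validated accuracy (toy data): {acc:.3f}")
```

With 28 networks, the upper triangle of the correlation matrix yields 28 × 27 / 2 = 378 between-network connectivities, matching the feature count reported in the Methods; adding the 32 anatomical features gives the combined multimodal feature vector passed to the classifier.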