- Meher R Juttukonda, Bryant G Mersereau, Yasheng Chen, Yi Su, Brian G Rubin, Tammie L S Benzinger (Mallinckrodt Institute of Radiology, Washington University, St. Louis, MO 63130, USA; Department of Neurological Surgery, Washington University, St. Louis, MO 63130, USA), David S Lalush, and Hongyu An.
- Joint Department of Biomedical Engineering, The University of North Carolina at Chapel Hill and North Carolina State University, Chapel Hill, NC 27599, USA; Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA.
- Neuroimage. 2015 May 15; 112: 160-168.
Aim: MR-based correction for photon attenuation in PET/MRI remains challenging, particularly for neurological applications that require quantitative data. Existing methods are either insufficiently accurate or limited by the computation time they require. The goal of this study was to develop an MR-based attenuation correction method that accurately separates bone from air and provides continuous-valued attenuation coefficients for bone.

Materials and Methods: PET/MRI and CT datasets were obtained from 98 subjects (mean age ± SD: 66 ± 9.8 years; 57 female) under an IRB-approved protocol with informed consent. Subjects were injected with 352 ± 29 MBq of ¹⁸F-florbetapir tracer, and PET acquisition began either immediately or 50 min after injection. CT images of the head were acquired separately on a PET/CT system. Dual-echo ultrashort echo-time (UTE) images and two-point Dixon images were acquired. Regions of air were segmented by thresholding the voxel-wise multiplicative inverse of the UTE echo 1 image. Regions of bone were segmented by thresholding the R2* image computed from the UTE echo 1 and echo 2 images. Regions of fat and soft tissue were segmented using the fat and water images decomposed from the Dixon acquisition. Air, fat, and soft tissue were assigned linear attenuation coefficients (LACs) of 0, 0.092, and 0.1 cm⁻¹, respectively. LACs for bone were derived from a regression analysis between corresponding R2* and CT values. PET images were reconstructed using the gold-standard CT method and the proposed CAR-RiDR method.

Results: The RiDR segmentation produced a mean Dice coefficient ± SD across subjects of 0.75 ± 0.05 for bone and 0.60 ± 0.08 for air. The CAR model for bone LACs substantially improved the accuracy of estimated CT values (mean error 28.2% ± 3.0) compared with the use of a constant CT value (46.9% ± 5.8; p < 10⁻⁶). Finally, the CAR-RiDR method yielded a low whole-brain mean absolute percent error (MAPE ± SD) in PET reconstructions across subjects of 2.55% ± 0.86. Regional PET errors were also low, ranging from 0.88% to 3.79% across 24 brain ROIs.

Conclusion: We propose an MR-based attenuation correction method (CAR-RiDR) for quantitative neurological PET imaging. The proposed method employs UTE and Dixon images and consists of two novel components: 1) accurate segmentation of air and bone using the inverse of the UTE echo 1 image and the R2* image, respectively, and 2) estimation of continuous LAC values for bone using a regression between R2* and CT Hounsfield units. From our analysis, we conclude that the proposed method closely approaches (<3% error) the gold-standard CT-scaled method in PET reconstruction accuracy.
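The abstract gives only the fixed LACs (0, 0.092, and 0.1 cm⁻¹) and the general pipeline, so the following NumPy sketch of the RiDR segmentation and CAR bone-LAC mapping is purely illustrative: the echo times, the inverse-UTE1 air threshold, the R2* bone cutoff, the linear form of the R2*-to-CT regression, and the HU-to-LAC scaling are all assumptions, not values reported by the authors.

```python
import numpy as np

# Rough sketch of the RiDR segmentation and CAR bone-LAC mapping described in
# the abstract. Thresholds, echo times, and the HU-to-LAC scaling below are
# illustrative assumptions only.

def r2star_map(ute1, ute2, te1=0.07e-3, te2=2.46e-3, eps=1e-6):
    """Voxel-wise R2* (s^-1) from dual-echo UTE magnitudes:
    R2* = ln(S1 / S2) / (TE2 - TE1). The TEs here are placeholders."""
    return np.log((ute1 + eps) / (ute2 + eps)) / (te2 - te1)

def ridr_segment(ute1, r2s, fat, water,
                 inv_ute1_air_thresh=10.0,  # assumes UTE1 scaled so tissue ~ 1
                 r2s_bone_thresh=500.0):    # hypothetical bone cutoff (s^-1)
    """Label voxels as air, bone, fat, or soft tissue.
    Air: high inverse-UTE1 (near-zero signal). Bone: high R2*.
    Fat vs. soft tissue: Dixon fat/water dominance."""
    air = (1.0 / (ute1 + 1e-6)) > inv_ute1_air_thresh
    bone = ~air & (r2s > r2s_bone_thresh)
    rest = ~air & ~bone
    fat_mask = rest & (fat >= water)
    soft_mask = rest & (fat < water)
    return air, bone, fat_mask, soft_mask

def fit_car_regression(r2s, ct_hu, bone_mask):
    """Fit CT value (HU) as a linear function of R2* over bone voxels; the
    abstract states a regression is used but does not give its form."""
    slope, intercept = np.polyfit(r2s[bone_mask], ct_hu[bone_mask], 1)
    return slope, intercept

def build_mu_map(ute1, ute2, fat, water, slope, intercept):
    """Assemble a mu-map (cm^-1): fixed LACs for air, fat, and soft tissue
    from the abstract, continuous regression-based values for bone."""
    r2s = r2star_map(ute1, ute2)
    air, bone, fat_mask, soft_mask = ridr_segment(ute1, r2s, fat, water)
    mu = np.zeros(ute1.shape, dtype=float)        # air -> 0
    mu[fat_mask] = 0.092
    mu[soft_mask] = 0.1
    bone_hu = slope * r2s[bone] + intercept
    mu[bone] = 0.096 * (1.0 + bone_hu / 1000.0)   # crude HU -> LAC stand-in
    return mu
```

In the study itself, the R2*-to-CT regression would be fit on paired MR/CT data and then applied to new subjects; the sketch keeps the fitting and mapping steps side by side only for brevity.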
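The two figures of merit quoted in the Results, the Dice coefficient for the segmentations and the whole-brain mean absolute percent error against the CT-based reconstruction, are standard measures. A minimal NumPy version, with hypothetical inputs (binary masks and co-registered PET volumes), could look like:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def whole_brain_mape(pet_mr_ac, pet_ct_ac, brain_mask):
    """Mean absolute percent error of an MR-AC PET reconstruction relative to
    the CT-AC gold standard, evaluated over a brain mask."""
    ref = pet_ct_ac[brain_mask]
    est = pet_mr_ac[brain_mask]
    return 100.0 * np.mean(np.abs(est - ref) / np.abs(ref))

# Example calls with hypothetical arrays:
# dice_coefficient(mr_bone_mask, ct_bone_mask)
# whole_brain_mape(pet_car_ridr, pet_ct, brain_mask)
```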