Med Phys
-
Multicenter Study · Comparative Study
Multi-site quality and variability analysis of 3D FDG PET segmentations based on phantom and clinical image data.
Radiomics utilizes a large number of image-derived features to quantify tumor characteristics that can in turn be correlated with response and prognosis. Unfortunately, extraction and analysis of such image-based features are subject to measurement variability and bias. The challenge is particularly acute in Positron Emission Tomography (PET), where limited spatial resolution, a high noise level arising from the stochastic nature of the limited raw count data, and the wide variety of reconstruction options confound quantitative feature metrics. Extracted feature quality is also affected by the tumor segmentation methods used to define the regions over which features are calculated, making it challenging to produce consistent radiomics results across institutions that use different segmentation algorithms in their PET image analysis. Understanding each element contributing to these inconsistencies in quantitative image feature and metric generation is paramount for the ultimate utilization of these methods in multi-institutional trials and clinical oncology decision making. ⋯ The analysis results underline the importance of PET scanner reconstruction harmonization and imaging protocol standardization for quantification of lesion volumes. In addition, to enable a distributed multi-site analysis of FDG PET images, harmonization of analysis approaches and operator training, in combination with highly automated segmentation methods, appears advisable. Future work will focus on quantifying the impact of segmentation variation on radiomics system performance.
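The abstract does not specify the segmentation or feature definitions used in the study; as a rough illustration of why segmentation choice propagates into radiomics metrics, the following minimal Python sketch thresholds a synthetic "PET" volume at a fixed fraction of SUVmax (the 41% fraction, voxel size, and synthetic lesion are illustrative assumptions, not the study's method) and derives common first-order features from the resulting mask:

```python
import numpy as np

def segment_fixed_threshold(volume, frac=0.41):
    """Segment lesion voxels at a fixed fraction of SUVmax (a common heuristic)."""
    return volume >= frac * volume.max()

def first_order_features(volume, mask, voxel_volume_ml=0.064):
    """First-order features over a segmented region; voxel size is assumed."""
    vals = volume[mask]
    mtv = mask.sum() * voxel_volume_ml          # metabolic tumor volume (ml)
    return {
        "SUVmax": float(vals.max()),
        "SUVmean": float(vals.mean()),
        "MTV_ml": float(mtv),
        "TLG": float(vals.mean() * mtv),        # total lesion glycolysis
    }

# Synthetic volume: noisy background plus a hot spherical lesion.
rng = np.random.default_rng(0)
vol = rng.normal(1.0, 0.1, (32, 32, 32))
zz, yy, xx = np.mgrid[:32, :32, :32]
lesion = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 <= 5 ** 2
vol[lesion] += 8.0

mask = segment_fixed_threshold(vol)
feats = first_order_features(vol, mask)
```

Changing `frac` (or swapping in a different segmentation algorithm) changes the mask and hence every volume-dependent feature, which is exactly the inter-site variability the study quantifies.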
-
To describe in detail a dataset consisting of serial four-dimensional computed tomography (4DCT) and 4D cone-beam CT (4DCBCT) images acquired during chemoradiotherapy of 20 locally advanced non-small cell lung cancer patients, collected at our institution and shared publicly with the research community. ⋯ Due to its high temporal sampling frequency, redundant (4DCT and 4DCBCT) data at similar time points, oversampled 4DCBCT, and fiducial markers, this dataset can support studies in image-guided and image-guided adaptive radiotherapy, assessment of 4D voxel trajectory variability, and development and validation of new tools for image registration and motion management.
-
Electrical Impedance Tomography (EIT) is an imaging modality used to generate two-dimensional cross-sectional images representing impedance change in the thorax. The impedance of lung tissue changes with the air content of the lungs; hence, EIT can be used to examine regional lung ventilation in patients with abnormal lungs. In lung EIT, electrodes are attached around the circumference of the thorax to inject small alternating currents and measure the resulting voltages. In contrast to X-ray computed tomography (CT), EIT images do not depict a thorax slice of well-defined thickness, but instead visualize a lens-shaped region around the electrode plane, which results from diffuse current propagation in the thorax. Usually, this is considered a drawback, since image interpretation is impeded if 'off-plane' conductivity changes are projected onto the reconstructed two-dimensional image. In this paper we describe an approach that takes advantage of current propagation below and above the electrode plane. The approach enables estimation of the individual conductivity change in each lung lobe from boundary voltage measurements. This could be used to monitor disease progression in patients with obstructive lung diseases, such as chronic obstructive pulmonary disease (COPD) or cystic fibrosis (CF), and to obtain a more comprehensive insight into the pathophysiology of the lung. ⋯ The presented approach enhances common reconstruction methods by providing information about anatomically assignable units and thus facilitates image interpretation, since the impedance change, and thus the ventilation, of each lobe is directly determined in the reconstructions.
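The abstract does not give the reconstruction algorithm, but the general idea of recovering region-wise conductivity change from boundary voltages can be sketched with a standard one-step Tikhonov-regularized linear reconstruction. Everything below is a toy illustration: the sensitivity matrix is random rather than derived from a forward model, and the two-region split stands in for an anatomical lobe labeling.

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_pix = 208, 100                      # e.g. a 16-electrode frame, coarse pixel grid
J = rng.normal(size=(n_meas, n_pix))          # stand-in sensitivity (Jacobian) matrix

# Ground-truth conductivity change: two "lobes" ventilating differently.
lobe_id = np.repeat([0, 1], n_pix // 2)       # crude two-region labeling
true_dsigma = np.where(lobe_id == 0, 0.8, 0.3)

# Simulated boundary-voltage difference with a little measurement noise.
dv = J @ true_dsigma + rng.normal(scale=1e-3, size=n_meas)

# One-step Tikhonov-regularized linear reconstruction:
# minimize ||J @ ds - dv||^2 + lam^2 ||ds||^2.
lam = 1e-2
dsigma_hat = np.linalg.solve(J.T @ J + lam**2 * np.eye(n_pix), J.T @ dv)

# Aggregate the pixel-wise estimate per region, as in lobe-wise ventilation analysis.
lobe_means = np.array([dsigma_hat[lobe_id == k].mean() for k in (0, 1)])
```

In a real EIT system the Jacobian comes from a finite-element forward model of the thorax, and the lobe labels from anatomical prior information; the aggregation step, however, is the same: report one conductivity-change value per anatomically assignable unit.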
-
Automated segmentation of breast and fibroglandular tissue (FGT) is required for various computer-aided applications of breast MRI. Traditional image analysis and computer vision techniques, such as atlas-based methods, template matching, or edge and surface detection, have been applied to solve this task. However, the applicability of these methods is usually limited by the characteristics of the images in the study datasets, whereas breast MRI varies widely with respect to the MRI protocols used, in addition to the variability in breast shapes. All this variability, together with various MRI artifacts, makes it challenging to develop a robust breast and FGT segmentation method using traditional approaches. Therefore, in this study, we investigated the use of a deep-learning approach known as "U-net." ⋯ In conclusion, we applied a deep-learning method, U-net, to segment breast and FGT in MRI in a dataset that includes a variety of MRI protocols and breast densities. Our results showed that U-net-based methods significantly outperformed the existing algorithms and resulted in significantly more accurate breast density computation.
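The abstract reports segmentation accuracy and breast density computation without giving formulas; the two quantities usually involved are the Dice similarity coefficient between a predicted and a reference mask, and percent density as the FGT-to-breast volume ratio. A minimal sketch with toy 2D masks (the masks and shapes are illustrative, not study data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def percent_density(fgt_mask, breast_mask):
    """Breast density as FGT volume over whole-breast volume, in percent."""
    return 100.0 * fgt_mask.sum() / breast_mask.sum()

# Toy masks standing in for a network prediction vs. a reference annotation.
breast = np.zeros((64, 64), bool); breast[16:48, 16:48] = True
fgt_ref = np.zeros_like(breast);   fgt_ref[24:40, 24:40] = True
fgt_pred = np.zeros_like(breast);  fgt_pred[24:40, 26:42] = True  # slightly shifted

d = dice(fgt_pred, fgt_ref)                 # segmentation overlap
pd_ref = percent_density(fgt_ref, breast)   # reference percent density
```

Because percent density divides one segmented volume by another, segmentation errors in either mask propagate directly into the density estimate, which is why better FGT segmentation yields more accurate density computation.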