Doctor of Philosophy, Massachusetts Institute of Technology (2017)
B.S.E., National Taiwan University (2010)
Greg Zaharchuk, Postdoctoral Faculty Sponsor
While sampled or short-frame realizations have demonstrated the potential of deep learning to reduce radiation dose for PET imaging, evidence from truly injected ultra-low-dose studies is lacking. Therefore, we evaluated deep learning enhancement using a significantly reduced injected radiotracer protocol for amyloid PET/MRI. Eighteen participants underwent two separate 18F-florbetaben PET/MRI studies in which either an ultra-low dose (6.64 ± 3.57 MBq, 2.2 ± 1.3% of standard) or a standard dose (300 ± 14 MBq) was injected. The PET counts from the standard-dose list-mode data were also undersampled to approximate an ultra-low-dose session. A pre-trained convolutional neural network was fine-tuned using MR images and either the injected or the sampled ultra-low-dose PET as inputs. Image quality of the enhanced images was evaluated using three metrics (peak signal-to-noise ratio, structural similarity, and root mean square error), as well as the coefficient of variation (CV) for regional standardized uptake value ratios (SUVRs). Mean cerebral uptake was correlated across image types to assess the validity of the sampled realizations. To judge clinical performance, four trained readers scored image quality on a five-point scale (using a 15% non-inferiority limit for the proportion of studies rated 3 or better) and classified cases as amyloid-positive or amyloid-negative. The deep learning-enhanced PET images showed marked improvement on all quality metrics compared with the low-dose images, as well as regional CVs generally similar to those of the standard-dose images. All enhanced images were non-inferior to their standard-dose counterparts.
Accuracy for amyloid status was high (97.2% and 91.7% for images enhanced from injected and sampled ultra-low-dose data, respectively), similar to the intra-reader reproducibility of standard-dose images (98.6%). Deep learning methods can synthesize diagnostic-quality PET images from ultra-low injected dose simultaneous PET/MRI data, demonstrating the general validity of sampled realizations and the potential to reduce the dose significantly for amyloid imaging.
View details for DOI 10.1007/s00259-020-05151-9
View details for PubMedID 33416955
View details for Web of Science ID 000568290500455
View details for Web of Science ID 000568290501578
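As a minimal illustration of the three image-quality metrics used in this and the following studies (PSNR, SSIM, and RMSE), the sketch below computes each with NumPy; note that the single-window SSIM here is a simplification of the sliding-window form used in practice:

```python
import numpy as np

def rmse(x, ref):
    """Root mean square error between an image and its reference."""
    return np.sqrt(np.mean((x - ref) ** 2))

def psnr(x, ref):
    """Peak signal-to-noise ratio in dB, using the reference peak intensity."""
    return 20.0 * np.log10(ref.max() / rmse(x, ref))

def ssim_global(x, ref, data_range, k1=0.01, k2=0.03):
    """Simplified SSIM from global statistics (practical SSIM averages
    the same expression over a sliding Gaussian window)."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, mr = x.mean(), ref.mean()
    cov = np.mean((x - mx) * (ref - mr))
    return ((2 * mx * mr + c1) * (2 * cov + c2)) / (
        (mx ** 2 + mr ** 2 + c1) * (x.var() + ref.var() + c2))
```

Identical images give SSIM of 1 and RMSE of 0; the per-study values reported above come from comparing each synthesized image against its standard-dose reconstruction.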
We aimed to evaluate the performance of deep learning-based generalization of ultra-low-count amyloid PET/MRI enhancement when applied to studies acquired with different scanning hardware and protocols. Eighty simultaneous [18F]florbetaben PET/MRI studies were acquired, split equally between two sites (site 1: Signa PET/MRI, GE Healthcare, 39 participants, 67 ± 8 years, 23 females; site 2: mMR, Siemens Healthineers, 64 ± 11 years, 23 females) with different MRI protocols. Twenty minutes of list-mode PET data (90-110 min post-injection) were reconstructed as ground truth. Ultra-low-count data obtained by undersampling by a factor of 100 (site 1) or from the first minute of PET acquisition (site 2) were reconstructed as ultra-low-dose/ultra-short-time (1% dose and 5% time, respectively) PET images. A deep convolutional neural network was pre-trained with site 1 data and either (A) directly applied or (B) trained further on site 2 data using transfer learning. Networks were also trained from scratch on (C) site 2 data or (D) all data. Certified physicians determined amyloid uptake (+/-) status for accuracy and scored the image quality. The peak signal-to-noise ratio, structural similarity, and root-mean-squared error were calculated between images and their ground-truth counterparts. Mean regional standardized uptake value ratios (SUVRs, reference region: cerebellar cortex) from 37 successful site 2 FreeSurfer segmentations were analyzed. All network-synthesized images showed reduced noise compared with their ultra-low-count reconstructions. Quantitatively, image metrics improved the most using method B, where SUVRs had the least variability from the ground truth and the highest effect size for differentiating between positive and negative images.
Method A images had lower accuracy and image quality than those from the other methods; images synthesized with methods B-D scored similarly to or better than the ground-truth images. Deep learning can successfully produce diagnostic amyloid PET images from short-frame reconstructions. Data bias should be considered when applying pre-trained deep ultra-low-count amyloid PET/MRI networks for generalization.
View details for DOI 10.1007/s00259-020-04897-6
View details for PubMedID 32535655
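The transfer-learning step (method B) can be sketched numerically: freeze the layers learned on site-1 data and re-fit only a small head on site-2 data. Everything below (shapes, learning rate, the linear head) is a hypothetical toy stand-in for the actual deep convolutional network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pre-trained" model: a frozen feature extractor W1 plus a small
# trainable head w2 (hypothetical stand-in for the convolutional network).
W1 = rng.normal(size=(8, 4))            # frozen: learned on site-1 data
w2 = rng.normal(size=4) * 0.1           # head: fine-tuned on site-2 data

def features(x):
    return np.maximum(x @ W1, 0.0)      # ReLU features from the frozen layers

# Synthetic stand-in for site-2 training pairs.
X = rng.normal(size=(64, 8))
y = features(X) @ rng.normal(size=4)    # target depends on the features

F = features(X)
initial_mse = np.mean((F @ w2 - y) ** 2)

# Fine-tune only the head by gradient descent on the MSE loss.
lr = 0.01
for _ in range(500):
    grad = F.T @ (F @ w2 - y) / len(X)  # gradient w.r.t. w2 only
    w2 -= lr * grad

final_mse = np.mean((F @ w2 - y) ** 2)
```

The design choice mirrors the abstract's finding: reusing site-1 features and adapting only part of the model to site-2 data (method B) can outperform both direct application (A) and training from scratch on limited site-2 data (C).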
PURPOSE: Our goal is to use a generative adversarial network (GAN) with feature matching and task-specific perceptual loss to synthesize standard-dose amyloid PET images of high quality, with accurate pathological features, from ultra-low-dose PET images alone. METHODS: 40 PET datasets from 39 participants were acquired with a simultaneous PET/MRI scanner following injection of 330 ± 30 MBq of the amyloid radiotracer 18F-florbetaben. The raw list-mode PET data were reconstructed as the standard-dose ground truth and were randomly undersampled by a factor of 100 to reconstruct 1% low-dose PET scans. A 2D encoder-decoder network was implemented as the generator to synthesize standard-dose images, and a discriminator was used to evaluate them. The two networks contested with each other to achieve high-visual-quality PET from the ultra-low-dose PET. Multi-slice inputs were used to reduce noise by providing the network with 2.5D information. Feature matching was applied to reduce hallucinated structures. A task-specific perceptual loss was designed to maintain the correct pathological features. Image quality was evaluated by peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and root mean square error (RMSE) metrics with and without each of these modules. Two expert radiologists were asked to score image quality on a five-point scale and to identify the amyloid status (positive or negative). RESULTS: With only low-dose PET as input, the proposed method significantly outperformed Chen et al.'s method (the previous best for this task) with the same input (PET-only model) by 1.87 dB in PSNR, 2.04% in SSIM, and 24.75% in RMSE. It also achieved results comparable to Chen et al.'s method using additional MRI inputs (PET-MR model).
Experts' readings showed that the proposed method achieved better overall image quality and maintained pathological features indicating amyloid status better than both the PET-only and PET-MR models proposed by Chen et al. CONCLUSION: Standard-dose amyloid PET images can be synthesized from ultra-low-dose images using a GAN. Adversarial learning, feature matching, and task-specific perceptual loss are essential to ensure image quality and the preservation of pathological features.
View details for DOI 10.1002/mp.13626
View details for PubMedID 31131901
View details for Web of Science ID 000478733401398
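The generator's composite objective described above (adversarial training plus feature matching plus a task-specific perceptual term) can be sketched as follows; the discriminator features, mask, and loss weights are hypothetical placeholders, not the study's actual architecture or coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)

def disc_features(img):
    """Toy stand-in for the discriminator's intermediate feature maps."""
    return np.stack([img.mean(axis=0), img.mean(axis=1)])

def feature_matching_loss(fake, real):
    """L1 distance between discriminator features of fake and real images;
    matching features discourages hallucinated structures."""
    return np.mean(np.abs(disc_features(fake) - disc_features(real)))

def perceptual_loss(fake, real, mask):
    """Task-specific term: weight errors inside a pathology-relevant
    (here, amyloid-uptake) region defined by a binary mask."""
    return np.mean(mask * (fake - real) ** 2)

real = rng.random((16, 16))
fake = real + 0.05 * rng.normal(size=(16, 16))
mask = np.zeros((16, 16))
mask[4:12, 4:12] = 1.0          # hypothetical amyloid-relevant region

# Composite generator loss; the 1.0 / 10.0 weights are illustrative only.
total = 1.0 * feature_matching_loss(fake, real) + 10.0 * perceptual_loss(fake, real, mask)
```

In the full method this total would be added to the usual adversarial loss from the discriminator's real/fake output.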
Purpose To reduce radiotracer requirements for amyloid PET/MRI without sacrificing diagnostic quality by using deep learning methods. Materials and Methods Forty data sets from 39 patients (mean age ± standard deviation [SD], 67 years ± 8), including 16 male patients and 23 female patients (mean age, 66 years ± 6 and 68 years ± 9, respectively), who underwent simultaneous amyloid (fluorine 18 [18F]-florbetaben) PET/MRI examinations were acquired from March 2016 through October 2017 and retrospectively analyzed. One hundredth of the raw list-mode PET data were randomly chosen to simulate a low-dose (1%) acquisition. Convolutional neural networks were implemented with low-dose PET and multiple MR images (PET-plus-MR model) or with low-dose PET alone (PET-only) as inputs to predict full-dose PET images. Quality of the synthesized images was evaluated while Bland-Altman plots assessed the agreement of regional standard uptake value ratios (SUVRs) between image types. Two readers scored image quality on a five-point scale (5 = excellent) and determined amyloid status (positive or negative). Statistical analyses were carried out to assess the difference of image quality metrics and reader agreement and to determine confidence intervals (CIs) for reading results. Results The synthesized images (especially from the PET-plus-MR model) showed marked improvement on all quality metrics compared with the low-dose image. All PET-plus-MR images scored 3 or higher, with proportions of images rated greater than 3 similar to those for the full-dose images (-10% difference [eight of 80 readings], 95% CI: -15%, -5%). Accuracy for amyloid status was high (71 of 80 readings [89%]) and similar to intrareader reproducibility of full-dose images (73 of 80 [91%]). The PET-plus-MR model also had the smallest mean and variance for SUVR difference to full-dose images. Conclusion Simultaneously acquired MRI and ultra-low-dose PET data can be used to synthesize full-dose-like amyloid PET images. 
© RSNA, 2018 Online supplemental material is available for this article.
View details for PubMedID 30526350
View details for DOI 10.1002/jmri.26000
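The low-dose simulation step, randomly keeping one hundredth of the list-mode coincidence events, can be sketched as below (toy event stream; real list-mode entries carry detector-pair, timing, and energy information):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy list-mode data: one entry per detected coincidence event.
events = np.arange(1_000_000)

# Keep each event with probability 1/100 to simulate a 1% dose acquisition.
keep = rng.random(events.size) < 0.01
low_dose_events = events[keep]

fraction = low_dose_events.size / events.size   # ~0.01
```

The retained events would then be reconstructed with the same pipeline as the full data, yielding the noisy 1% low-dose images used as network input.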
To propose an MR-based method for generating continuous-valued head attenuation maps and to assess its accuracy and reproducibility. Demonstrating that novel MR-based photon attenuation correction methods are both accurate and reproducible is essential before using them routinely in research and clinical studies on integrated PET/MR scanners. Continuous-valued linear attenuation coefficient maps ("μ-maps") were generated by combining atlases that provided the prior probability of voxel positions belonging to a certain tissue class (air, soft tissue, or bone) with an MR intensity-based likelihood classifier to produce posterior probability maps of tissue classes. These probabilities were used as weights to generate the μ-maps. The accuracy of this probabilistic atlas-based continuous-valued μ-map ("PAC-map") generation method was assessed by calculating the voxel-wise absolute relative change (RC) between the MR-based and scaled CT-based attenuation-corrected PET images. To assess reproducibility, we performed pair-wise comparisons of the RC values obtained from the PET images reconstructed using the μ-maps generated from data acquired at three time points. The proposed method produced continuous-valued μ-maps that qualitatively reflected the variable anatomy in patients with brain tumors and agreed well with the scaled CT-based μ-maps. The absolute RC comparing the resulting PET volumes was 1.76 ± 2.33%, quantitatively demonstrating that the method is accurate. Additionally, we showed that the method is highly reproducible, the mean RC value for the PET images reconstructed using the μ-maps obtained at the three visits being 0.65 ± 0.95%. Accurate and highly reproducible continuous-valued head μ-maps can be generated from MR data using a probabilistic atlas-based approach.
View details for DOI 10.1007/s00259-016-3489-z
View details for PubMedID 27573639
View details for PubMedCentralID PMC5285302
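The probability-weighted μ-map generation can be illustrated with a toy example: per-voxel posterior tissue-class probabilities weight nominal 511 keV linear attenuation coefficients. The class LAC values below are approximate textbook numbers, not necessarily those used in the study:

```python
import numpy as np

# Hypothetical per-voxel posterior probabilities for three tissue classes
# (air, soft tissue, bone), e.g. from an atlas prior combined with an
# MR-intensity likelihood; each row sums to 1.
posteriors = np.array([
    [0.9, 0.1, 0.0],   # mostly air
    [0.0, 0.8, 0.2],   # soft tissue with some bone
    [0.0, 0.1, 0.9],   # mostly bone
])

# Approximate 511 keV linear attenuation coefficients (cm^-1) per class.
lac = np.array([0.0, 0.096, 0.151])

# Continuous-valued mu-map: probability-weighted mix of the class LACs.
mu_map = posteriors @ lac
```

The weighted mix is what makes the μ-map continuous-valued rather than a hard three-class segmentation.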
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region-of-interest quantification. Additionally, the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
View details for DOI 10.1088/0031-9155/61/18/6668
View details for Web of Science ID 000398071000009
View details for PubMedID 27541810
View details for PubMedCentralID PMC5095621
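A minimal sketch of the kernelized ML-EM idea (the image is represented as x = Kα with K built from anatomical features, and the usual EM update is applied to the coefficients α) might look like the following; the toy 1D system and the Gaussian kernel on pixel position are stand-ins for a real projector and MR-derived feature vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pix, n_det = 20, 30
A = rng.random((n_det, n_pix))            # toy system (projection) matrix

# Kernel matrix K from "anatomical" features: a Gaussian kernel on pixel
# position, standing in for kernels built from MR feature vectors.
pos = np.arange(n_pix, dtype=float)
K = np.exp(-0.5 * (pos[:, None] - pos[None, :]) ** 2 / 2.0 ** 2)
K /= K.sum(axis=1, keepdims=True)         # row-normalize

x_true = np.zeros(n_pix)
x_true[5:12] = 1.0
y = rng.poisson(A @ x_true * 50) / 50.0   # noisy projection data

# Kernelized ML-EM: image x = K @ alpha, multiplicative EM update on alpha.
alpha = np.ones(n_pix)
sens = K.T @ (A.T @ np.ones(n_det))       # sensitivity term
for _ in range(50):
    proj = A @ (K @ alpha)
    alpha *= (K.T @ (A.T @ (y / np.maximum(proj, 1e-12)))) / sens

x_rec = K @ alpha
```

Because the anatomical information enters only through K, the update keeps the plain ML-EM form (and is therefore compatible with ordered subsets), with no segmentation or penalty term required.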
We present an approach for head MR-based attenuation correction (AC) based on the Statistical Parametric Mapping 8 (SPM8) software, which combines segmentation- and atlas-based features to provide a robust technique to generate attenuation maps (μ maps) from MR data in integrated PET/MR scanners. Coregistered anatomic MR and CT images of 15 glioblastoma subjects were used to generate the templates. The MR images from these subjects were first segmented into 6 tissue classes (gray matter, white matter, cerebrospinal fluid, bone, soft tissue, and air), which were then nonrigidly coregistered using a diffeomorphic approach. A similar procedure was used to coregister the anatomic MR data for a new subject to the template. Finally, the CT-like images obtained by applying the inverse transformations were converted to linear attenuation coefficients to be used for AC of PET data. The method was validated on 16 new subjects with brain tumors (n = 12) or mild cognitive impairment (n = 4) who underwent CT and PET/MR scans. The μ maps and corresponding reconstructed PET images were compared with those obtained using the gold standard CT-based approach and the Dixon-based method available on the Biograph mMR scanner. Relative change (RC) images were generated in each case, and voxel- and region-of-interest-based analyses were performed. The leave-one-out cross-validation analysis of the data from the 15 atlas-generation subjects showed small errors in brain linear attenuation coefficients (RC, 1.38% ± 4.52%) compared with the gold standard. Similar results (RC, 1.86% ± 4.06%) were obtained from the analysis of the atlas-validation datasets. The voxel- and region-of-interest-based analysis of the corresponding reconstructed PET images revealed quantification errors of 3.87% ± 5.0% and 2.74% ± 2.28%, respectively. The Dixon-based method performed substantially worse (the mean RC values were 13.0% ± 10.25% and 9.38% ± 4.97%, respectively).
Areas closer to the skull showed the largest improvement. We have presented an SPM8-based approach for deriving the head μ map from MR data to be used for PET AC in integrated PET/MR scanners. Its implementation is straightforward and requires only the morphologic data acquired with a single MR sequence. The method is accurate and robust, combining the strengths of both segmentation- and atlas-based approaches while minimizing their drawbacks.
View details for DOI 10.2967/jnumed.113.136341
View details for Web of Science ID 000344209200013
View details for PubMedID 25278515
View details for PubMedCentralID PMC4246705
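The relative-change (RC) analysis used above for voxel-wise comparison against the CT-based gold standard can be sketched as follows (toy values; real use would first mask to brain voxels):

```python
import numpy as np

def relative_change(pet_test, pet_ref, eps=1e-6):
    """Voxel-wise relative change (%) of a test PET volume vs a reference,
    e.g. MR-based vs CT-based attenuation-corrected PET."""
    return 100.0 * (pet_test - pet_ref) / np.maximum(pet_ref, eps)

# Hypothetical toy voxel values.
ref = np.array([1.0, 2.0, 4.0])
test = np.array([1.02, 1.9, 4.0])

rc = relative_change(test, ref)           # per-voxel RC in percent
mean_abs_rc = np.mean(np.abs(rc))         # summary statistic (mean |RC|)
```

Summaries like mean ± SD of |RC| over brain voxels yield figures comparable to the 1.38% ± 4.52% reported above.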
DNA methylation plays an important role in regulating cell growth and disease development. Methylation profiles are examined by bisulfite conversion; however, the lack of markers for bisulfite conversion efficiency and appropriate internal control genes remains a major challenge. To address these issues, we utilized two bioinformatics approaches, coefficients of variation and resampling tests, to identify probes showing stable methylation levels across several independent microarray datasets. Mass spectrometry validated the consistently high methylation levels of the five probes (N4BP2, EGFL8, CTRB1, TSPAN3, and ZNF690) in 13 human tissue types from 24 cell lines. Linear associations between detected methylation levels and methyl concentrations of DNA samples were further demonstrated in three genes (N4BP2, EGFL8, and CTRB1). To summarize, we identified five genes which may serve as internal controls for methylation studies by analyzing large-scale microarray data, and three of them can be used as markers for evaluating the efficiency of bisulfite conversion.
View details for DOI 10.1038/srep04351
View details for Web of Science ID 000332621500003
View details for PubMedID 24619003
View details for PubMedCentralID PMC3950633
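The coefficient-of-variation screen for stably methylated probes can be sketched as follows (synthetic beta-values and an illustrative threshold, not the study's actual data or cutoff):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical methylation beta-values: rows = probes, cols = samples.
betas = np.vstack([
    rng.normal(0.92, 0.01, size=50),                # stably high probe
    rng.normal(0.50, 0.20, size=50).clip(0, 1),     # variable probe
])

# Coefficient of variation per probe: std / mean across samples.
cv = betas.std(axis=1) / betas.mean(axis=1)

# Select probes whose CV falls below an illustrative stability threshold.
stable = np.where(cv < 0.05)[0]
```

Probes that pass this screen across multiple independent datasets (and survive a resampling test) are the internal-control candidates described above.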
We present a new MRI-based attenuation correction (AC) approach for integrated PET/MRI systems that combines both segmentation- and atlas-based methods by incorporating dual-echo ultra-short echo-time (DUTE) and T1-weighted (T1w) MRI data and a probabilistic atlas. Segmented atlases were constructed from CT training data using a leave-one-out framework and combined with T1w, DUTE, and CT data to train a classifier that computes the probability of air/soft tissue/bone at each voxel. This classifier was applied to segment the MRI of the subject of interest, and attenuation maps (μ-maps) were generated by assigning specific linear attenuation coefficients (LACs) to each tissue class. The μ-maps generated with this "Atlas-T1w-DUTE" approach were compared to those obtained from DUTE data using a previously proposed method. For validation of the segmentation results, segmented CT μ-maps were considered the "silver standard"; segmentation accuracy was assessed qualitatively and quantitatively through calculation of the Dice similarity coefficient (DSC). Relative change (RC) maps between the CT- and MRI-based attenuation-corrected PET volumes were also calculated for a global voxel-wise assessment of the reconstruction results. The μ-maps obtained using the Atlas-T1w-DUTE classifier agreed well with those derived from CT; the mean DSCs for the Atlas-T1w-DUTE-based μ-maps across all subjects were higher than those for the DUTE-based μ-maps; the atlas-based μ-maps also showed a lower percentage of misclassified voxels across all subjects. RC maps from the atlas-based technique also demonstrated improvement in the PET data compared to the DUTE method, both globally and regionally.
View details for PubMedID 24753982
View details for PubMedCentralID PMC3992209
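The Dice similarity coefficient (DSC) used above to score segmentation agreement can be computed as:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```

A DSC of 1 indicates perfect overlap between the MR-derived and CT-derived tissue masks; 0 indicates no overlap.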