Image quality comparison between single energy and dual energy CT protocols for hepatic imaging
Y. Yao, J. M. Ng, A. J. Megibow, N. J. Pelc
Medical Physics, vol. 43, no. 8, pp. 4877-4890, 2016
Purpose: Multi-detector computed tomography (MDCT) enables volumetric scans in a single breath hold and is clinically useful for hepatic imaging. For simple tasks, conventional single energy (SE) computed tomography (CT) images acquired at the optimal tube potential are known to have better quality than dual energy (DE) blended images. However, liver imaging is complex and often requires imaging of both structures containing iodinated contrast media, where atomic number differences are the primary contrast mechanism, and other structures, where density differences are the primary contrast mechanism. Hence it is conceivable that the broad spectrum used in a dual energy acquisition may be an advantage. In this work we compare these two imaging strategies at equal dose in these more complex settings.
Methods: We developed numerical anthropomorphic phantoms to mimic realistic clinical CT scans for medium size and large size patients. MDCT images based on the defined phantoms were simulated using various SE and DE protocols at pre- and post-contrast stages. For SE CT, images from 60 to 140 kVp in 10 kVp steps were considered; for DE CT, both 80/140 and 100/140 kVp scans were simulated and linearly blended at the optimal weights. To make a fair comparison, the mAs of each scan was adjusted to match the reference radiation dose (120 kVp, 200 mAs for medium size patients and 140 kVp, 400 mAs for large size patients). Contrast-to-noise ratio (CNR) of liver against other soft tissues was used to evaluate and compare the SE and DE protocols, and multiple pre- and post-contrast liver-tissue pairs were used to define a composite CNR. To help validate the simulation results, we conducted a small clinical study. Eighty-five 120 kVp images and 81 blended 80/140 kVp images were collected and compared through both quantitative image quality analysis and an observer study.
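The CNR figure of merit above can be sketched as follows. The RMS pooling rule used for the composite CNR here is an illustrative assumption, not necessarily the weighting used in the paper:

```python
import numpy as np

def cnr(mean_roi, mean_bg, noise_std):
    # Contrast-to-noise ratio of one liver-tissue pair.
    return abs(mean_roi - mean_bg) / noise_std

def composite_cnr(pairs):
    # Pool several (mean_roi, mean_bg, noise_std) pairs into one score;
    # RMS pooling is an assumed example of a composite rule.
    vals = [cnr(*p) for p in pairs]
    return float(np.sqrt(np.mean(np.square(vals))))
```

Matching mAs to a reference dose, as in the paper, keeps the noise terms comparable across protocols so that CNR differences reflect the spectra rather than the dose.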
Results: In the simulation study, we found that the CNR of pre-contrast SE images mostly increased with increasing kVp, while for post-contrast imaging 90 kVp or lower yielded higher CNR, depending on the differential iodine concentration of each tissue. Similar trends were seen for the DE blended CNR and the CNR from SE protocols. In the presence of differential iodine concentration (i.e., post-contrast), the CNR curves peak at lower kVps (80–120), with the peak shifted rightward for larger patients. The combined pre- and post-contrast composite CNR study demonstrated that an optimal SE protocol outperforms blended DE images; the optimal tube potential for an SE scan is around 90 kVp for medium size patients and between 90 and 120 kVp for large size patients (although low kVp imaging requires high x-ray tube power to avoid photon starvation). Also, a tin filter added to the high kVp beam is not only beneficial for material decomposition but also improves the CNR of the DE blended images. The dose-adjusted CNR of the clinical images showed the same trend, and radiologists favored the SE scans over blended DE images.
Conclusions: Our simulation showed that an optimized SE protocol produces up to 5% higher CNR for a range of clinical tasks. The clinical study also suggested 120 kVp SE scans have better image quality than blended DE images. Hence, blended DE images do not have a fundamental CNR advantage over optimized SE images.
Multisource inverse-geometry CT. Part II. X-ray source design and prototype
V. B. Neculaes, A. Caiafa, Y. Cao, B. De Man, P. M. Edic, K. Frutschy, S. Gunturi, L. Inzinna, J. Reynolds, M. Vermilyea, D. Wagner, X. Zhang, Y. Zou, N. J. Pelc, B. Lounsberry
Medical Physics, vol. 43, no. 8, pp. 4617-4627, 2016
Purpose: This paper summarizes the development of a high-power distributed x-ray source, or “multisource,” designed for inverse-geometry computed tomography (CT) applications [see B. De Man et al., “Multisource inverse-geometry CT. Part I. System concept and development,” Med. Phys. 43, 4607–4616 (2016)]. The paper presents the evolution of the source architecture, component design (anode, emitter, beam optics, control electronics, high voltage insulator), and experimental validation.
Methods: Dispenser cathode emitters were chosen as electron sources. A modular design was adopted, with eight electron emitters (two rows of four emitters) per module, wherein tungsten targets were brazed onto copper anode blocks—one anode block per module. A specialized ceramic connector provided high voltage standoff capability and cooling oil flow to the anode. A matrix topology and low-noise electronic controls provided switching of the emitters.
Results: Four modules (32 x-ray sources in two rows of 16) have been successfully integrated into a single vacuum vessel and operated on an inverse-geometry computed tomography system. Dispenser cathodes provided high beam current (>1000 mA) in pulse mode, and the electrostatic lenses focused the electron beam to a small optical focal spot (0.5 × 1.4 mm). Controlling the emitter grid voltage allowed the beam current to be varied for each source, providing the ability to modulate beam current across the fan of the x-ray beam, denoted as a virtual bowtie filter. The custom-designed controls achieved x-ray source switching in <1 μs. The cathode-grounded source was operated successfully up to 120 kV.
Conclusions: A high-power, distributed x-ray source for inverse-geometry CT applications was successfully designed, fabricated, and operated. Future embodiments may increase the number of spots and utilize fast readout detectors to further increase the x-ray flux, while staying within the inherent thermal limitations of the stationary target.
Multisource inverse-geometry CT. Part I. System concept and development
B. De Man, J. Uribe, J. Baek, D. Harrison, Z. Yin, R. Longtin, J. Roy, B. Waters, C. Wilson, J. Short, L. Inzinna, J. Reynolds, V. B. Neculaes, K. Frutschy, B. Senzig, N. J. Pelc
Medical Physics, vol. 43, no. 8, pp. 4607-4616, 2016
Purpose: This paper presents an overview of multisource inverse-geometry computed tomography (IGCT) as well as the development of a gantry-based research prototype system. The development of the distributed x-ray source is covered in a companion paper [V. B. Neculaes et al., “Multisource inverse-geometry CT. Part II. X-ray source design and prototype,” Med. Phys. 43, 4617–4627 (2016)]. While progress updates of this development have been presented at conferences and in journal papers, this paper is the first comprehensive overview of the multisource inverse-geometry CT concept and prototype. The authors also provide a review of all previous IGCT-related publications.
Methods: The authors designed and implemented a gantry-based 32-source IGCT scanner with 22 cm field-of-view, 16 cm z-coverage, 1 s rotation time, 1.09 × 1.024 mm detector cell size, as low as 0.4 × 0.8 mm focal spot size, and 80–140 kVp x-ray source voltage. The system is built using commercially available CT components and a custom-made distributed x-ray source. The authors developed dedicated controls, calibrations, and reconstruction algorithms and evaluated the system performance using phantoms and small animals.
Results: The authors performed IGCT system experiments and demonstrated tube current up to 125 mA with up to 32 focal spots. The authors measured a spatial resolution of 13 lp/cm at 5% cutoff. The scatter-to-primary ratio was estimated to be 62% for a 32 cm water phantom at 140 kVp. The authors scanned several phantoms and small animals. The initial images have relatively high noise due to the low x-ray flux levels but minimal artifacts.
Conclusions: IGCT has unique benefits in terms of dose-efficiency and cone-beam artifacts, but comes with challenges in terms of scattered radiation and x-ray flux limits. To the authors’ knowledge, their prototype is the first gantry-based IGCT scanner. The authors summarized the design and implementation of the scanner and presented results with phantoms and small animals.
Use of Synthetic CT to reduce simulation time of complex phantoms and systems
S. E. Divel, W. P. Segars, S. Christensen, M. Wintermark, M. G. Lansberg, N. J. Pelc
CT Meeting 2016, pp. 253-256, 2016
Simulation-based approaches to validate CT scanning methods, in which the exact anatomy and physiology of the phantom and the physical attributes of the system are known, provide a ground truth for quantitatively evaluating different techniques. However, long simulation times of complicated phantoms, especially when modeling many physical aspects (e.g., spectrum, finite detector and source size), hinder the ability to realistically and efficiently evaluate and optimize protocol performance. This work investigated the feasibility of reducing the simulation time of these complex cases by employing the principles of Synthetic CT. Noiseless monoenergetic simulations are performed at two energies and the projections are decomposed into basis materials. These can be used to quickly generate projections at any spectrum and dose level. After determining the optimum energy levels for the initial noiseless monoenergetic scans, the performance of the synthetic simulations was evaluated by comparing the reconstructed Hounsfield Unit (HU) values, reconstructed noise standard deviation, and time required to those of traditional simulations. The HU values of synthetic simulations matched traditional simulations within 2.9 HU (5.4%) in the brain tissue, within 27.6 HU (3.1%) in the iodine-enhanced blood vessels, and within 20.2 HU (1.5%) in the skull. The standard deviation of the synthetic simulation was within 2 to 10 HU. The synthetic processing reduced the execution time by 97.93% for each additional protocol run on the same anatomy.
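The Synthetic CT principle described above can be sketched as follows: decompose two noiseless monoenergetic projections into basis path lengths per ray, then reuse them for any spectrum and dose. The attenuation values, energies, and fluence level below are illustrative placeholders, not tabulated data or the paper's settings:

```python
import numpy as np

# Illustrative basis attenuation values (cm^-1); not tabulated data.
MU_W = {60: 0.206, 80: 0.184, 100: 0.171}   # water-like basis
MU_B = {60: 0.605, 80: 0.428, 100: 0.356}   # bone-like basis
E1, E2 = 60, 100                            # monoenergetic simulation energies

def decompose(p1, p2):
    # Per-ray 2x2 solve of p_Ei = MU_W[Ei]*a_w + MU_B[Ei]*a_b.
    M = np.array([[MU_W[E1], MU_B[E1]],
                  [MU_W[E2], MU_B[E2]]])
    a = np.linalg.solve(M, np.stack([np.asarray(p1), np.asarray(p2)]))
    return a[0], a[1]   # basis path lengths a_w, a_b (cm)

def synthesize(a_w, a_b, spectrum, n0=1e5, rng=None):
    # Projection for any spectrum {keV: relative fluence} from the basis
    # sinograms; optional Poisson sampling sets the dose level.
    total = sum(spectrum.values())
    i = sum(n0 * (f / total) * np.exp(-(MU_W[e] * a_w + MU_B[e] * a_b))
            for e, f in spectrum.items())
    if rng is not None:
        i = rng.poisson(i)
    return -np.log(i / n0)
```

The expensive anatomy-dependent work happens once, in the two monoenergetic runs; every additional protocol is then a cheap re-weighting of the basis sinograms.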
Pixel size tradeoffs for CdTe spectral photon counting detectors
S. S. Hsieh, N. J. Pelc
CT Meeting 2016, pp. 387-390, 2016
Energy discriminating photon counting detectors, which measure the energy of individual photons incident on the detector, are promising components for next-generation CT scanners. The most common substrate material in research prototypes today is CdTe or CdZnTe (CZT), popular for its high atomic number and absorption. However, these detectors face tradeoffs. Smaller pixels are desirable to enable fast counting rates and minimize count rate loss. However, smaller pixels also increase the deleterious effects of charge sharing. We explore these tradeoffs and compare different pixel sizes against an ideal photon counting detector that does not suffer from charge sharing.
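The competing effects described above can be illustrated with toy models. The nonparalyzable dead-time formula is a standard count-loss model; the charge-sharing border width is an assumed parameter, not a measured CdTe property:

```python
def recorded_fraction(flux_per_mm2, pixel_mm, deadtime_s):
    # Nonparalyzable dead-time model: smaller pixels intercept fewer
    # photons each, so they lose a smaller fraction of counts.
    rate = flux_per_mm2 * pixel_mm ** 2        # true count rate per pixel
    return 1.0 / (1.0 + rate * deadtime_s)

def charge_sharing_fraction(pixel_mm, border_mm=0.05):
    # Toy geometric model: events landing within an assumed border width
    # of a pixel edge are shared; the fraction grows as pixels shrink.
    inner = max(pixel_mm - 2.0 * border_mm, 0.0)
    return 1.0 - (inner / pixel_mm) ** 2
```

Running both models over a range of pixel sizes exhibits the tradeoff in the abstract: count-rate performance improves as pixels shrink while spectral distortion from charge sharing worsens.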
Lossy compression of projection data from photon counting detectors
P. Shunhavanich, N. J. Pelc
CT Meeting 2016, pp. 467-470, 2016
Photon counting x-ray detectors (PCXDs) are being considered for adoption in clinical settings due to their advantages of improved tissue characterization, reduced noise, and lower dose. The benefit of having data in multiple energy bins in turn puts a burden on the bandwidth of the slip ring and data storage subsystems, through which the projection data samples must be transmitted in real time. The problem is further amplified by PCXDs’ increased number of detector channels, leading to a bandwidth bottleneck. In this work, we propose a lossy compression method for raw CT projection data from PCXDs, which includes a step of quantizing prediction residuals prior to encoding. From our modeled prediction error distribution, the quantization level is chosen such that the ratio of the quantization error variance to the quantum noise variance equals 1% or 2%. Huffman and Golomb codes are applied. Using three simulated phantoms, a compression ratio of 3.1:1 with RMSE equal to 1.15% of the quantum noise standard deviation, and a compression ratio of 3.4:1 with RMSE equal to 2.85% of the quantum noise standard deviation, are achieved for the 1% and 2% quantization error variance settings, respectively. From the initial simulation results, the proposed algorithm shows good control over the compression and the quality of the reconstructed image.
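The residual-quantization step can be sketched as follows. A uniform quantizer with step q has error variance of about q²/12, which gives a closed form for the step that hits a target variance ratio; the Golomb-Rice bit count below is an illustrative stand-in for the paper's Huffman and Golomb coders:

```python
import numpy as np

def quant_step(noise_std, ratio=0.01):
    # Uniform quantizer: error variance ~ q^2/12; pick q so that
    # (q^2/12) / noise_variance == ratio (1% or 2% in the paper).
    return np.sqrt(12.0 * ratio) * noise_std

def quantize(residuals, q):
    # Map prediction residuals to integer indices for entropy coding.
    return np.round(residuals / q).astype(np.int64)

def dequantize(indices, q):
    return indices * q

def rice_bits(indices, k=2):
    # Bit cost of a Golomb-Rice code (M = 2^k) after zigzag mapping of
    # signed indices; a simple stand-in for Huffman/Golomb encoding.
    u = np.where(indices >= 0, 2 * indices, -2 * indices - 1)
    return int(np.sum((u >> k) + 1 + k))
```

Because the step is tied to the quantum noise level, the inserted quantization error stays a fixed, small fraction of the noise already present in the data.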
Comparison of weighted energy bin vs. weighted basis material CT images
P. L. Rajbhandary, N. J. Pelc
CT Meeting 2016, pp. 327-329, 2016
Spectral imaging systems need to be able to produce "conventional" looking images, and it has been shown that systems with energy discriminating detectors can achieve higher CNR than conventional systems through optimal weighting. Combining measured data in energy bins (EBs) and combining basis material images have both been proposed previously, but there are no studies systematically comparing the two methods. In this paper, we evaluate the two methods for systems with ideal photon counting detectors, using CNR and beam hardening (BH) artifact as metrics. For both a linear comb-stick spectrum with one delta function per EB and a 120 kVp polychromatic spectrum, the difference in optimal CNR between the two methods for the studied phantom is within 1%. For the polychromatic spectrum, beam hardening artifacts are noticeable in EB weighted images (BH artifact of 3.8% for 8 EBs and 6.9% for 2 EBs), while weighted basis material images are free of such artifacts.
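Optimal energy-bin weighting can be sketched for independent Poisson bin counts. The matched-filter weights below are the standard result for this noise model, offered as an illustration rather than code from the paper:

```python
import numpy as np

def bin_cnr(w, mean_a, mean_b):
    # CNR of a weighted sum of independent Poisson bin counts;
    # per-bin variance taken as the average of the two means.
    var = 0.5 * (mean_a + mean_b)
    signal = np.dot(w, mean_a - mean_b)
    noise = np.sqrt(np.dot(w * w, var))
    return abs(signal) / noise

def optimal_weights(mean_a, mean_b):
    # Matched-filter result: w_i proportional to contrast / variance.
    w = (mean_a - mean_b) / (0.5 * (mean_a + mean_b))
    return w / np.abs(w).max()
```

By the Cauchy-Schwarz inequality these weights maximize CNR among all linear bin combinations, so any other weighting (including uniform summation, i.e., a plain counting image) can only do as well or worse.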
Improving pulse detection in multibin photon-counting detectors
S. S. Hsieh, N. J. Pelc
Journal of Medical Imaging, vol. 3, no. 2, pp. 023505, 2016
Energy-discriminating, photon-counting (EDPC) detectors are attractive for their potential for improved detective quantum efficiency and for their spectral imaging capabilities. However, at high count rates, counts are lost, the detected spectrum is distorted, and the advantages of EDPC detectors disappear. Existing EDPC detectors identify counts by analyzing the signal with a bank of comparators. We explored alternative methods for pulse detection for multibin EDPC detectors that could improve performance at high count rates. The detector signal was simulated in a Monte Carlo fashion assuming a bipolar shape and analyzed using several methods, including the conventional bank of comparators. For example, one method recorded the peak energy of the pulse along with the width (temporal extent) of the pulse. The Cramer–Rao lower bound of the variance of basis material estimates was numerically found for each method. At high count rates, the variance in water material (bone canceled) measurements could be reduced by as much as an order of magnitude. Improvements in virtual monoenergetic images were modest. We conclude that stochastic noise in spectral imaging tasks could be reduced if alternative methods for pulse detection were utilized.
A limit on dose reduction possible with CT reconstruction algorithms without prior knowledge of the scan subject
S. S. Hsieh, D. A. Chesler, D. Fleischmann, N. J. Pelc
Medical Physics, vol. 43, no. 3, pp. 1361-1368, 2016
Purpose: To find an upper bound on the maximum dose reduction possible, for any reconstruction algorithm, analytic or iterative, that results from the inclusion of the data statistics. The authors do not analyze noise reduction possible from prior knowledge or assumptions about the object.
Methods: The authors examined the task of estimating the density of a circular lesion in a cross section. Raw data were simulated by forward projection of existing images and numerical phantoms. To assess an upper bound on the achievable dose reduction by any algorithm, the authors assume that both the background and the shape of the lesion are completely known. Under these conditions, the best possible estimate of the density can be determined by solving a weighted least squares problem directly in the raw data domain. Any possible reconstruction algorithm that does not use prior knowledge or make assumptions about the object, including filtered back projection (FBP) or iterative reconstruction methods with this constraint, must be no better than this least squares solution. The authors simulated 10 000 sets of noisy data and compared the variance in density from the least squares solution with those from FBP. Density was estimated from FBP images using either averaging within an ROI or streak-adaptive averaging, which has better noise performance.
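For a single unknown density multiplying a known lesion projection, the weighted least squares estimate described above has a closed form; a minimal sketch, with inverse-variance ray weights:

```python
import numpy as np

def wls_density(y, a, b, var):
    # Best linear unbiased estimate of d in y = d*a + b + noise, where
    # a is the forward projection of the known lesion shape, b the known
    # background projection, and var the per-ray noise variance.
    w = 1.0 / var
    den = np.sum(w * a * a)
    d_hat = np.sum(w * a * (y - b)) / den
    return d_hat, 1.0 / den   # estimate and its variance
```

The returned variance, 1/(aᵀWa), is the benchmark against which the FBP-based estimators are compared: no algorithm without prior knowledge can beat it.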
Results: The bound on the possible dose reduction depends on the degree to which the observer can read through the possibly streaky noise. For the described low contrast detection task with the signal shape and background known exactly, the average dose reduction possible compared to FBP with streak-adaptive averaging was 42% and it was 64% if only the ROI average is used with FBP. The exact amount of dose reduction also depends on the background anatomy, with statistically inhomogeneous backgrounds showing greater benefits.
Conclusions: The dose reductions from new, statistical reconstruction methods can be bounded. Larger dose reductions in the density estimation task studied here are only possible with the introduction of prior knowledge, which can introduce bias.
Development of a realistic, dynamic digital brain phantom for CT Perfusion validation
S. E. Divel, W. P. Segars, S. Christensen, M. Wintermark, M. G. Lansberg, N. J. Pelc
SPIE Medical Imaging 2016: Physics of Medical Imaging, vol. 9783, pp. 97830Y, 2016
Physicians rely on CT Perfusion (CTP) images and quantitative image data, including cerebral blood flow, cerebral blood volume, and bolus arrival delay, to diagnose and treat stroke patients. However, the quantification of these metrics may vary depending on the computational method used. Therefore, we have developed a dynamic and realistic digital brain phantom upon which CTP scans can be simulated based on a set of ground truth scenarios. Building upon the previously developed 4D extended cardiac-torso (XCAT) phantom containing a highly detailed brain model, this work consisted of expanding the intricate vasculature by semi-automatically segmenting existing MRA data and fitting nonuniform rational B-spline surfaces to the new vessels. Using time attenuation curves input by the user as reference, the contrast enhancement in the vessels changes dynamically. At each time point, the iodine concentration in the arteries and veins is calculated from the curves and the material composition of the blood changes to reflect the expected values. CatSim, a CT system simulator, generates simulated data sets of this dynamic digital phantom which can be further analyzed to validate CTP studies and post-processing methods. The development of this dynamic and realistic digital phantom provides a valuable resource with which current uncertainties and controversies surrounding the quantitative computations generated from CTP data can be examined and resolved.
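The conversion from a user-supplied time attenuation curve to an iodine concentration at each time point might be sketched as follows; the HU-per-concentration factor here is an assumed illustrative value, not the phantom's calibration:

```python
import numpy as np

def iodine_at(t, tac_times, tac_hu, hu_per_mg_ml=25.0):
    # Interpolate the time attenuation curve (HU enhancement vs seconds)
    # and convert enhancement to an iodine concentration (mg/ml).
    # hu_per_mg_ml is an assumed conversion factor for illustration.
    enhancement = np.interp(t, tac_times, tac_hu)
    return enhancement / hu_per_mg_ml
```

At each simulated time point, a value like this would drive the material composition of the blood in the phantom before the CatSim projection is generated.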
Lossless compression of projection data from photon counting detectors
P. Shunhavanich, N. J. Pelc
SPIE Medical Imaging 2016: Physics of Medical Imaging, vol. 9783, pp. 97831J, 2016
With many attractive attributes, photon counting detectors with many energy bins are being considered for clinical CT systems. In practice, a large amount of projection data acquired for multiple energy bins must be transferred in real time through slip rings and data storage subsystems, causing a bandwidth bottleneck. The higher resolution of these detectors and the need for faster acquisition contribute further to this issue. In this work, we introduce a new approach to lossless compression, specifically for projection data from photon counting detectors, that utilizes the dependencies in the multi-energy data. The proposed predictor estimates the value of a projection data sample as a weighted average of its neighboring samples and an approximation from other energy bins, and the prediction residuals are then encoded. Context modeling using three or four quantized local gradients is also employed to detect edge characteristics of the data. Using three simulated phantoms, including a head phantom, compression ratios of 2.3:1 to 2.4:1 were achieved. The proposed predictor using zero, three, and four gradient contexts was compared to JPEG-LS and the ideal predictor (noiseless projection data). Among the proposed predictors, the three-gradient context is preferred, with a compression ratio from Golomb coding 7% higher than JPEG-LS and only 3% lower than the ideal predictor. In encoder efficiency, the Golomb code with the proposed three-gradient contexts achieves higher compression than block floating point. We also propose a lossy compression scheme, which quantizes the prediction residuals with scalar uniform quantization using quantization boundaries that limit the ratio of quantization error variance to quantum noise variance. Applying the proposed predictor with three-gradient context, the lossy compression achieved a compression ratio of 3.3:1 but introduced error with a standard deviation of 2.1% of that of quantum noise in reconstructed images. From the initial simulation results, the proposed algorithm shows good control over the bits needed to represent multienergy projection data.
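The prediction and context-modeling steps might be sketched as follows; the blend weight and the gradient thresholds are assumed illustrative values, not the paper's tuned parameters:

```python
def predict(left, up, other_bin, alpha=0.5):
    # Blend spatial neighbors with a cross-bin estimate; alpha is an
    # assumed blend weight for illustration only.
    return alpha * 0.5 * (left + up) + (1.0 - alpha) * other_bin

def gradient_context(left, up, upleft, t=(2, 8)):
    # Quantize two local gradients into a small context index, in the
    # spirit of JPEG-LS / LOCO-I context modeling; thresholds t assumed.
    def q(g):
        if g == 0:
            return 0
        s = 1 if g > 0 else -1
        m = abs(g)
        if m < t[0]:
            return s
        if m < t[1]:
            return 2 * s
        return 3 * s
    return (q(left - upleft) + 3) * 7 + (q(upleft - up) + 3)
```

Residuals from the predictor are then entropy coded per context, so that smooth regions and edges each get code parameters matched to their residual statistics.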