Method for decreasing CT simulation time of complex phantoms and systems through separation of material specific projection data MEDICAL IMAGING 2017: PHYSICS OF MEDICAL IMAGING
Divel, S. E., Christensen, S., Wintermark, M., Lansberg, M. G., Pelc, N. J.
2017; 1013259

Abstract
Computer simulation is a powerful tool in CT; however, long simulation times of complex phantoms and systems, especially when modeling many physical aspects (e.g., spectrum, finite detector and source size), hinder the ability to realistically and efficiently evaluate and optimize CT techniques. Long simulation times primarily result from tracing hundreds of line integrals through each of the hundreds of geometrical shapes defined within the phantom. However, when the goal is to perform dynamic simulations or test many scan protocols using a particular phantom, traditional simulation methods inefficiently and repeatedly calculate line integrals through the same set of structures even though only a few parameters change in each new case. In this work, we have developed a new simulation framework that overcomes such inefficiencies by dividing the phantom into material-specific regions with the same time-attenuation profiles, acquiring and storing monoenergetic projections of the regions, and subsequently scaling and combining the projections to create equivalent polyenergetic sinograms. The simulation framework is especially efficient for the validation and optimization of CT perfusion, which requires analysis of many stroke cases and testing of hundreds of scan protocols on a realistic and complex numerical brain phantom. Using this updated framework to conduct a 31-time-point simulation with 80 mm of z-coverage of a brain phantom on two 16-core Linux servers, we reduced the simulation time from 62 hours to under 2.6 hours, a 95% reduction.
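The scale-and-combine step lends itself to a compact implementation. The sketch below is our illustration, not the authors' code: it assumes the per-material path-length sinograms have already been traced and cached, and reweights them under an arbitrary spectrum to form the equivalent polyenergetic sinogram (all names are hypothetical).

```python
import numpy as np

def polyenergetic_sinogram(material_lines, mu, spectrum, energies):
    """Combine cached material-specific line integrals into a polyenergetic
    sinogram, so no ray tracing is repeated when the spectrum changes.

    material_lines: dict of material name -> path-length sinogram (cm),
                    traced once per phantom region and reused.
    mu:             dict of material name -> attenuation mu(E) in 1/cm,
                    sampled at `energies`.
    spectrum:       detected photons per energy bin (detector response
                    folded in, for simplicity of this sketch).
    energies:       energy bin centers (keV).
    """
    spectrum = np.asarray(spectrum, dtype=float)
    detected = np.zeros(next(iter(material_lines.values())).shape)
    for i in range(len(energies)):
        # Total attenuation at this energy: weighted sum of cached lengths.
        line_integral = sum(mu[m][i] * L for m, L in material_lines.items())
        detected += spectrum[i] * np.exp(-line_integral)
    return -np.log(detected / spectrum.sum())
```

For a dynamic (perfusion) study, only the weighting of the regions whose time-attenuation profiles change needs updating at each time point, while the cached line integrals stay fixed.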

DOI: 10.1117/12.2254076

Improvements in low contrast detectability with iterative reconstruction and the effect of slice thickness MEDICAL IMAGING 2017: PHYSICS OF MEDICAL IMAGING
Hsieh, S. S., Pelc, N. J.
2017; 1013253

Abstract
Iterative reconstruction has become a popular route for dose reduction in CT scans. One method for assessing the dose reduction of iterative reconstruction is to use a low contrast detectability phantom. The apparent improvement in detectability can be very large on these phantoms, with many studies showing dose reduction in excess of 50%. In this work, we show that much of the advantage of iterative reconstruction in this context can be explained by differences in slice thickness. After adjusting the effective reconstruction kernel by blurring filtered backprojection images to match the shape of the noise power spectrum of iterative reconstruction, we produce thick slices and compare the two reconstruction algorithms. The remaining improvement from iterative reconstruction, at least in scans with relatively uniform statistics in the raw data, is significantly reduced. Hence, the effective slice thickness in iterative reconstruction may be larger than that of filtered backprojection, explaining some of the improvement in image quality.
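The kernel-matching step can be sketched as a frequency-domain filter applied to the FBP images. This is a minimal illustration under our own assumptions (2D NPS estimates on the image FFT grid; names are hypothetical), not the authors' code:

```python
import numpy as np

def match_nps(fbp_image, nps_fbp, nps_ir, eps=1e-12):
    """Blur an FBP image so its noise power spectrum matches that of IR.

    The filter |H(f)| = sqrt(NPS_IR / NPS_FBP) reshapes the FBP noise
    texture; any detectability gap that remains afterwards is attributable
    to something other than the in-plane noise spectrum.
    """
    H = np.sqrt(nps_ir / np.maximum(nps_fbp, eps))
    return np.real(np.fft.ifft2(np.fft.fft2(fbp_image) * H))

def thick_slices(stack, n):
    """Average groups of n adjacent thin slices into thick slices
    (assumes the slice count is divisible by n)."""
    return stack.reshape(-1, n, *stack.shape[1:]).mean(axis=1)
```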

DOI: 10.1117/12.2253937

Sensitivity analysis of pulse pileup model parameter in photon counting detectors MEDICAL IMAGING 2017: PHYSICS OF MEDICAL IMAGING
Shunhavanich, P., Pelc, N. J.
2017; 101323M

Abstract
Photon counting detectors (PCDs) may provide several benefits over energy-integrating detectors (EIDs), including spectral information for tissue characterization and the elimination of electronic noise. PCDs, however, suffer from pulse pileup, which distorts the detected spectrum and degrades the accuracy of material decomposition. Several analytical models have been proposed to address this problem. The performance of these models is dependent on the assumptions used, including the estimated pulse shape, whose parameter values could differ from the actual physical ones. As the incident flux increases and the corrections become more significant, the accuracy of the parameter values becomes more crucial. In this work, the sensitivity of the pileup model of Taguchi et al. to parameter accuracy is analyzed. The spectra distorted by pileup at different count rates are simulated using either the model or Monte Carlo simulations, and the basis material thicknesses are estimated by minimizing the negative log-likelihood with Poisson or multivariate Gaussian distributions. From the simulation results, we find that the accuracy of the deadtime, the height of the pulse's negative tail, and the timing of the end of the pulse are more important than most other parameters, and that they matter more with increasing count rate. This result can help facilitate further work on parameter calibration.
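As a concrete illustration of the estimation step, the sketch below minimizes the Poisson negative log-likelihood of the binned counts over basis-material thicknesses; the pileup-distorted forward model is treated as a black box, and all names are our assumptions rather than the paper's code:

```python
import numpy as np
from scipy.optimize import minimize

def estimate_thicknesses(counts, forward_model, t0):
    """Basis-material estimation from PCD bin counts via Poisson ML.

    counts:        measured counts per energy bin.
    forward_model: t -> expected counts per bin; here this would embed the
                   pileup model (e.g., Taguchi et al.) with its assumed
                   pulse-shape parameters.
    t0:            initial thickness guess, e.g., [water_cm, bone_cm].
    """
    def nll(t):
        lam = np.maximum(forward_model(t), 1e-12)  # guard against log(0)
        # Poisson NLL up to a t-independent constant.
        return np.sum(lam - counts * np.log(lam))
    return minimize(nll, t0, method="Nelder-Mead").x
```

Re-running this estimator with deliberately perturbed pulse parameters in the forward model is one way to map estimation bias against parameter error, which is the kind of sensitivity study described above.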

DOI: 10.1117/12.2255665

Effect of spatio-energy correlation in PCD due to charge sharing, scatter, and secondary photons MEDICAL IMAGING 2017: PHYSICS OF MEDICAL IMAGING
Rajbhandary, P. L., Hsieh, S. S., Pelc, N. J.
2017; 101320V

Abstract
Charge sharing, scatter, and fluorescence events in a photon counting detector (PCD) can result in multiple counting of a single incident photon in neighboring pixels. This causes energy distortion and correlation of data across energy bins in neighboring pixels (spatio-energy correlation). If a “macro-pixel” is formed by combining multiple small pixels, it will exhibit correlations across its energy bins. Charge sharing and fluorescence escape depend on pixel size and detector material, so accurately modeling these effects can be crucial for detector design and for model-based imaging applications. This study derives a correlation model for the multi-counting events and investigates the effect in virtual non-contrast and effective monoenergetic imaging. Three versions of a 1 mm² square CdTe macro-pixel were compared: a 4×4 grid, a 2×2 grid, or a single pixel, composed of sub-pixels with side lengths of 250 μm, 500 μm, and 1 mm, respectively. The same flux was applied to each pixel, and pulse pileup was ignored. The mean and covariance matrix of the measured photon counts are derived analytically using pre-computed spatio-energy response functions (SERF) estimated from Monte Carlo simulations. Based on the Cramér-Rao lower bound, a macro-pixel with 250×250 μm² sub-pixels shows ~2.2 times worse variance than a single 1 mm² pixel for spectral imaging, while its penalty for effective monoenergetic imaging is <10% compared to a single 1 mm² pixel.
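The variance comparison follows from the Fisher information of the correlated bin counts. A minimal sketch, assuming a multivariate Gaussian model in which the parameter dependence of the covariance is neglected (a common simplification; the function names are ours):

```python
import numpy as np

def crlb(grad_mean, cov):
    """Cramér-Rao lower bound for basis-material estimates.

    grad_mean: (n_bins, n_materials) Jacobian of the mean bin counts with
               respect to basis thicknesses, e.g., from a SERF-based
               forward model.
    cov:       (n_bins, n_bins) covariance of the macro-pixel bin counts,
               including spatio-energy correlations across sub-pixels.
    """
    fisher = grad_mean.T @ np.linalg.solve(cov, grad_mean)
    return np.linalg.inv(fisher)  # diagonal entries bound the variances
```

Evaluating the diagonal of this bound for the 4×4, 2×2, and 1×1 macro-pixel models is what underlies variance comparisons like the ~2.2× figure quoted above.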

DOI: 10.1117/12.2254999

Image quality comparison between single energy and dual energy CT protocols for hepatic imaging MEDICAL PHYSICS
Yao, Y., Ng, J. M., Megibow, A. J., Pelc, N. J.
2016; 43 (8): 4877-4890

Abstract
Purpose: Multi-detector computed tomography (MDCT) enables volumetric scans in a single breath hold and is clinically useful for hepatic imaging. For simple tasks, conventional single energy (SE) computed tomography (CT) images acquired at the optimal tube potential are known to have better quality than dual energy (DE) blended images. However, liver imaging is complex and often requires imaging of both structures containing iodinated contrast media, where atomic number differences are the primary contrast mechanism, and other structures, where density differences are the primary contrast mechanism. Hence it is conceivable that the broad spectrum used in a dual energy acquisition may be an advantage. In this work we compare these two imaging strategies at equal dose and in these more complex settings. Methods: We developed numerical anthropomorphic phantoms to mimic realistic clinical CT scans for medium and large size patients. MDCT images based on the defined phantoms were simulated using various SE and DE protocols at pre- and post-contrast stages. For SE CT, images from 60 to 140 kVp in 10 kVp steps were considered; for DE CT, both 80/140 and 100/140 kVp scans were simulated and linearly blended at the optimal weights. To make a fair comparison, the mAs of each scan was adjusted to match the reference radiation dose (120 kVp, 200 mAs for medium size patients and 140 kVp, 400 mAs for large size patients). The contrast-to-noise ratio (CNR) of liver against other soft tissues was used to evaluate and compare the SE and DE protocols, and multiple pre- and post-contrast liver-tissue pairs were used to define a composite CNR. To help validate the simulation results, we conducted a small clinical study: eighty-five 120 kVp images and 81 blended 80/140 kVp images were collected and compared through both quantitative image quality analysis and an observer study. Results: In the simulation study, we found that the CNR of pre-contrast SE images mostly increased with increasing kVp, while for post-contrast imaging 90 kVp or lower yielded higher CNR, depending on the differential iodine concentration of each tissue. Similar trends were seen in the blended DE CNR as in the SE protocols. In the presence of differential iodine concentration (i.e., post-contrast), the CNR curves peak at lower kVps (80–120), with the peak shifted rightward for larger patients. The combined pre- and post-contrast composite CNR study demonstrated that an optimal SE protocol outperforms blended DE images, and that the optimal tube potential for an SE scan is around 90 kVp for a medium size patient and between 90 and 120 kVp for large size patients (although low kVp imaging requires high x-ray tube power to avoid photon starvation). Also, a tin filter added to the high kVp beam is not only beneficial for material decomposition but also improves the CNR of the DE blended images. The dose-adjusted CNR of the clinical images showed the same trend, and radiologists favored the SE scans over blended DE images. Conclusions: Our simulation showed that an optimized SE protocol produces up to 5% higher CNR for a range of clinical tasks. The clinical study also suggested 120 kVp SE scans have better image quality than blended DE images. Hence, blended DE images do not have a fundamental CNR advantage over optimized SE images.
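For reference, the figure of merit is straightforward to compute; the sketch below is ours, and the root-mean-square combination across tissue pairs is an assumed stand-in, since the abstract does not spell out the composite weighting:

```python
import numpy as np

def cnr(mean_a, mean_b, noise_std):
    """CNR of one liver-tissue pair at matched dose."""
    return abs(mean_a - mean_b) / noise_std

def composite_cnr(pair_cnrs, weights=None):
    """Combine CNRs of several pre-/post-contrast pairs into one score
    (RMS combination is an assumption, not the paper's definition)."""
    c = np.asarray(pair_cnrs, dtype=float)
    w = np.ones_like(c) if weights is None else np.asarray(weights, float)
    return np.sqrt(np.sum(w * c**2) / np.sum(w))
```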

DOI: 10.1118/1.4959554

Multisource inverse-geometry CT. Part II. X-ray source design and prototype MEDICAL PHYSICS
Neculaes, V. B., Caiafa, A., Cao, Y., De Man, B., Edic, P. M., Frutschy, K., Gunturi, S., Inzinna, L., Reynolds, J., Vermilyea, M., Wagner, D., Zhang, X., Zou, Y., Pelc, N. J., Lounsberry, B.
2016; 43 (8): 4617-4627

Abstract
Purpose: This paper summarizes the development of a high-power distributed x-ray source, or “multisource,” designed for inverse-geometry computed tomography (CT) applications [see B. De Man et al., “Multisource inverse-geometry CT. Part I. System concept and development,” Med. Phys. 43, 4607–4616 (2016)]. The paper presents the evolution of the source architecture, component design (anode, emitter, beam optics, control electronics, high voltage insulator), and experimental validation. Methods: Dispenser cathode emitters were chosen as electron sources. A modular design was adopted, with eight electron emitters (two rows of four emitters) per module, wherein tungsten targets were brazed onto copper anode blocks, one anode block per module. A specialized ceramic connector provided high voltage standoff capability and cooling oil flow to the anode. A matrix topology and low-noise electronic controls provided switching of the emitters. Results: Four modules (32 x-ray sources in two rows of 16) were successfully integrated into a single vacuum vessel and operated on an inverse-geometry computed tomography system. The dispenser cathodes provided high beam current (>1000 mA) in pulse mode, and the electrostatic lenses focused the beam to a small optical focal spot size (0.5 × 1.4 mm). Controlling the emitter grid voltage allowed the beam current to be varied for each source, providing the ability to modulate beam current across the fan of the x-ray beam, denoted as a virtual bowtie filter. The custom-designed controls achieved x-ray source switching in <1 μs. The cathode-grounded source was operated successfully up to 120 kV. Conclusions: A high-power, distributed x-ray source for inverse-geometry CT applications was successfully designed, fabricated, and operated. Future embodiments may increase the number of spots and utilize fast-readout detectors to further increase the x-ray flux, while staying within the inherent thermal limitations of the stationary target.

DOI: 10.1118/1.4954847

Multisource inverse-geometry CT. Part I. System concept and development MEDICAL PHYSICS
De Man, B., Uribe, J., Baek, J., Harrison, D., Yin, Z., Longtin, R., Roy, J., Waters, B., Wilson, C., Short, J., Inzinna, L., Reynolds, J., Neculaes, V. B., Frutschy, K., Senzig, B., Pelc, N. J.
2016; 43 (8): 4607-4616

Abstract
Purpose: This paper presents an overview of multisource inverse-geometry computed tomography (IGCT) as well as the development of a gantry-based research prototype system. The development of the distributed x-ray source is covered in a companion paper [V. B. Neculaes et al., “Multisource inverse-geometry CT. Part II. X-ray source design and prototype,” Med. Phys. 43, 4617–4627 (2016)]. While progress updates of this development have been presented at conferences and in journal papers, this paper is the first comprehensive overview of the multisource inverse-geometry CT concept and prototype. The authors also provide a review of all previous IGCT-related publications. Methods: The authors designed and implemented a gantry-based 32-source IGCT scanner with 22 cm field of view, 16 cm z-coverage, 1 s rotation time, 1.09 × 1.024 mm detector cell size, focal spot size as low as 0.4 × 0.8 mm, and 80–140 kVp x-ray source voltage. The system is built using commercially available CT components and a custom-made distributed x-ray source. The authors developed dedicated controls, calibrations, and reconstruction algorithms and evaluated the system performance using phantoms and small animals. Results: The authors performed IGCT system experiments and demonstrated tube current up to 125 mA with up to 32 focal spots. The authors measured a spatial resolution of 13 lp/cm at 5% cutoff. The scatter-to-primary ratio is estimated at 62% for a 32 cm water phantom at 140 kVp. The authors scanned several phantoms and small animals; the initial images have relatively high noise due to the low x-ray flux levels but minimal artifacts. Conclusions: IGCT has unique benefits in terms of dose efficiency and cone-beam artifacts, but comes with challenges in terms of scattered radiation and x-ray flux limits. To the authors' knowledge, this prototype is the first gantry-based IGCT scanner. The authors summarize the design and implementation of the scanner and present results with phantoms and small animals.

DOI: 10.1118/1.4954846

Pixel size tradeoffs for CdTe spectral photon counting detectors CT MEETING 2016 PROCEEDINGS
Hsieh, S. S., Pelc, N. J.
2016; 387-90

Abstract
Energy discriminating photon counting detectors, which measure the energy of individual photons incident on the detector, are promising components for next-generation CT scanners. The most common substrate material in research prototypes today is CdTe or CdZnTe (CZT), popular for its high atomic number and absorption. However, these detectors face tradeoffs. Smaller pixels are desirable to enable fast counting rates and minimize count rate loss. However, smaller pixels also increase the deleterious effects of charge sharing. We explore these tradeoffs and compare different pixel sizes against an ideal photon counting detector that does not suffer from charge sharing.
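One side of the tradeoff can be illustrated with a standard paralyzable-deadtime model: the recorded rate of one pixel falls off as its incident rate times exp(-rate × deadtime), and the incident rate scales with pixel area. This sketch and its parameter values are illustrative assumptions, not the paper's model:

```python
import numpy as np

def recorded_rate(flux_per_mm2, side_mm, deadtime_s):
    """Paralyzable-deadtime count-rate model for a square pixel.
    Smaller pixels intercept fewer photons each, so they lose fewer
    counts to pileup (but suffer more charge sharing)."""
    rate = flux_per_mm2 * side_mm**2      # photons/s into this pixel
    return rate * np.exp(-rate * deadtime_s)

# Example: at 1e8 photons/s/mm^2 with 100 ns deadtime, a 1 mm pixel is
# deep in pileup while a 0.25 mm pixel retains most of its counts.
for side in (1.0, 0.5, 0.25):
    r = recorded_rate(1e8, side, 100e-9)
    print(f"{side} mm pixel: {r:.3g} of {1e8 * side**2:.3g} cps recorded")
```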

Use of Synthetic CT to reduce simulation time of complex phantoms and systems CT MEETING 2016 PROCEEDINGS
Divel, S. E., Segars, W. P., Christensen, S., Wintermark, M., Lansberg, M. G., Pelc, N. J.
2016; 253-6

Abstract
Simulation-based approaches to validating CT scanning methods, in which the exact anatomy and physiology of the phantom and the physical attributes of the system are known, provide a ground truth for quantitatively evaluating different techniques. However, long simulation times of complicated phantoms, especially when modeling many physical aspects (e.g., spectrum, finite detector and source size), hinder the ability to realistically and efficiently evaluate and optimize protocol performance. This work investigated the feasibility of reducing the simulation time of these complex cases by employing the principles of Synthetic CT. Noiseless simulations are performed at two monoenergetic energies and the projections are decomposed into basis materials. These can be used to quickly generate projections at any spectrum and dose level. After determining the optimal energy levels for the initial noiseless monoenergetic scans, the performance of the synthetic simulations was evaluated by comparing the reconstructed Hounsfield unit (HU) values, the reconstructed noise standard deviation, and the time required against those of traditional simulations. The HU values of synthetic simulations matched traditional simulations within 2.9 HU (5.4%) in the brain tissue, within 27.6 HU (3.1%) in the iodine-enhanced blood vessels, and within 20.2 HU (1.5%) in the skull. The noise standard deviation of the synthetic simulation was within 2 to 10 HU. The synthetic processing reduced the execution time by 97.93% for each additional protocol run on the same anatomy.
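The two-energy decomposition and the subsequent synthesis step can be sketched per ray; this is our illustration under the usual two-basis assumption (e.g., water and bone), with hypothetical names:

```python
import numpy as np

def decompose_two_energy(p1, p2, mu1, mu2):
    """Invert p_i = mu_i[0]*A + mu_i[1]*B per ray for the basis-material
    line integrals A, B, given noiseless monoenergetic projections
    p1, p2 at energies E1, E2 and basis attenuations mu_i = (mu_a, mu_b)."""
    Minv = np.linalg.inv(np.array([[mu1[0], mu1[1]],
                                   [mu2[0], mu2[1]]]))
    return (Minv[0, 0] * p1 + Minv[0, 1] * p2,
            Minv[1, 0] * p1 + Minv[1, 1] * p2)

def synthesize(A, B, spectrum, mu_a, mu_b):
    """Generate a polyenergetic projection at any spectrum from the cached
    basis sinograms; noise for a chosen dose level can then be added to
    the detected counts before the log."""
    detected = sum(s * np.exp(-(ma * A + mb * B))
                   for s, ma, mb in zip(spectrum, mu_a, mu_b))
    return -np.log(detected / np.sum(spectrum))
```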

Lossy compression of projection data from photon counting detectors CT MEETING 2016 PROCEEDINGS
Shunhavanich, P., Pelc, N. J.
2016; 467-70

Abstract
Photon counting x-ray detectors (PCXDs) are being considered for adoption in clinical settings due to their advantages of improved tissue characterization, reduced noise, and lower dose. The benefit of having data in multiple energy bins in turn puts a burden on the bandwidth of the slip ring and data storage subsystems, through which the projection data samples must be transmitted in real time, and the problem is further amplified by PCXDs' increased number of detector channels. This leads to a bandwidth bottleneck. In this work, we propose a lossy compression method for raw CT projection data from PCXDs, which includes a step of quantizing prediction residuals prior to encoding. From our modeled prediction error distribution, the quantization level is chosen such that the ratio of the quantization error variance to the quantum noise variance equals 1% or 2%. Huffman and Golomb encoders are applied. Using three simulated phantoms, a compression ratio of 3.1:1 with an RMSE of 1.15% of the quantum noise standard deviation, and a compression ratio of 3.4:1 with an RMSE of 2.85%, are achieved for the 1% and 2% quantization error variances, respectively. From these initial simulation results, the proposed algorithm shows good control over the compression ratio and the quality of the reconstructed image.
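A minimal sketch of the residual-quantization pipeline, using a simple previous-sample predictor and a Golomb-Rice coder as stand-ins (the paper applies Huffman and Golomb encoders; the predictor and all names here are our assumptions):

```python
import numpy as np

def choose_q_step(noise_var, fraction=0.01):
    """Step size so the uniform-quantizer error variance (q**2 / 12)
    is a fixed fraction (1% or 2% in the paper) of the quantum noise
    variance."""
    return np.sqrt(12.0 * fraction * noise_var)

def quantized_residuals(samples, q_step):
    """Previous-sample prediction followed by residual quantization.
    (A production coder would predict from reconstructed samples
    to avoid error accumulation.)"""
    pred = np.concatenate(([0.0], samples[:-1]))
    return np.round((samples - pred) / q_step).astype(int)

def zigzag(v):
    """Map signed residuals to non-negative ints: 0,-1,1,-2 -> 0,1,2,3."""
    return -2 * v - 1 if v < 0 else 2 * v

def golomb_rice(value, k):
    """Golomb-Rice codeword with parameter 2**k: unary quotient followed
    by a k-bit binary remainder, returned as a bit string."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")
```

The decoder reverses the chain (decode, un-zigzag, multiply by the step, add the prediction), so the only loss is the controlled quantization error.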
