Table of contents

Volume 56

Number 18, 21 September 2011

Papers

5771

In positron emission tomography (PET) imaging, an early therapeutic response is usually characterized by variations of semi-quantitative parameters restricted to the maximum SUV measured in PET scans during the treatment. Such measurements do not reflect overall tumor volume and radiotracer uptake variations. The proposed approach is based on multi-observation image analysis for merging several PET acquisitions to assess tumor metabolic volume and uptake variations. The fusion algorithm is based on iterative estimation using a stochastic expectation maximization (SEM) algorithm. The proposed method was applied to simulated and clinical follow-up PET images. We compared the multi-observation fusion performance to threshold-based methods proposed for the assessment of the therapeutic response based on functional volumes. On simulated datasets the adaptive threshold applied independently on both images led to higher errors than the ASEM fusion, and on clinical datasets it failed to provide coherent measurements for four patients out of seven due to aberrant delineations. The ASEM method demonstrated improved and more robust estimation, leading to more pertinent measurements of the therapeutic response. Future work will consist of extending the methodology and applying it to clinical multi-tracer datasets in order to evaluate its potential impact on the biological tumor volume definition for radiotherapy applications.

5789

The CT scanner-displayed radiation dose information is based on the CT dose index (CTDI) over an integration length of 100 mm (CTDI100), which is lower than the CTDI over an infinite integration length (CTDI∞). In an adult or a pediatric body CT scan, the limiting equilibrium dose can be established near the central scan plane, and CTDI∞ more closely indicates the accumulated dose than CTDI100. The aim of this study was to (a) evaluate CTDI100 efficiencies, ε(CTDI100) = CTDI100/CTDI∞, for a multi-detector CT (MDCT) scanner, (b) examine the dependences of ε(CTDI100) on kV, beam width, phantom diameter, phantom length and position in phantom and (c) investigate how to estimate CTDI∞ based on the CT scanner-displayed information. We performed a comprehensive Geant4-based simulation study of a clinical CT scanner, and calculated ε(CTDI100) for a range of parameters. The results were compared with the ε(CTDI100) data of previous studies. Differences in the ε(CTDI100) values of these studies were assessed. A broad analysis of the ε(CTDI100) variations with the above-mentioned parameters was presented. Based on the results, we proposed a practical approach to obtain the weighted CTDI using the CT scanner-displayed information. A reference combination of 120 kV and a beam width close to 20 mm can be selected to determine the efficiencies of the weighted CTDI by using either phantom measurements or computer simulations. The results can be applied to estimate the weighted CTDI for 80–140 kV and beam widths within 40 mm. Errors in the weighted CTDI due to the variations of kV and beam width can be 5% or less for MDCT scanners.
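As a rough illustration of how such an efficiency can be used in practice, the sketch below recovers an infinite-length CTDI estimate from the displayed CTDI100 by a simple division; the function name and the numbers are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: CTDI_inf = CTDI100 / epsilon(CTDI100), with the
# efficiency determined once at a reference setting (e.g. 120 kV, ~20 mm
# beam width). The values in the example call are illustrative only.

def estimate_ctdi_inf(displayed_ctdi100_mGy, efficiency):
    """Estimate the infinite-length CTDI from the scanner-displayed CTDI100."""
    return displayed_ctdi100_mGy / efficiency

print(estimate_ctdi_inf(12.0, 0.60))  # -> 20.0 mGy with these example numbers
```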

5805

In scintillation dosimetry, a Cerenkov background signal is generated when a conventional fibre optic is exposed to radiation produced by a megavoltage linear accelerator. Three methods of measuring dose in the presence of Cerenkov background are compared. In the first method, a second background fibre is used to estimate the Cerenkov signal in the signal fibre. In the second method, a colour camera is used to measure the combined scintillation and Cerenkov light in two wavelength ranges and a mathematical process is used to extract the scintillation signal. In the third method, a hollow air core light guide is used to carry the scintillation signal through the primary radiation field. In this paper, the strengths and weaknesses of each dosimetry system are identified and recommendations for the optimum method for common clinical dosimetry situations are made.
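For orientation, the arithmetic behind the first two measurement strategies can be sketched in a generic form; the mixing coefficients and function names below are placeholders for a calibration, not the authors' values.

```python
import numpy as np

# (1) Background-fibre method: subtract the reading of a Cerenkov-only fibre
#     from the reading of the scintillator-coupled fibre.
def scintillation_two_fibre(signal_fibre, background_fibre):
    return signal_fibre - background_fibre

# (2) Two-colour (chromatic) method: each colour channel is a linear mix of
#     scintillation S and Cerenkov C; calibrated coefficients let us solve a
#     2x2 system for S. The coefficients here are illustrative placeholders.
def scintillation_two_channel(m_green, m_blue, a=(0.9, 0.3), c=(0.2, 0.8)):
    mix = np.array([[a[0], c[0]],
                    [a[1], c[1]]])
    S, C = np.linalg.solve(mix, np.array([m_green, m_blue]))
    return S
```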

5823

This paper proposes a hybrid technique to simulate the complete chain of an oral cone beam computed tomography (CBCT) system for the study of both radiation dose and image quality. The model was developed around a 3D Accuitomo 170 unit (J Morita, Japan) with a tube potential range of 60–90 kV. The Monte Carlo technique was adopted to simulate the x-ray generation, filtration and collimation. Exact dimensions of the bow-tie filter were estimated iteratively using experimentally acquired flood images. Non-flat radiation fields for different exposure settings were mediated via 'phase spaces'. Primary projection images were obtained by ray tracing at discrete energies and were fused according to the two-dimensional energy modulation templates derived from the phase space. Coarse Monte Carlo simulations were performed for scatter projections and the resulting noisy images were smoothed by Richardson–Lucy fitting. Resolution and noise characteristics of the flat panel detector were included using the measured modulation transfer function (MTF) and the noise power spectrum (NPS), respectively. The Monte Carlo dose calculation was calibrated in terms of kerma free-in-air at the isocenter, using an ionization chamber, and was subsequently validated by comparison against the measured air kerma in water at various positions of a cylindrical water phantom. The resulting dose discrepancies were found to be <10% in most cases. Intensity profiles of the experimentally acquired and simulated projection images of the water phantom showed a comparable fractional increase over the common area when changing from a small to a large field of view, suggesting that the scatter was accurately accounted for. Image validation was conducted using two small phantoms and the built-in quality assurance protocol of the system. The reconstructed simulated images showed close resemblance to the experimentally acquired images in contrast resolution, noise appearance and artifact pattern, with <5% difference for voxel values of the aluminum and air insert regions and <3% difference for voxel uniformity across the homogeneous PMMA region. The detector simulation using the MTF and NPS data exhibited a strong influence on the noise and sharpness of the resulting images. The hybrid simulation technique is flexible and has wide applicability to CBCT systems.
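The detector step (imposing a measured MTF and NPS on an otherwise ideal projection) can be sketched generically as follows; the function below is an assumption made for illustration and does not reproduce the paper's implementation details such as frequency-grid conventions or noise normalization.

```python
import numpy as np

def apply_detector_model(projection, mtf2d, nps2d, seed=0):
    """Blur with a 2D MTF and add NPS-shaped noise (both sampled on the
    unshifted FFT frequency grid of the projection)."""
    rng = np.random.default_rng(seed)
    blurred = np.fft.ifft2(np.fft.fft2(projection) * mtf2d).real
    white = np.fft.fft2(rng.standard_normal(projection.shape))
    noise = np.fft.ifft2(white * np.sqrt(nps2d)).real
    return blurred + noise
```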

5845

Cardiovascular disease in general, and coronary artery disease (CAD) in particular, is the leading cause of death worldwide. These diseases are principally diagnosed using either invasive percutaneous transluminal coronary angiograms or non-invasive computed tomography angiograms (CTA). Minimally invasive therapies for CAD such as angioplasty and stenting are rendered under fluoroscopic guidance. Both invasive and non-invasive imaging modalities employ ionizing radiation, and there is concern about the deterministic and stochastic effects of radiation. Accurate simulation to optimize image quality with minimal radiation dose requires detailed, gender-specific anthropomorphic phantoms with anatomically correct heart and associated vasculature. Such phantoms are currently unavailable. This paper describes an open source heart phantom development platform based on a graphical user interface. Using this platform, we have developed seven high-resolution cardiac/coronary artery phantoms for imaging and dosimetry from seven high-quality CTA datasets. To extract a phantom from a coronary CTA, the relationship between the intensity distribution of the myocardium, the ventricles and the coronary arteries is identified via histogram analysis of the CTA images. By further refining the segmentation using anatomy-specific criteria such as vesselness, connectivity criteria required by the coronary tree and image operations such as active contours, we are able to capture excellent detail within our phantoms. For example, in one of the female heart phantoms, as many as 100 coronary artery branches could be identified. Triangular meshes are fitted to the segmented high-resolution CTA data. We have also developed a visualization tool for adding stenotic lesions to the coronaries. The male and female heart phantoms generated so far have been cross-registered and entered in the mesh-based Virtual Family of phantoms with matched age/gender information. Any phantom in this family, along with user-defined stenoses, can be used to obtain clinically realistic projection images with the Monte Carlo code penMesh for optimizing imaging and dosimetry.

5865

Breast MRI acquires many images from the breast, and computer-aided algorithms and display tools are often used to assist the radiologist's interpretation. Women with a lifetime risk of developing breast cancer greater than 20% are recommended to receive annual screening MRI, but current breast MRI computer-aided-diagnosis systems do not provide the necessary function for comparison of images acquired at different times. The purpose of this work was to develop registration methods for evaluating the spatial change pattern of fibroglandular tissue between two breast MRI scans of the same woman taken at different times. The registration method is based on rigid alignment followed by a non-rigid Demons algorithm. The method was tested on three different subjects who had different degrees of changes in the fibroglandular tissue, including two patients who showed different spatial shrinkage patterns after receiving neoadjuvant chemotherapy before surgery, and one control case from a normal volunteer. Based on the transformation matrix, the collapse of multiple voxels on the baseline images to one voxel on the follow-up images is used to calculate the shrinkage factor. Conversely, based on the reverse transformation matrix, the expansion factor can be calculated. The shrinkage/expansion factor, the deformation magnitude and direction, as well as the Jacobian determinant at each location can be displayed in a 3D rendering view to show the spatial changes between two MRI scans. These different parameters show consistent results and can be used for quantitative evaluation of the spatial change patterns. The presented registration method can be further developed into a clinical tool for evaluating therapy-induced changes and for early diagnosis of breast cancer in screening MRI.
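A minimal sketch of one of the quantities mentioned above, the voxel-wise Jacobian determinant of a deformation field, is given below; the array names and the assumption of unit, isotropic voxel spacing are illustrative only.

```python
import numpy as np

def jacobian_determinant(ux, uy, uz):
    """Voxel-wise det(I + grad u) for displacements (ux, uy, uz) on a 3D grid.
    Values < 1 indicate local shrinkage, values > 1 local expansion."""
    gx, gy, gz = np.gradient(ux), np.gradient(uy), np.gradient(uz)
    a, b, c = 1 + gx[0], gx[1], gx[2]
    d, e, f = gy[0], 1 + gy[1], gy[2]
    g, h, i = gz[0], gz[1], 1 + gz[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
```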

5877

We present an initial evaluation of a mechanically cooled, high-purity germanium double-sided strip detector as a potential gamma camera for small-animal SPECT. It is 90 mm in diameter and 10 mm thick, with two sets of 16 orthogonal strips that have a 4.5 mm width and a 5 mm pitch. We found an energy resolution of 0.96% at 140 keV, an intrinsic efficiency of 43.3% at 122 keV and a FWHM spatial resolution of approximately 1.5 mm. We demonstrated depth-of-interaction estimation capability through comparison of pinhole acquisitions with a point source on and off axis. Finally, a flood-corrected flood image exhibited a strip-level uniformity of less than 1%. This high-purity germanium detector offers many desirable properties for small-animal SPECT.

5889

For real-time optoacoustic (OA) imaging of the human body, a linear array transducer and reflection mode optical irradiation is usually preferred. Such a setup, however, results in significant image background, which prevents imaging structures at the ultimate depth determined by the light distribution and the signal noise level. Therefore, we previously proposed a method for image background reduction, based on displacement-compensated averaging (DCA) of image series obtained when the tissue sample under investigation is gradually deformed. OA signals and background signals are differently affected by the deformation and can thus be distinguished. The proposed method is now experimentally applied to image artificial tumours embedded inside breast phantoms. OA images are acquired alternately with pulse-echo images using a combined OA/echo-ultrasound device. Tissue deformation is assessed via speckle tracking in the pulse-echo images and used to compensate the OA images for the local tissue displacement. In that way, OA sources are highly correlated between subsequent images, while the background is decorrelated and can therefore be reduced by averaging. We show that image contrast in breast phantoms is strongly improved and the detectability of embedded tumours is significantly increased using the DCA method.
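A schematic version of displacement-compensated averaging, assuming 2D frames and per-frame displacement maps that have already been estimated by speckle tracking (both inputs are hypothetical here), could look like this:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def dca(frames, displacements):
    """frames: (n, H, W) OA images; displacements: (n, 2, H, W) pixel shifts
    (dy, dx) of the tissue relative to the reference frame. Each frame is
    warped back to the reference geometry and the warped frames are averaged,
    so stationary OA sources add coherently while background decorrelates."""
    H, W = frames.shape[1:]
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    warped = [map_coordinates(f, [yy + d[0], xx + d[1]], order=1)
              for f, d in zip(frames, displacements)]
    return np.mean(warped, axis=0)
```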

5903

The accumulation of injected contrast agents allows the image enhancement of lesions through the use of contrast-enhanced mammography. In this technique, the combination of two acquired images is used to create an enhanced image. There exist several methods to acquire the images to be combined, including dual energy subtraction using a single detection layer, which suffers from motion artifacts due to patient motion between image acquisitions. To mitigate motion artifacts, a detector composed of two layers may be used to simultaneously acquire the low- and high-energy images. In this work, we evaluate both of these methods using amorphous selenium as the detection material to find the system parameters (tube voltage, filtration, photoconductor thickness and relative intensity ratio) leading to the optimal performance. We then compare the performance of the two detectors under the variation of contrast agent concentration, tumor size and dose. The detectability was found to be most comparable at the lower end of the evaluated factors. The single-layer detector not only led to better contrast, due to its greater spectral separation capabilities, but also had lower quantum noise. The single-layer detector was found to have a greater detectability by a factor of 2.4 for a 2.5 mm radius tumor having a contrast agent concentration of 1.5 mg ml−1 in a 4.5 cm thick 50% glandular breast. The inclusion of motion artifacts in the comparison is part of ongoing research efforts.
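The underlying dual-energy combination is, in generic form, a weighted log subtraction; the snippet below illustrates that principle only and is not the authors' detector or noise model.

```python
import numpy as np

def dual_energy_subtraction(low_energy, high_energy, w):
    """Weighted log subtraction: w is chosen so that the background
    (glandular/adipose) signal cancels, leaving the iodine contrast."""
    return np.log(high_energy) - w * np.log(low_energy)
```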

5925

Large-area detector computed tomography systems with fast rotating gantries enable volumetric dynamic cardiac perfusion studies. Prospectively ECG-triggered acquisitions limit the data acquisition to a predefined cardiac phase and thereby reduce x-ray dose and limit motion artefacts. Even in the case of highly accurate prospective triggering and a stable heart rate, spatial misalignment of the cardiac volumes acquired and reconstructed per cardiac cycle may occur due to small motion pattern variations from cycle to cycle. These misalignments reduce the accuracy of the quantitative analysis of myocardial perfusion parameters on a per-voxel basis. An image-based solution to this problem is elastic 3D image registration of dynamic volume sequences with variable contrast, as introduced in this contribution. After circular cone-beam CT reconstruction of cardiac volumes covering large areas of the myocardial tissue, the complete series is aligned with respect to a chosen reference volume. The results of the registration process and the perfusion analysis with and without registration are evaluated quantitatively in this paper. The spatial alignment leads to improved quantification of myocardial perfusion for three different pig data sets.

5949

High radiation dose in computed tomography (CT) scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with total variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, low-contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV (EPTV) regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an EPTV norm and a data fidelity term posed by the x-ray projections. The EPTV term is proposed to preferentially perform smoothing only on the non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight to the original TV norm. During the reconstruction process, the pixels at the edges are gradually identified and given a low penalty weight. Our iterative algorithm is implemented on a graphics processing unit (GPU) to improve its speed. We test our reconstruction algorithm on a digital NURBS-based cardiac-torso phantom, a physical chest phantom and a Catphan phantom. Reconstruction results from a conventional filtered backprojection (FBP) algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison purposes. The experimental results illustrate that both the TV-based algorithm and our EPTV algorithm outperform the conventional FBP algorithm in suppressing streaking artifacts and image noise in a low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it preserves more information on low-contrast structures and therefore maintains acceptable spatial resolution.
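The idea of a penalty weight attached to the TV norm can be illustrated with a generic weighted-TV expression; the exponential weight and the parameter sigma below are common illustrative choices, not the formulation used in the paper.

```python
import numpy as np

def eptv_norm(img, sigma=0.05):
    """Edge-preserving (weighted) TV of a 2D image: pixels with large
    gradients, which are likely edges, receive a small penalty weight and are
    therefore smoothed less during reconstruction."""
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    grad_mag = np.sqrt(gx**2 + gy**2)
    weights = np.exp(-(grad_mag / sigma) ** 2)  # low weight at strong edges
    return np.sum(weights * grad_mag)
```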

5969

Spectral x-ray imaging using novel photon counting x-ray detectors (PCDs) with energy-resolving abilities can provide energy-selective images. PCDs have energy thresholds, enabling the classification of photons into multiple energy bins. The extra energy information provided may allow materials such as iodine and calcium, or water and fat, to be distinguished. The information content of spectral x-ray images, however, depends on how the photons are grouped together. In this work, we present a model to optimize energy windows for maximum material discrimination. Multivariate statistics allows the confidence region of the correlated uncertainties to be mapped in the thickness space. Minimization of the uncertainties enables optimization of the energy windows. Applications related to small animal imaging and breast imaging are considered.
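One common way to turn this into a concrete figure of merit, sketched here under simplifying assumptions (a linearized two-material model and Poisson counting noise), is to compare the confidence-ellipse area of the correlated thickness estimates for each candidate window; the sensitivity matrix and counts below are placeholders.

```python
import numpy as np

def thickness_covariance(sensitivity, bin_counts):
    """Approximate covariance of two material-thickness estimates:
    sensitivity[i, j] is the change in log counts in bin i per unit thickness
    of material j, and the Poisson weight of each bin equals its mean count."""
    W = np.diag(bin_counts)
    return np.linalg.inv(sensitivity.T @ W @ sensitivity)

M = np.array([[0.8, 0.2],   # placeholder sensitivities, bin 1
              [0.3, 0.5]])  # placeholder sensitivities, bin 2
cov = thickness_covariance(M, np.array([1.0e5, 8.0e4]))
print(np.sqrt(np.linalg.det(cov)))  # ellipse-area figure of merit: smaller is better
```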

5985

Vibro-acoustography (VA) is a medical imaging method based on the nonlinear interaction of two or more distinct ultrasound beams whose frequencies differ by several kHz. In turn, the interacting waves produce a difference-frequency signal which carries the information of the imaged tissue region. Two mechanisms are responsible for the difference-frequency generation (DFG) in VA, namely the dynamic (oscillatory) radiation force and the scattering of sound-by-sound. The role and importance of each phenomenon in VA are assessed here. A theoretical model based on Westervelt's equation is presented for the DFG in the nonlinear scattering of two incident ultrasound waves by a rigid sphere that is small compared to the incident wavelengths. Furthermore, a scattering experiment using VA is devised and the data show very good agreement with the proposed theory. The results reveal that the scattering of sound-by-sound, rather than the dynamic radiation force, is the dominant contribution to the DFG in VA.

5995

The purpose of this study is to investigate whether computerized analysis using three-class Bayesian artificial neural network (BANN) feature selection and classification can characterize tumor grades (grade 1, grade 2 and grade 3) of breast lesions for prognostic classification on DCE-MRI. A database of 26 IDC grade 1 lesions, 86 IDC grade 2 lesions and 58 IDC grade 3 lesions was collected. The computer automatically segmented the lesions, and kinetic and morphological lesion features were automatically extracted. The discrimination tasks (grade 1 versus grade 3, grade 2 versus grade 3, and grade 1 versus grade 2 lesions) were investigated. Step-wise feature selection was conducted by three-class BANNs. Classification was performed with three-class BANNs using leave-one-lesion-out cross-validation to yield computer-estimated probabilities of being a grade 3, grade 2 or grade 1 lesion. Two-class ROC analysis was used to evaluate the performance. We achieved AUC values of 0.80 ± 0.05, 0.78 ± 0.05 and 0.62 ± 0.05 for grade 1 versus grade 3, grade 1 versus grade 2, and grade 2 versus grade 3, respectively. This study shows the potential for (1) applying three-class BANN feature selection and classification to CADx and (2) expanding the role of DCE-MRI CADx from diagnostic to prognostic classification in distinguishing tumor grades.

6009

Respiration-induced organ motion is one of the major uncertainties in lung cancer radiotherapy, and it is crucial to be able to accurately model the lung motion. Most work so far has focused on the study of the motion of a single point (usually the tumor center of mass), and much less work has been done to model the motion of the entire lung. Inspired by the work of Zhang et al (2007 Med. Phys. 34 4772–81), we believe that the spatiotemporal relationship of the entire lung motion can be accurately modeled based on principal component analysis (PCA) and then a sparse subset of the entire lung, such as an implanted marker, can be used to drive the motion of the entire lung (including the tumor). The goal of this work is twofold. First, we aim to understand the underlying reason why PCA is effective for modeling lung motion and find the optimal number of PCA coefficients for accurate lung motion modeling. We attempt to address the above important problems both in a theoretical framework and in the context of real clinical data. Second, we propose a new method to derive the entire lung motion using a single internal marker based on the PCA model. The main results of this work are as follows. We derived an important property which reveals the implicit regularization imposed by the PCA model. We then studied the model using two mathematical respiratory phantoms and 11 clinical 4DCT scans for eight lung cancer patients. For the mathematical phantoms with cosine and an even power (2n) of cosine motion, we proved that 2 and 2n PCA coefficients and eigenvectors will completely represent the lung motion, respectively. Moreover, for the cosine phantom, we derived the equivalence conditions for the PCA motion model and the physiological 5D lung motion model (Low et al 2005 Int. J. Radiat. Oncol. Biol. Phys. 63 921–9). For the clinical 4DCT data, we demonstrated the modeling power and generalization performance of the PCA model. The average 3D modeling error using PCA was within 1 mm (0.7 ± 0.1 mm). When a single artificial internal marker was used to derive the lung motion, the average 3D error was found to be within 2 mm (1.8 ± 0.3 mm) through comprehensive statistical analysis. The optimal number of PCA coefficients needs to be determined on a patient-by-patient basis, and two PCA coefficients seem to be sufficient for accurate modeling of the lung motion for most patients. In conclusion, we have presented thorough theoretical analysis and clinical validation of the PCA lung motion model. The feasibility of deriving the entire lung motion using a single marker has also been demonstrated on clinical data using a simulation approach.
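A compact sketch of the modeling pipeline described above is given below, with illustrative array shapes and a simple least-squares link between the marker displacement and the PCA coefficients; the actual fitting choices in the paper may differ.

```python
import numpy as np

def fit_pca(X, k=2):
    """X: (n_phases, n_dofs) flattened lung displacement fields from 4DCT."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:k]                 # principal motion eigenvectors
    coeffs = (X - mean) @ components.T  # PCA coefficients per phase
    return mean, components, coeffs

def fit_marker_to_coeffs(marker_disp, coeffs):
    """Least-squares linear map from marker displacement (n_phases, 3) to the
    PCA coefficients, so a single marker can drive the whole-lung model."""
    A = np.hstack([marker_disp, np.ones((marker_disp.shape[0], 1))])
    W, *_ = np.linalg.lstsq(A, coeffs, rcond=None)
    return W

def predict_lung_motion(marker_disp_new, W, mean, components):
    A = np.hstack([marker_disp_new, np.ones((marker_disp_new.shape[0], 1))])
    return mean + (A @ W) @ components
```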

6031

The uncertainty of radioactivity concentrations measured with positron emission tomography (PET) scanners ultimately depends on the uncertainty of the calibration factors. A new practical calibration scheme using point-like 22Na radioactive sources has been developed. The purpose of this study is to theoretically investigate the effects of the associated 1.275 MeV γ rays on the calibration factors. The physical processes affecting the coincidence data were categorized in order to derive approximate semi-quantitative formulae. Assuming the design parameters of some typical commercial PET scanners, the effects of the γ rays as relative deviations in the calibration factors were evaluated by semi-quantitative formulae and a Monte Carlo simulation. The relative deviations in the calibration factors were less than 4%, depending on the details of the PET scanners. The event losses due to rejecting multiple coincidence events of scattered γ rays had the strongest effect. The results from the semi-quantitative formulae and the Monte Carlo simulation were consistent and were useful in understanding the underlying mechanisms. The deviations are considered small enough to correct on the basis of precise Monte Carlo simulation. This study thus offers an important theoretical basis for the validity of the calibration method using point-like 22Na radioactive sources.

6047

In this paper, a novel technique based on association rules (ARs) is presented in order to find relations among activated brain areas in single photon emission computed tomography (SPECT) imaging. In this sense, the aim of this work is to discover associations among attributes which characterize the perfusion patterns of normal subjects and to make use of them for the early diagnosis of Alzheimer's disease (AD). Firstly, voxel-as-feature-based activation estimation methods are used to find the tridimensional activated brain regions of interest (ROIs) for each patient. Secondly, these ROIs serve as input for mining ARs with a minimum support and confidence among activation blocks, using a set of controls. In this context, the support and confidence measures are related to the proportion of functional areas which are singularly and mutually activated across the brain. Finally, we perform image classification by comparing the number of ARs verified by each subject under test to a given threshold that depends on the number of previously mined rules. Several classification experiments were carried out in order to evaluate the proposed methods using a SPECT database that consists of 41 controls (NOR) and 56 AD patients labeled by trained physicians. The proposed methods were validated by means of the leave-one-out cross-validation strategy, yielding up to 94.87% classification accuracy, thus outperforming recently developed methods for computer-aided diagnosis of AD.
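In toy form, the support/confidence computation and the rule-counting classification step might look like the following; the ROI identifiers, rules and threshold are all illustrative, not mined from the actual database.

```python
def support_and_confidence(rule, transactions):
    """Each transaction is the set of activated ROIs of one control subject."""
    a, b = rule
    both = sum(1 for t in transactions if a in t and b in t)
    only_a = sum(1 for t in transactions if a in t)
    support = both / len(transactions)
    confidence = both / only_a if only_a else 0.0
    return support, confidence

def classify(subject_rois, rules, threshold):
    """Label a subject normal if it verifies at least `threshold` mined rules."""
    verified = sum(1 for a, b in rules if a in subject_rois and b in subject_rois)
    return "NOR" if verified >= threshold else "AD"

controls = [{"ROI_1", "ROI_2", "ROI_3"}, {"ROI_1", "ROI_2"}, {"ROI_2", "ROI_3"}]
print(support_and_confidence(("ROI_1", "ROI_2"), controls))  # -> (0.666..., 1.0)
```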

6065

A commercial optically stimulated luminescence (OSL) dosimetry system was investigated for in vivo dosimetry in radiation therapy. Dosimetric characteristics of InLight dot dosimeters and a microStar reader (Landauer Inc.) were tested in 60Co beams. The reading uncertainty of a single dosimeter was 0.6%. The reproducibility of a set of dosimeters after a single irradiation was 1.6%, while in repeated irradiations of the same dosimeters it was found to be 3.5%. When OSL dosimeters were optically bleached between exposures, the reproducibility of repeated measurements improved to 1.0%. Dosimeters were calibrated for the entrance dose measurements and a full set of correction factors was determined. A pilot patient study that followed phantom validation testing included more than 100 measured fields with a mean relative difference of the measured entrance dose from the expected dose of 0.8% and the standard deviation of 2.5%. In conclusion, these results demonstrate that OSL dot dosimeters represent a valid alternative to already established in vivo dosimetry systems.

6083

Measurement errors in polymer gel dosimetry can originate either during irradiation or during scanning. One concern related to the exothermic nature of the polymerization reaction was that the heat released in polymer gel dosimeters during irradiation modifies their dose response. In this paper, the effect of the heat released by the exothermal polymerization reaction on the dose response of a number of dosimeters was studied. In addition, we investigated whether heat-generated geometric distortion existed in newly proposed gel dosimeters that contain highly thermoresponsive polymers. Our results suggest that, despite a significant internal temperature increase in some gel compositions, their dose responses are not affected when oxygen is well expelled mechanically from the gel mixture. We also report significant pre-irradiation instability in some recently developed polymer gel dosimeters, although geometric distortions were not observed. Data obtained with a set of small calibration vials are compared to those obtained from larger phantoms, and potential physicochemical causes of deviations between them are identified.

6109

Echocardiography (echo) is a widely available method to obtain images of the heart; however, echo can suffer due to the presence of artefacts, high noise and a restricted field of view. One method to overcome these limitations is to use multiple images, using the 'best' parts from each image to produce a higher quality 'compounded' image. This paper describes our compounding algorithm which specifically aims to reduce the effect of echo artefacts as well as improving the signal-to-noise ratio, contrast and extending the field of view. Our method weights image information based on a local feature coherence/consistency between all the overlapping images. Validation has been carried out using phantom, volunteer and patient datasets consisting of up to ten multi-view 3D images. Multiple sets of phantom images were acquired, some directly from the phantom surface, and others by imaging through hard and soft tissue mimicking material to degrade the image quality. Our compounding method is compared to the original, uncompounded echocardiography images, and to two basic statistical compounding methods (mean and maximum). Results show that our method is able to take a set of ten images, degraded by soft and hard tissue artefacts, and produce a compounded image of equivalent quality to images acquired directly from the phantom. Our method on phantom, volunteer and patient data achieves almost the same signal-to-noise improvement as the mean method, while simultaneously almost achieving the same contrast improvement as the maximum method. We show a statistically significant improvement in image quality by using an increased number of images (ten compared to five), and visual inspection studies by three clinicians showed very strong preference for our compounded volumes in terms of overall high image quality, large field of view, high endocardial border definition and low cavity noise.
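The contrast between weighted, mean and maximum compounding can be sketched as below; the simple consistency weight is only a stand-in for the local feature coherence measure used in the paper.

```python
import numpy as np

def coherence_weights(images):
    """Crude stand-in for a local consistency measure: down-weight voxels that
    deviate strongly from the across-image median."""
    median = np.median(images, axis=0)
    return 1.0 / (1.0 + np.abs(images - median))

def weighted_compound(images, weights):
    w = np.clip(weights, 1e-6, None)
    return (w * images).sum(axis=0) / w.sum(axis=0)

def mean_compound(images):   # reference method 1
    return images.mean(axis=0)

def max_compound(images):    # reference method 2
    return images.max(axis=0)
```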

6129

In this paper, it is demonstrated that the effects of acoustic attenuation may play a significant role in determining the quality of tomographic optoacoustic reconstructions. Specifically, spatially dependent reduction of the signal amplitude leads to quantification errors in the reconstructed distribution of the optical absorption coefficient, while signal broadening causes loss of image resolution. Here we propose a correction algorithm to account for attenuation effects, applicable in both the time and frequency domains. It is further investigated which part of the optoacoustic signal spectrum is practically affected by those effects in realistic imaging scenarios. The validity and benefits of the suggested modelling and correction approaches are demonstrated experimentally in phantom measurements.

Notes

N183

Attenuation of photon flux on trajectories between the source and pinhole apertures affects the quantitative accuracy of reconstructed single-photon emission computed tomography (SPECT) images. We propose a Chang-based non-uniform attenuation correction (NUA-CT) for small-animal SPECT/CT with focusing pinhole collimation, and compare its quantitative accuracy with uniform Chang corrections based on (i) body outlines extracted from x-ray CT (UA-CT) and (ii) body contours drawn by hand on the images obtained with three integrated optical cameras (UA-BC). Measurements in phantoms and rats containing known activities of isotopes were conducted for evaluation. In 125I, 201Tl, 99mTc and 111In phantom experiments, average relative errors compared to the gold standards measured in a dose calibrator were reduced to 5.5%, 6.8%, 4.9% and 2.8%, respectively, with NUA-CT. In animal studies, these errors were 2.1%, 3.3%, 2.0% and 2.0%, respectively. Differences in accuracy on average between the results of NUA-CT, UA-CT and UA-BC were less than 2.3% in phantom studies and 3.1% in animal studies, except for 125I (3.6% and 5.1%, respectively). All methods tested provide reasonable attenuation correction and result in high quantitative accuracy. NUA-CT shows superior accuracy except for 125I, where other factors may have more impact on the quantitative accuracy than the selected attenuation correction.
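For orientation, a first-order Chang correction factor for a single voxel can be sketched as follows; it is shown here in its uniform-attenuation form for brevity, whereas the NUA-CT method instead integrates a non-uniform attenuation map derived from the x-ray CT.

```python
import numpy as np

def chang_correction_factor(path_lengths_cm, mu_cm1=0.15):
    """Inverse of the mean transmission from a voxel to the body outline over
    a set of sampled directions; the reconstructed voxel value is multiplied
    by this factor. The mu value and path lengths here are illustrative."""
    transmissions = np.exp(-mu_cm1 * np.asarray(path_lengths_cm))
    return 1.0 / transmissions.mean()
```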

N195

This work investigates and compares two different phase-correction algorithms for Dixon fat–water separation and two different quality maps (QM) for region-growing: the original QM, based on phase gradients, and a QM based on phase uncertainty, proposed in this article. A spoiled dual-gradient-echo sequence was employed at 1.5 T to acquire in-phase and out-of-phase images of joints, parotid glands, abdomen and test objects. All 97 datasets were processed eight times each: with two different phase correction algorithms (original and hierarchical phase correction), with two different QM, and with/without removing linear component of the phase drifts associated with dual-echo acquisitions and bipolar readout gradient waveforms. The linear component of the phase drift along the readout direction was found to reach 4.1° pixel−1, depending on the geometric parameters. Pre-processing to remove linear phase shifts has little impact on outcome. The hierarchic phase-correction algorithm outperformed the original phase-correction algorithm in all applications. The proposed phase-uncertainty QM provides a small performance improvement in clinical images, but can be vulnerable to flow-related phase shifts in bright vessels. Overall the most successful phase-correction technique employed phase-uncertainty QMs and hierarchic algorithms, with pre-processing to correct the linear phase drift associated with dual-echo acquisitions and bipolar readout gradient waveform.