Water-Fat Decomposition by IDEAL-MRI With Phase Estimation: A Method to Determine Chemical Contents In Vivo

Author(s):  
Jing Xu ◽  
Xiaofei Hu ◽  
Haiying Tang ◽  
Richard Kennan ◽  
Karim Azer

High-resolution magnetic resonance imaging (MRI) of humans and animals in vivo is routine and non-invasive, but identifying and quantifying the chemical composition of tissue from the acquired images remains a challenge. MR spectroscopy (MRS) can identify chemical components accurately over a finite volume of tissue, but its temporal and spatial resolutions are limited. Multi-spectral MRI exploits the multiple modes of MR, such as T1, T2 and proton density maps, to classify voxels into different tissue types, but the chemical identity of the tissue remains unknown. Many fat-suppression methods have been developed because the unwanted fat signal often compromises image interpretability in clinical MRI, but these techniques are sensitive to MR field inhomogeneity. Multi-point Dixon methods separate MR images into water and fat images and are less sensitive to field inhomogeneity [1], and IDEAL-MRI (iterative decomposition of water and fat with echo asymmetry and least-squares estimation) improved upon the Dixon methods by avoiding the problem of phase unwrapping [2]. However, special care must be taken when estimating the field map to avoid erroneous solutions to the least-squares estimation problem, which lead to artifacts such as swapping of water and fat. Region-growing schemes (with a reliable seed) mitigate this problem, as demonstrated in previous studies [3][4]; however, the seed is not always reliable, and growing schemes can be sensitive to phase discontinuities. Moreover, although the technology has been successfully demonstrated on many clinical scanners, it has found only limited application on preclinical scanners with high MR fields, where the field inhomogeneity can be far worse [5]. We developed a robust and accurate algorithm to compute water and fat content on an 11.7 T small-animal scanner by improving upon existing phase estimation methods through multiple starting pixels and consensus-based region growing.
The method, after further validation, has the potential of providing a translatable assay to study disease progression and regression related to fat and water contents in various animal models, such as studying atherosclerotic plaque composition.
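Once a field-map estimate is in hand, the per-voxel core of an IDEAL-style decomposition is a small linear least-squares problem: demodulate the field-map phase, then solve for the complex water and fat amplitudes. A minimal single-fat-peak sketch in NumPy, where the function name, signature, and the chemical-shift offset `delta_f` are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ideal_wf(signal, echo_times, psi, delta_f=-1700.0):
    """Solve for water/fat amplitudes given a field-map estimate psi (Hz).

    Signal model per voxel at echo time t_n:
        s_n = (W + F * exp(2j*pi*delta_f*t_n)) * exp(2j*pi*psi*t_n)
    delta_f is the water-fat chemical-shift offset in Hz (illustrative value).
    """
    t = np.asarray(echo_times, float)
    # remove the field-map phase, leaving a linear model in (W, F)
    demod = np.asarray(signal) * np.exp(-2j * np.pi * psi * t)
    A = np.stack([np.ones_like(t), np.exp(2j * np.pi * delta_f * t)], axis=1)
    x, *_ = np.linalg.lstsq(A, demod, rcond=None)
    return x  # complex amplitudes [W, F]
```

With a perfect field map the decomposition is exact; the difficulty the abstract describes lies entirely in obtaining psi without water-fat swaps.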

2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Kathryn E. Keenan ◽  
Ben P. Berman ◽  
Slávka Rýger ◽  
Stephen E. Russek ◽  
Wen-Tung Wang ◽  
...  

Quantitative Susceptibility Mapping (QSM) is an MRI tool with the potential to reveal pathological changes from magnetic susceptibility measurements. Before phase data can be used to recover susceptibility (Δχ), the QSM process begins with two steps: data acquisition and phase estimation. We assess the performance of these steps, when applied without user intervention, on several variations of a phantom imaging task. We used a rotating-tube phantom with five tubes ranging from Δχ = 0.05 ppm to Δχ = 0.336 ppm. MRI data were acquired at nine angles of rotation for four different pulse sequences. The images were processed by 10 phase estimation algorithms including Laplacian, region-growing, branch-cut, temporal unwrapping, and maximum-likelihood methods, resulting in approximately 90 different combinations of data acquisition and phase estimation methods. We analyzed errors between measured and expected phases using the probability mass function and cumulative distribution function. Repeatable acquisition and estimation methods were identified based on the probability of relative phase errors. For single-echo GRE and segmented EPI sequences, a region-growing method was most reliable, with Pr(relative error < 0.1) = 0.95 and 0.90, respectively. For multi-echo sequences, a maximum-likelihood method was most reliable, with Pr(relative error < 0.1) = 0.97. The most repeatable multi-echo methods outperformed the most repeatable single-echo methods. We found a wide range of repeatability and reproducibility for off-the-shelf MRI acquisition and phase estimation approaches, and this variability may prevent the techniques from being widely integrated in clinical workflows. The error was dominated in many cases by spatially discontinuous phase unwrapping errors. Any postprocessing applied to erroneous phase estimates, such as QSM's background field removal and dipole inversion, would suffer from error propagation.
Our paradigm identifies methods that yield consistent and accurate phase estimates that would ultimately yield consistent and accurate Δ χ estimates.
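The repeatability metric used here, the probability that the relative phase error stays below a threshold, is simply the empirical cumulative distribution of the relative error evaluated at that threshold. A minimal sketch (the function name and flat array layout are assumptions, not the authors' code):

```python
import numpy as np

def prob_relative_error(measured, expected, threshold=0.1):
    """Empirical Pr(|measured - expected| / |expected| < threshold)
    over all voxels, i.e. the relative-error CDF at the threshold."""
    measured = np.asarray(measured, float)
    expected = np.asarray(expected, float)
    rel_err = np.abs(measured - expected) / np.abs(expected)
    return float(np.mean(rel_err < threshold))
```

For example, nine voxels at 5% error and one at 50% give Pr(relative error < 0.1) = 0.9, matching the way the per-sequence reliabilities above are reported.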


2021 ◽  
Author(s):  
Kathryn E Keenan ◽  
Benjamin Paul Berman ◽  
Slavka Carnicka ◽  
Stephen E Russek ◽  
Wen-Tung Wang ◽  
...  

Purpose: Quantitative Susceptibility Mapping (QSM) is an MRI tool with the potential to reveal pathological changes from magnetic susceptibility measurements. Before phase data can be used to recover susceptibility (Δχ), the QSM process begins with two steps: data acquisition and phase estimation. We assess the performance of these steps, when applied without user intervention, on several variations of a phantom imaging task. Approach: We used a rotating-tube phantom with five tubes ranging from Δχ=0.05 ppm to Δχ=0.336 ppm. MRI data were acquired at nine angles of rotation for four different pulse sequences. The images were processed by 10 phase estimation algorithms including Laplacian, region-growing, branch-cut, temporal unwrapping, and maximum-likelihood methods. We analyzed errors between measured and expected phase using the probability mass function and cumulative distribution function. Results: Repeatable acquisition and estimation methods were identified based on the probability of relative phase errors. For single-echo GRE and segmented EPI sequences, a region-growing method was most reliable, with Pr(relative error < 0.1) = 0.95 and 0.90, respectively. For multi-echo sequences, a maximum-likelihood method was most reliable, with Pr(relative error < 0.1) = 0.97. The most repeatable multi-echo methods outperformed the most repeatable single-echo methods. Conclusions: We found a wide range of repeatability and reproducibility for off-the-shelf MRI acquisition and phase estimation approaches. The error was dominated in many cases by spatially discontinuous phase unwrapping errors. Any post-processing applied to erroneous phase estimates, such as QSM's background field removal and dipole inversion, would suffer from error propagation. Our paradigm identifies methods that yield consistent and accurate phase estimates that would ultimately yield consistent and accurate Δχ estimates.
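Of the phase-estimation families named above, temporal unwrapping is the simplest to sketch for multi-echo data: unwrap the phase along the echo axis, then fit a line in echo time per voxel to get the field map. A hedged illustration assuming alias-free echo spacing; the function name and layout are hypothetical, not from the paper:

```python
import numpy as np

def fit_field_from_echoes(phase, echo_times):
    """Temporal unwrapping followed by a per-voxel linear phase fit.

    phase: wrapped phase in radians, shape (n_echoes, ...) with echoes
    on axis 0. Returns the frequency map in Hz (fitted slope / 2*pi).
    Assumes the true phase advance between echoes stays below pi.
    """
    p = np.unwrap(np.asarray(phase, float), axis=0)  # unwrap along echoes
    t = np.asarray(echo_times, float)
    t0 = t - t.mean()
    # vectorized least-squares slope of phase vs. echo time per voxel
    slope = np.tensordot(t0, p - p.mean(axis=0), axes=(0, 0)) / np.dot(t0, t0)
    return slope / (2 * np.pi)
```

The spatially discontinuous errors the abstract highlights arise precisely when the per-echo phase advance exceeds the alias-free assumption and the unwrap picks the wrong branch.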


2019 ◽  
Author(s):  
Szabolcs David ◽  
Hamed Y. Mesri ◽  
Max A. Viergever ◽  
Alexander Leemans

Diffusion magnetic resonance imaging (dMRI) is one of the most prevalent methods to investigate the micro- and macrostructure of the human brain in vivo. Prior to any group analysis, dMRI data are generally processed to alleviate adverse effects of known artefacts such as signal drift, data noise and outliers, subject motion, and geometric distortions. These dMRI data processing steps are often combined in automated pipelines, such as the one of the Human Connectome Project (HCP). While improving the performance of processing tools has clearly shown its benefits at each individual step along the pipeline, it remains unclear whether – and to what degree – choices for specific user-defined parameter settings can affect the final outcome of group analyses. In this work, we demonstrate how making such a choice for a particular processing step of the pipeline drives the final outcome of a group study. More specifically, we performed a dMRI group analysis on gender using HCP data sets and compared the results obtained with two diffusion tensor imaging estimation methods: the widely used ordinary linear least squares (OLLS) and the more reliable iterative weighted linear least squares (IWLLS). Our results show that the effect sizes for group analyses are significantly smaller with IWLLS than with OLLS. While previous literature has demonstrated higher estimation reliability with IWLLS than with OLLS using simulations, this work now also shows how OLLS can produce a larger number of false positives than IWLLS in a typical group study. We therefore highly recommend using the IWLLS method. By raising awareness of how the choice of estimator can artificially inflate effect size and thus alter the final outcome, this work may contribute to improvement of the reliability and validity of dMRI group studies.
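The OLLS-versus-IWLLS distinction comes from the log transform of the diffusion signal model: taking logarithms makes the noise heteroscedastic, so unweighted least squares over-trusts low-signal measurements, while IWLLS reweights by the squared predicted signal. A simplified single-coefficient sketch (a mono-exponential decay rather than a full tensor fit; the function names are illustrative):

```python
import numpy as np

def fit_lls(b, s, weights=None):
    """Linear LS fit of the log signal: ln S = ln S0 - b * ADC."""
    A = np.stack([np.ones_like(b), -b], axis=1)
    y = np.log(s)
    w = np.ones_like(b) if weights is None else weights
    Aw = A * w[:, None]                       # row-weighted design matrix
    coef = np.linalg.solve(A.T @ Aw, A.T @ (w * y))
    return coef                               # [ln S0, ADC]

def fit_iwlls(b, s, n_iter=5):
    """Iteratively reweighted LLS: weights = predicted signal squared,
    which approximates the correct variance of the log-transformed data."""
    coef = fit_lls(b, s)                      # OLLS starting point
    for _ in range(n_iter):
        pred = np.exp(coef[0] - b * coef[1])
        coef = fit_lls(b, s, weights=pred ** 2)
    return coef
```

On noiseless data both estimators agree; the group-level differences reported above appear once noise enters, because OLLS lets the noisiest (lowest-signal) points dominate the fit.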


2016 ◽  
Vol 54 (10) ◽  
pp. 5671-5687 ◽  
Author(s):  
Sami Samiei-Esfahany ◽  
Joana Esteves Martins ◽  
Freek van Leijen ◽  
Ramon F. Hanssen

2020 ◽  
Vol 92 (7) ◽  
pp. 993-1000 ◽  
Author(s):  
Houzhe Zhang ◽  
Defeng Gu ◽  
Xiaojun Duan ◽  
Kai Shao ◽  
Chunbo Wei

Purpose The purpose of this paper is to focus on the performance of three typical nonlinear least-squares estimation algorithms in atmospheric density model calibration. Design/methodology/approach The error of the Jacchia-Roberts atmospheric density model is expressed as an objective function of temperature parameters, so the estimation of parameter corrections is a typical nonlinear least-squares problem. Three algorithms for nonlinear least-squares problems, Gauss–Newton (G-N), damped Gauss–Newton (damped G-N) and Levenberg–Marquardt (L-M), are adopted to estimate temperature parameter corrections of Jacchia-Roberts for model calibration. Findings The results show that the G-N algorithm does not converge at some sampling points; the main reason is the nonlinear relationship between Jacchia-Roberts and its temperature parameters. The damped G-N and L-M algorithms converge at all sampling points. The G-N, damped G-N and L-M algorithms reduce the root mean square error of Jacchia-Roberts from 20.4% to 9.3%, 9.4% and 9.4%, respectively. The average numbers of iterations of the G-N, damped G-N and L-M algorithms are 3.0, 2.8 and 2.9, respectively. Practical implications This study is expected to provide guidance for the selection of nonlinear least-squares estimation methods in atmospheric density model calibration. Originality/value The study analyses the performance of three typical nonlinear least-squares estimation methods in the calibration of an atmospheric density model. The non-convergence of the G-N algorithm is discovered and explained. The damped G-N and L-M algorithms are more suitable than G-N for the nonlinear least-squares problems in model calibration, and they also require slightly fewer iterations.
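The relationship between the three solvers fits in one line: Gauss–Newton solves JᵀJ Δx = -Jᵀr, while damped G-N and L-M add a damping term that keeps the normal matrix invertible and the step conservative when the local linear model is poor, which is exactly why they survive the sampling points where plain G-N diverges. A generic sketch on a toy exponential model (not the Jacchia-Roberts calibration; all names are illustrative):

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-3, n_iter=100):
    """Minimal L-M: solve (J^T J + lam*I) dx = -J^T r at each step,
    shrinking the damping lam when a step reduces the cost and growing
    it when the step is rejected. lam = 0 recovers the pure G-N step."""
    x = np.asarray(x0, float)
    cost = np.sum(residual(x) ** 2)
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ r)
        new_cost = np.sum(residual(x + step) ** 2)
        if new_cost < cost:                 # accept: trust the model more
            x, cost, lam = x + step, new_cost, lam * 0.3
        else:                               # reject: damp harder
            lam *= 10.0
    return x
```

Fitting y = a·exp(b·x) from a deliberately poor starting point shows the behaviour the abstract reports: the early full G-N steps overshoot and are rejected, the damping absorbs them, and the iteration still converges.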


1972 ◽  
Vol 28 (03) ◽  
pp. 447-456 ◽  
Author(s):  
E. A Murphy ◽  
M. E Francis ◽  
J. F Mustard

The characteristics of experimental error in measurement of platelet radioactivity have been explored by blind replicate determinations on specimens taken on several days from each of three Walker hounds. Analysis suggests that it is not unreasonable to suppose that the error for each sample is normally distributed; and while there is evidence that the variance is heterogeneous, no systematic relationship has been discovered between the mean and the standard deviation of the determinations on individual samples. Thus, since it would be impracticable for investigators to do replicate determinations as a routine, no improvement over simple unweighted least-squares estimation on untransformed data suggests itself.


Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution is introduced as a lifetime model with good statistical properties. In this paper, estimation of the probability density function and the cumulative distribution function of this distribution is considered using five different estimation methods: the uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS) and percentile (PC) estimators. The performance of these estimation procedures is compared on the basis of the mean squared error (MSE) through numerical simulations. The simulation studies show that the UMVU estimator performs better than the others, and that when the sample size is large enough the ML and UMVU estimators are almost equivalent and more efficient than the LS, WLS and PC estimators. Finally, the results for a real data set are analyzed.
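The simulation design, ranking estimators by Monte-Carlo MSE, can be sketched for a simpler case. Here the scale of a plain exponential distribution (not the generalized inverted exponential of the paper) is estimated by ML (the sample mean) and by a percentile-type estimator (the sample median rescaled by ln 2); all names and parameter choices are illustrative:

```python
import numpy as np

def mse_compare(theta=2.0, n=30, n_sim=20000, seed=0):
    """Monte-Carlo MSE of two estimators of an exponential scale theta:
    ML (the sample mean) vs. a percentile estimator (median / ln 2)."""
    rng = np.random.default_rng(seed)
    x = rng.exponential(theta, size=(n_sim, n))   # n_sim replicated samples
    ml = x.mean(axis=1)                           # ML estimator per sample
    pc = np.median(x, axis=1) / np.log(2)         # percentile estimator
    return {"ML": float(np.mean((ml - theta) ** 2)),
            "PC": float(np.mean((pc - theta) ** 2))}
```

The ML estimator's MSE lands near its theoretical value theta²/n, while the percentile estimator is markedly less efficient, mirroring the ordering the simulations above report for the LS, WLS and PC families.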

