Sparse Representation of Deformable 3D Organs with Spherical Harmonics and Structured Dictionary

2011 ◽  
Vol 2011 ◽  
pp. 1-17
Author(s):  
Dan Wang ◽  
Ahmed H. Tewfik ◽  
Yingchun Zhang ◽  
Yunhe Shen

This paper proposes a novel algorithm for the sparse representation of a deformable surface (SRDS) with low dimensionality, based on spherical harmonic decomposition (SHD) and orthogonal subspace pursuit (OSP). The key idea of the SRDS method is to identify subspaces from a training data set in the transformed spherical harmonic domain and then cluster each deformation into the best-fit subspace for fast and accurate representation. The algorithm also generalizes to organs with both interior and exterior surfaces. To test feasibility, we first use computer models to demonstrate that the proposed approach matches the accuracy of complex mathematical modeling techniques; both ex vivo and in vivo experiments are then conducted using 3D magnetic resonance imaging (MRI) scans for verification in practical settings. All results demonstrate that the proposed algorithm yields a sparse representation of deformable surfaces with low dimensionality and high accuracy. Specifically, the precision, evaluated as the maximum error distance between the reconstructed surface and the MRI ground truth, is better than 3 mm in real MRI experiments.
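The best-fit subspace selection at the heart of SRDS can be sketched as follows; the subspace bases, dimensions, and all names are illustrative assumptions, not the paper's implementation (which additionally uses orthogonal subspace pursuit to build the dictionary):

```python
import numpy as np

def best_fit_subspace(coeffs, bases):
    """Assign a spherical-harmonic coefficient vector to the subspace whose
    orthogonal projection explains it best; return the subspace index and
    the low-dimensional code (the sparse representation)."""
    best_idx, best_err, best_code = -1, np.inf, None
    for i, U in enumerate(bases):          # U: (n_coeffs, k), orthonormal columns
        code = U.T @ coeffs                # project onto the k-dim subspace
        err = np.linalg.norm(coeffs - U @ code)   # projection residual
        if err < best_err:
            best_idx, best_err, best_code = i, err, code
    return best_idx, best_code

# Toy example: two 1-D subspaces in R^3; the vector lies close to the second.
U1 = np.array([[1.0], [0.0], [0.0]])
U2 = np.array([[0.0], [1.0], [0.0]])
idx, code = best_fit_subspace(np.array([0.1, 2.0, 0.0]), [U1, U2])
```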

2020 ◽  
Vol 499 (4) ◽  
pp. 5641-5652
Author(s):  
Georgios Vernardos ◽  
Grigorios Tsagkatakis ◽  
Yannis Pantazis

Abstract Gravitational lensing is a powerful tool for constraining substructure in the mass distribution of galaxies, be it from the presence of dark matter sub-haloes or due to physical mechanisms affecting the baryons throughout galaxy evolution. Such substructure is hard to model and is either ignored by traditional smooth-modelling approaches or treated as well-localized massive perturbers. In this work, we propose a deep learning approach to quantify the statistical properties of such perturbations directly from images, where only the extended lensed source features within a mask are considered, without the need for any lens modelling. Our training data consist of mock lensed images assuming perturbing Gaussian random fields permeating the smooth overall lens potential and, for the first time, using images of real galaxies as the lensed source. We employ a novel deep neural network that accepts arbitrary uncertainty intervals associated with the training data set labels as input, provides probability distributions as output, and adopts a composite loss function. The method succeeds not only in accurately estimating the actual parameter values, but also reduces the predicted confidence intervals by 10 per cent in an unsupervised manner, i.e. without having access to the actual ground truth values. Our results are invariant to the inherent degeneracy between mass perturbations in the lens and complex brightness profiles for the source. Hence, we can quantitatively and robustly quantify the smoothness of the mass density of thousands of lenses, including confidence intervals, and provide a consistent ranking for follow-up science.
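The abstract does not spell out the composite loss; as a purely hypothetical illustration, one common way to make a network output probability distributions while respecting label uncertainty intervals is a Gaussian negative log-likelihood whose effective variance combines the predicted spread with the label uncertainty:

```python
import numpy as np

def gaussian_nll(mu_pred, sigma_pred, y, sigma_label):
    """Mean negative log-likelihood of labels y under predicted Gaussians
    N(mu_pred, sigma_pred^2), with the labels' own uncertainty folded into
    the effective variance (a generic construction, not the paper's loss)."""
    var = sigma_pred**2 + sigma_label**2
    return float(np.mean(0.5 * np.log(2 * np.pi * var)
                         + (y - mu_pred)**2 / (2 * var)))
```

With this form, a miss on an uncertain label is penalized less than the same miss on a precise label, which is one way such a loss can shrink predicted intervals without ground truth.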


2020 ◽  
Vol 58 (8) ◽  
pp. 1667-1679
Author(s):  
Benedikt Franke ◽  
J. Weese ◽  
I. Waechter-Stehle ◽  
J. Brüning ◽  
T. Kuehne ◽  
...  

Abstract The transvalvular pressure gradient (TPG) is commonly estimated using the Bernoulli equation. However, the method is known to be inaccurate. Therefore, an adjusted Bernoulli model for accurate TPG assessment was developed and evaluated. Numerical simulations were used to calculate TPGCFD in patient-specific geometries of aortic stenosis as ground truth. Geometries, aortic valve areas (AVA), and flow rates were derived from computed tomography scans. Simulations were divided into a training data set (135 cases) and a test data set (36 cases). The training data were used to fit an adjusted Bernoulli model as a function of AVA and flow rate. The model-predicted TPGModel was evaluated using the test data set and also compared against the common Bernoulli equation (TPGB). TPGB and TPGModel both correlated well with TPGCFD (r > 0.94), but significantly overestimated it. The average difference between TPGModel and TPGCFD was much lower than that between TPGB and TPGCFD: 3.3 mmHg vs. 17.3 mmHg. The standard error of estimate was also lower for the adjusted model: SEEModel = 5.3 mmHg vs. SEEB = 22.3 mmHg. The adjusted model's performance was more accurate than that of the conventional Bernoulli equation. The model might help to improve non-invasive assessment of TPG.
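For context, the simplified Bernoulli estimate follows from the peak jet velocity v = Q/AVA, giving TPG ≈ 4v² (mmHg for v in m/s). A minimal sketch, assuming this standard form plus a hypothetical one-parameter least-squares correction against reference gradients; the paper's actual adjusted model as a function of AVA and flow rate is not specified in the abstract:

```python
import numpy as np

def bernoulli_tpg(flow_rate, ava):
    """Simplified Bernoulli gradient: peak jet velocity v = Q / AVA (m/s),
    TPG ~ 4 v^2 (mmHg). flow_rate in m^3/s, ava in m^2."""
    v = flow_rate / ava
    return 4.0 * v**2

def fit_correction(tpg_b_train, tpg_ref_train):
    """Least-squares scale factor k so that k * TPG_B best matches the
    reference (e.g. CFD) gradients -- a stand-in for the adjusted model."""
    return float(tpg_b_train @ tpg_ref_train / (tpg_b_train @ tpg_b_train))
```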


2020 ◽  
Author(s):  
Stefano Mandija ◽  
Petar I. Petrov ◽  
Jord J. T. Vink ◽  
Sebastian F. W. Neggers ◽  
Cornelis A. T. van den Berg

Abstract The first in vivo brain conductivity reconstructions using Helmholtz MR-electrical properties tomography (MR-EPT) have been published. However, a large variation in the reconstructed conductivity values has been reported, and these values differ from ex vivo conductivity measurements. Given this lack of agreement, we performed an in vivo study on eight healthy subjects to provide reference in vivo brain conductivity values. MR-EPT reconstructions were performed at 3 T for eight healthy subjects. Mean conductivity and standard deviation values in the white matter, gray matter, and cerebrospinal fluid (σWM, σGM, and σCSF) were computed for each subject before and after erosion of regions at tissue boundaries, which are affected by typical MR-EPT reconstruction errors. The obtained values were compared to reported ex vivo literature values. To benchmark the accuracy of in vivo conductivity reconstructions, the same pipeline was applied to simulated data, for which the ground truth conductivity is known. Provided sufficient boundary erosion, the in vivo σWM and σGM values obtained in this study agree for the first time with literature values measured ex vivo. This could not be verified for the CSF due to its limited spatial extension. Conductivity reconstructions from simulated data verified conductivity reconstructions from in vivo data and demonstrated the importance of discarding voxels at tissue boundaries. The presented σWM and σGM values can therefore be used for comparison in future studies employing different MR-EPT techniques.
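A minimal sketch of the phase-based Helmholtz MR-EPT approximation σ ≈ ∇²φ / (2μ₀ω) on a synthetic uniform phantom, including the boundary-erosion step the study emphasizes; full MR-EPT operates on the measured complex B1+ field, and the phantom, grid, and names here are illustrative:

```python
import numpy as np

MU0 = 4e-7 * np.pi           # vacuum permeability (H/m)
OMEGA = 2 * np.pi * 128e6    # Larmor angular frequency at 3 T (~128 MHz)

def phase_based_ept(phase, h):
    """Phase-based Helmholtz MR-EPT: sigma ~ laplacian(phase) / (2 mu0 omega),
    with a finite-difference Laplacian on grid spacing h."""
    lap = (np.roll(phase, 1, 0) + np.roll(phase, -1, 0) +
           np.roll(phase, 1, 1) + np.roll(phase, -1, 1) - 4 * phase) / h**2
    return lap / (2 * MU0 * OMEGA)

# Synthetic uniform phantom (0.5 S/m): a quadratic phase whose Laplacian
# equals 2 mu0 omega sigma, so the interior should reconstruct exactly.
sigma_true, h = 0.5, 1e-3
x = np.arange(32) * h
X, Y = np.meshgrid(x, x, indexing="ij")
phase = (MU0 * OMEGA * sigma_true / 2) * (X**2 + Y**2)
# Erode boundary voxels, where the wrap-around Laplacian is invalid --
# mirroring the boundary-erosion step discussed in the abstract.
sigma_rec = phase_based_ept(phase, h)[4:-4, 4:-4]
```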


2019 ◽  
Vol 38 (11) ◽  
pp. 872a1-872a9 ◽  
Author(s):  
Mauricio Araya-Polo ◽  
Stuart Farris ◽  
Manuel Florez

Exploration seismic data are heavily manipulated before human interpreters are able to extract meaningful information regarding subsurface structures. This manipulation adds modeling and human biases and is limited by methodological shortcomings. Alternatively, using seismic data directly is becoming possible thanks to deep learning (DL) techniques. A DL-based workflow is introduced that uses analog velocity models and realistic raw seismic waveforms as input and produces subsurface velocity models as output. When insufficient data are used for training, DL algorithms tend to overfit or fail. Gathering large amounts of labeled and standardized seismic data sets is not straightforward. This shortage of quality data is addressed by building a generative adversarial network (GAN) to augment the original training data set, which is then used by DL-driven seismic tomography as input. The DL tomographic operator predicts velocity models with high statistical and structural accuracy after being trained with GAN-generated velocity models. Beyond the field of exploration geophysics, the use of machine learning in earth science is challenged by the lack of labeled data or properly interpreted ground truth, since we seldom know what truly exists beneath the earth's surface. The unsupervised approach (using GANs to generate labeled data) illustrates a way to mitigate this problem and opens geology, geophysics, and planetary sciences to more DL applications.
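The augmentation step can be sketched as follows, with a stand-in function playing the role of the trained GAN generator; the actual GAN architecture and velocity-model parameterization are not specified in the abstract, so every name and value here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, n_features = 8, 64

# Stand-in for a trained GAN generator: maps latent vectors to synthetic
# velocity models in a plausible range (a fixed random map, for illustration).
W = rng.standard_normal((latent_dim, n_features))
def generator(z):
    return 3.0 + 0.5 * np.tanh(z @ W)   # km/s, bounded in (2.5, 3.5)

real_models = rng.uniform(1.5, 4.5, size=(100, n_features))    # mock labeled set
synthetic = generator(rng.standard_normal((300, latent_dim)))  # GAN-style samples
augmented = np.concatenate([real_models, synthetic])           # enlarged training set
```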


2019 ◽  
Vol 7 (3) ◽  
pp. SE113-SE122 ◽  
Author(s):  
Yunzhi Shi ◽  
Xinming Wu ◽  
Sergey Fomel

Salt boundary interpretation is important for the understanding of salt tectonics and for velocity model building for seismic migration. Conventional methods consist of computing salt attributes and extracting salt boundaries. We have formulated the problem as 3D image segmentation and evaluated an efficient approach based on deep convolutional neural networks (CNNs) with an encoder-decoder architecture. To train the model, we design a data generator that extracts randomly positioned subvolumes from a large-scale 3D training data set, applies data augmentation, and then feeds a large number of subvolumes into the network, using salt/nonsalt binary labels generated by thresholding the velocity model as ground truth labels. We test the model on validation data sets and compare the blind-test predictions with the ground truth. Our results indicate that our method is capable of automatically capturing subtle salt features from the 3D seismic image with little or no need for manual input. We further test the model on a field example to demonstrate the generalization of this deep CNN method across different data sets.
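The described data generator can be sketched as follows; the patch size, threshold value, and names are illustrative assumptions:

```python
import numpy as np

def sample_patches(velocity, patch, threshold, n, rng):
    """Extract n randomly positioned subvolumes from a 3D velocity volume,
    with binary salt/non-salt labels made by thresholding the velocity."""
    X, Y = [], []
    for _ in range(n):
        i, j, k = (int(rng.integers(0, s - patch + 1)) for s in velocity.shape)
        sub = velocity[i:i+patch, j:j+patch, k:k+patch]
        X.append(sub)
        Y.append((sub > threshold).astype(np.float32))   # 1 = salt
    return np.stack(X), np.stack(Y)

rng = np.random.default_rng(0)
vel = rng.uniform(2.0, 5.0, size=(64, 64, 64))   # mock velocity model (km/s)
X, Y = sample_patches(vel, patch=16, threshold=4.4, n=8, rng=rng)
```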


2014 ◽  
Vol 58 (6) ◽  
pp. 3306-3311 ◽  
Author(s):  
Tong Zhu ◽  
Sven O. Friedrich ◽  
Andreas Diacon ◽  
Robert S. Wallis

Abstract Sutezolid (PNU-100480 [U-480]) is an oxazolidinone antimicrobial being developed for the treatment of tuberculosis. An active sulfoxide metabolite (PNU-101603 [U-603]), which reaches concentrations in plasma several times those of the parent, has been reported to drive the killing of extracellular Mycobacterium tuberculosis by sutezolid in hollow-fiber culture. However, the relative contributions of the parent and metabolite against intracellular M. tuberculosis in vivo are not fully understood. The relationships between the plasma concentrations of U-480 and U-603 and intracellular whole-blood bactericidal activity (WBA) in ex vivo cultures were examined using a direct competitive population pharmacokinetic (PK)/pharmacodynamic 4-parameter sigmoid model. The data set included 690 PK determinations and 345 WBA determinations from 50 tuberculosis patients enrolled in a phase 2a sutezolid trial. The model parameters were solved iteratively. The median U-603/U-480 concentration ratio was 7.1 (range, 1 to 28). The apparent 50% inhibitory concentration of U-603 for intracellular M. tuberculosis was 17-fold greater than that of U-480 (90% confidence interval [CI], 9.9- to 53-fold). Model parameters were used to simulate in vivo activity after oral dosing with sutezolid at 600 mg twice a day (BID) and 1,200 mg once a day (QD). Divided dosing resulted in greater cumulative activity (−0.269 log10 per day; 90% CI, −0.237 to −0.293 log10 per day) than single daily dosing (−0.186 log10 per day; 90% CI, −0.160 to −0.208 log10 per day). U-480 accounted for 84% and 78% of the activity for BID and QD dosing, respectively, despite the higher concentrations of U-603. Killing of intracellular M. tuberculosis by orally administered sutezolid is mainly due to the activity of the parent compound. Taken together with the findings of other studies in the hollow-fiber model, these findings suggest that sutezolid and its metabolite act on different mycobacterial subpopulations.
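The general form behind such a fit is the four-parameter sigmoid (Hill/Emax) concentration-effect model; a minimal sketch with assumed parameter names (the study's direct competitive two-compound parameterization is more involved):

```python
import numpy as np

def sigmoid_emax(conc, e0, emax, ec50, hill):
    """Four-parameter sigmoid (Hill/Emax) concentration-effect model:
    effect rises from e0 toward emax, reaching the midpoint at conc = ec50."""
    c = np.asarray(conc, dtype=float) ** hill
    return e0 + (emax - e0) * c / (c + ec50**hill)
```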


2020 ◽  
Vol 2020 (6) ◽  
pp. 71-1-71-7
Author(s):  
Christian Kapeller ◽  
Doris Antensteiner ◽  
Svorad Štolc

Industrial machine vision applications frequently employ photometric stereo (PS) methods to detect fine surface defects on objects with challenging surface properties. To achieve highly precise results, acquisition setups with a vast number of strobed illumination angles are required. The time-consuming nature of such an undertaking renders it inapt for most industrial applications. We overcome these limitations by carefully tailoring the required light setup to specific applications. Our novel approach facilitates the design of optimized acquisition setups for inline PS inspection systems. The optimal positions of the light sources are derived from only a few representative material samples, without the need for extensive amounts of training data. We formulate an energy function that selects the illumination setup yielding the highest PS accuracy. The setup can be tailored for fast acquisition speed or cost efficiency. A thorough evaluation of the performance of our approach is given on a public data set, using the mean angular error (MAE) for surface normals and the root mean square (RMS) error for albedos. Our results show that the obtained optimized PS setups can deliver a reconstruction performance close to the ground truth while requiring only a few acquisitions.
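For reference, the underlying reconstruction and the MAE metric can be sketched with classic Lambertian photometric stereo, which solves I = L·(albedo·n) per pixel by least squares; the single-pixel synthetic example and names are illustrative, not the paper's pipeline:

```python
import numpy as np

def photometric_stereo(I, L):
    """Classic Lambertian PS: intensities I (m lights x p pixels) and light
    directions L (m x 3); solve L g = I per pixel, with g = albedo * normal."""
    G, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / albedo
    return normals, albedo

def mean_angular_error(n_est, n_true):
    """MAE between estimated and reference surface normals, in degrees."""
    cosang = np.clip(np.sum(n_est * n_true, axis=0), -1.0, 1.0)
    return np.degrees(np.mean(np.arccos(cosang)))

# One-pixel synthetic check: normal toward the camera, albedo 0.8.
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)
n_true = np.array([[0.0], [0.0], [1.0]])
I = 0.8 * np.clip(L @ n_true, 0.0, None)        # Lambertian rendering
normals, albedo = photometric_stereo(I, L)
```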


Circulation ◽  
2014 ◽  
Vol 130 (suppl_2) ◽  
Author(s):  
Geetha Rayarao ◽  
Robert W Biederman ◽  
Diane V Thompson ◽  
Sahadev T Reddy ◽  
June Yamrozik ◽  
...  

Introduction: In cardiac MRI (CMR), heart volumes are traditionally measured using contouring methods applied to contiguous image data. Herein, we introduce a new approach, Automatic Threshold and Manual Trimming (ATMT), applied to the same contiguous data set. Potentially, the ATMT method can be implemented via seed/region-growing algorithms with minimal user supervision. We sought to establish its clinical validity. Hypothesis: We hypothesize that the ATMT approach is more accurate than the conventional 'gold standard' of cardiac contouring. Methods: Hearts from two populations (N=74) were evaluated: explanted heart transplant (Tx) and a clinical validation cohort (in vivo). The transplanted hearts were imaged ex vivo using CMR and then weighed on a high-fidelity scale. Cardiac volume/mass in the explanted cohort (N=54) was compared with the patient cohort (N=20), in which stroke volume was measured non-invasively and independently via the CMR phase-velocity technique. Bland-Altman analysis was applied in a 3-way manner for each group. Results: Bland-Altman analyses of standard deviation (SD), bias, and correlation (R) are summarized in Table 1. When compared with the independent measurements (weight/flow), ATMT has a lower bias (close to zero) and SD. Further, any comparison involving cardiac contours has a substantially larger bias term and a higher SD. From the table below, ATMT has consistently higher correlations with the independent measurement than does the contour method. Conclusions: Based on multiple comparison metrics with independent measures, the ATMT approach is more accurate and reproducible for quantification of cardiac volume (integral to EF determination) than standard contouring. Furthermore, ATMT accommodates trabeculae and papillary structures more intuitively than the contouring method. This intrinsic accuracy, coupled with the potential for more rapid analysis, gives a valid impetus to further develop the ATMT approach, further increasing CMR accuracy.
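The Bland-Altman statistics reported above (bias, SD of the differences, limits of agreement) can be sketched as follows; the function and variable names are illustrative:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement between two measurement methods: bias (mean
    difference), SD of the differences, and 95% limits of agreement."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)
```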


Author(s):  
A. Lemme ◽  
Y. Meirovitch ◽  
M. Khansari-Zadeh ◽  
T. Flash ◽  
A. Billard ◽  
...  

Abstract This paper introduces a benchmark framework to evaluate the performance of reaching-motion generation approaches that learn from demonstrated examples. The system implements ten different performance measures for typical generalization tasks in robotics using open-source MATLAB software. Systematic comparisons are based on a default training data set of human motions, which specifies the respective ground truth. In technical terms, an evaluated motion generation method needs to compute velocities, given a state provided by the simulation system. The framework, however, is agnostic to how the method computes these velocities or how it learns from the provided demonstrations. The framework focuses on robustness, which is tested statistically by sampling from a set of perturbation scenarios. These perturbations interfere with motion generation and challenge its generalization ability. The benchmark thus helps to identify the strengths and weaknesses of competing approaches, while allowing the user to configure the weightings between the different measures.
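The evaluated interface — a method that returns a velocity for a given state, rolled out from perturbed conditions — can be sketched in Python (the framework itself is MATLAB; the toy policy, Euler integration, metric, and names here are all illustrative):

```python
import numpy as np

def evaluate_robustness(policy, start_states, goal, dt=0.01, steps=500):
    """Roll out a velocity-generating policy from each (perturbed) start
    state with Euler integration and record the final distance to the goal."""
    errors = []
    for x0 in start_states:
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            x = x + dt * policy(x)     # policy: state -> velocity
        errors.append(np.linalg.norm(x - goal))
    return np.array(errors)

goal = np.zeros(2)
linear_policy = lambda x: -x           # toy policy: move straight to the goal
errs = evaluate_robustness(linear_policy, [[1.0, 0.5], [-2.0, 1.0]], goal)
```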


2021 ◽  
Author(s):  
Robert Jones ◽  
Chiara Maffei ◽  
Jean Augustinack ◽  
Bruce Fischl ◽  
Hui Wang ◽  
...  

Abstract Compressed sensing (CS) has been used to enhance the feasibility of diffusion spectrum imaging (DSI) by reducing the required acquisition time. CS applied to DSI (CS-DSI) attempts to reconstruct diffusion probability density functions (PDFs) from significantly undersampled q-space data. Dictionary-based CS-DSI using L2-regularized algorithms is an intriguing approach that has demonstrated high-fidelity reconstructions, fast computation times, and inter-subject generalizability when tested on in vivo data. CS-DSI reconstruction fidelity is typically evaluated using the fully sampled data as ground truth. However, it is difficult to gauge how large an error with respect to the fully sampled PDF we can tolerate without knowing whether that error also translates to a substantial loss of accuracy with respect to the true fiber orientations. Here, we obtain direct measurements of axonal orientations in ex vivo human brain tissue at microscopic resolution with polarization-sensitive optical coherence tomography (PSOCT). We apply dictionary-based CS reconstruction methods to DSI data from the same samples, acquired at a high maximum b-value (40,000 s/mm2) and with high spatial resolution. We compare the diffusion orientation estimates from both CS and fully sampled DSI to the ground-truth orientations from PSOCT. This allows us to investigate the conditions under which CS reconstruction preserves the accuracy of diffusion orientation estimates with respect to PSOCT. We find that, for a CS acceleration factor of R=3, CS-DSI preserves the accuracy of the fully sampled DSI data. That acceleration is sufficient to make the acquisition time of DSI comparable to that of state-of-the-art single- or multi-shell acquisitions. We also show that, as the acceleration factor increases further, different CS reconstruction methods degrade in different ways.
Finally, we find that the signal-to-noise ratio (SNR) of the training data used to construct the dictionary can affect the accuracy of CS-DSI, but that there is substantial robustness to loss of SNR in the test data.

