Quantifying the structure of strong gravitational lens potentials with uncertainty-aware deep neural networks

2020, Vol 499(4), pp. 5641–5652
Author(s): Georgios Vernardos, Grigorios Tsagkatakis, Yannis Pantazis

ABSTRACT Gravitational lensing is a powerful tool for constraining substructure in the mass distribution of galaxies, be it from the presence of dark matter sub-haloes or due to physical mechanisms affecting the baryons throughout galaxy evolution. Such substructure is hard to model and is either ignored by traditional smooth-modelling approaches or treated as well-localized massive perturbers. In this work, we propose a deep learning approach that quantifies the statistical properties of such perturbations directly from images, where only the extended lensed source features within a mask are considered, without the need for any lens modelling. Our training data consist of mock lensed images that assume perturbing Gaussian Random Fields permeating the smooth overall lens potential and, for the first time, use images of real galaxies as the lensed source. We employ a novel deep neural network that accepts arbitrary uncertainty intervals associated with the training labels as input, provides probability distributions as output, and adopts a composite loss function. The method not only accurately estimates the actual parameter values, but also reduces the predicted confidence intervals by 10 per cent in an unsupervised manner, i.e. without access to the actual ground truth values. Our results are invariant to the inherent degeneracy between mass perturbations in the lens and complex brightness profiles for the source. Hence, we can robustly quantify the smoothness of the mass density of thousands of lenses, including confidence intervals, and provide a consistent ranking for follow-up science.
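The paper describes the architecture and loss in full; as a rough sketch of the general idea only, the following PyTorch fragment shows a network head that outputs a Gaussian distribution per parameter, with a composite loss pairing a negative log-likelihood against a penalty tied to the label uncertainty interval. All names and the exact penalty form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class UncertaintyAwareHead(nn.Module):
    """Predicts a mean and a log-variance per target parameter, so the
    network output defines a Gaussian probability distribution."""
    def __init__(self, in_features: int, n_params: int):
        super().__init__()
        self.mean = nn.Linear(in_features, n_params)
        self.log_var = nn.Linear(in_features, n_params)

    def forward(self, x):
        return self.mean(x), self.log_var(x)

def composite_loss(mu, log_var, y, y_halfwidth):
    """Gaussian negative log-likelihood plus a hypothetical penalty that
    discourages predicted scatter wider than the label's uncertainty interval."""
    nll = 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()).mean()
    width_penalty = torch.relu(log_var.exp().sqrt() - y_halfwidth).mean()
    return nll + width_penalty
```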

2020, Vol 58(8), pp. 1667–1679
Author(s): Benedikt Franke, J. Weese, I. Waechter-Stehle, J. Brüning, T. Kuehne, ...

Abstract The transvalvular pressure gradient (TPG) is commonly estimated using the Bernoulli equation. However, the method is known to be inaccurate. Therefore, an adjusted Bernoulli model for accurate TPG assessment was developed and evaluated. Numerical simulations were used to calculate TPG_CFD in patient-specific geometries of aortic stenosis as ground truth. Geometries, aortic valve areas (AVA), and flow rates were derived from computed tomography scans. Simulations were divided into a training data set (135 cases) and a test data set (36 cases). The training data were used to fit an adjusted Bernoulli model as a function of AVA and flow rate. The model-predicted TPG_Model was evaluated using the test data set and compared against the common Bernoulli equation (TPG_B). TPG_B and TPG_Model both correlated well with TPG_CFD (r > 0.94), but significantly overestimated it. The average difference between TPG_Model and TPG_CFD was much lower: 3.3 mmHg, vs. 17.3 mmHg between TPG_B and TPG_CFD. The standard error of estimate was also lower for the adjusted model: SEE_Model = 5.3 mmHg vs. SEE_B = 22.3 mmHg. The adjusted model's performance was more accurate than that of the conventional Bernoulli equation. The model might help to improve non-invasive assessment of TPG.
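For context, the simplified clinical Bernoulli relation and one plausible way to fit an adjusted model to the training cases are sketched below. The paper's actual functional form, units, and variable names are not reproduced here, so treat them as assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def bernoulli_tpg(q, ava):
    """Simplified clinical Bernoulli estimate: TPG ~ 4 * v^2 (mmHg),
    with jet velocity v = Q / AVA; q in mL/s, ava in cm^2, v in m/s."""
    v = q / ava * 0.01          # cm/s -> m/s
    return 4.0 * v ** 2

def adjusted_model(x, a, b):
    """Hypothetical adjusted form: a rescaled Bernoulli term plus an offset.
    The paper's actual parametrization in AVA and flow rate may differ."""
    q, ava = x
    return a * bernoulli_tpg(q, ava) + b

# Fit on the 135 training cases, then predict on the 36 test cases:
# popt, _ = curve_fit(adjusted_model, (q_train, ava_train), tpg_cfd_train)
# tpg_model = adjusted_model((q_test, ava_test), *popt)
```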


2019, Vol 38(11), pp. 872a1–872a9
Author(s): Mauricio Araya-Polo, Stuart Farris, Manuel Florez

Exploration seismic data are heavily manipulated before human interpreters are able to extract meaningful information regarding subsurface structures. This manipulation adds modeling and human biases and is limited by methodological shortcomings. Alternatively, using seismic data directly is becoming possible thanks to deep learning (DL) techniques. A DL-based workflow is introduced that uses analog velocity models and realistic raw seismic waveforms as input and produces subsurface velocity models as output. When insufficient data are used for training, DL algorithms tend to overfit or fail. Gathering large amounts of labeled and standardized seismic data sets is not straightforward. This shortage of quality data is addressed by building a generative adversarial network (GAN) to augment the original training data set, which is then used by DL-driven seismic tomography as input. The DL tomographic operator predicts velocity models with high statistical and structural accuracy after being trained with GAN-generated velocity models. Beyond the field of exploration geophysics, the use of machine learning in earth science is challenged by the lack of labeled data or properly interpreted ground truth, since we seldom know what truly exists beneath the earth's surface. The unsupervised approach (using GANs to generate labeled data)illustrates a way to mitigate this problem and opens geology, geophysics, and planetary sciences to more DL applications.
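A minimal sketch of the adversarial augmentation idea follows, assuming a toy fully connected generator/discriminator pair on flattened 64×64 velocity models; the paper's GAN architecture is certainly more elaborate, and all sizes here are assumptions.

```python
import torch
import torch.nn as nn

# Toy GAN for augmenting 2-D velocity models (all sizes are assumptions).
G = nn.Sequential(                      # latent vector -> 64x64 velocity model
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),
)
D = nn.Sequential(                      # velocity model -> real/fake logit
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def gan_step(real_batch):
    """One adversarial update; after training, generated samples serve as
    extra labeled velocity models for the tomography network."""
    n = real_batch.size(0)
    fake = G(torch.randn(n, 100))
    # Discriminator: push real toward 1, fake toward 0
    d_loss = bce(D(real_batch), torch.ones(n, 1)) + \
             bce(D(fake.detach()), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: try to fool the discriminator
    g_loss = bce(D(fake), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```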


2019, Vol 7(3), pp. SE113–SE122
Author(s): Yunzhi Shi, Xinming Wu, Sergey Fomel

Salt boundary interpretation is important for the understanding of salt tectonics and for velocity model building for seismic migration. Conventional methods consist of computing salt attributes and extracting salt boundaries. We have formulated the problem as 3D image segmentation and evaluated an efficient approach based on deep convolutional neural networks (CNNs) with an encoder-decoder architecture. To train the model, we design a data generator that extracts randomly positioned subvolumes from a large-scale 3D training data set, applies data augmentation, and feeds a large number of subvolumes into the network, using salt/non-salt binary labels generated by thresholding the velocity model as ground truth. We test the model on validation data sets and compare the blind test predictions with the ground truth. Our results indicate that our method is capable of automatically capturing subtle salt features from the 3D seismic image with little or no need for manual input. We further test the model on a field example to demonstrate the generalization of this deep CNN method across different data sets.
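The data generator can be pictured with a short sketch: random subvolumes are cut from the survey and paired with binary masks obtained by thresholding the velocity model. The patch size, velocity threshold, and augmentation below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def subvolume_generator(seismic, velocity, patch=64, v_salt=4.45, batch=8, rng=None):
    """Yields random subvolumes from a 3-D survey together with binary salt
    masks from thresholding the velocity model (threshold in km/s, assumed)."""
    rng = rng or np.random.default_rng()
    nz, ny, nx = seismic.shape
    while True:
        imgs, labels = [], []
        for _ in range(batch):
            z = rng.integers(0, nz - patch)
            y = rng.integers(0, ny - patch)
            x = rng.integers(0, nx - patch)
            sub = seismic[z:z+patch, y:y+patch, x:x+patch]
            lab = velocity[z:z+patch, y:y+patch, x:x+patch] > v_salt
            if rng.random() < 0.5:          # simple augmentation: lateral flip
                sub, lab = sub[:, :, ::-1], lab[:, :, ::-1]
            imgs.append(sub)
            labels.append(lab.astype(np.float32))
        yield np.stack(imgs), np.stack(labels)
```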


2011, Vol 2011, pp. 1–17
Author(s): Dan Wang, Ahmed H. Tewfik, Yingchun Zhang, Yunhe Shen

This paper proposes a novel algorithm to sparsely represent a deformable surface (SRDS) with low dimensionality based on spherical harmonic decomposition (SHD) and orthogonal subspace pursuit (OSP). The key idea of the SRDS method is to identify the subspaces from a training data set in the transformed spherical harmonic domain and then cluster each deformation into the best-fit subspace for fast and accurate representation. The algorithm is also generalized to organs with both interior and exterior surfaces. To test its feasibility, we first use computer models to demonstrate that the proposed approach matches the accuracy of complex mathematical modeling techniques; both ex vivo and in vivo experiments are then conducted using 3D magnetic resonance imaging (MRI) scans for verification in practical settings. All results demonstrate that the proposed algorithm yields sparse representations of deformable surfaces with low dimensionality and high accuracy. Specifically, the precision, evaluated as the maximum error distance between the reconstructed surface and the MRI ground truth, is better than 3 mm in real MRI experiments.
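The clustering step can be pictured as follows: given subspace bases identified from training data in the spherical harmonic domain, each new deformation is assigned to the subspace with the smallest projection residual and represented by its low-dimensional coordinates there. This is a schematic sketch, not the paper's OSP implementation.

```python
import numpy as np

def best_fit_subspace(coeffs, bases):
    """Assign a deformation (vector of spherical-harmonic coefficients) to the
    training subspace with the smallest projection residual; each basis B is
    assumed to have orthonormal columns of shape (n_coeffs, dim_k)."""
    best_k, best_err, best_code = None, np.inf, None
    for k, B in enumerate(bases):
        code = B.T @ coeffs                 # low-dimensional representation
        err = np.linalg.norm(coeffs - B @ code)
        if err < best_err:
            best_k, best_err, best_code = k, err, code
    return best_k, best_code
```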


2020, Vol 496(2), pp. 1718–1729
Author(s): Wolfgang Enzi, Simona Vegetti, Giulia Despali, Jen-Wei Hsueh, R. Benton Metcalf

ABSTRACT We present the analysis of a sample of 24 SLACS-like galaxy–galaxy strong gravitational lens systems with a background source and deflectors from the Illustris-1 simulation. We study the degeneracy between the complex mass distribution of the lenses, substructures, the surface brightness distribution of the sources, and the time delays. Using a novel inference framework based on Approximate Bayesian Computation, we find that for all the considered lens systems, an elliptical and cored power-law mass density distribution provides a good fit to the data. However, the presence of cores in the simulated lenses affects most reconstructions in the form of a Source Position Transformation. The latter leads to a systematic underestimation of the source sizes by 50 per cent on average, and a fractional error in $H_0$ of around $25_{-19}^{+37}$ per cent. The analysis of a control sample of 24 lens systems, for which we have perfect knowledge of the shape of the lensing potential, leads to a fractional error on $H_0$ of $12_{-3}^{+6}$ per cent. We find no degeneracy between complexity in the lensing potential and the inferred amount of substructure. We recover an average total projected mass fraction in substructure of $f_{\rm sub} < 1.7$–$2.0 \times 10^{-3}$ at the 68 per cent confidence level, consistent with zero, as expected since all substructures had been removed from the simulation. Our work highlights the need for higher resolution simulations to better quantify the lensing effect of more realistic galactic potentials, and shows that additional observational constraints may be required to break existing degeneracies.
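The paper's inference framework is considerably more sophisticated than plain rejection sampling, but the core idea of Approximate Bayesian Computation can be sketched as follows; all names are placeholders.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sample, distance,
                  n_draws=100_000, eps=0.05):
    """Plain rejection ABC: keep parameter draws whose simulated lensed image
    lies within eps of the observation under the chosen distance measure."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()              # e.g. slope, core size, f_sub
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return np.array(accepted)               # samples from the ABC posterior
```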


2020, Vol 2020(6), pp. 71-1–71-7
Author(s): Christian Kapeller, Doris Antensteiner, Svorad Štolc

Industrial machine vision applications frequently employ Photometric Stereo (PS) methods to detect fine surface defects on objects with challenging surface properties. To achieve highly precise results, acquisition setups with a vast number of strobed illumination angles are required. The time-consuming nature of such an undertaking renders it impractical for most industrial applications. We overcome these limitations by carefully tailoring the required light setup to specific applications. Our novel approach facilitates the design of optimized acquisition setups for inline PS inspection systems. The optimal positions of light sources are derived from only a few representative material samples, without the need for extensive amounts of training data. We formulate an energy function that constructs the illumination setup generating the highest PS accuracy. The setup can be tailored for fast acquisition speed or cost efficiency. A thorough evaluation of the performance of our approach is given on a public data set, measured by the mean angular error (MAE) for surface normals and the root mean square (RMS) error for albedos. Our results show that the obtained optimized PS setups can deliver a reconstruction performance close to the ground truth while requiring only a few acquisitions.
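Both reported metrics are standard and can be stated concretely; a short sketch follows, assuming unit-length normal maps stored as (..., 3) arrays.

```python
import numpy as np

def mean_angular_error(n_est, n_true):
    """MAE in degrees between estimated and ground-truth surface normals,
    given as (..., 3) arrays of unit vectors."""
    cos = np.clip(np.sum(n_est * n_true, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()

def rms_error(albedo_est, albedo_true):
    """Root-mean-square error between estimated and ground-truth albedo maps."""
    return np.sqrt(np.mean((albedo_est - albedo_true) ** 2))
```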


2021
Author(s): Uwe Ehret

In this contribution, I will suggest an approach to build models as ordered and connected collections of multivariate, discrete probability distributions (dpd's). This approach can be seen as a Machine-Learning (ML) approach, as it allows very flexible learning from data (almost) without prior constraints. Models can be built on dpd's only (fully data-based models), but dpd's can also be included in existing process-based models at places where relations among data are not well known (hybrid models). This provides flexibility for learning similar to including other ML approaches, e.g. Neural Networks, into process-based models, with the advantage that the dpd's can be investigated and interpreted by the modeler as long as their dimensionality remains low. Models based on dpd's are fundamentally probabilistic, and model responses for out-of-sample situations can be assured by dynamically coarse-graining the dpd's: the farther a predictive situation is from the learning situations, the coarser/more uncertain the prediction will be, and vice versa.

I will present the main elements and steps of such dpd-based modeling using several example systems, ranging from simple and deterministic (an ideal spring) to complex (a hydrological system), and will discuss the influence of (i) the size of the available training data set, (ii) the choice of dpd priors, and (iii) binning choices on the models' predictive power.
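A minimal sketch of the two basic operations, learning a dpd from data by histogram counting and coarse-graining it for out-of-sample queries, might look as follows; the binning choices are illustrative.

```python
import numpy as np

def dpd_from_data(x, y, bins_x, bins_y):
    """Learn a discrete joint distribution p(x, y) from training data by
    2-D histogram counting; higher-dimensional dpd's work the same way."""
    counts, _, _ = np.histogram2d(x, y, bins=[bins_x, bins_y])
    return counts / counts.sum()

def coarse_grain(p, factor=2):
    """Merge neighbouring bins: the farther a query lies from the training
    data, the coarser (more uncertain) the predictive distribution becomes."""
    nx, ny = p.shape
    q = p[: nx - nx % factor, : ny - ny % factor]
    q = q.reshape(nx // factor, factor, ny // factor, factor).sum(axis=(1, 3))
    return q / q.sum()
```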


Author(s): A. Lemme, Y. Meirovitch, M. Khansari-Zadeh, T. Flash, A. Billard, ...

Abstract This paper introduces a benchmark framework to evaluate the performance of reaching-motion generation approaches that learn from demonstrated examples. The system implements ten different performance measures for typical generalization tasks in robotics using open source MATLAB software. Systematic comparisons are based on a default training data set of human motions, which specifies the respective ground truth. In technical terms, an evaluated motion generation method needs to compute velocities, given a state provided by the simulation system. The framework, however, is agnostic to how the method does this or how it learns from the provided demonstrations. The framework focuses on robustness, which is tested statistically by sampling from a set of perturbation scenarios. These perturbations interfere with motion generation and challenge its generalization ability. The benchmark thus helps to identify the strengths and weaknesses of competing approaches, while allowing the user to configure the weightings between different measures.
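The benchmark itself is MATLAB software; purely to illustrate the evaluated contract (a method returns a velocity for a given state, and robustness is probed by injected perturbations), here is a hedged Python sketch with hypothetical names, not the benchmark's API.

```python
import numpy as np

class MotionGenerator:
    """Interface an evaluated method must provide: given the current state,
    return a velocity. How the method learned from demonstrations is its
    own business."""
    def velocity(self, state: np.ndarray) -> np.ndarray:
        raise NotImplementedError

def rollout(method, x0, dt=0.01, steps=500, perturb=None):
    """Integrate the generated motion, optionally injecting a perturbation
    at each step to test robustness, and return the trajectory."""
    x, traj = x0.copy(), [x0.copy()]
    for t in range(steps):
        v = method.velocity(x)
        if perturb is not None:
            v = perturb(t, x, v)        # e.g. push the state off the demo
        x = x + dt * v
        traj.append(x.copy())
    return np.array(traj)
```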


2008, Vol 17(07), pp. 1055–1070
Author(s): Alexander F. Zakharov

It is well known that gravitational lensing is a powerful tool in the investigation of the distribution of matter, including dark matter (DM). Typical angular distances between images and typical time scales depend on the gravitational lens masses. For the case of microlensing, angular distances between images, or typical astrometric shifts, are about 10⁻⁵–10⁻⁶ arcsec. Such an angular resolution will be reached with the space–ground VLBI interferometer Radioastron. The basic targets for microlensing searches should be bright point-like radio sources at cosmological distances. In this case, an analysis of their variability and a solid determination of microlensing could lead to an estimation of their cosmological mass density. Moreover, one cannot exclude the possibility that non-baryonic dark matter could also form microlenses if the corresponding optical depth were high enough. It is known that in gravitationally lensed systems the probability (the optical depth) of observing microlensing is relatively high. Therefore, gravitationally lensed objects like the CLASS gravitational lens B1600+434 appear most suitable for detecting astrometric microlensing, since features of photometric microlensing have already been detected in these objects. However, to directly resolve these images and to directly detect the apparent motion of the knots, the Radioastron sensitivity would have to be improved, since the estimated flux density is below the sensitivity threshold. Alternatively, they may be observed by increasing the integration time, assuming that the radio source has a typical core–jet structure and the microlensing phenomena are caused by the superluminal apparent motions of knots. In the case of a confirmation (or refutation) of claims about microlensing in gravitational lens systems, one can speculate about the microlens contribution to the gravitational lens mass. Astrometric microlensing due to Galactic MACHOs is not very important because of low optical depths and long typical time scales. Therefore, the launch of the space interferometer Radioastron will enable the investigation of microlensing in the radio band, giving rise to the possibility of not only resolving microimages but also of observing astrometric microlensing.


2021, Vol 53(3), pp. 428–250
Author(s): Premana Wardayanti Premadi, Dading Hadi Nugroho, Anton Timur Jaelani

We report the results of combined analyses of X-ray and optical data of two galaxy clusters, CL 0024+1654 and RX J0152.7−1357, at redshifts z = 0.395 and z = 0.830, respectively, offering a holistic physical description of the two clusters. Our X-ray analysis yielded temperature and density profiles of the gas in the intra-cluster medium (ICM). Using optical photometric and spectroscopic data, complemented with the mass distribution from a gravitational lensing study, we investigated possible correlations between the physical properties of the member galaxies, i.e. their color, morphology, and star formation rate (SFR), and their environments. We quantified the properties of the environment around each galaxy by galaxy number density, ICM temperature, and mass density. Although our results show that the two clusters exhibit weaker correlations compared to relaxed clusters, they still confirm the significant effect of the ICM on the SFR in the galaxies. The close relation between the physical properties of galaxies and the condition of their immediate environment found in this work indicates the locality of galaxy evolution, even within a larger bound system such as a cluster. Various physical mechanisms are suggested to explain the relation between the properties of galaxies and their environment.

