Resolution of the inverse problem of optical grating testing by means of a neural network

Author(s):  
Stephane Robert ◽  
Alain Mure-Ravaud


Author(s):  
Lukas Hecker ◽  
Rebekka Rupprecht ◽  
Ludger Tebartz van Elst ◽  
Juergen Kornmeier

EEG and MEG are well-established non-invasive methods in neuroscientific research and clinical diagnostics. Both methods provide high temporal but low spatial resolution of brain activity. To gain insight into the spatial dynamics underlying the M/EEG, one has to solve the inverse problem, which is ill-posed: more than one configuration of neural sources can evoke one and the same distribution of EEG activity on the scalp. Artificial neural networks have previously been used successfully to find either one or two dipole sources. These approaches, however, have never solved the inverse problem in a distributed dipole model with more than two dipole sources. We present ConvDip, a novel convolutional neural network (CNN) architecture that solves the EEG inverse problem in a distributed dipole model based on simulated EEG data. We show that (1) ConvDip learned to produce inverse solutions from a single time point of EEG data, (2) outperforms state-of-the-art methods (eLORETA and LCMV beamforming) on all focused performance measures, (3) is more flexible when dealing with a varying number of sources, producing fewer ghost sources and missing fewer real sources than the comparison methods, and (4) produces plausible inverse solutions for real-world EEG recordings while needing less than 40 ms for a single forward pass. Our results qualify ConvDip as an efficient and easy-to-apply novel method for source localization in EEG and MEG data, with high relevance for clinical applications, e.g. in epileptology, and for real-time applications.
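The ill-posedness mentioned above can be made concrete with a toy linear forward model. In M/EEG source modeling, scalp data y relate to source amplitudes x through a leadfield matrix L (y = Lx); with far fewer sensors than candidate dipoles, L has a nullspace, so distinct source configurations produce identical scalp data. A minimal sketch, with all dimensions and the random leadfield purely illustrative:

```python
import numpy as np

# Toy forward model: scalp EEG y = L @ x, with fewer sensors than sources.
rng = np.random.default_rng(0)
n_sensors, n_sources = 4, 10                      # far fewer sensors than dipoles
L = rng.standard_normal((n_sensors, n_sources))   # illustrative leadfield

x1 = rng.standard_normal(n_sources)               # one source configuration
# Add any vector from the nullspace of L: the scalp data are unchanged.
nullspace = np.linalg.svd(L)[2][n_sensors:]       # rows spanning null(L)
x2 = x1 + 3.0 * nullspace[0]                      # a different configuration

y1, y2 = L @ x1, L @ x2
print(np.allclose(y1, y2))  # identical scalp topographies
print(np.allclose(x1, x2))  # distinct source configurations
```

This non-uniqueness is exactly why methods such as eLORETA, LCMV, or a trained network like ConvDip must bring in prior assumptions to select one solution among the infinitely many that fit the data.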


2021 ◽  
Vol 263 (3) ◽  
pp. 3407-3416
Author(s):  
Tyler Dare

Measuring the forces that excite a structure into vibration is an important tool in modeling the system and investigating ways to reduce the vibration. However, determining the forces that have been applied to a vibrating structure can be a challenging inverse problem, even when the structure is instrumented with a large number of sensors. Previously, an artificial neural network was developed to identify the location of an impulsive force on a rectangular plate. In this research, the techniques were extended to plates of arbitrary shape. The principal challenge of arbitrary shapes is that some combinations of network outputs (x- and y-coordinates) are invalid. For example, for a plate with a hole in the middle, the network should not output that the force was applied in the center of the hole. Different methods of accommodating arbitrary shapes were investigated, including output space quantization and selecting the closest valid region.
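The "closest valid region" strategy described above can be sketched for the plate-with-a-hole example. The geometry and projection rule below are illustrative assumptions, not the paper's implementation: a predicted (x, y) that falls inside a circular hole is projected onto the hole's rim, the nearest valid point.

```python
import numpy as np

# Illustrative plate with a circular hole; units and geometry are assumed.
hole_center = np.array([0.5, 0.5])
hole_radius = 0.1

def snap_to_valid(xy):
    """Project a predicted (x, y) force location out of the hole onto its rim."""
    d = xy - hole_center
    r = np.linalg.norm(d)
    if r >= hole_radius:                 # already on the plate: keep as-is
        return xy
    if r == 0.0:                         # degenerate: pick an arbitrary rim direction
        d, r = np.array([1.0, 0.0]), 1.0
    return hole_center + d / r * hole_radius

print(snap_to_valid(np.array([0.52, 0.5])))  # inside the hole: moved to the rim
print(snap_to_valid(np.array([0.9, 0.2])))   # valid location: unchanged
```

For plates with several invalid regions, the same idea generalizes by computing the distance to each region and projecting only when the prediction lands inside one.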


Geophysics ◽  
2020 ◽  
Vol 85 (6) ◽  
pp. R477-R492 ◽  
Author(s):  
Bingbing Sun ◽  
Tariq Alkhalifah

Full-waveform inversion (FWI) is a nonlinear optimization problem, and a typical optimization algorithm, such as the nonlinear conjugate gradient or limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) method, iteratively updates the model mainly along the gradient-descent direction of the misfit function or a slight modification of it. Based on the concept of meta-learning, rather than using a hand-designed optimization algorithm, we have trained the machine (represented by a neural network) to learn an optimization algorithm, termed the "ML-descent," and applied it in FWI. Using a recurrent neural network (RNN), we take the gradient of the misfit function as the input, and the hidden states of the RNN incorporate the history of the gradient, similar to an LBFGS algorithm. However, unlike the fixed form of the LBFGS algorithm, the machine-learning (ML) version evolves in response to the gradient. The loss function for training is formulated as a weighted summation of the L2 norm of the data residuals in the original inverse problem. As with any well-defined nonlinear inverse problem, the optimization can be locally approximated by a linear convex problem; thus, to accelerate the training, we train the neural network by minimizing randomly generated quadratic functions instead of performing time-consuming FWIs. To further improve the accuracy and robustness, we use a variational autoencoder that projects and represents the model in latent space. We use the Marmousi and overthrust examples to demonstrate that the ML-descent method shows faster convergence and outperforms conventional optimization algorithms. The energy in the deeper parts of the models can be recovered by the ML-descent even when the pseudoinverse of the Hessian is not incorporated in the FWI update.
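The training shortcut described above, replacing costly FWIs with randomly generated quadratics, can be sketched as follows. This is an assumed minimal form, not the paper's code: a single learnable step size stands in for the RNN-based update rule, and the meta-objective is the summed misfit along the unrolled optimization trajectory.

```python
import numpy as np

# Cheap training surrogate: a random convex quadratic f(m) = 0.5 m^T H m - b^T m
# in place of an expensive FWI misfit. Dimensions and constants are illustrative.
rng = np.random.default_rng(1)

n = 5
A = rng.standard_normal((n, n))
H = A @ A.T + 0.1 * np.eye(n)          # symmetric positive-definite "Hessian"
b = rng.standard_normal(n)

def unrolled_loss(step_size, n_steps=20):
    """Sum of f(m_t) along the unrolled trajectory: the meta-training objective."""
    m = np.zeros(n)
    total = 0.0
    for _ in range(n_steps):
        grad = H @ m - b               # gradient of the quadratic misfit
        m = m - step_size * grad       # update rule being meta-learned
        total += 0.5 * m @ H @ m - b @ m
    return total

# "Meta-training" by grid search: pick the step size with the best unrolled loss.
candidates = np.linspace(0.01, 0.2, 20)
best = min(candidates, key=unrolled_loss)
print(best)
```

In the full method, the scalar step size is replaced by an RNN whose hidden state accumulates gradient history, and the same unrolled objective is minimized by backpropagation through the trajectory.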

