Simulated annealing, genetic algorithms and seismic waveform inversion

Author(s): Mrinal K. Sen, Paul L. Stoffa
Geophysics, 1991, Vol 56 (10), pp. 1624-1638

The seismic inverse problem involves finding a model m that either minimizes the error energy between the data and theoretical seismograms or maximizes the cross-correlation between the synthetics and the observations. We are, however, faced with two problems: (1) the model space is very large, typically of the order of [Formula: see text]; and (2) the error energy function is multimodal. Existing calculus-based methods are local in scope and easily get trapped in local minima of the energy function. Other methods, such as “simulated annealing” and “genetic algorithms,” can be applied to such global optimization problems, and they do not depend on the starting model. Both of these methods bear analogy to natural systems and are robust in nature. For example, simulated annealing is the analog of a physical process in which a solid in a “heat bath” is heated by increasing the temperature and then cooled slowly until it reaches the global minimum energy state, where it forms a crystal. To use simulated annealing efficiently for 1-D seismic waveform inversion, we require a modeling method that performs the forward calculation rapidly and a cooling schedule that lets us find the global minimum of the energy function quickly. With the advent of vector computers, the reflectivity method has proved successful, and the computation time can be reduced substantially if only plane-wave seismograms are required. Thus, the principal problem with simulated annealing is to find the critical temperature, i.e., the temperature at which crystallization occurs. By initiating the simulated annealing process with different starting temperatures for a fixed number of iterations with very slow cooling, we noticed that by starting very near but just above the critical temperature, we reach very close to the global minimum energy state very rapidly. We have applied this technique successfully to band-limited synthetic data in the presence of random noise. In most cases we find that we are able to obtain very good solutions using only a few plane-wave seismograms.
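The annealing loop the abstract describes (random perturbation, a Metropolis-style acceptance rule, and slow cooling from a well-chosen starting temperature) can be sketched on a toy problem. This is a minimal illustration, not the authors' reflectivity-based inversion: the 1-D energy function, bounds, starting temperature, and cooling rate below are all invented for demonstration.

```python
import math
import random

def simulated_annealing(energy, lower, upper, t_start=20.0, alpha=0.995,
                        n_iter=3000, seed=0):
    """Minimize a 1-D energy function with a Metropolis acceptance rule
    and a slow geometric cooling schedule. t_start and alpha are the
    knobs the abstract's starting-temperature experiments explore."""
    rng = random.Random(seed)
    m = rng.uniform(lower, upper)            # random starting model
    e = energy(m)
    best_m, best_e = m, e
    t = t_start
    for _ in range(n_iter):
        # perturb the current model; step size shrinks as we cool
        m_new = min(upper, max(lower, m + rng.gauss(0.0, 0.5 * t + 0.01)))
        e_new = energy(m_new)
        # Metropolis rule: always accept downhill moves; accept uphill
        # moves with probability exp(-dE / T)
        if e_new < e or rng.random() < math.exp(-(e_new - e) / t):
            m, e = m_new, e_new
            if e < best_e:
                best_m, best_e = m, e
        t *= alpha                           # slow geometric cooling
    return best_m, best_e

# toy multimodal energy: global minimum near m = -2, local minimum near m = +2
toy_energy = lambda m: (m * m - 4.0) ** 2 + m
m_opt, e_opt = simulated_annealing(toy_energy, -4.0, 4.0)
```

Starting well above the "critical" temperature lets the walk cross the energy barrier between the two basins; once the temperature drops, the chain freezes into the deeper one, which is the behavior the abstract exploits.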



Geophysics, 1991, Vol 56 (11), pp. 1794-1810
Author(s): Paul L. Stoffa, Mrinal K. Sen

Seismic waveform inversion is one of many geophysical problems that can be identified as a nonlinear multiparameter optimization problem. Methods based on local linearization fail if the starting model is too far from the true model. We have investigated the applicability of “Genetic Algorithms” (GA) to the inversion of plane-wave seismograms. Like simulated annealing, genetic algorithms use a random walk in model space and a transition probability rule to help guide their search. However, unlike a single simulated annealing run, genetic algorithms search from a randomly chosen population of models (strings) and work with a binary coding of the model parameter set. Unlike a pure random search, such as in a “Monte Carlo” method, the search used in genetic algorithms is not directionless. Genetic algorithms essentially consist of three operations: selection, crossover, and mutation, which involve random number generation, string copies, and some partial string exchanges. The choice of the initial population and of the crossover and mutation probabilities is crucial for the practical implementation of the algorithm. We investigated the effects of these parameters in the inversion of plane-wave seismograms, using a normalized cross-correlation function as the objective or fitness function (E). We also introduce the concept of an “update” probability to control the influence of past generations. The combination of a low mutation probability (∼0.01), a moderate crossover probability (∼0.6), and a high update probability (∼0.9) is found to be optimal for the convergence of the algorithm. Further, we show that concepts from simulated annealing can be used effectively to stretch the fitness function, which helps the algorithm converge. Thus, we propose to use exp(E/T) rather than E as the fitness function, where T (analogous to temperature in simulated annealing) is a properly chosen parameter that can change slowly with each generation. Also, by repeating the GA optimization procedure several times with different randomly chosen initial model populations, we derive “a very good subset” of models from the entire model space and calculate the a posteriori probability density σ(m) ∝ exp(E(m)/T). The σ(m)’s are then used to calculate a “mean” model, which is found to be close to the true model.
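The three GA operations (selection, crossover, mutation) on a binary-coded parameter, together with the exp(E/T) fitness stretching, can be sketched in a few lines. This is an illustrative toy, not the paper's inversion: the bit length, population size, temperature, and the correlation-like fitness peaked at an invented value of m = 0.7 are all assumptions; only the crossover probability (∼0.6) and mutation probability (∼0.01) follow the abstract's recommendation, and the "update" probability is omitted for brevity.

```python
import math
import random

def decode(bits, lower=0.0, upper=1.0):
    """Map a binary string (list of 0/1) to a model parameter in [lower, upper]."""
    ival = int("".join(map(str, bits)), 2)
    return lower + (upper - lower) * ival / (2 ** len(bits) - 1)

def ga_maximize(E, n_bits=12, pop_size=40, p_cross=0.6, p_mut=0.01,
                temperature=0.1, n_gen=60, seed=1):
    """Toy GA: roulette-wheel selection on the stretched fitness exp(E/T),
    single-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(n_gen):
        # fitness stretching: exp(E/T) amplifies small differences in E
        fit = [math.exp(E(decode(b)) / temperature) for b in pop]
        # selection: fitness-proportionate (roulette-wheel) sampling
        parents = rng.choices(pop, weights=fit, k=pop_size)
        children = []
        for a, b in zip(parents[0::2], parents[1::2]):
            a, b = a[:], b[:]
            if rng.random() < p_cross:          # single-point crossover
                cut = rng.randrange(1, n_bits)
                a[cut:], b[cut:] = b[cut:], a[cut:]
            for child in (a, b):                # bit-flip mutation
                for i in range(n_bits):
                    if rng.random() < p_mut:
                        child[i] ^= 1
            children += [a, b]
        pop = children
    best = max(pop, key=lambda b: E(decode(b)))
    return decode(best)

# toy objective: a correlation-like fitness peaked at m = 0.7 (assumed)
E = lambda m: 1.0 - (m - 0.7) ** 2
m_best = ga_maximize(E)
```

Lowering the temperature parameter sharpens the stretched fitness and hence the selection pressure, which is why the abstract lets T change slowly with each generation rather than fixing it.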


