Obtaining reliable localizations with Time Reverse Imaging: limits to array design, velocity models and signal-to-noise ratios


Solid Earth ◽  
2018 ◽  
Vol 9 (6) ◽  
pp. 1487-1505 ◽  
Author(s):  
Claudia Werner ◽  
Erik H. Saenger

Abstract. Time reverse imaging (TRI) is evolving into a standard technique for locating and characterising seismic events. In recent years, TRI has been employed for a wide range of applications from the lab scale, to the field scale and up to the global scale. No identification of events or their onset times is necessary when locating events with TRI; therefore, it is especially suited for locating quasi-simultaneous events and events with a low signal-to-noise ratio. However, in contrast to more regularly applied localisation methods, the prerequisites for applying TRI are not sufficiently known.

To investigate the significance of station distributions, complex velocity models and signal-to-noise ratios with respect to location accuracy, numerous simulations were performed using a finite difference code to propagate elastic waves through three-dimensional models. Synthetic seismograms were reversed in time and reinserted into the model. The time-reversed wave field back-propagates through the model and, in theory, focuses at the source location. This focusing was visualised using imaging conditions. Additionally, artificial focusing spots were removed using an illumination map specific to the set-up. Successful locations were sorted into four categories depending on their reliability. Consequently, individual simulation set-ups could be evaluated by their ability to produce reliable source locations.

Optimal inter-station distances, minimum apertures, relations between the array and source locations, heterogeneities of inter-station distances and the total number of stations were investigated for different source depths and source types. Additionally, the accuracy of the locations was analysed when using a complex velocity model or a low signal-to-noise ratio.

Finally, an array in southern California was investigated regarding its ability to locate seismic events at specific target depths while using the actual velocity model for that region. In addition, the success rate with recorded data was estimated.

Knowledge about the prerequisites for using TRI enables the estimation of success rates for a given problem. Furthermore, it reduces the time needed to adjust stations to achieve more reliable locations and provides a foundation for designing arrays for applying TRI.
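The TRI workflow described above can be illustrated with a deliberately minimal 1-D scalar-wave sketch (the study itself uses a 3-D elastic finite-difference code; the grid, Ricker wavelet and station positions below are invented for illustration): seismograms are recorded in a forward run, reversed in time, re-injected at the stations, and a simple maximum-amplitude imaging condition marks the focus.

```python
# Minimal 1-D TRI sketch, assuming a scalar wave equation (not the paper's
# 3-D elastic code). All parameters are illustrative; CFL number = 0.5.
import numpy as np

nx, nt = 600, 700                      # grid points, time steps
c, dx, dt = 1.0, 1.0, 0.5              # velocity, grid spacing, time step
src, stations = 300, [100, 500]        # source and receiver indices

def step(u_prev, u_now, inject):
    """One explicit finite-difference step of u_tt = c^2 * u_xx + inject."""
    lap = np.zeros_like(u_now)
    lap[1:-1] = u_now[2:] - 2.0 * u_now[1:-1] + u_now[:-2]
    return 2.0 * u_now - u_prev + (c * dt / dx) ** 2 * lap + inject

# Forward run: propagate a Ricker wavelet from the source, record seismograms.
t = np.arange(nt) * dt
arg = (np.pi * 0.05 * (t - 40.0)) ** 2
wavelet = (1.0 - 2.0 * arg) * np.exp(-arg)
u0, u1 = np.zeros(nx), np.zeros(nx)
records = np.zeros((nt, len(stations)))
for it in range(nt):
    inj = np.zeros(nx); inj[src] = wavelet[it]
    u0, u1 = u1, step(u0, u1, inj)
    records[it] = u1[stations]

# Backward run: re-insert the time-reversed seismograms at the stations and
# keep the maximum absolute amplitude per grid point as an imaging condition.
u0, u1 = np.zeros(nx), np.zeros(nx)
image = np.zeros(nx)
for it in range(nt):
    inj = np.zeros(nx); inj[stations] = records[nt - 1 - it]
    u0, u1 = u1, step(u0, u1, inj)
    image = np.maximum(image, np.abs(u1))

focus = int(np.argmax(image))
print(focus)   # close to the true source index of 300
```

The back-propagated waves from the two stations arrive at the source position simultaneously and superpose constructively, so the image maximum marks the source; with fewer or poorly placed stations the focus degrades, which is exactly the array-design question the abstract investigates.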


2018 ◽  
Vol 18 (5-6) ◽  
pp. 1620-1632 ◽  
Author(s):  
Avik Kumar Das ◽  
Christopher KY Leung

Acoustic emission is a powerful experimental structural health monitoring technique for determining the location of cracks formed in a member. Pinpointing the wave arrival time is essential for accurate source location. The accuracy of conventional arrival detection techniques deteriorates rapidly in the low signal-to-noise ratio (5–40 dB) region, making them unsuitable for source location. A new technique to pinpoint the arrival time based on the power of the wave is proposed. We have designed an adaptive filter based on the power characteristics of the acoustic emission wave. After filtering the acoustic emission wave, a sliding window is employed to accurately identify the region of wave arrival based on the change in transmitted power. The results from various experimental and numerical arrival time detection experiments consistently show that the proposed methodology is stable and accurate for a wide range of signal-to-noise ratio values (5–100 dB). In particular, in the low signal-to-noise ratio region (5–40 dB), the method is significantly more accurate than the other methods described in the literature. The method was then employed to study localized damage progression in a steel fiber-reinforced beam under four-point bending. The results suggest that the source locations calculated using the new method are consistent with visual inspection of the member at failure and more accurate than the localization results from existing methods.
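A sliding-window, power-based onset picker in the spirit of this abstract can be sketched as follows (the authors' adaptive filter is not reproduced; the window length, synthetic burst and power-ratio criterion are illustrative assumptions, not the paper's method):

```python
# Hedged sketch of a sliding-window arrival picker: the onset is taken as
# the sample where the ratio of trailing-window to leading-window power is
# largest, i.e. the strongest jump in transmitted power.
import numpy as np

def pick_arrival(trace, win=100):
    """Return the sample index with the largest after/before power ratio."""
    x2 = np.asarray(trace, float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(x2)))
    best, best_ratio = win, 0.0
    for i in range(win, len(trace) - win):
        before = csum[i] - csum[i - win]       # power in [i - win, i)
        after = csum[i + win] - csum[i]        # power in [i, i + win)
        ratio = after / (before + 1e-12)
        if ratio > best_ratio:
            best, best_ratio = i, ratio
    return best

# Synthetic low-SNR test: noise up to sample 600, then a damped burst.
rng = np.random.default_rng(0)
n, onset = 2000, 600
trace = 0.1 * rng.standard_normal(n)
k = np.arange(n - onset)
trace[onset:] += np.sin(2 * np.pi * 0.05 * k) * np.exp(-k / 300.0)
print(pick_arrival(trace))   # close to the true onset at sample 600
```

Cumulative sums make each window power an O(1) lookup, so the picker stays cheap even for long acoustic emission records.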


Geophysics ◽  
1993 ◽  
Vol 58 (9) ◽  
pp. 1301-1313 ◽  
Author(s):  
Delman Lee ◽  
Geoffrey M. Jackson ◽  
Iain M. Mason

Partially coherent migration reduces the spurious details introduced by velocity macro‐model imperfections. In a partially coherent migration, instead of summing coherently over the full aperture to achieve maximum lateral resolution, (1) a coherent stack is performed over a limited window width, and then (2) the collection of coherent stacks for different windows along the full aperture are summed incoherently to produce an amplitude value for each output point. Amplitude accuracy of a migration is improved with some sacrifice in spatial (lateral) resolution. A parameter in partially coherent migration is the running coherent window width, which accounts for the spatial correlation of errors in the velocity model. The coherent window width controls the trade‐off between signal‐to‐noise ratio and lateral resolution. Assuming that timing errors introduced by imperfections of a velocity macro‐model are from a zero‐mean stationary Gaussian process, partially coherent migration is shown to raise the signal‐to‐noise ratio of the migrated image as compared to a conventional migration. The two competing aspects of signal‐to‐noise ratio and lateral resolution of the partially coherent migration in the presence of timing errors are analyzed in a stochastic framework. The intuitively attractive idea of limiting the coherent window width to the correlation length of the timing errors is confirmed numerically.
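The two-stage stack described above can be sketched for a single output point (the timing-error model and window width below are illustrative, not the paper's parameters): amplitudes across the aperture are summed coherently inside windows of width w, and the window sums are then combined incoherently via their absolute values.

```python
# Illustrative partially coherent stack for one image point. Contributions
# whose timing errors are correlated over ~8 traces stay in phase within a
# window but may flip sign between windows, so a full coherent stack
# partially cancels while the windowed stack does not.
import numpy as np

def partially_coherent_stack(contributions, w):
    """contributions: signed migrated amplitudes along the aperture."""
    a = np.asarray(contributions, float)
    windows = [a[i:i + w].sum() for i in range(0, len(a), w)]  # coherent part
    return np.sum(np.abs(windows))                             # incoherent part

rng = np.random.default_rng(1)
signs = np.repeat(rng.choice([-1.0, 1.0], size=8), 8)   # 64 traces, sign
contributions = signs * 1.0                             # constant over 8

full_coherent = abs(contributions.sum())                # suffers cancellation
partial = partially_coherent_stack(contributions, w=8)  # window width matches
                                                        # the correlation length
print(partial)   # recovers the full stacked amplitude of 64
```

Choosing w equal to the correlation length of the timing errors, as the abstract's numerical results confirm, preserves amplitude at the cost of lateral resolution; a larger w drifts back toward the cancellation-prone fully coherent stack.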


2021 ◽  
Author(s):  
Ghanimah Abuhaimed ◽  
Nizar Jaber ◽  
Nouha Alcheikh ◽  
Mohammad I. Younis

Abstract. Micro/nano-electromechanical system (MEMS/NEMS) resonators are presently an important part of a wide range of applications. However, many of these devices suffer from a low signal-to-noise ratio (SNR) and the need for a large driving force. Different principles have been proposed to enhance their sensitivity and improve their SNR, such as bifurcations, jumps and higher-order excitation. However, these methods require special designs and high actuation voltages, which are not always available from standard function generators and power supplies, and they increase the devices' overall cost and power requirements. Parametric excitation has also been explored as an option to amplify the signal at a lower cost and energy demand; however, this type of excitation requires specific geometrical settings in addition to very low damping conditions. Electrothermal actuation has been investigated to achieve excitation of primary resonance, which can be used for parametric excitation. This type of excitation is desirable due to its simplicity, robustness and ability to create large internal forces at low voltages; however, its time response is limited by the thermal relaxation time. In this work, we demonstrate the use of electromagnetic actuation to significantly amplify the response of electrothermally actuated clamped-clamped resonators at the first (primary) resonance mode. At ambient pressure, experimental data show an 18-fold amplification of the response amplitude compared with electrothermal actuation alone. The method is based on introducing a permanent magnetic field to induce an out-of-plane Lorentz force. The results show the great potential of this technique for a variety of sensing and signal processing applications, especially where a large signal-to-noise ratio is required while using low operational voltages.


1988 ◽  
Vol 10 (3) ◽  
pp. 171-195 ◽  
Author(s):  
J.M. Thijssen ◽  
B.J. Oosterveld ◽  
R.F. Wagner

In search of the optimal display of echographic information for the detection of focal lesions, a systematic study was performed considering a wide range of gray level transforms (i.e., lookup tables). This range comprised power functions of the echo envelope signal (1/8 ≤ n ≤ 8), power functions of the logarithmic transform and a sigmoid function. The implications of the transforms on the first order statistics (histogram, “point signal-to-noise ratio” SNRp) and on the second order statistics (autocorrelation function) could be derived both analytically, and from the analysis of simulated and experimentally obtained echograms of homogeneously scattering tissue models. These results were employed to estimate the lesion signal-to-noise ratio SNRQ, which specifies the detectability of a lesion by an ideal observer. It was found, both theoretically and practically, that the intensity display corresponds to the optimal transform (i.e., n=2) for a low contrast lesion. When the data were first logarithmically compressed, the lesion SNR appeared to increase with increasing power (1/8 ≤ n ≤ 8). A logarithmic transform followed by a sigmoid compression did not produce much improvement. These effects of gray level transforms on the SNRQ were shown to be relatively small, with the exception of powers n > 2 when applied to linear (i.e. amplitude) data. In the case of high lesion contrast, the sequence of log compression, followed by a square law produced the optimum SNRQ. This sequence is equivalent to the processing within echographic equipment, where the TV monitor has a gamma of the order of 2.
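The "point signal-to-noise ratio" SNRp (mean divided by standard deviation) used above can be checked numerically for power-law transforms v = A**n of a Rayleigh-distributed echo envelope A, the standard model for fully developed speckle (the simulation below is a generic illustration, not the paper's tissue-model data):

```python
# Numerical check of SNRp = mean/std for power transforms of a Rayleigh
# envelope. Known theory: SNRp ~ 1.91 for the amplitude (n = 1) and exactly
# 1.0 for the intensity (n = 2, which is exponentially distributed).
import numpy as np

rng = np.random.default_rng(2)
A = rng.rayleigh(scale=1.0, size=1_000_000)   # fully developed speckle

for n in (0.5, 1, 2, 4):
    v = A ** n
    snr_p = v.mean() / v.std()
    print(f"n = {n}: SNRp = {snr_p:.3f}")
```

Such first-order statistics are one ingredient of the lesion SNR; the abstract's conclusion that the intensity display (n = 2) is optimal for low-contrast lesions also rests on the second-order (autocorrelation) behaviour of the transformed speckle.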


2017 ◽  
Author(s):  
Eline R. Kupers ◽  
Helena X. Wang ◽  
Kaoru Amano ◽  
Kendrick N. Kay ◽  
David J. Heeger ◽  
...  

Abstract. Currently, non-invasive methods for studying the human brain do not reliably measure signals that depend on the rate of action potentials (spikes) in a neural population, independent of other responses such as hemodynamic coupling (functional magnetic resonance imaging) and subthreshold neuronal synchrony (oscillations and event-related potentials). In contrast, invasive methods - animal microelectrode recordings and human intracortical recordings (electrocorticography, or ECoG) - have recently measured broadband power elevation spanning 50-200 Hz in electrical fields generated by neuronal activity as a proxy for the locally averaged spike rates. Here, we sought to detect and quantify stimulus-related broadband responses using a non-invasive method - magnetoencephalography (MEG) - in individual subjects. Because extracranial measurements like MEG have multiple global noise sources and a relatively low signal-to-noise ratio, we developed an automated denoising technique, adapted from Kay et al., 2013 (1), that helps reveal the broadband signal of interest. Subjects viewed 12-Hz contrast-reversing patterns in the left, right, or bilateral visual field. Sensor time series were separated into an evoked component (12-Hz amplitude) and a broadband component (60–150 Hz, excluding stimulus harmonics). In all subjects, denoised broadband responses were reliably measured in sensors over occipital cortex. The spatial pattern of the broadband measure depended on the stimulus, with greater broadband power in sensors contralateral to the stimulus. 
Because we obtain reliable broadband estimates with relatively short experiments (~20 minutes), with a sufficient signal-to-noise ratio to distinguish responses to different stimuli, we conclude that MEG broadband signals, denoised with our method, offer a practical, non-invasive means for characterizing spike-rate-dependent neural activity for a wide range of scientific questions about human brain function.

Author Summary. Neuronal activity causes perturbations in nearby electrical fields. These perturbations can be measured non-invasively in the living human brain using electro- and magneto-encephalography (EEG and MEG). These two techniques have generally emphasized two kinds of measurements: oscillations and event-related responses, both of which reflect synchronous activity from large populations of neurons. A third type of signal, a stimulus-related increase in power spanning a wide range of frequencies ('broadband'), is routinely measured in invasive recordings in animals and pre-surgical patients with implanted electrodes, but not with MEG and EEG. This broadband response is of great interest because unlike oscillations and event-related responses, it is correlated with neuronal spike rates. Here we report quantitative, spatially specific measurements of broadband fields in individual human subjects using MEG. These results demonstrate that a spike-rate-dependent measure of brain activity can be obtained non-invasively from the living human brain, and is suitable for investigating a wide range of questions about spiking activity in the human brain.
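The two spectral summary measures described above can be sketched for a single simulated sensor time series (the signal parameters and the simple white-noise stand-in for broadband activity are invented for illustration; the paper's denoising step is omitted):

```python
# Evoked (12 Hz amplitude) and broadband (60-150 Hz, stimulus harmonics
# excluded) measures computed from one synthetic sensor trace.
import numpy as np

fs, dur = 1000.0, 10.0                     # sampling rate (Hz), duration (s)
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(4)

# Simulated sensor: 12 Hz steady-state response plus white noise.
x = 2.0 * np.sin(2 * np.pi * 12 * t) + rng.standard_normal(t.size)

spec = np.abs(np.fft.rfft(x)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

evoked = 2 * spec[np.argmin(np.abs(freqs - 12.0))]   # 12 Hz amplitude

band = (freqs >= 60) & (freqs <= 150)
fm = freqs % 12
harmonics = np.isclose(fm, 0) | np.isclose(fm, 12)   # multiples of 12 Hz
broadband = np.mean(spec[band & ~harmonics] ** 2)    # harmonic-free power

print(round(evoked, 2))   # recovers the simulated 12 Hz amplitude of 2.0
```

Excluding the stimulus harmonics keeps the steady-state evoked response from leaking into the broadband estimate, which is the point of separating the two components in the first place.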


2017 ◽  
Vol 27 (10) ◽  
pp. 1357-1363
Author(s):  
Jianxin Peng ◽  
Shengju Wu

Reverberation time and signal-to-noise ratio in classrooms are critical factors to speech intelligibility. In this study, the combined effect of reverberation time and signal-to-noise ratio on Chinese speech intelligibility of children was investigated in 28 elementary school classrooms in China. The results show that Chinese speech intelligibility scores increase with an increase of signal-to-noise ratio and the age of children, and decrease with an increase of reverberation time in classrooms. Younger children require higher signal-to-noise ratio and shorter reverberation time than older children to understand the speech. The A-weighted signal-to-noise ratio combined with a wide range of reverberation time can be used to predict speech intelligibility score and serve as a criterion for classroom design for elementary schools.


Geophysics ◽  
2011 ◽  
Vol 76 (2) ◽  
pp. MA1-MA10 ◽  
Author(s):  
Ben Witten ◽  
Brad Artman

Locating subsurface sources from passive seismic recordings is difficult when attempted with data that have no observable arrivals and/or a low signal-to-noise ratio (S/N). Energy can be focused at its source using time-reversal techniques. However, when a focus cannot be matched to a particular event, it can be difficult to distinguish true focusing from artifacts. Artificial focusing can arise from numerous causes, including noise contamination, acquisition geometry, and velocity model effects. We present a method that reduces the ambiguity of the results by creating an estimate of the S/N in the image domain and defining a statistical confidence threshold for features in the images. To do so, time-reverse imaging techniques are implemented on both recorded data and a noise model. In the data domain, the noise model approximates the energy of local noise sources. After imaging, the result also captures the effects of acquisition geometry and the velocity model. The signal image is then divided by the noise image to produce an estimate of the S/N. The distribution of image S/N values due to purely stochastic noise provides a means by which to calculate a confidence threshold. This threshold is used to set the minimum displayed value of images to a statistically significant limit. Two-dimensional synthetic examples show the effectiveness of this technique under varying amounts of noise and despite challenging velocity models. Using this method, we collocate anomalous low-frequency energy content, measured over oil reservoirs in Africa and Europe, with the subsurface location of the productive intervals through 2D and 3D implementations.
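The image-domain S/N estimate and confidence threshold can be sketched schematically (the arrays below merely stand in for time-reverse-imaged data and noise-model images, and the quantile used for the threshold is an illustrative choice):

```python
# Schematic image-domain S/N: a "signal image" with one true focus is
# divided by a "noise image"; the threshold is a high quantile of the S/N
# values produced by purely stochastic noise away from the focus.
import numpy as np

rng = np.random.default_rng(3)
shape = (200, 200)

# Stand-ins for the two migrated images (offset +1 avoids division by ~0).
noise_image = np.abs(rng.standard_normal(shape)) + 1.0
signal_image = np.abs(rng.standard_normal(shape)) + 1.0
signal_image[58:63, 138:143] += 8.0          # one true source focus

snr_image = signal_image / noise_image       # image-domain S/N estimate

# Confidence threshold from the noise-only S/N distribution, evaluated
# away from the focus region.
mask = np.ones(shape, bool)
mask[50:70, 130:150] = False
threshold = np.quantile(snr_image[mask], 0.999)
significant = snr_image > threshold          # statistically significant spots

iy, ix = np.unravel_index(np.argmax(snr_image), shape)
print(int(iy), int(ix))   # inside the 5x5 focus region around (60, 140)
```

Displaying only values above the threshold suppresses artificial foci from acquisition geometry and noise, which is the ambiguity-reduction step the abstract describes.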


Author(s):  
S. R. Heister ◽  
V. V. Kirichenko

Introduction. The digital representation of received radar signals has provided a wide range of opportunities for their processing. However, the hardware and software used impose limits on the number of bits and the sampling rate of the signal at all conversion and processing stages. These limitations lead to a decrease in the signal-to-interference ratio due to quantization noise introduced by powerful components comprising the received signal (interfering reflections; active noise interference), as well as the attenuation of a low-power reflected signal represented by a limited number of bits. In practice, the amplitude of interfering reflections can exceed that of the signal reflected from the target by a factor of thousands.

Aim. In this connection, it is essential to take into account the effect of quantization noise on the signal-to-interference ratio.

Materials and methods. The article presents expressions for calculating the power and power spectral density (PSD) of quantization noise, which take into account the value of the least significant bit of an analog-to-digital converter (ADC) and the signal sampling rate. These expressions are verified by simulating 4-, 8- and 16-bit ADCs in the Mathcad environment.

Results. Expressions are derived for calculating the quantization noise PSD of interfering reflections, which allows the PSD to be taken into account in the signal-to-interference ratio at the output of the processing chain. In addition, a comparison of decimation options (by discarding and by averaging samples) is performed drawing on the estimates of the noise PSD and the signal-to-noise ratio.

Conclusion. Recommendations regarding the ADC bit depth and sampling rate for the radar receiver are presented.

