High-amplitude noise detection by the expectation-maximization algorithm with application to swell-noise attenuation

Geophysics, 2010, Vol. 75(3), pp. V39–V49. Author(s): Maïza Bekara, Mirko van der Baan

High-amplitude noise is a common problem in seismic data. Current filtering techniques that target this problem first detect the location of the noise and then remove it by damping or interpolation. Detection is conventionally done by comparing individual data amplitudes in a certain domain to a user-controlled local threshold. In practice, the threshold is tuned by trial and error and is often changed to match the varying noise power across the data set. We have developed an automatic method to compute an appropriate threshold for high-amplitude noise detection and attenuation. The main idea is to exploit differences in statistical properties between noise and signal amplitudes to construct a detection criterion. A model consisting of a mixture of two statistical distributions, one representing the signal and one the noise, is fitted to the data by means of an expectation-maximization (EM) algorithm and then used to estimate the probability that each sample in the data is noisy. Only samples whose noise probability exceeds a specified threshold are treated as noise. The resulting probability threshold adapts to the data better than a conventional amplitude threshold, and its value lets the user quantify the confidence with which a large-amplitude anomaly is classified as noise. The method is generic; here we develop and implement it for swell-noise attenuation. Initial results are encouraging, showing slightly better performance than an optimized conventional method but with much less parameter testing and variation.
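The mixture/EM step lends itself to a compact illustration. The abstract does not name the component distributions, so the sketch below assumes a zero-mean Gaussian for both signal and noise amplitudes, differing only in variance; the function name em_noise_detect, the initialization, and the default p_thresh are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def em_noise_detect(x, p_thresh=0.9, n_iter=50):
    """Fit a two-component zero-mean Gaussian mixture by EM.

    Component 0 models signal amplitudes (low variance); component 1
    models high-amplitude noise (high variance). A sample is flagged
    as noise when its posterior probability of belonging to the noise
    component exceeds p_thresh.
    """
    x = np.asarray(x, dtype=float).ravel()
    var = np.array([0.5 * x.var(), 2.0 * x.var()])  # crude split init
    w = np.array([0.9, 0.1])                        # mixing proportions
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per sample.
        lik = w / np.sqrt(2 * np.pi * var) * \
              np.exp(-0.5 * x[:, None] ** 2 / var)
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and variances from responsibilities.
        nk = resp.sum(axis=0)
        w = nk / x.size
        var = (resp * x[:, None] ** 2).sum(axis=0) / nk
    return resp[:, 1] > p_thresh  # boolean mask of noisy samples
```

In this reading, the user-facing parameter is the posterior probability p_thresh rather than a raw amplitude cutoff, which is what makes the threshold transferable across records with varying noise power.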

Geophysics, 1995, Vol. 60(6), pp. 1887–1896. Author(s): Ray Abma, Jon Claerbout

Attenuating random noise with a prediction filter in the time-space (t-x) domain generally produces results similar to those of predictions done in the frequency-space (f-x) domain. However, in the presence of moderate- to high-amplitude noise, t-x prediction passes less random noise than f-x prediction does. The f-x prediction may also produce false events in the presence of parallel events, where t-x prediction does not. These advantages of t-x prediction result from its ability to control the length of the prediction filter in time: an f-x prediction produces an effective t-x domain filter that is as long in time as the input data. Gulunay's f-x domain prediction tends to bias the predictions toward the traces nearest the output trace, allowing somewhat more noise to pass, but this bias may be overcome by modifying the system of equations used to calculate the filter. The 3-D extension of the 2-D t-x and f-x prediction techniques allows improved noise attenuation because more samples are used in the predictions and the requirement that events be strictly linear is relaxed.
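As a concrete stand-in for the f-x approach compared here, the sketch below transforms each trace to the frequency domain and fits a short one-sided autoregressive prediction filter across traces at each frequency. Gulunay's published scheme uses combined forward and backward prediction and different edge handling, so this is a minimal illustration only; the function name and filter length are assumptions.

```python
import numpy as np

def fx_predict(data, filt_len=4):
    """Minimal one-sided f-x prediction for random-noise attenuation.

    data: 2-D array (time samples x traces). Each frequency slice is
    modelled as an autoregressive sequence across traces; the AR
    prediction is kept as the signal estimate.
    """
    nt, nx = data.shape
    D = np.fft.rfft(data, axis=0)  # to the frequency-space (f-x) domain
    S = np.zeros_like(D)
    for k in range(D.shape[0]):
        d = D[k]
        # Forward-prediction system: d[n] ~ sum_j f[j] * d[n - 1 - j].
        A = np.column_stack([d[filt_len - 1 - j:nx - 1 - j]
                             for j in range(filt_len)])
        b = d[filt_len:]
        f, *_ = np.linalg.lstsq(A, b, rcond=None)
        S[k, filt_len:] = A @ f         # predicted (signal) part
        S[k, :filt_len] = d[:filt_len]  # leave edge traces untouched
    return np.fft.irfft(S, n=nt, axis=0)
```

Note how the filter acts on whole frequency slices: this is exactly why its effective t-x response is as long in time as the input data, the drawback the abstract identifies.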


2018, Vol. 15(6), pp. 172988141881470. Author(s): Nezih Ergin Özkucur, H. Levent Akın

Self-localization is one of the fundamental problems in the development of intelligent autonomous robots, and processing raw sensory information into useful features is an integral part of it. In a typical scenario there are several choices for the feature extraction algorithm, each with weaknesses and strengths depending on the characteristics of the environment. In this work, we introduce a localization algorithm that captures the quality of a feature type in the local environment and makes a soft selection of feature types across different regions. A batch expectation-maximization algorithm is developed for both discrete and Monte Carlo localization models; it exploits the probabilistic pose estimations of the robot without requiring ground-truth poses and treats the different observation types as black-box algorithms. We tested our method in simulations, on data collected from an indoor environment with a custom robot platform, and on a public data set. The results are compared with those of the individual feature types as well as a naive fusion strategy.
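A minimal sketch of the soft-selection idea: treat each feature type as a mixture component and let batch EM re-estimate per-region mixing weights from per-observation likelihoods. The function em_feature_weights and its inputs are hypothetical simplifications; the paper's model additionally integrates discrete and Monte Carlo localization filters.

```python
import numpy as np

def em_feature_weights(lik, regions, n_regions, n_iter=30):
    """Batch EM over per-observation likelihoods of several feature types.

    lik:     (n_obs, n_types) likelihood of each observation under each
             feature type, e.g. averaged over the particle set.
    regions: (n_obs,) index of the map region each observation came from.
    Returns per-region mixing weights, i.e. a soft selection of feature
    types that adapts to the local environment.
    """
    n_obs, n_types = lik.shape
    w = np.full((n_regions, n_types), 1.0 / n_types)
    for _ in range(n_iter):
        # E-step: responsibility of each feature type per observation.
        num = w[regions] * lik
        resp = num / num.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights region by region.
        for r in range(n_regions):
            mask = regions == r
            if mask.any():
                w[r] = resp[mask].mean(axis=0)
    return w
```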


2018, Vol. 35(8), pp. 1508–1518. Author(s): Rosembergue Pereira Souza, Luiz Fernando Rust da Costa Carmo, Luci Pirmez

Purpose – The purpose of this paper is to present a procedure for finding unusual patterns in accredited tests using a rapid processing method for analyzing video records. The procedure uses the temporal differencing technique for object tracking and considers only frames not identified as statistically redundant.
Design/methodology/approach – An accreditation organization is responsible for accrediting facilities to undertake testing and calibration activities, and periodically evaluates the accredited testing facilities. These evaluations can use video records and photographs of the tests performed by a facility to judge its conformity to technical requirements. To validate the proposed procedure, a real-world data set with video records from accredited testing facilities in the field of vehicle safety in Brazil was used, and the processing time of the procedure was compared with the time needed to process the video records in a traditional fashion.
Findings – With an appropriate threshold value, the proposed procedure successfully identified video records of fraudulent services, and processing was faster than with a traditional method.
Originality/value – Manually evaluating video records is time consuming and tedious. This paper proposes a procedure to rapidly find unusual patterns in videos of accredited tests with a minimum of manual effort.
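A minimal sketch of the temporal-differencing frame filter, using OpenCV; the grey-level cutoff and the changed-pixel fraction used to declare a frame statistically redundant are illustrative choices, not the paper's calibrated parameters.

```python
import cv2
import numpy as np

def nonredundant_frames(path, diff_thresh=25, change_frac=0.01):
    """Yield only frames that differ enough from the last kept frame.

    Temporal differencing: a frame is treated as statistically redundant
    (and skipped) when fewer than change_frac of its pixels change by
    more than diff_thresh grey levels relative to the last kept frame.
    """
    cap = cv2.VideoCapture(path)
    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is None:
            prev = grey
            yield frame  # always keep the first frame
            continue
        diff = cv2.absdiff(grey, prev)
        if np.count_nonzero(diff > diff_thresh) > change_frac * diff.size:
            prev = grey
            yield frame  # frame carries new motion; keep it
    cap.release()
```

Downstream object tracking then runs only on the yielded frames, which is where the reported speed-up over traditional full-video processing would come from.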


2013, Vol. 28(5), pp. 1175–1187. Author(s): Kapil Sheth, Thomas Amis, Sebastian Gutierrez-Nolasco, Banavar Sridhar, Daniel Mulfinger

This paper presents a method for determining a threshold value for probabilistic convective weather forecast data. By synchronizing air traffic data with an experimental probabilistic convective weather forecast product, it was observed that aircraft avoid areas above a specific forecasted probability. Both the intensity and the echo top of the forecasted weather were synchronized with air traffic data to derive the probability threshold parameter. This value can be used by dispatchers for flight planning and by air traffic managers to reroute streams of aircraft around convective cells. The main contribution of this paper is a method to compute the probability threshold parameters using a specific experimental probabilistic convective forecast product that provides hourly guidance up to 6 h ahead. Air traffic and weather data for a 4-month period during the summer of 2007 were used to compute the parameters for the continental United States. Results are shown for different altitudes, times of day, aircraft types, and airspace users. Threshold values for each of the 20 Air Route Traffic Control Centers were also computed, and additional details are presented for seven high-altitude sectors in the Fort Worth, Texas, center. For the analysis reported here, flight intent was not considered and no assessment of flight deviation was conducted, since only aircraft tracks were used.
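One plausible reading of the synchronization step is sketched below: sample the forecast probability in each grid cell, mark the cells crossed by flight tracks during the forecast's valid time, and take the threshold as the probability bin where the crossing rate collapses relative to near-zero-probability cells. The estimator, bin layout, and cutoff are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def probability_threshold(cell_prob, cell_traversed, cutoff=0.05):
    """Estimate the forecast probability above which aircraft deviate.

    cell_prob:      forecast probability of convection per grid cell.
    cell_traversed: boolean, whether any synchronized flight track
                    crossed the cell during the forecast valid time.
    """
    bins = np.linspace(0.0, 1.0, 11)
    idx = np.digitize(cell_prob, bins) - 1
    # Crossing rate per probability bin.
    rate = np.array([cell_traversed[idx == b].mean()
                     if (idx == b).any() else np.nan
                     for b in range(10)])
    base = rate[0]  # crossing rate in near-zero-probability cells
    low = np.where(rate < cutoff * base)[0]
    return bins[low[0]] if low.size else 1.0
```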


Geophysics, 2016, Vol. 81(6), pp. P57–P70. Author(s): Shaun Strong, Steve Hearn

Survey design for converted-wave (PS) reflection is more complicated than for standard P-wave surveys, due to raypath asymmetry and the increased possibility of phase distortion. Coal-scale PS surveys (depth [Formula: see text]) require particular consideration, partly due to the distinctive physical properties of the target (low density and low velocity). Finite-difference modeling provides a pragmatic evaluation of the likely distortion due to the inclusion of postcritical reflections. If the offset range is carefully chosen, it may be possible to incorporate high-amplitude postcritical reflections without seriously degrading the resolution of the stack. Offsets of up to three times the target depth may in some cases be usable, with appropriate quality control at the data-processing stage. This means that PS survey design may need to handle raypaths that are highly asymmetrical and very sensitive to the assumed velocities. A 3D PS design was used for a particular coal survey with the target in the depth range of 85–140 m. The objectives were an acceptable fold balance between bins and a relatively smooth distribution of offset and azimuth within bins. These parameters are relatively robust for a P-wave design but much more sensitive in the PS case. Reducing the source density is more acceptable than reducing the receiver density, particularly in terms of the offset-azimuth distribution. This is a fortuitous observation in that it improves the economics of a dynamite source, which is desirable for high-resolution coal-mine planning. The final survey design necessarily allows for logistical and economic considerations, which implies some technical compromise. Nevertheless, good fold, offset, and azimuth distributions are achieved across the survey area, yielding a data set suitable for meaningful analysis of P- and S-wave azimuthal anisotropy.
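The raypath asymmetry discussed above has a simple quantitative core: for a velocity ratio g = Vp/Vs, the asymptotic conversion point (ACP) of a PS raypath lies at a distance offset * g/(1+g) from the source, i.e., closer to the receiver, so PS fold shifts with the assumed velocity ratio in a way P-wave midpoint fold does not. The sketch below bins ACP fold for a 2-D line; the geometry, function name, and parameter values are illustrative.

```python
import numpy as np

def acp_fold(src_x, rec_x, vp_vs=2.0, bin_size=10.0, line_len=1000.0):
    """Asymptotic-conversion-point (ACP) fold along a 2-D PS line.

    Unlike the symmetric P-wave midpoint, the PS conversion point sits
    at offset * g / (1 + g) from the source (g = Vp/Vs), which is why
    PS binning is so sensitive to the assumed velocities.
    """
    g = vp_vs
    sx, rx = np.meshgrid(src_x, rec_x, indexing="ij")
    acp = sx + (rx - sx) * g / (1.0 + g)  # all source-receiver pairs
    edges = np.arange(0.0, line_len + bin_size, bin_size)
    fold, _ = np.histogram(acp.ravel(), bins=edges)
    return fold

# Example: inspect how fold balance changes when source density is
# halved, the source-vs-receiver density trade-off the abstract discusses.
fold_dense = acp_fold(np.arange(0, 1000, 20.0), np.arange(0, 1000, 10.0))
fold_sparse = acp_fold(np.arange(0, 1000, 40.0), np.arange(0, 1000, 10.0))
```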

