The effects of data quality in local earthquake tomography: Application to the Alpine region

Geophysics ◽  
2009 ◽  
Vol 74 (6) ◽  
pp. WCB71-WCB79 ◽  
Author(s):  
Stephan Husen ◽  
Tobias Diehl ◽  
Edi Kissling

Despite the increase in quality and number of seismic stations in many parts of the world, accurate timing of individual arrival times remains crucial for many tomographic applications. To achieve a data set of high quality, arrival times need to be picked with high accuracy, including a proper assessment of the uncertainty of timing and phase identification, and a high level of consistency. We have investigated the effects of data quantity and quality on the solution quality in local earthquake tomography. We have compared tomographic results obtained with synthetic and real data of two very different data sets. The first data set consisted of a large set of arrival times of low precision and unknown accuracy taken from the International Seismological Centre (ISC) Bulletin for the greater Alpine region. The second high-quality data set for the same region was seven times smaller and was obtained by automated quality-weighted repicking. During a first series of inversions, synthetic data resembling the two data sets were inverted with the same amount of Gaussian distributed noise added. Subsequently, during a second series of inversions, the noise level was increased successively for ISC data to study the effect of larger Gaussian distributed error on the solution quality. Finally, the real data for both data sets were inverted. These investigations showed that, for Gaussian distributed error, a smaller data set of high quality could achieve a similar or better solution quality than a data set seven times larger but about four times lower in quality. Our results further suggest that the quality of the ISC Bulletin is degraded significantly by inconsistencies, strongly limiting the use of this large data set for local earthquake tomography studies.
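A minimal sketch of how noise tests of the kind described above could be set up: zero-mean Gaussian noise with a class-dependent standard deviation is added to synthetic arrival times. The quality classes and noise levels below are illustrative assumptions, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_pick_noise(arrival_times, pick_classes, sigma_by_class):
    """Add zero-mean Gaussian noise to noise-free synthetic arrival times.
    The noise level of each observation is taken from its pick quality class."""
    sigmas = np.asarray([sigma_by_class[c] for c in pick_classes])
    return np.asarray(arrival_times) + rng.normal(0.0, sigmas)

# Illustrative values only: a small "high-quality" set versus a "bulletin-like"
# set with errors roughly four times larger.
t_clean = np.array([12.31, 15.87, 20.42, 24.05])   # seconds
classes = [0, 0, 1, 1]                              # hypothetical quality classes
t_highquality = add_pick_noise(t_clean, classes, {0: 0.05, 1: 0.10})
t_bulletin    = add_pick_noise(t_clean, classes, {0: 0.20, 1: 0.40})
```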

1997 ◽  
Vol 40 (1) ◽  
Author(s):  
S. Solarino ◽  
E. Kissling ◽  
S. Sellami ◽  
G. Smriglio ◽  
F. Thouvenot ◽  
...  

Local earthquake data collected by seven national and regional seismic networks have been compiled into a travel-time catalog of 32341 earthquakes for the period 1980 to 1995 in South-Central Europe. As a prerequisite, a complete and corrected station list (master station list) was prepared according to updated information provided by every network. By simultaneous inversion of some 600 well-locatable events, we obtained one-dimensional (1D) velocity models for each network. These velocity models, with appropriate station corrections, were then used to obtain high-quality hypocenter locations for events inside and among the station networks. For better control, merging of phase data from several networks was performed as an iterative process in which, at each iteration, two data sets of neighbouring networks or groups of networks were merged. Particular care was taken to detect and correctly identify phase data from events common to the data sets of two different networks. Where the same phase was reported by more than one network, the phase data from the network owning and servicing the station, according to the master station list, were used. The merging yielded a data set of 278007 P-wave and 191074 S-wave travel-time observations from 32341 events in the greater Alpine region. Restrictive selection (number of P-wave observations >7; gap <160 degrees) yielded a data set of about 10000 events with a total of more than 128000 P-wave and 87000 S-wave observations, well suited for local earthquake tomography studies. Preliminary tomographic results for South-Central Europe clearly show the topography of the crust-mantle boundary in the greater Alpine region and outline the 3D structure of the seismic Ivrea body.
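The restrictive selection step quoted above (more than 7 P-wave observations, azimuthal gap below 160 degrees) can be expressed as a simple catalog filter. The sketch below assumes a hypothetical event structure with a P-observation count and a list of station azimuths; it is an illustration, not the compilation code used by the authors.

```python
import numpy as np

def select_events(events, min_p_obs=8, max_gap_deg=160.0):
    """Keep events with more than 7 P-wave observations and an azimuthal gap
    below 160 degrees. `events` is a hypothetical list of dicts with keys
    'n_p_obs' and 'station_azimuths' (degrees)."""
    selected = []
    for ev in events:
        az = np.sort(np.mod(ev["station_azimuths"], 360.0))
        # Azimuthal gap: largest angular distance between neighbouring stations.
        gaps = np.diff(np.concatenate([az, [az[0] + 360.0]]))
        if ev["n_p_obs"] >= min_p_obs and gaps.max() < max_gap_deg:
            selected.append(ev)
    return selected

events = [{"n_p_obs": 12, "station_azimuths": [10, 95, 170, 260, 330]},
          {"n_p_obs": 6,  "station_azimuths": [20, 40, 60]}]
print(len(select_events(events)))   # -> 1
```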


2020 ◽  
Author(s):  
Irene Molinari ◽  
Matteo Bagagli ◽  
Tobias Diehl ◽  
Edi Kissling ◽  
John Clinton ◽  
...  

We take advantage of the new large seismic data set provided by the AlpArray Seismic Network (AASN) as part of the AlpArray research initiative (www.alparray.ethz.ch) to provide consistent and precise hypocenter locations and uniform magnitude calculations across the greater Alpine region. The AASN is composed of more than 650 broadband seismic stations, 300 of which are temporary. The uniform station coverage provides a unique opportunity to study the laterally strongly variable seismicity that is presently monitored and reported by a dozen individual observatories. A homogeneous earthquake catalog in terms of location and magnitude is a prerequisite to improve our understanding of seismo-tectonics and seismic hazard in the greater Alpine region.

Our catalog covers four years of seismicity (2016 to 2019) with a targeted magnitude of completeness of 2.5 and results from scanning ~1000 broadband stations (~60 TB of data). First, we detect and analyse events in the region using the STA/LTA-based detector of the SeisComP3 monitoring system in off-line mode. Then, after an initial location has been obtained, we apply a high-quality, semi-automated re-picking approach that provides consistent phase arrival times together with timing uncertainties and phase identification assessment. This automatic re-picking framework is implemented with the QUAKE library (Bagagli et al., 2019), an object-oriented Python package that exploits both frequency- and energy-related waveform information by combining several well-established picking algorithms. The QUAKE picker has been tuned and tested against a consistent reference data set of ~2500 manually picked phases (P, S, and secondary phases) for 10 events (M ≥ 2.5) homogeneously distributed across the region.

Subsequently, the high-quality automatic picks of selected well-locatable earthquakes are used to calculate a minimum 1D P-wave velocity model for the region with appropriate station corrections. Finally, all events are relocated with the NonLinLoc algorithm in combination with the updated 1D model, and a final estimate of the magnitude is given. We compare our locations and magnitudes with existing regional and local earthquake catalogs (ISC, EMSC, national catalogs) to assess and discuss the completeness and quality of the derived AlpArray research catalog.
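As an illustration of the detection stage, the sketch below runs a classic STA/LTA trigger on a single trace with ObsPy. It is a generic stand-in, not the SeisComP3 configuration or the QUAKE repicker; the filter band, window lengths, thresholds, and input file are assumptions.

```python
# Generic STA/LTA detection sketch using ObsPy (energy-ratio detector).
from obspy import read
from obspy.signal.trigger import classic_sta_lta, trigger_onset

st = read("station_day.mseed")          # hypothetical input file
tr = st[0].copy()
tr.filter("bandpass", freqmin=1.0, freqmax=15.0)

df = tr.stats.sampling_rate
# Characteristic function: ratio of a 1 s short-term to a 30 s long-term average.
cft = classic_sta_lta(tr.data, int(1.0 * df), int(30.0 * df))

# Trigger on/off thresholds are illustrative, not the values used for AlpArray.
triggers = trigger_onset(cft, 3.5, 1.0)
for on, off in triggers:
    print("candidate onset at", tr.stats.starttime + on / df)
```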


2018 ◽  
Vol 11 (2) ◽  
pp. 53-67
Author(s):  
Ajay Kumar ◽  
Shishir Kumar

Several initial center selection algorithms have been proposed in the literature for numerical data, but because the values of categorical attributes are unordered, these methods are not applicable to categorical data sets. This article investigates the initial center selection process for categorical data and then presents a new support-based initial center selection algorithm. The proposed algorithm measures the weight of each unique attribute value using its support and then integrates these weights along the rows to obtain the support of every row. The data object with the largest support is chosen as the first center, and further centers are then selected at the greatest distance from the initially selected center. The quality of the proposed algorithm is compared with the random initial center selection method, Cao's method, Wu's method, and the method introduced by Khan and Ahmad. Experimental analysis on real data sets shows the effectiveness of the proposed algorithm.
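A minimal sketch of the support-based selection described above, in Python. The abstract does not give the exact integration or distance rules, so two assumptions are made here: row support is the sum of per-attribute value frequencies, and distance is the Hamming (mismatch) count.

```python
import numpy as np

def support_based_centers(X, k):
    """Sketch of support-based initial center selection for categorical data.

    X : 2-D array of categorical values (n objects x m attributes)
    k : number of initial centers to return
    """
    n, m = X.shape
    # Support (relative frequency) of each attribute value, per attribute.
    col_support = []
    for j in range(m):
        vals, counts = np.unique(X[:, j], return_counts=True)
        col_support.append(dict(zip(vals, counts / n)))

    # Row support: integrate the attribute-value supports along each row (assumed sum).
    row_support = np.array([sum(col_support[j][X[i, j]] for j in range(m))
                            for i in range(n)])

    centers = [int(np.argmax(row_support))]        # object with the largest support
    hamming = lambda a, b: int(np.sum(a != b))
    while len(centers) < k:
        # Next center: object farthest (in mismatches) from the centers chosen so far.
        dists = np.array([min(hamming(X[i], X[c]) for c in centers) for i in range(n)])
        dists[centers] = -1
        centers.append(int(np.argmax(dists)))
    return X[centers]

# Toy usage with a small categorical table.
X = np.array([["a", "x"], ["a", "y"], ["b", "x"], ["c", "z"]])
print(support_based_centers(X, 2))
```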


Author(s):  
Avinash Navlani ◽  
V. B. Gupta

In the last couple of decades, clustering has become a crucial research problem in the data mining community. Clustering refers to the partitioning of data objects, such as records and documents, into groups or clusters of similar characteristics. Clustering is unsupervised learning, and because of this unsupervised nature there is no unique solution for all problems. Most of the time, complex data sets require explanation through multiple clusterings, yet traditional clustering approaches generate a single clustering. A data set may contain more than one pattern, and each pattern can be interesting from a different perspective. Alternative clustering aims to find all the distinct groupings of a data set such that each grouping is of high quality and distinct from the others. This chapter gives an overall view of alternative clustering: its various approaches, related work, comparisons with easily confused related terms such as subspace, multi-view, and ensemble clustering, applications, issues, and challenges.


2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Yu Qiao ◽  
Jun Wu ◽  
Hao Cheng ◽  
Zilan Huang ◽  
Qiangqiang He ◽  
...  

In the age of artificial intelligence, we face the challenge of obtaining high-quality data sets for learning systems effectively and efficiently. Crowdsensing is a powerful new tool that divides tasks among data contributors to achieve an outcome cumulatively. However, it raises several new challenges, such as incentivization. Incentive mechanisms are significant for crowdsensing applications, since a good incentive mechanism will attract more workers to participate. However, existing mechanisms fail to consider situations where the crowdsourcer has to hire capacitated workers or workers from multiple regions. We design two objectives for the proposed multiregion scenario, namely, weighted mean and maximin. The proposed mechanisms approximately maximize the utility of the services provided by the selected data contributors under both constraints. Extensive simulations are conducted to verify the effectiveness of our proposed methods.
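As a toy illustration of the maximin objective mentioned above, the sketch below greedily assigns one worker at a time to the region with the lowest accumulated utility until a budget is exhausted. It is not the paper's mechanism, whose details are not given in the abstract; the worker lists, costs, and budget are hypothetical.

```python
def greedy_maximin(workers_by_region, budget):
    """Greedy maximin sketch: workers_by_region maps region -> [(utility, cost), ...],
    consumed in list order; returns the selected workers and per-region utilities."""
    utility = {r: 0.0 for r in workers_by_region}
    pointers = {r: 0 for r in workers_by_region}
    selected, spent = [], 0.0
    while True:
        # Regions that still have an affordable worker left.
        candidates = [r for r in workers_by_region
                      if pointers[r] < len(workers_by_region[r])
                      and spent + workers_by_region[r][pointers[r]][1] <= budget]
        if not candidates:
            break
        r = min(candidates, key=lambda x: utility[x])   # lowest-utility region first
        u, c = workers_by_region[r][pointers[r]]
        pointers[r] += 1
        utility[r] += u
        spent += c
        selected.append((r, u, c))
    return selected, utility

workers = {"A": [(5, 2), (3, 1)], "B": [(4, 2), (2, 1)]}
print(greedy_maximin(workers, budget=5))
```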


2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Suleman Nasiru

Developing generalizations of existing statistical distributions to make them more flexible in modeling real data sets is vital in parametric statistical modeling and inference. Thus, this study develops a new class of distributions, called the extended odd Fréchet family of distributions, for modifying existing standard distributions. Two special models, named the extended odd Fréchet Nadarajah-Haghighi and extended odd Fréchet Weibull distributions, are proposed using the developed family. The densities and hazard rate functions of the two special distributions exhibit different kinds of monotonic and nonmonotonic shapes. The maximum likelihood method is used to develop estimators for the parameters of the new class of distributions. The application of the special distributions is illustrated by means of a real data set. The results reveal that the special distributions developed from the new family can provide a reasonable parametric fit to the given data set compared with other existing distributions.
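A hedged sketch of maximum likelihood fitting for a generated family of this kind. Because the extended family's exact CDF is not quoted in the abstract, the basic odd Fréchet-G construction, F(x) = exp(-((1-G(x))/G(x))^θ), is used here only as a stand-in with a Weibull baseline; the data, parameter names, and starting values are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_log_lik(params, x, eps=1e-12):
    """Negative log-likelihood of an odd Frechet-G model with a Weibull baseline."""
    theta, shape, scale = params
    if theta <= 0 or shape <= 0 or scale <= 0:
        return np.inf
    G = np.clip(weibull_min.cdf(x, shape, scale=scale), eps, 1 - eps)
    g = weibull_min.pdf(x, shape, scale=scale)
    ratio = (1 - G) / G
    # Density obtained by differentiating F(x) = exp(-ratio**theta).
    f = theta * g * ratio**(theta - 1) / G**2 * np.exp(-ratio**theta)
    return -np.sum(np.log(np.clip(f, eps, None)))

x = weibull_min.rvs(1.5, scale=2.0, size=200, random_state=0)  # synthetic data
fit = minimize(neg_log_lik, x0=[1.0, 1.0, 1.0], args=(x,), method="Nelder-Mead")
print(fit.x)   # estimated (theta, shape, scale)
```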


Geophysics ◽  
2007 ◽  
Vol 72 (4) ◽  
pp. T47-T55 ◽  
Author(s):  
Emil Blias

Waves propagating across a vertical seismic profiling (VSP) array may be distinguished by their differing arrival times and linear-moveout velocities. Current methods typically assume that the waves propagate uniformly with an unvarying wavelet shape and amplitude. These assumptions break down in the presence of irregular spatial sampling, event truncations, wavelet variations, and noise. I present a new method that allows each event to independently vary in its amplitude and arrival time as it propagates across the array. The method uses an iterative global nonlinear optimization scheme that consists of several least-squares and two eigenvalue problems at each step. Events are stripped from the data one at a time. As stronger events are predicted and removed, weaker events then become visible and can be modeled in turn. As each new event is approximately modeled, the fit for all previously removed events is then revisited and updated. Iterations continue until no remaining coherent events can be distinguished. As VSP data sets are typically not large, the expense of this method is not a significant limitation. I demonstrate with a real-data example that this iterative approach can lead to a significantly better VSP wavefield separation than that which has been available when using conventional techniques.
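A much simplified sketch of the event-stripping idea: estimate the dominant linear-moveout event by slant stacking, predict it on every trace, subtract it, and repeat. The published method additionally lets amplitude and arrival time vary per trace and refits previously removed events, which is omitted here; the array geometry and slowness range are assumptions.

```python
import numpy as np

def strip_linear_events(data, dt, offsets, slownesses, n_events=2):
    """data: (n_traces, n_samples) VSP array; offsets in m; slownesses in s/m."""
    residual = data.astype(float)
    events = []
    n_tr, n_s = data.shape
    t = np.arange(n_s) * dt
    for _ in range(n_events):
        best = None
        for p in slownesses:
            # Remove the moveout p*x from each trace and stack; the best slowness
            # maximizes the stacked energy.
            aligned = np.array([np.interp(t, t - p * x, residual[i], left=0, right=0)
                                for i, x in enumerate(offsets)])
            stack = aligned.mean(axis=0)
            energy = np.sum(stack**2)
            if best is None or energy > best[0]:
                best = (energy, p, stack)
        _, p, stack = best
        # Predict the event on every trace and remove it from the residual.
        for i, x in enumerate(offsets):
            residual[i] -= np.interp(t, t + p * x, stack, left=0, right=0)
        events.append((p, stack))
    return events, residual
```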


Geophysics ◽  
2009 ◽  
Vol 74 (4) ◽  
pp. J35-J48 ◽  
Author(s):  
Bernard Giroux ◽  
Abderrezak Bouchedda ◽  
Michel Chouteau

We introduce two new traveltime picking schemes developed specifically for crosshole ground-penetrating radar (GPR) applications. The main objective is to automate, at least partially, the traveltime picking procedure and to provide first-arrival times that are closer in quality to those of manual picking approaches. The first scheme is an adaptation of a method based on cross-correlation of radar traces collated in gathers according to their associated transmitter-receiver angle. A detector is added to isolate the first cycle of the radar wave and to suppress secondary arrivals that might be mistaken for first arrivals. To improve the accuracy of the arrival times obtained from the crosscorrelation lags, a time-rescaling scheme is implemented to resize the radar wavelets to a common time-window length. The second method is based on the Akaike information criterion (AIC) and continuous wavelet transform (CWT). It is not tied to the restrictive criterion of waveform similarity that underlies crosscorrelation approaches, which is not guaranteed for traces sorted in common ray-angle gathers. It has the advantage of being fully automated. Performances of the new algorithms are tested with synthetic and real data. In all tests, the approach that adds first-cycle isolation to the original crosscorrelation scheme improves the results. In contrast, the time-rescaling approach brings limited benefits, except when strong dispersion is present in the data. In addition, the performance of crosscorrelation picking schemes degrades for data sets with disparate waveforms despite the high signal-to-noise ratio of the data. In general, the AIC-CWT approach is more versatile and performs well on all data sets. Only with data showing low signal-to-noise ratios is the AIC-CWT superseded by the modified crosscorrelation picker.
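The AIC part of the second method can be illustrated with a standard single-trace AIC onset picker, shown below. The CWT stage and the paper's specific implementation are not reproduced; the synthetic trace and sample interval are illustrative.

```python
import numpy as np

def aic_pick(trace, dt):
    """Return the index and time of the minimum of the Akaike information
    criterion, a common proxy for the first-arrival onset of a single trace."""
    x = np.asarray(trace, dtype=float)
    N = len(x)
    aic = np.full(N, np.nan)
    for k in range(1, N - 1):
        v1 = np.var(x[:k])       # variance of the "noise" segment before k
        v2 = np.var(x[k:])       # variance of the "signal" segment after k
        if v1 <= 0 or v2 <= 0:
            continue
        aic[k] = k * np.log(v1) + (N - k - 1) * np.log(v2)
    onset = int(np.nanargmin(aic))
    return onset, onset * dt

# Synthetic example: low-amplitude noise followed by a burst of signal.
rng = np.random.default_rng(1)
trace = np.concatenate([rng.normal(0, 0.1, 200), rng.normal(0, 1.0, 200)])
print(aic_pick(trace, dt=0.5e-9))   # 0.5 ns sample interval, illustrative for GPR
```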


2018 ◽  
Author(s):  
Yan Li ◽  
Hans von Storch ◽  
Qingyuan Wang ◽  
Qingliang Zhou

Abstract. We have designed a method for testing the quality of multidecadal analyses of SST in regional seas by using a set of high-quality local SST observations. Recognizing that local data may reflect local effects, we focus on the dominant EOFs of the local data and of the localized data of the analyses. We examine the patterns and the variability, as well as the trends, of the principal components. This method is applied to examine four different SST analyses, namely HadISST1, ERSST, COBE SST, and NOAA OISST. They are assessed using a newly constructed high-quality data set of SST at 26 coastal stations along the Chinese coast for 1960–2015, which underwent careful quality examination and a number of corrections for inhomogeneities. The four gridded analyses perform by and large well, in particular since 1980. However, for the pre-satellite period before 1980, the analyses differ from each other and show some inconsistencies with the local data, such as artificial break points, periods of bias, and differences in trends. We conclude that gridded SST analyses need improvement for the pre-satellite period (prior to the 1980s), by re-examining in detail archives of local quality-controlled SST data in many data-sparse regions of the world.
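A minimal sketch of the EOF-based comparison: form anomalies of a station SST matrix, extract the leading EOF and principal component via an SVD, and compute the PC's linear trend. The toy input stands in for the 26-station coastal record; localized data from the gridded analyses would be processed the same way before comparison.

```python
import numpy as np

def leading_eof(sst, time):
    """sst: (n_time, n_stations) SST matrix; returns EOF1, PC1, and PC1's trend."""
    anomalies = sst - sst.mean(axis=0)           # remove each station's mean
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    eof1 = vt[0]                                  # leading spatial pattern
    pc1 = u[:, 0] * s[0]                          # associated principal component
    trend = np.polyfit(time, pc1, 1)[0]           # linear trend of PC1 per time unit
    return eof1, pc1, trend

# Toy usage with random numbers standing in for the 26-station coastal record.
rng = np.random.default_rng(0)
years = np.arange(1960, 2016)
sst = 20 + 0.02 * (years - 1960)[:, None] + rng.normal(0, 0.5, (years.size, 26))
eof1, pc1, trend = leading_eof(sst, years)
print(trend)
```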


2019 ◽  
Vol 623 ◽  
pp. L9 ◽  
Author(s):  
M. Fredslund Andersen ◽  
P. Pallé ◽  
J. Jessen-Hansen ◽  
K. Wang ◽  
F. Grundahl ◽  
...  

Context. We present the first high-cadence multiwavelength radial-velocity observations of the Sun-as-a-star, carried out during 57 consecutive days using the stellar échelle spectrograph at the Hertzsprung SONG Telescope operating at the Teide Observatory. Aims. Our aim was to produce a high-quality data set and reference values for the global helioseismic parameters νmax,⊙ and Δν⊙ of the solar p-modes using the SONG instrument. The obtained data set or the inferred values should then be used when the scaling relations are applied to other stars showing solar-like oscillations observed with SONG or similar instruments. Methods. We used different approaches to analyse the power spectrum of the time series to determine νmax,⊙: simple Gaussian fitting and heavy smoothing of the power spectrum. We determined Δν⊙ using the method of autocorrelation of the power spectrum. The amplitude per radial mode was determined using the method described in Kjeldsen et al. (2008, ApJ, 682, 1370). Results. We found the following values for the solar oscillations using the SONG spectrograph: νmax,⊙ = 3141 ± 12 μHz, Δν⊙ = 134.98 ± 0.04 μHz, and an average amplitude of the strongest radial modes of 16.6 ± 0.4 cm s−1. These values are consistent with previous measurements with other techniques.
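A schematic of the two analysis steps described above: heavy smoothing of the power spectrum to locate νmax, and autocorrelation of the spectrum within the p-mode envelope to estimate Δν. The smoothing width, search ranges, and input arrays are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def numax_from_smoothing(freq, power, width_uhz=300.0):
    """Locate nu_max as the peak of a heavily smoothed power spectrum (freq in uHz)."""
    df = freq[1] - freq[0]
    n = max(int(width_uhz / df), 1)
    kernel = np.ones(n) / n                      # simple boxcar smoother
    smooth = np.convolve(power, kernel, mode="same")
    return freq[np.argmax(smooth)]

def delta_nu_from_acf(freq, power, fmin=2500.0, fmax=3800.0):
    """Estimate Delta_nu from the first strong peak of the power-spectrum
    autocorrelation inside the p-mode envelope (frequencies in uHz)."""
    sel = (freq >= fmin) & (freq <= fmax)
    p = power[sel] - power[sel].mean()
    acf = np.correlate(p, p, mode="full")[p.size - 1:]
    df = freq[1] - freq[0]
    lags = np.arange(acf.size) * df
    search = (lags > 100.0) & (lags < 180.0)     # search near the solar ~135 uHz spacing
    return lags[search][np.argmax(acf[search])]
```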

