Compilation of a recent seismicity data base of the greater Alpine region from several seismological networks and preliminary 3D tomographic results

1997 ◽  
Vol 40 (1) ◽  
Author(s):  
S. Solarino ◽  
E. Kissling ◽  
S. Sellami ◽  
G. Smriglio ◽  
F. Thouvenot ◽  
...  

Local earthquake data collected by seven national and regional seismic networks have been compiled into a travel-time catalog of 32341 earthquakes for the period 1980 to 1995 in South-Central Europe. As a prerequisite, a complete and corrected station list (master station list) was prepared according to updated information provided by each network. By simultaneous inversion of some 600 well-locatable events, we obtained one-dimensional (1D) velocity propagation models for each network. Subsequently, these velocity models with appropriate station corrections were used to obtain high-quality hypocenter locations for events within and between the station networks. For better control, the merging of phase data from several networks was performed as an iterative process in which, at each iteration, two data sets of neighbouring networks or groups of networks were merged. Particular care was taken to detect and correctly identify phase data from events common to the data sets of two different networks. Where the same phase data were reported by more than one network, the phase data from the network owning and servicing the station were used, according to the master station list. The merging yielded a data set of 278007 P-wave and 191074 S-wave travel-time observations from 32341 events in the greater Alpine region. Restrictive selection (number of P-wave observations >7; gap <160 degrees) yielded a data set of about 10000 events with a total of more than 128000 P-wave and 87000 S-wave observations, well suited for a local earthquake tomography study. Preliminary tomographic results for South-Central Europe clearly show the topography of the crust-mantle boundary in the greater Alpine region and outline the 3D structure of the seismic Ivrea body.
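As a rough illustration of the restrictive selection step, the sketch below filters a catalog on the two criteria quoted above (more than 7 P-wave observations, azimuthal gap below 160 degrees); the event fields n_p_obs and azimuthal_gap are hypothetical names, not the catalog's actual format.

```python
# Sketch of the restrictive event selection described above; field names are illustrative.

def select_events(events, min_p_obs=8, max_gap_deg=160.0):
    """Keep events with more than 7 P-wave observations and an azimuthal gap below 160 degrees."""
    return [ev for ev in events
            if ev["n_p_obs"] >= min_p_obs and ev["azimuthal_gap"] < max_gap_deg]

catalog = [
    {"id": 1, "n_p_obs": 12, "azimuthal_gap": 95.0},   # kept
    {"id": 2, "n_p_obs": 5,  "azimuthal_gap": 80.0},   # rejected: too few P picks
    {"id": 3, "n_p_obs": 20, "azimuthal_gap": 210.0},  # rejected: gap too large
]
print([ev["id"] for ev in select_events(catalog)])     # -> [1]
```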

Geophysics ◽  
2009 ◽  
Vol 74 (6) ◽  
pp. WCB71-WCB79 ◽  
Author(s):  
Stephan Husen ◽  
Tobias Diehl ◽  
Edi Kissling

Despite the increase in quality and number of seismic stations in many parts of the world, the accurate timing of individual arrivals remains crucial for many tomographic applications. To achieve a data set of high quality, arrival times need to be picked with high accuracy, including a proper assessment of the uncertainty in timing and phase identification, and with a high level of consistency. We have investigated the effects of data quantity and quality on the solution quality in local earthquake tomography. We compared tomographic results obtained with synthetic and real data for two very different data sets. The first data set consisted of a large set of arrival times of low precision and unknown accuracy taken from the International Seismological Centre (ISC) Bulletin for the greater Alpine region. The second, high-quality data set for the same region was seven times smaller and was obtained by automated quality-weighted repicking. In a first series of inversions, synthetic data resembling the two data sets were inverted with the same amount of Gaussian distributed noise added. Subsequently, in a second series of inversions, the noise level was increased successively for the ISC data to study the effect of a larger Gaussian distributed error on the solution quality. Finally, the real data of both data sets were inverted. These investigations showed that, for Gaussian distributed error, a smaller data set of high quality can achieve a solution quality similar to or better than that of a data set seven times larger but about four times lower in quality. Our results further suggest that the quality of the ISC Bulletin is degraded significantly by inconsistencies, strongly limiting the use of this large data set for local earthquake tomography studies.
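The noise experiments can be pictured with a minimal sketch like the one below, which perturbs synthetic arrival times with zero-mean Gaussian noise at successively increased levels; the times and noise levels shown are illustrative, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(travel_times, sigma):
    """Return synthetic travel times perturbed by zero-mean Gaussian noise (std = sigma, in s)."""
    return travel_times + rng.normal(0.0, sigma, size=travel_times.shape)

t_synth = np.array([4.21, 7.85, 12.30])        # synthetic arrival times (s), illustrative
for sigma in (0.05, 0.10, 0.20, 0.40):         # successively increased noise levels
    print(sigma, add_gaussian_noise(t_synth, sigma))
```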


Author(s):  
Brian Hoeschen ◽  
Darcy Bullock ◽  
Mark Schlappi

Historically, stopped delay was used to characterize the operation of intersection movements because it was relatively easy to measure. During the past decade, the traffic engineering community has moved away from using stopped delay and now uses control delay. That measurement is more precise but quite difficult to extract from large data sets if strict definitions are used to derive the data. This paper evaluates two procedures for estimating control delay. The first is based on a historical approximation that control delay is 30% larger than stopped delay. The second is new and based on segment delay. The procedures are applied to a diverse data set collected in Phoenix, Arizona, and compared with control delay calculated by using the formal definition. The new approximation was observed to be better than the historical stopped delay procedure; it provided an accurate prediction of control delay. Because it is an approximation, this methodology would be most appropriately applied to large data sets collected from travel time studies for ranking and prioritizing intersections for further analysis.
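The two approximations lend themselves to a very small sketch. The 30% rule is as stated above; the segment-delay variant below (observed segment travel time minus free-flow travel time) is one plausible formulation, since the abstract does not spell out the exact formula.

```python
def control_delay_from_stopped(stopped_delay_s):
    """Historical approximation: control delay is ~30% larger than stopped delay."""
    return 1.3 * stopped_delay_s

def control_delay_from_segment(travel_time_s, free_flow_time_s):
    """Segment-delay approximation (one plausible formulation): the difference
    between the observed segment travel time and the free-flow travel time."""
    return max(0.0, travel_time_s - free_flow_time_s)

print(control_delay_from_stopped(20.0))        # -> 26.0 s
print(control_delay_from_segment(55.0, 32.0))  # -> 23.0 s
```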


Geophysics ◽  
2022 ◽  
pp. 1-59
Author(s):  
Fucai Dai ◽  
Feng Zhang ◽  
Xiangyang Li

SS-waves (SV-SV waves and SH-SH waves) are capable of inverting S-wave velocity (VS) and density (ρ) because they are sensitive to both parameters. SH-SH waves can be separated from multicomponent data sets more effectively than SV-SV waves because the former are decoupled from the PP-wave in isotropic media. In addition, the SH-SH wave can be modeled better than the SV-SV wave in the case of strong velocity/impedance contrast, because the SV-SV wave has multiple critical angles, some of which can be quite small when the velocity/impedance contrast is strong. We derived an approximate equation for the SH-SH wave reflection coefficient as a function of VS and ρ in natural-logarithm variables. The approximation has high accuracy, and it enables the inversion of VS and ρ in a direct manner. The coefficients corresponding to VS and ρ are both "model-parameter independent", and thus there is no need for a prior estimate of any model parameter in the inversion. We then developed an SH-SH wave inversion method and demonstrated it using synthetic data sets and a real SH-SH wave prestack data set from the west of China. We found that VS and ρ can be reliably estimated from the SH-SH wave at small angles.
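To make the direct two-parameter inversion concrete, the sketch below uses the classical Aki-Richards small-contrast linearization of the SH-SH reflection coefficient as a stand-in for the paper's own approximation (it is not that approximation), and recovers Δln VS and Δln ρ from small-angle reflectivities by least squares.

```python
import numpy as np

# Stand-in linearization (Aki & Richards) of the SH-SH reflection coefficient:
#   R(theta) ~= -0.5*(1 - tan^2(theta)) * dln(Vs) - 0.5 * dln(rho)
def shsh_coeffs(theta_rad):
    a = -0.5 * (1.0 - np.tan(theta_rad) ** 2)  # multiplies dln(Vs)
    b = -0.5 * np.ones_like(theta_rad)         # multiplies dln(rho)
    return a, b

theta = np.radians([5.0, 10.0, 15.0, 20.0])    # small angles only
a, b = shsh_coeffs(theta)
G = np.column_stack([a, b])

true_model = np.array([0.08, 0.05])            # [dln(Vs), dln(rho)], illustrative
r = G @ true_model                             # noise-free synthetic reflectivity

# Direct linear inversion: no prior model-parameter estimate is needed.
est, *_ = np.linalg.lstsq(G, r, rcond=None)
print(est)                                     # -> [0.08 0.05]
```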


2016 ◽  
Vol 16 (4) ◽  
pp. 901-913 ◽  
Author(s):  
Stephen Cusack

Abstract. The clustering of severe European windstorms on annual timescales has substantial impacts on the (re-)insurance industry. Our knowledge of the risk is limited by large uncertainties in estimates of clustering from typical historical storm data sets covering the past few decades. Eight storm data sets are gathered for analysis in this study in order to reduce these uncertainties. Six of the data sets contain more than 100 years of severe storm information to reduce sampling errors, and observational errors are reduced by the diversity of information sources and analysis methods between storm data sets. All storm severity measures used in this study reflect damage, to suit (re-)insurance applications. The shortest storm data set of 42 years provides indications of stronger clustering with severity, particularly for regions off the main storm track in central Europe and France. However, clustering estimates have very large sampling and observational errors, exemplified by large changes in estimates in central Europe upon removal of one stormy season, 1989/1990. The extended storm records place 1989/1990 into a much longer historical context to produce more robust estimates of clustering. All the extended storm data sets show increased clustering between more severe storms from return periods (RPs) of 0.5 years to the longest measured RPs of about 20 years. Further, they contain signs of stronger clustering off the main storm track, and weaker clustering for smaller-sized areas, though these signals are more uncertain as they are drawn from smaller data samples. These new ultra-long storm data sets provide new information on clustering to improve our management of this risk.
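One standard way to quantify such clustering, shown in the sketch below, is the variance-to-mean (dispersion) index of annual storm counts, which equals 1 for an unclustered Poisson process and exceeds 1 under clustering; the counts are invented for illustration, and the paper's severity-dependent statistics may differ.

```python
import numpy as np

def dispersion_index(annual_counts):
    """Variance-to-mean ratio of annual storm counts: 1 for a Poisson
    (unclustered) process, >1 indicates temporal clustering."""
    counts = np.asarray(annual_counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Illustrative counts of storms exceeding a fixed severity threshold per season
counts = [0, 1, 0, 4, 0, 0, 2, 0, 5, 0, 1, 0]
print(round(dispersion_index(counts), 2))      # > 1 suggests clustering
```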


2018 ◽  
Vol 154 (2) ◽  
pp. 149-155
Author(s):  
Michael Archer

1. Yearly records of worker Vespula germanica (Fabricius) taken in suction traps at Silwood Park (28 years) and at Rothamsted Research (39 years) are examined. 2. Using the autocorrelation function (ACF), a significant negative 1-year lag followed by a lesser, non-significant positive 2-year lag was found in all, or parts of, each data set, indicating an underlying population dynamic of a 2-year cycle with a damped waveform. 3. The minimum number of years before the 2-year cycle with damped waveform became apparent varied between 17 and 26; in some data sets the cycle was not found. 4. Ecological factors delaying or preventing the occurrence of the 2-year cycle are considered.
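The lag structure described in point 2 can be checked with a few lines of code; the sketch below computes the sample ACF of a yearly series and is run on an invented series with a damped 2-year cycle, not on the Silwood Park or Rothamsted data.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation of a yearly series for lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return [np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)]

# Illustrative yearly worker counts with a damped 2-year cycle
counts = [120, 40, 100, 50, 90, 55, 85, 60, 80, 62, 78, 64]
r1, r2 = acf(counts, 2)
print(round(r1, 2), round(r2, 2))  # negative lag-1, smaller positive lag-2
```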


2018 ◽  
Vol 21 (2) ◽  
pp. 117-124 ◽  
Author(s):  
Bakhtyar Sepehri ◽  
Nematollah Omidikia ◽  
Mohsen Kompany-Zareh ◽  
Raouf Ghavami

Aims & Scope: In this research, eight variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Materials & Methods: Three data sets, comprising 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. First, for each of the three data sets, a CoMFA model with all CoMFA descriptors was created; then a new CoMFA model was developed with each variable selection method, so that nine CoMFA models were built for each data set. The results show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying five variable selection approaches, namely FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, increases the predictive power and stability of CoMFA models significantly. Result & Conclusion: Among them, SPA-jackknife removes most of the variables, while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, whereas SRD-FFD and SRD-UVE-PLS run in a few seconds. Applying FFD, SRD-FFD, IVE-PLS, and SRD-UVE-PLS also preserves CoMFA contour map information for both fields.
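The abstract does not give implementation details, but the flavour of jackknife-based variable selection for a latent-variable model can be sketched as below, where PLS regression coefficients are jackknifed over leave-one-out refits and variables with stable coefficients are retained; the data, cutoff, and component count are all illustrative, and this is not the SPA-jackknife implementation itself.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(57, 30))                   # illustrative descriptor matrix
y = X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=57)   # depends on vars 0 and 3

# Jackknife the PLS coefficients: refit with one sample left out each time,
# then keep variables whose coefficient is large relative to its jackknife SE.
n, p = X.shape
coefs = np.empty((n, p))
for i in range(n):
    keep = np.arange(n) != i
    pls = PLSRegression(n_components=3).fit(X[keep], y[keep])
    coefs[i] = pls.coef_.ravel()

se = np.sqrt((n - 1) / n * ((coefs - coefs.mean(axis=0)) ** 2).sum(axis=0))
t = np.abs(coefs.mean(axis=0)) / se
selected = np.where(t > 2.0)[0]                 # illustrative stability cutoff
print(selected)                                 # should include variables 0 and 3
```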


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier-transform-inspired method to classify human activities from time series sensor data. Methods: Our method begins by decomposing the 1D input signal into 2D patterns, motivated by the Fourier transform. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependency of the signal and produces encoded sequences. The sequences, once arranged into a 2D array, can represent the fingerprints of the signals. The benefit of such a transformation is that we can exploit recent advances in deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model is therefore a combination of an LSTM and a CNN. We evaluate the model on two data sets. For the first data set, which is more standardized than the other, our model outperforms, or at least equals, previous works. For the second data set, we devise schemes to generate training and testing data by varying the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show that the accuracy exceeds 95% in some cases. We also analyze the effect of the parameters on the performance.
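A minimal sketch of the LSTM-then-CNN architecture follows; the layer sizes, window length, and class count are assumptions for illustration rather than the paper's actual configuration.

```python
import torch
import torch.nn as nn

class LSTMCNNClassifier(nn.Module):
    """Sketch of the LSTM->2D->CNN idea: an LSTM encodes the 1D sensor signal
    into a sequence of hidden states, which is treated as a 2D 'fingerprint'
    image and classified by a CNN. Layer sizes are illustrative."""
    def __init__(self, n_features, hidden, n_classes):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
            nn.Linear(16 * 4 * 4, n_classes),
        )

    def forward(self, x):              # x: (batch, time, n_features)
        seq, _ = self.lstm(x)          # (batch, time, hidden)
        img = seq.unsqueeze(1)         # (batch, 1, time, hidden) as a 2D pattern
        return self.cnn(img)

model = LSTMCNNClassifier(n_features=3, hidden=32, n_classes=6)
logits = model(torch.randn(8, 128, 3))  # 8 windows of 128 samples, 3-axis sensor
print(logits.shape)                     # -> torch.Size([8, 6])
```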


2019 ◽  
Vol 73 (8) ◽  
pp. 893-901
Author(s):  
Sinead J. Barton ◽  
Bryan M. Hennelly

Cosmic ray artifacts may be present in all photo-electric readout systems. In spectroscopy, they present as random unidirectional sharp spikes that distort spectra and may have an effect on post-processing, possibly affecting the results of multivariate statistical classification. A number of methods have previously been proposed to remove cosmic ray artifacts from spectra, but the goal of removing the artifacts while making no other change to the underlying spectrum is challenging. One of the most successful and commonly applied methods for the removal of cosmic ray artifacts involves the capture of two sequential spectra that are compared in order to identify spikes. The disadvantage of this approach is that at least two recordings are necessary, which may be problematic for dynamically changing spectra, and which can reduce the signal-to-noise (S/N) ratio when compared with a single recording of equivalent duration, due to the inclusion of two instances of read noise. In this paper, a cosmic ray artifact removal algorithm is proposed that works in a similar way to the double-acquisition method but requires only a single capture, so long as a data set of similar spectra is available. The method employs normalized covariance in order to identify a similar spectrum in the data set, from which a direct comparison reveals the presence of cosmic ray artifacts, which are then replaced with the corresponding values from the matching spectrum. The advantage of the proposed method over the double-acquisition method is investigated in the context of the S/N ratio, and the method is applied to various data sets of Raman spectra recorded from biological cells.
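Assuming the approach outlined above, a compact sketch might look as follows: the most similar library spectrum is found via normalized covariance (here the correlation coefficient), positive-going outliers in the difference are flagged as cosmic ray artifacts, and flagged points are replaced from the matching spectrum. The robust threshold is an assumption, not the paper's parameterization.

```python
import numpy as np

def remove_cosmic_rays(spectrum, library, threshold=5.0):
    """Replace spike samples in `spectrum` using the most similar spectrum
    found in `library` (a list of 1D arrays of the same length)."""
    sims = [np.corrcoef(spectrum, ref)[0, 1] for ref in library]
    ref = library[int(np.argmax(sims))]          # best match by normalized covariance
    diff = spectrum - ref
    mad = np.median(np.abs(diff - np.median(diff)))
    spikes = diff > threshold * 1.4826 * mad     # unidirectional, positive-going spikes
    cleaned = spectrum.copy()
    cleaned[spikes] = ref[spikes]                # borrow values from the match
    return cleaned, spikes

rng = np.random.default_rng(2)
base = np.sin(np.linspace(0.0, 6.0, 500))
library = [base + 0.02 * rng.normal(size=500) for _ in range(5)]
spec = base + 0.02 * rng.normal(size=500)
spec[123] += 3.0                                 # inject an artificial cosmic ray
cleaned, spikes = remove_cosmic_rays(spec, library)
print(int(spikes.sum()), int(np.argmax(spikes))) # -> 1 123
```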


2013 ◽  
Vol 756-759 ◽  
pp. 3652-3658
Author(s):  
You Li Lu ◽  
Jun Luo

Building on kernel methods, this paper puts forward two improved algorithms, called R-SVM and I-SVDD, to cope with imbalanced data sets in closed systems. R-SVM uses the K-means algorithm to cluster samples in feature space, while I-SVDD improves the performance of the original SVDD through imbalanced sample training. Experiments on two system call data sets show that the two algorithms are more effective and that R-SVM has a lower complexity.
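The paper gives no implementation details here, so the sketch below shows one common way to realize the R-SVM idea: cluster the majority class with K-means, keep the centroids as representatives, and train an SVM on the rebalanced set. Class geometry, cluster count, and kernel are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X_major = rng.normal(0.0, 1.0, size=(500, 2))   # majority class (e.g. normal calls)
X_minor = rng.normal(3.0, 0.5, size=(25, 2))    # minority class (e.g. intrusions)

# Cluster the majority class and keep the centroids as representatives,
# rebalancing the training set before fitting the SVM.
centroids = KMeans(n_clusters=25, n_init=10, random_state=0).fit(X_major).cluster_centers_
X = np.vstack([centroids, X_minor])
y = np.array([0] * len(centroids) + [1] * len(X_minor))

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.0, 0.0], [3.0, 3.0]]))    # -> [0 1] expected
```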

