Mo calculations from a generalized AR parameter method for WWSSN instruments

1979 ◽  
Vol 69 (2) ◽  
pp. 329-351
Author(s):  
Barbara Radovich Williams

Abstract A single calibration curve which relates seismic moment (Mo) to the AR parameter, a time-domain sum of the area under the surface-wave envelope, has been found applicable to 20 WWSSN long-period instruments recording surface waves over predominantly oceanic paths of up to 17,000 km epicentral distance and Ms up to 8.0. The method is simple to apply, and Mo estimates are obtained in a routine procedure with errors of a factor of 2 for continental events and a factor of 3 for oceanic events. Analog or digital data can be used from a few stations with only an approximate knowledge of the recording instrument, type of path, and mechanism of the event. Theoretical Mo versus AR curves are derived from the average spectral theory of Geller (1976), and the empirically determined curve is consistent with the prediction. The AR method is shown to be more sensitive to longer-period contributions to the wave train as the event size increases, establishing its usefulness for studying earthquakes with a wide range of Ms values. The method is applied to 12 events occurring in 1974 to 1975 with unknown Mo.
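The AR parameter as described above (a time-domain sum of the area under the surface-wave envelope) can be sketched in a few lines. This is a hedged illustration only: the rectify-and-local-maximum envelope, the sample spacing `dt`, and the smoothing length are assumptions made here for demonstration, not the paper's procedure.

```python
def ar_parameter(samples, dt, smooth_len=5):
    """Approximate area under the surface-wave envelope: take a crude
    envelope (local maximum of the rectified trace), then integrate by
    the rectangle rule with sample spacing dt."""
    rectified = [abs(s) for s in samples]
    half = smooth_len // 2
    envelope = [max(rectified[max(0, i - half):i + half + 1])
                for i in range(len(rectified))]
    return sum(envelope) * dt
```

As expected for an area measure, scaling the trace amplitude scales the parameter proportionally.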

1998 ◽  
Vol 88 (1) ◽  
pp. 95-106 ◽  
Author(s):  
Mitchell Withers ◽  
Richard Aster ◽  
Christopher Young ◽  
Judy Beiriger ◽  
Mark Harris ◽  
...  

Abstract Digital algorithms for robust detection of phase arrivals in the presence of stationary and nonstationary noise have a long history in seismology and have been exploited primarily to reduce the amount of data recorded by data logging systems to manageable levels. In the present era of inexpensive digital storage, however, such algorithms are increasingly being used to flag signal segments in continuously recorded digital data streams for subsequent processing by automatic and/or expert interpretation systems. In the course of our development of an automated, near-real-time, waveform correlation event-detection and location system (WCEDS), we have surveyed the abilities of such algorithms to enhance seismic phase arrivals in teleseismic data streams. Specifically, we have considered envelopes generated by energy transient (STA/LTA), Z-statistic, frequency transient, and polarization algorithms. The WCEDS system requires a set of input data streams that have a smooth, low-amplitude response to background noise and seismic coda and that contain peaks at times corresponding to phase arrivals. The algorithm used to generate these input streams from raw seismograms must perform well under a wide range of source, path, receiver, and noise scenarios. Present computational capabilities allow the application of considerably more robust algorithms than have been historically used in real time. However, highly complex calculations can still be computationally prohibitive for current workstations when the number of data streams becomes large. While no algorithm was clearly optimal under all source, receiver, path, and noise conditions tested, an STA/LTA algorithm incorporating adaptive window lengths controlled by nonstationary seismogram spectral characteristics was found to provide an output that best met the requirements of a global correlation-based event-detection and location system.
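Of the envelope functions surveyed, the basic (non-adaptive) STA/LTA ratio is simple enough to sketch. The window lengths and trigger threshold below are illustrative placeholders, not the adaptive, spectrally controlled values the abstract describes.

```python
def sta_lta(samples, sta_len, lta_len):
    """Return the short-term-average / long-term-average ratio per sample,
    using trailing windows; 0.0 where the long window is not yet full."""
    ratios = []
    for i in range(len(samples)):
        if i + 1 < lta_len:
            ratios.append(0.0)
            continue
        sta = sum(abs(s) for s in samples[i + 1 - sta_len:i + 1]) / sta_len
        lta = sum(abs(s) for s in samples[i + 1 - lta_len:i + 1]) / lta_len
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

def trigger_times(ratios, threshold):
    """Indices where the ratio first crosses the threshold upward."""
    return [i for i in range(1, len(ratios))
            if ratios[i] >= threshold and ratios[i - 1] < threshold]
```

On a synthetic trace of unit-amplitude noise followed by a tenfold amplitude step, a 10/100-sample STA/LTA with a threshold of 3 triggers a few samples after the onset; the lag is the price of averaging over the short window.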


1981 ◽  
Vol 71 (6) ◽  
pp. 1731-1741
Author(s):  
I. N. Gupta ◽  
R. A. Hartenberger

Abstract An analysis of seismic field data from surface shots in two radically different geologic environments shows significantly different seismic phases at the two sites. At the first site, which has a layered sedimentary section, five distinct phases are observed: the P-wave first arrival; a complex wave train consisting of higher-mode Rayleigh waves; a precursor to the air-blast wave; the air-blast wave; and the air-coupled Rayleigh waves. Records from the second site, overlying an unlayered mass of igneous rocks, show only three distinct seismic phases: the P-wave first arrival; a simple wave train of fundamental-mode Rayleigh and Love waves; and an air-blast wave. Peak ground velocity, based on the average of the three largest amplitudes in the surface waves preceding the air-blast wave, scales well with yield for both sites. Measurements of peak ground velocity may be used to estimate yields of explosive charges at either site within a factor of about 2 if the source distance is known. The scaling relationship appears to be valid over a wide range of yields and site geological conditions.


1996 ◽  
Vol 39 (2) ◽  
Author(s):  
D. Seidl ◽  
M. Hellweg ◽  
P. Okubo ◽  
H. Rademacher

The seismic wavefield near an active volcanic vent consists of superimposed signals in a wide range of frequency bands from sources inside and outside the volcano. To characterize the broadband wavefield near Puu Oo, we deployed a profile of three three-component broadband sensors in a 200 m long line about 1.5 km WSW of the active vent. During the deployment, Puu Oo maintained a constant but very low level of activity. The digital data logger recorded the wavefield continuously in the frequency band between 0.01 and 40 Hz between June 25 and July 9, 1994. At the same time, local wind conditions along with air temperature and pressure were monitored by a portable digital weather station. On the basis of characteristic elements, such as waveform, spatial coherence between stations, particle motion and power spectra, the wavefield can be divided into three bands. The dominant signals in the frequency band between 0.01 and 0.1 Hz are not coherent among the stations. Their ground velocities correlate with the wind speed. The signals in the 0.1 to 0.5 Hz band are coherent across the profile and most probably represent a superposition of volcanic tremor and microseisms from the Pacific Ocean. Much of the energy above 0.5 Hz can be attributed to activity at the vent. Power spectra from recordings of the transverse components show complex peaks between 0.5 and 3 Hz which vary in amplitude due to site effects and distance. On the other hand, power spectra calculated from the radial components show a clearly periodic pattern of peaks at 1 Hz intervals for some time segments. A further remarkable feature of the power spectra is that they are highly stationary.


BMJ Open ◽  
2019 ◽  
Vol 9 (4) ◽  
pp. e026828 ◽  
Author(s):  
Donald J Willison ◽  
Joslyn Trowbridge ◽  
Michelle Greiver ◽  
Karim Keshavjee ◽  
Doug Mumford ◽  
...  

Digital data generated in the course of clinical care are increasingly being leveraged for a wide range of secondary purposes. Researchers need to develop governance policies that can assure the public that their information is being used responsibly. Our aim was to develop a generalisable model for governance of research emanating from health data repositories that will earn the trust of the patients and the healthcare professionals whose data are being accessed for health research. We developed our governance principles and processes through literature review and iterative consultation with key actors in the research network including: a data governance working group, the lead investigators and patient advisors. We then recruited persons to participate in the governing and advisory bodies. Our governance process is informed by eight principles: (1) transparency; (2) accountability; (3) adherence to the rule of law; (4) integrity; (5) participation and inclusiveness; (6) impartiality and independence; (7) effectiveness, efficiency and responsiveness and (8) reflexivity and continuous quality improvement. We describe the rationale for these principles, as well as their connections to the subsequent policies and procedures we developed. We then describe the function of the Research Governing Committee, the majority of whom are either persons living with diabetes or physicians whose data are being used, and the patient and data provider advisory groups with whom they consult and communicate. In conclusion, we have developed a values-based information governance framework and process for Diabetes Action Canada that adds value over and above existing scientific and ethics review processes by adding a strong patient perspective and contextual integrity. This model is adaptable to other secure data repositories.


2020 ◽  
Vol 91 (6) ◽  
pp. 3563-3573
Author(s):  
Shuqin Wang ◽  
Jinhai Zhang

Abstract Seismic waveforms are essential for seismology but are clipped when their actual amplitudes are too high to be faithfully recorded by seismometers. Clipping is common both for large earthquakes and for small earthquakes recorded at short epicentral distances. Here, we illustrate potential risks of direct usage of clipped waveforms by examining the frequency leakage and show the failure of bandpass filtering for different clipping levels; then we summarize two characteristics of clipped records: (1) the temporal gradient is unusually large around the clipped segment compared with the unclipped portions, and (2) the clipped samples cluster into one segment, or several if many samples are involved. Next, we propose three criteria for distinguishing clipped samples from intact samples based on these two characteristics. Finally, we design a numerical algorithm for automatic detection of clipped samples using constraints on the gradient, amplitude, and gradient-varying range. Numerical experiments show the excellent performance of our algorithm in automatically detecting clipped samples. Our algorithm seamlessly integrates all necessary constraints for both the flat-top type and the back-to-zero type and thus can correctly recognize these two types simultaneously; in addition, it is essentially data-driven and thus can work well without considering seismometer configuration and instrument type, which would be helpful for real-time detection of clipped records without intervention from human operators. As a robust and fast tool for automatic detection of amplitude-clipped samples, our algorithm could identify most typical clipped records and reduce potential risks of using unrecognized clipped waveforms; furthermore, it would be helpful for fast detection and possible restoration of clipped waveforms in the presence of huge volumes of data.
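The two characteristics above (extreme amplitude and clustering into runs) admit a minimal sketch for the flat-top case. The amplitude fraction and minimum run length are hypothetical placeholders, not the authors' calibrated constraints, and the gradient criterion is omitted here for brevity.

```python
def detect_clipped(samples, amp_frac=0.99, min_run=3):
    """Flag runs of at least min_run consecutive samples whose amplitude
    sits at the record extreme (flat-top clipping). Returns a list of
    (start, end) index pairs, inclusive."""
    peak = max(abs(s) for s in samples)
    flat = [abs(s) >= amp_frac * peak for s in samples]
    runs, start = [], None
    for i, f in enumerate(flat + [False]):  # sentinel closes a trailing run
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start >= min_run:
                runs.append((start, i - 1))
            start = None
    return runs
```

On a clipped sinusoid the detector returns one run per flattened crest; on an unclipped trace the isolated true peak fails the run-length criterion and nothing is flagged.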


1979 ◽  
Vol 23 (1) ◽  
pp. 75-79 ◽  
Author(s):  
Dennis B. Beringer

Systematic and economic design and evaluation strategies were applied to a computer-generated 4-D aerial navigation system. During the evaluation each of 24 experienced instrument pilots received training in a PLATO-based digital flight simulator using either a keyboard entry/static map, keyboard entry/dynamic map, or touch entry/dynamic map system. Tasks performed during the execution of an area navigation course included continuous flight control, navigation data updating, digital data entry, and amended course plotting. Digital data entry training time was comparable for all three systems but the touch-map proved superior for the plotting tasks, greatly reducing training and task execution times while virtually eliminating errors. Subsequent performance evaluation showed that the touch-map reduced flight path tracking error, increased processing rates on a digit-cancelling secondary task, and increased the accuracy of manual plotting operations. It was concluded that a touch entry system could significantly reduce cockpit workload across a wide range of operational environments.


1980 ◽  
Vol 26 (94) ◽  
pp. 520
Author(s):  
R. A. Sommerfeld

Abstract Results from the last four winters' studies on acoustic emissions from snow slopes have shown that periods of higher noise in the frequency band 5 to 125 Hz are associated with periods of instability, but that the phenomenon of acoustic emissions from unstable snow is very weak. Interference with the acoustic emissions from extraneous sources such as chairlifts, trucks, and airplanes causes ambiguities in the data which interfere with the straightforward prediction of instability. Spectra of the noises were examined, with the idea that filters might improve the signal-to-noise ratio. It was found that the noise generated by unstable snow can occur over a fairly wide range of frequencies and that there is no band of frequencies which is unique to unstable snow. It was found that the noise from the chairlift had a very stable spectrum and that it had a band from 50 to 65 Hz which was pronounced and in which the snow noise was fairly low. The r.m.s. voltage in the band 5 to 125 Hz can therefore be reduced by subtracting a proportion of the r.m.s. output of a narrow-band 55 Hz filter. By adjusting the constant of proportionality, it was possible to eliminate interference from the chairlift almost entirely, and this correction will be used. Spectral analyses have also shown that ambiguities are generated by variations in the 60 Hz power-line noise. It is possible to suppress this with the use of filters, but without complex digital data manipulation, it is not possible to eliminate it. The fact that the snow noise does not transmit over large distances means that events of interest should not occur simultaneously on two widely spaced geophones. The system will include a geophone in an unstressed region whose r.m.s. voltage will be subtracted from that of the geophone in the stressed region to eliminate signals which are common to both. It is hoped that this technique will eliminate ambiguities caused by extraneous sources such as trucks and airplanes.
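The narrow-band subtraction described above reduces to a one-line correction once the two r.m.s. values are in hand. This is a sketch under the stated scheme; the constant `k` is the empirically adjusted proportionality constant, chosen so that chairlift-only intervals yield near-zero output.

```python
import math

def rms(samples):
    """Root-mean-square of a sample sequence."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def corrected_rms(broadband, narrowband_55hz, k):
    """Broadband (5-125 Hz) r.m.s. minus k times the r.m.s. out of the
    narrow-band 55 Hz filter, per the chairlift-rejection scheme."""
    return rms(broadband) - k * rms(narrowband_55hz)
```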


1989 ◽  
Vol 60 (3) ◽  
pp. 95-100 ◽  
Author(s):  
S.E. Hough ◽  
K. Jacob ◽  
R. Busby ◽  
P.A. Friberg

Abstract We present an analysis of a magnitude 3.5 event which occurred at 9 km epicentral distance from a digital strong motion instrument operated by the National Center for Earthquake Engineering Research. Although the size of this isolated event is such that it can scarcely be considered a significant earthquake, a careful analysis of this high-quality recording does yield several interesting results: 1) the S-wave spectra can be interpreted in terms of a simple omega-squared source spectrum and frequency-independent attenuation, 2) there is the suggestion of a poorly-resolved resonance in the P-wave spectrum, and perhaps most importantly, 3) the apparently simple S-wave spectra can be fit almost equally well with a surprisingly wide range of seismic corner frequencies, from roughly 5 to 25 Hz. This uncertainty in corner frequency translates into uncertainties in inferred Q values of almost an order of magnitude, and into uncertainties in stress drop of two orders of magnitude. Given the high quality of the data and the short epicentral distance to the station, we consider it likely that resolution of spectral decay and corner frequency will be at least as poor for any other recording of earthquakes with comparable or smaller magnitudes.
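The trade-off in point 3 can be made concrete with the standard omega-squared source model scaled by a frequency-independent attenuation operator. The symbols and values below are generic textbook quantities, not the paper's estimates; `t_star` is the usual travel time divided by Q.

```python
import math

def displacement_spectrum(f, omega0, fc, t_star):
    """Brune-type omega-squared source spectrum, omega0 / (1 + (f/fc)^2),
    multiplied by the attenuation operator exp(-pi * f * t*)."""
    source = omega0 / (1.0 + (f / fc) ** 2)
    return source * math.exp(-math.pi * f * t_star)
```

The reported ambiguity arises because raising the corner frequency fc can be largely offset by raising t* (i.e., lowering Q), leaving the observed high-frequency fall-off nearly unchanged; hence corner frequencies from 5 to 25 Hz fit almost equally well.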


1984 ◽  
Vol 21 (10) ◽  
pp. 1105-1113 ◽  
Author(s):  
C. J. Rebollar ◽  
E. R. Kanasewich ◽  
E. Nyland

Seismic records at Edmonton (EDM) and Suffield (SES) between January 1976 and February 1980 show 220 events with magnitudes less than 4 originating near Rocky Mountain House. Many of these events show well-defined Sn, Sg, and Pg phases and a small variation in the difference of Sg − Sn and Sg − Pg. Analysis of the theoretical travel times using a structure determined for central Alberta yields an average focal depth of 20 ± 5 km and an average epicentral distance of 175 ± 5 km southwest of Edmonton for 40 of these events. Because Sn was not clear on the remainder, it was not possible to get focal depths for all the events. Seismic moments of 80 events with local magnitudes from 1.6 to 3.5 were found to be in the range of 6.6 ± 2 × 10^18 to 7.9 ± 2 × 10^20 dyn∙cm (6.6 ± 2 × 10^13 to 7.9 ± 2 × 10^15 N∙cm). A relationship between local magnitude and seismic moment was log (M0) = 1.3ML + 16.6. This is similar to that determined for California. Source radii, where they could be determined, were 500 ± 50 m and stress drops were 0.75 ± 0.75 bar (75 ± 75 kPa). The energy release of 263 events recorded at EDM from the Rocky Mountain House area was 5.6 × 10^17 erg (5.6 × 10^10 J). The b value for this earthquake swarm was 0.8, similar to that observed in other parts of western Canada. The depths of focus, the low stress drops, and the statistical similarity to other natural earthquake sequences suggest that at least part of the swarm is of a natural origin.
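The reported moment-magnitude relation, log (M0) = 1.3 ML + 16.6 with M0 in dyn∙cm, is a one-line conversion; the function name below is ours, and the relation is specific to this data set.

```python
def moment_from_ml(ml):
    """Seismic moment in dyn-cm from local magnitude, using the
    Rocky Mountain House fit log10(M0) = 1.3 * ML + 16.6."""
    return 10.0 ** (1.3 * ml + 16.6)
```

At the extremes of the studied magnitude range (ML 1.6 to 3.5), the fit gives roughly 5 × 10^18 to 1 × 10^21 dyn∙cm, bracketing the reported moment range to within the scatter of individual events.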


Author(s):  
Trung Duy Pham ◽  
Dat Tran ◽  
Wanli Ma

In the biomedical and healthcare fields, protecting ownership of outsourced data is becoming a challenging issue when data owners share data with data mining experts to extract hidden knowledge and patterns. Watermarking has been proven to be a right-protection mechanism that provides detectable evidence of the legal ownership of a shared dataset, without compromising its usability under a wide range of data mining operations, for digital data in different formats such as audio, video, image, relational database, text and software. Time-series biomedical data such as electroencephalography (EEG) or electrocardiography (ECG) recordings are valuable and costly in healthcare and need ownership protection when shared or transmitted in data mining applications. However, little previous research has investigated this issue for this kind of data, given its particular characteristics and requirements. This paper proposes an optimized watermarking scheme to protect ownership for biomedical and healthcare systems in data mining. To achieve the highest possible robustness without losing watermark transparency, the Particle Swarm Optimization (PSO) technique is used to find a suitable quantization step. Experimental results on EEG data show that the proposed scheme provides good imperceptibility and improved robustness against various signal processing techniques and common attacks such as noise addition, low-pass filtering, and re-sampling.
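The quantization step that the PSO search tunes can be illustrated with a minimal quantization-index-modulation (QIM) embedder, the standard family of schemes built on quantization steps. This is a generic textbook sketch, not the paper's embedding rule; the function names are ours and `delta` stands in for the PSO-optimized step.

```python
def embed_bit(sample, bit, delta):
    """Quantize a sample onto one of two interleaved lattices (spacing
    delta) selected by the watermark bit: offset 0 for bit 0, delta/2
    for bit 1."""
    offset = 0.0 if bit == 0 else delta / 2.0
    return round((sample - offset) / delta) * delta + offset

def extract_bit(sample, delta):
    """Recover the bit by finding which of the two lattices is nearer."""
    d0 = abs(sample - round(sample / delta) * delta)
    d1 = abs(sample - (round((sample - delta / 2.0) / delta) * delta + delta / 2.0))
    return 0 if d0 <= d1 else 1
```

A larger `delta` survives stronger distortion (more robustness) but perturbs the signal more (less transparency), which is exactly the trade-off the PSO search navigates.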

