The Koyna earthquake of December 10, 1967: A multiple seismic event

1971 ◽  
Vol 61 (1) ◽  
pp. 167-176 ◽  
Author(s):  
Harsh K. Gupta ◽  
B. K. Rastogi ◽  
Hari Narain

Abstract Analysis of P waves recorded at seismological observatories and seismic arrays at teleseismic distances, and by strong-motion seismographs located at Koyna Dam, suggests that the Koyna earthquake of December 10, 1967 was a complex multiple event. Six events could be identified, and the second and third events are located, relative to the point of rupture initiation, at distances of 6 and 17 km due south using the Gutenberg sine-curve method, the average rupture velocity being 3.4 km/sec. The findings are consistent with the field observations and with the different origin times, epicenters, and magnitudes reported for the earthquake. Seismic array records are found to be very useful in examining the multiplicity of seismic events.

1967 ◽  
Vol 57 (5) ◽  
pp. 1017-1023 ◽  
Author(s):  
Max Wyss ◽  
James N. Brune

Abstract The seismograms of the Alaskan earthquake of 28 March 1964 are characterized by multiple P-phases not predicted by the travel-time curves. Seismograms with low magnifications from 80 stations covering distances from 40° to 90° and a wide range of azimuths were analyzed. The character of the P-wave portion of the seismograms is interpreted in terms of an approximate multiple-event source mechanism in which the propagating rupture triggers larger distinct events. Six events were located using the Gutenberg sine-curve method. The times after the initial origin time were 9, 19, 28, 29, 44, and 72 sec, respectively, and the events were located 35, 66, 89, 93, 165, and 250 km from the initial epicenter. Dividing each distance by its delay time gives an average rupture velocity of 3.5 km/sec.
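The rupture-velocity figure quoted above is the average of the six per-event estimates; a minimal check of the arithmetic, with the distances and delay times taken from the abstract:

```python
# Distances (km) and delay times (s) of the six located events,
# as listed in the abstract above.
distances = [35, 66, 89, 93, 165, 250]
delays = [9, 19, 28, 29, 44, 72]

# Per-event apparent rupture velocities and their average.
velocities = [d / t for d, t in zip(distances, delays)]
avg_velocity = sum(velocities) / len(velocities)
print(round(avg_velocity, 1))  # → 3.5 km/sec, matching the abstract
```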


1990 ◽  
Vol 80 (3) ◽  
pp. 507-518 ◽  
Author(s):  
Jim Mori ◽  
Stephen Hartzell

Abstract We examined short-period P waves to investigate whether waveform data could be used to determine which of two nodal planes was the actual fault plane for a small (ML 4.6) earthquake near Upland, California. We removed path and site complications by choosing a small aftershock (ML 2.7) as an empirical Green function. The main-shock P waves were deconvolved using the empirical Green function to produce simple far-field displacement pulses. We used a least-squares method to invert these pulses for the slip distribution on a finite fault. Both nodal planes (strike 125°, dip 85° and strike 221°, dip 40°) of the first-motion focal mechanism were tested at various rupture velocities. The southwest-trending fault plane consistently gave better-fitting solutions than the southeast-trending plane. We determined a moment of 4.2 × 10²² dyne-cm. The rupture velocity, and thus the source area, could not be well resolved, but if we assume a reasonable rupture velocity of 0.87 times the shear-wave velocity, we obtain a source area of 0.97 km² and a stress drop of 38 bars. The choice of a southwest-trending fault plane is consistent with the trend of the nearby portion of the Transverse Ranges frontal fault zone and indicates left-lateral motion. This method provides a way to determine the fault plane for small earthquakes that have no surface rupture and no obvious trend in aftershock locations.


1994 ◽  
Vol 37 (6) ◽  
Author(s):  
B. P. Cohee ◽  
G. C. Beroza

In this paper we compare two time-domain inversion methods that have been widely applied to the problem of modeling earthquake rupture using strong-motion seismograms. In the multi-window method, each point on the fault is allowed to rupture multiple times. This allows flexibility in the rupture time and hence the rupture velocity. Variations in the slip-velocity function are accommodated by variations in the slip amplitude in each time window. The single-window method assumes that each point on the fault ruptures only once, when the rupture front passes. Variations in slip amplitude are allowed, and variations in rupture velocity are accommodated by allowing the rupture time to vary. Because the multi-window method allows greater flexibility, it has the potential to describe a wider range of faulting behavior; however, with this increased flexibility comes an increase in the degrees of freedom, and the solutions are comparatively less stable. We demonstrate this effect using synthetic data for a test model of the 1992 Mw 7.3 Landers, California, earthquake, and then apply both inversion methods to the actual recordings. The two approaches yield similar fits to the strong-motion data but different seismic moments, indicating that the moment is not well constrained by strong-motion data alone. The slip-amplitude distribution is similar using either approach, but important differences exist in the rupture propagation models. The single-window method does a better job of recovering the true seismic moment and the average rupture velocity. The multi-window method is preferable when the rise time is strongly variable, but it tends to overestimate the seismic moment. Both methods work well when the rise time is constant or short compared to the periods modeled. Neither approach can recover the temporal details of rupture propagation unless the distribution of slip amplitude is constrained by independent data.
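The trade-off described above, more flexibility versus more degrees of freedom, can be sketched with a toy design matrix; the subfault count, window count, and lag below are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_win, n_t = 8, 4, 120  # subfaults, time windows, data samples
lag = 5                        # samples between successive time windows

# One synthetic Green's function per subfault (columns of G0); in the
# single-window parameterization each subfault contributes one unknown
# slip amplitude, so there are n_sub unknowns.
G0 = rng.standard_normal((n_t, n_sub))

# Multi-window design matrix: each subfault gets n_win delayed copies of
# its Green's function, so the unknown vector grows by a factor of n_win.
cols = []
for j in range(n_sub):
    for w in range(n_win):
        col = np.zeros(n_t)
        col[w * lag:] = G0[: n_t - w * lag, j]
        cols.append(col)
G_multi = np.column_stack(cols)

print(G0.shape[1], G_multi.shape[1])  # 8 vs 32 unknowns for the same data
```

The larger, more collinear multi-window matrix is what makes the least-squares solution comparatively less stable.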


Author(s):  
Maria Mesimeri ◽  
Kristine L. Pankow ◽  
James Rutledge

Abstract We propose a new frequency-domain-based algorithm for detecting small-magnitude seismic events using dense surface seismic arrays. Our proposed method takes advantage of the high energy carried by S waves and of approximately known source locations, which are used to rotate the horizontal components to obtain the maximum amplitude. By surrounding the known source area with surface geophones, we achieve a favorable geometry for locating the detected seismic events with the backprojection method. To test the new detection method, we used a dense circular array, consisting of 151 three-component 5 Hz geophones over a 5 km aperture, that was in operation at the Utah Frontier Observatory for Research in Geothermal Energy (FORGE) in south-central Utah. We apply the method during a small-scale test injection phase at FORGE and during an aftershock sequence of an Mw 4.1 earthquake located ∼30 km north of the geophone array, within the Black Rock volcanic field. We are able to detect and locate microseismic events (Mw < 0) during injections, despite the high level of anthropogenic activity, as well as several aftershocks that are missing from the regional catalog. By comparing our method with known algorithms that operate in both the time and frequency domains, we show that the proposed method performs better in the case of the FORGE injection monitoring and equally well for the off-array aftershock sequence. Our new method has the potential to improve microseismic event detection even in extremely noisy environments, and the proposed location scheme serves as a direct discriminant between true and false detections.
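The rotation step described above (rotating the N and E horizontals by the source back-azimuth so that S-wave energy concentrates on one component) can be sketched as follows; this is a generic ZNE-to-RT rotation, not the authors' code, and the back-azimuth in the usage example is a hypothetical value:

```python
import math

def rotate_horizontals(north, east, back_azimuth_deg):
    """Rotate N/E traces into radial/transverse components, given the
    back-azimuth from the station to the source (standard ZNE -> ZRT
    convention: R = -N cos(baz) - E sin(baz), T = N sin(baz) - E cos(baz))."""
    baz = math.radians(back_azimuth_deg)
    radial = [-n * math.cos(baz) - e * math.sin(baz)
              for n, e in zip(north, east)]
    transverse = [n * math.sin(baz) - e * math.cos(baz)
                  for n, e in zip(north, east)]
    return radial, transverse

# For a source due south of the station (back-azimuth 180 deg, hypothetical),
# a pulse on the N component maps entirely onto the radial component.
r, t = rotate_horizontals([1.0], [0.0], 180.0)
```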


2021 ◽  
Author(s):  
Andreas Köhler ◽  
Steffen Mæland

We combine the empirical matched field (EMF) method and machine learning using convolutional neural networks (CNNs) for calving event detection at the IMS station SPITS and the GSN station KBS on the Arctic archipelago of Svalbard. EMF detection with seismic arrays seeks to identify all signals similar to a single template generated by seismic events in a confined target region. In contrast to master-event cross-correlation detectors, the detection statistic is not the waveform similarity but the array beam power obtained using empirical phase delays (steering parameters) between the array stations. Unlike common delay-and-sum beamforming, the steering parameters do not need to represent a plane wave and are computed directly from the template signal without assuming a particular apparent velocity and back-azimuth. As with all detectors, the false-alarm rate depends strongly on the beam-power threshold setting and therefore needs appropriate tuning or, alternatively, post-processing. Here, we combine the EMF detector, using a low detection threshold, with a post-detection classification step. The classifier uses spectrograms of single-station three-component records and state-of-the-art CNNs pre-trained for image recognition; the spectrograms of the three components are combined into an RGB image. We apply the methodology to detect calving events at tidewater glaciers in the Kongsfjord region in northwestern Svalbard. The EMF detector uses data from the SPITS array, about 100 km from the glaciers, while the CNN classifier processes data from the single three-component station KBS at 15 km distance, using time windows where the event is expected according to the EMF detection. The EMF detector combines templates for the P and S wave onsets of a confirmed, large calving event.
The CNN spectrogram classifier is trained using classes of confirmed calving signals from four glaciers in the Kongsfjord region, seismic noise examples, and regional tectonic seismic events. Splitting the data into training and test sets, the CNN classifier yields a recognition rate of 89% on average. This is encouragingly high given the complex nature of calving signals and their visually similar waveforms. Subsequently, we process six months of continuous data from 2016 using the EMF-CNN method to produce a time series of glacier calving. About 90% of the confirmed calving signals used for CNN training are detected by EMF processing, and around 80% are assigned to the correct glacier after CNN classification. Such calving time series allow us to estimate and monitor ice loss at tidewater glaciers, which in turn can help us better understand the impact of climate change in polar regions. Combining the superior detection capability of (less common) seismic arrays at a larger source distance with a powerful machine learning classifier at single three-component stations closer to the source is a promising approach not only for environmental monitoring but also for event detection and classification in a CTBTO verification context.
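The three-component-spectrograms-as-RGB idea can be sketched as follows; the synthetic traces, sampling rate, and window length here are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 100.0  # Hz, assumed sampling rate
time = np.arange(0, 10, 1 / fs)
# Synthetic Z/N/E traces standing in for a three-component seismic record.
traces = [np.sin(2 * np.pi * f * time) + 0.1 * np.random.randn(time.size)
          for f in (5.0, 10.0, 20.0)]

channels = []
for tr in traces:
    freqs, times, sxx = spectrogram(tr, fs=fs, nperseg=128)
    sxx = np.log10(sxx + 1e-12)                        # log power
    sxx = (sxx - sxx.min()) / (sxx.max() - sxx.min())  # normalize to [0, 1]
    channels.append(sxx)

# Stack the three normalized spectrograms as an (H, W, 3) RGB image,
# ready to feed to an image-recognition CNN.
rgb = np.stack(channels, axis=-1)
print(rgb.shape[-1])  # 3 color channels, one per seismic component
```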


1990 ◽  
Vol 80 (6B) ◽  
pp. 1833-1851 ◽  
Author(s):  
Thomas C. Bache ◽  
Steven R. Bratt ◽  
James Wang ◽  
Robert M. Fung ◽  
Cris Kobryn ◽  
...  

Abstract The Intelligent Monitoring System (IMS) is a computer system for processing data from seismic arrays and simpler stations to detect, locate, and identify seismic events. The first operational version processes data from two high-frequency arrays (NORESS and ARCESS) in Norway. The IMS computers and functions are distributed between the NORSAR Data Analysis Center (NDAC) near Oslo and the Center for Seismic Studies (Center) in Arlington, Virginia. The IMS modules at NDAC automatically retrieve data from a disk buffer, detect signals, compute signal attributes (amplitude, slowness, azimuth, polarization, etc.), and store them in a commercial relational database management system (DBMS). IMS makes scheduled (e.g., hourly) transfers of the data to a separate DBMS at the Center. Arrival of new data automatically initiates a “knowledge-based system (KBS)” that interprets these data to locate and identify (earthquake, mine blast, etc.) seismic events. This KBS uses general and area-specific seismological knowledge represented in rules and procedures. For each event, unprocessed data segments (e.g., 7 min for regional events) are retrieved from NDAC for subsequent display and analyst review. The interactive analysis modules include integrated waveform and map display/manipulation tools for efficient analyst validation or correction of the solutions produced by the automated system. Another KBS compares the analyst and automatic solutions to mark overruled elements of the knowledge base. Performance analysis statistics guide subsequent changes to the knowledge base so it improves with experience. The IMS is implemented on networked Sun workstations, with a 56 kbps satellite link bridging the NDAC and Center computer networks. The software architecture is modular and distributed, with processes communicating by messages and sharing data via the DBMS. 
The IMS processing requirements are easily met with major processes (i.e., signal processing, KBS, and DBMS) on separate Sun 4/2xx workstations. This architecture facilitates expansion in functionality and number of stations. The first version was operated continuously for 8 weeks in late 1989. The Center functions were then transferred to NDAC for subsequent operation. Later versions will be distributed among NDAC, Scripps/IGPP (San Diego), and the Center to process data from many stations and arrays. The IMS design is ambitious in its integration of many new computer technologies, but the operational performance of the first version demonstrates its validity. Thus, IMS provides a new generation of automated seismic event monitoring capability.


2012 ◽  
Vol 524-527 ◽  
pp. 42-48 ◽  
Author(s):  
Fu Sheng Guo ◽  
Zhao Bin Yan ◽  
Liu Qin Chen

Two early Cambrian seismic events are recorded in sedimentary rocks at the Peilingjiao section of Kaihua County and the Baishi and Fangcun sections of Changshan County in western Zhejiang, but not in the Jiangshan area. The seismic event at the Baishi outcrop can be correlated with the second seismic event at the Peilingjiao section. Taking Fangcun as the epicenter of the second seismic event, the magnitude of the paleoearthquake in western Zhejiang is estimated at about 7~7.6. According to an investigation of the regional distribution of the seismic events, both episodes should have been controlled by the large Kaihua-Chun'an fault and are unrelated to the Jiangshan-Shaoxing and Changshan-Xiaoshan faults. However, the formation time of the Kaihua-Chun'an fault has not yet been determined; based on its control of Silurian strata, a possible early Paleozoic formation age is inferred. The distribution characteristics of the seismites indicate that the Kaihua-Chun'an fault was already active during the early Cambrian, and the seismic activity may be a response to Sinian tectonic events in western Zhejiang. Analysis of the paleoseismic rhythm shows that the time interval between the two seismic events in western Zhejiang is less than 5.0 Ma, which may be the result of early frequent activity of the Kaihua-Chun'an fault.


Author(s):  
Hirohisa Yamakawa ◽  
Hitoshi Muta

The Fukushima Daiichi Nuclear Power Station accident was caused by the Great East Japan Earthquake of March 11, 2011. Since then, continuous enhancement of nuclear safety has been required in Japan. The Fukushima accident was caused by a seismically induced tsunami, that is, a multiple event. Other examples of multiple events due to a seismic event include internal fire and internal flooding in nuclear power plants. In addition, structures such as buildings and piping might be damaged by the seismic event, which could in turn cause dependent component failures. To account for these kinds of events, the development of PRA procedures for multiple events caused by seismic events is in high demand. We therefore developed a basic PRA methodology for seismically induced tsunami events using the Direct Quantification of Fault Tree using Monte Carlo simulation (DQFM) methodology, and we verified its applicability through an evaluation.
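The DQFM idea, quantifying a fault tree's top-event probability by Monte Carlo sampling of basic events so that dependent failures are handled naturally, can be sketched with a hypothetical tree; the structure, probabilities, and common-cause coupling below are illustrative, not from the paper:

```python
import random

random.seed(0)

# Hypothetical basic-event probabilities: seismic damage to a building (B)
# and independent failures of two redundant pumps (P1, P2).
p_building, p_pump = 0.05, 0.1

def trial():
    # If the building fails, both pumps fail dependently (common cause).
    building = random.random() < p_building
    p1 = building or random.random() < p_pump
    p2 = building or random.random() < p_pump
    # Fault tree: top event = building damage OR (P1 AND P2).
    return building or (p1 and p2)

n = 200_000
p_top = sum(trial() for _ in range(n)) / n
# Analytic value for comparison: 0.05 + 0.95 * 0.1 * 0.1 = 0.0595
print(round(p_top, 4))
```

Sampling basic events jointly, rather than multiplying independent probabilities through the tree, is what lets this scheme capture the building-to-pump dependency.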


1978 ◽  
Vol 68 (1) ◽  
pp. 1-29 ◽  
Author(s):  
Charles A. Langston

Abstract Teleseismic P, SV, and SH waves recorded by the WWSS and Canadian networks from the 1971 San Fernando, California earthquake (ML = 6.6) are modeled in the time domain to determine detailed features of the source as a prelude to studying the near- and local-field strong-motion observations. Synthetic seismograms are computed from the model of a propagating finite dislocation line source embedded in layered elastic media. The effects of source geometry and directivity are shown to be important features of the long-period observations. The most dramatic feature of the model is the requirement that the fault, which initially ruptured at a depth of 13 km as determined from pP-P times, continuously propagated toward the free surface, first on a plane dipping 53°NE, then broke over to a 29°NE-dipping fault segment. This effect is clearly shown in the azimuthal variation of both long-period P and SH waveforms. Although attenuation and interference with radiation from the remainder of the fault are possible complications, comparison of long- and short-period P and of short-period pP and P waves suggests that rupture was initially bilateral or, possibly, strongly unilateral downward, propagating to about 15 km depth. The average rupture velocity of 1.8 km/sec is well constrained by the shape of the long-period waveforms. The total seismic moment is 0.86 × 10²⁶ dyne-cm. Implications for near-field modeling are drawn from these results.
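For reference, the quoted moment corresponds to a moment magnitude of about 6.6 under the later Hanks-Kanamori relation; this conversion is our addition, not part of the original abstract:

```python
import math

M0 = 0.86e26  # seismic moment in dyne-cm, from the abstract above
# Hanks-Kanamori moment magnitude (published in 1979, after this paper):
Mw = (2.0 / 3.0) * math.log10(M0) - 10.7
print(round(Mw, 1))  # → 6.6, consistent with the quoted ML = 6.6
```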


2020 ◽  
Vol 21 (5) ◽  
pp. 514 ◽  
Author(s):  
Matthias Barus ◽  
Olivier Dalverny ◽  
Hélène Welemane ◽  
Jean-Pierre Faye ◽  
Carmen Martin

This work deals with the seismic vulnerability of buildings in the Pyrenees, a border region where almost a thousand earthquakes are recorded each year. The challenge is twofold: first to detect damage due to seismic events, and then to localize it within the studied buildings. Operational Modal Analysis (OMA) coupled with Finite Element (FE) numerical modelling constitutes an interesting approach to these issues. Here we apply this methodology to a strategic building located in Andorre-la-Vieille whose structure is complex, irregular, and heterogeneous. The structural behaviour of the building is first studied through a modal frequency computation in order to identify its undamaged behaviour. A seismic event is then simulated by a non-linear dynamic computation, which introduces damage within the structure. The numerical results (natural frequencies, modal shapes, and damage location) make it possible to highlight the zones damaged by the earthquake and to quantify the degradation level in these areas. Accordingly, some guidelines may be given in view of the future instrumentation of the building (accelerometers and RAR).
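The underlying idea, that damage lowers a structure's natural frequencies and the pattern of the shifts hints at its location, can be illustrated with a toy two-degree-of-freedom shear-building model; all stiffness and mass values are illustrative, not from the study:

```python
import numpy as np

def natural_frequencies(k1, k2, m=1.0):
    """Natural frequencies (Hz) of a 2-story shear-building model with
    story stiffnesses k1, k2 and equal floor masses m."""
    K = np.array([[k1 + k2, -k2],
                  [-k2, k2]])          # stiffness matrix
    M = np.eye(2) * m                  # mass matrix
    eigvals = np.linalg.eigvals(np.linalg.inv(M) @ K)
    omegas = np.sqrt(np.sort(eigvals.real))  # angular frequencies, rad/s
    return omegas / (2 * np.pi)

f_undamaged = natural_frequencies(1000.0, 1000.0)
f_damaged = natural_frequencies(1000.0, 700.0)  # 30% stiffness loss, story 2

# Damage lowers both natural frequencies; the relative size of the shift
# per mode is what helps localize the degradation.
print(f_undamaged, f_damaged)
```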

