The geometrical representation of fault-plane solutions of earthquakes

1957 ◽  
Vol 47 (2) ◽  
pp. 89-110 ◽  
Author(s):  
Adrian E. Scheidegger

Abstract Investigations into the mechanism at the focus of an earthquake have been in progress for a long time. In the course of these investigations it has been demonstrated that the mathematical model of a simple fault is a plausible assumption, at least so far as the explanation of the direction of first motion at distant seismic observatories is concerned. Various methods have been devised for representing and determining the elements of the focal fault of an earthquake, by investigators in Japan, Holland, North America, Italy, and Russia. It is often very difficult to see the connection between the various representations, and the present paper has been undertaken to demonstrate the relationships between them and to devise corresponding “translation schemes.” It is shown that there exists an infinite number of representations of fault-plane solutions all of which satisfy certain basic requirements. However, only four thereof have reached any popularity. It is shown that three of these four representations are entirely equivalent. In each, one uses a sphere; in each, one uses some stereographic projection of this sphere; and in each, one substitutes the tangent to the seismic ray at the focus for the ray itself. Whether one tabulates the angle i which that tangent makes with the vertical and plots tan i/2, as Ritsema and most Russians do, or whether one tabulates and plots tan i, as some of the Russians do, or tabulates and plots cot i, as Hodgson and his various co-workers do, one obtains identical results with equivalent amounts of work. What particular representation anyone will choose for studying an earthquake will therefore depend largely on his taste and previous custom.
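The equivalence claimed above can be sketched numerically: each convention maps the takeoff angle i to a radial plotting distance by a monotone function on (0°, 90°), so a plot in any one convention can be converted to the others without loss. The following Python sketch is illustrative only; the function names and angles are not taken from the paper.

```python
import math

# Three radial plotting conventions for the angle i that the tangent to the
# seismic ray at the focus makes with the vertical:
#   tan(i/2) -- as used by Ritsema and most Russian workers
#   tan(i)   -- as used by some Russian workers
#   cot(i)   -- as used by Hodgson and co-workers
# Each entry pairs the forward map with its inverse.
projections = {
    "tan_half": (lambda i: math.tan(i / 2), lambda r: 2 * math.atan(r)),
    "tan":      (lambda i: math.tan(i),     lambda r: math.atan(r)),
    "cot":      (lambda i: 1 / math.tan(i), lambda r: math.atan(1 / r)),
}

def recover(i_deg, name):
    """Plot a ray at takeoff angle i under one convention, then invert."""
    fwd, inv = projections[name]
    return math.degrees(inv(fwd(math.radians(i_deg))))

# Every convention is invertible on (0, 90) degrees, so each recovers the
# same angle: the representations are interchangeable, as the paper argues.
for name in projections:
    assert abs(recover(37.0, name) - 37.0) < 1e-9
```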

2021 ◽  
Vol 40 (4) ◽  
pp. 8493-8500
Author(s):  
Yanwei Du ◽  
Feng Chen ◽  
Xiaoyi Fan ◽  
Lei Zhang ◽  
Henggang Liang

As the number of goods to be loaded grows, the number of possible loading schemes increases exponentially, and determining a scheme from experience alone is slow and inefficient. The genetic algorithm is a search heuristic from the artificial-intelligence branch of computer science used to solve optimization problems. A genetic algorithm can effectively select a good loading scheme but, on its own, cannot fully utilize the weight and volume capacity of the cargo and truck. In this paper, we propose a hybrid genetic and fuzzy-logic-based cargo-loading decision-making model that aims at maximum profit together with maximum utilization of the weight and volume capacity of cargo and truck. First, the components of the goods-stowage problem in the distribution center are analyzed systematically, laying the foundation for a reasonable classification of the problem and the establishment of its mathematical model. Second, the paper abstracts and defines the cargo-loading problem of the distribution center, establishes a mathematical model for optimizing three-dimensional loading of a single truck, and designs a genetic algorithm to solve the model. Finally, Matlab is used to solve the cargo-loading optimization model, and the good performance of the algorithm is verified with an example. The performance evaluation shows that the proposed hybrid system achieves better outcomes than the standard SA model, GA method, and TS strategy.
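As a rough illustration of the genetic-algorithm component only (a minimal sketch with made-up cargo data and capacities, without the fuzzy-logic layer and not the paper's actual model):

```python
import random

random.seed(1)

# Hypothetical cargo items: (profit, weight, volume); truck capacities below.
items = [(60, 10, 4), (100, 20, 8), (120, 30, 10), (80, 15, 6), (90, 25, 7)]
MAX_WEIGHT, MAX_VOLUME = 60, 20

def fitness(chrom):
    """Total profit of selected items; infeasible selections score 0."""
    p = w = v = 0
    for bit, (profit, weight, vol) in zip(chrom, items):
        if bit:
            p, w, v = p + profit, w + weight, v + vol
    return p if w <= MAX_WEIGHT and v <= MAX_VOLUME else 0

def evolve(pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in items] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                      # elitism: keep the best two
        while len(next_pop) < pop_size:
            a, b = random.sample(pop[:10], 2)   # select among the fittest
            cut = random.randrange(1, len(items))
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < 0.1:           # bit-flip mutation
                i = random.randrange(len(items))
                child[i] ^= 1
            next_pop.append(child)
        pop = next_pop
    best = max(pop, key=fitness)
    return best, fitness(best)

best, profit = evolve()
```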


1992 ◽  
Vol 63 (4) ◽  
pp. 557-566 ◽  
Author(s):  
William E. Doll ◽  
Carol D. Rea ◽  
John E. Ebel ◽  
Sandra J. Craven ◽  
John J. Cipar

Abstract Fifteen years of regional monitoring by the New England Seismic Network indicated a locally high level of seismicity near South Sebec, between the towns of Milo and Dover-Foxcroft in central Maine. Most of the events were located in a diffuse zone south of the distinctive, ENE trending Harriman Pond Fault (HPF), which is indicated by brittle deformation in outcrop and appears as a depression in topographic maps and satellite images. A portable network consisting of both digital and analog instruments was deployed during the summers of 1989 and 1990 in order to characterize the pattern of the microearthquakes and to determine high-resolution epicenters, depths, and fault plane solutions. Seventy-three events were detected during the experiment, of which 28 could be located. Many of the events south of the fault lie along a NNW trending line which has no major expression in the surface geology. Only a few of the events are subparallel to the HPF. The first motion data were insufficient for the determination of any fault plane solutions.


2014 ◽  
Vol 926-930 ◽  
pp. 2329-2332
Author(s):  
Liu Yu ◽  
Lei Cheng Chen ◽  
Ming Li ◽  
Peng Rui Wang

Through detailed research on channel modeling, this paper proposes a new, simple channel model to overcome the complex theory and long design cycles of models such as SCM, SCME, and WINNER II. The mathematical model is first constructed, then simulated and analyzed in Matlab, and finally a channel simulator is realized on an FPGA. Subsequent comparison and analysis confirm the practicability and reliability of the simulator.
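As a loose illustration of what a "simple" software channel model can look like before an FPGA implementation, here is a Jakes-style sum-of-sinusoids Rayleigh fading tap in Python. The model choice and all parameters are assumptions for illustration, not the paper's design:

```python
import cmath
import math
import random

random.seed(0)

N_PATHS = 16
# Fixed random initial phases, one per propagation path.
_phases = [2 * math.pi * random.random() for _ in range(N_PATHS)]

def fading_tap(t, fd=100.0):
    """Complex channel gain at time t (s) for maximum Doppler shift fd (Hz)."""
    acc = 0j
    for n in range(N_PATHS):
        theta = 2 * math.pi * n / N_PATHS          # uniform arrival angles
        acc += cmath.exp(1j * (2 * math.pi * fd * math.cos(theta) * t
                               + _phases[n]))
    return acc / math.sqrt(N_PATHS)                # normalize average power

# Sample the fading envelope over 10 ms at 0.1 ms steps.
samples = [abs(fading_tap(k * 1e-4)) for k in range(100)]
```

A hardware version would replace the trigonometric calls with lookup tables or CORDIC blocks, which is one reason a simple closed-form model shortens the design cycle.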


1982 ◽  
Vol 72 (3) ◽  
pp. 729-744
Author(s):  
Charles A. Langston

abstract Fault plane solutions are derived from systematic trial-and-error (“grid”) testing of three-component body waveform data from a single station. Modeling P and SH waveform data from five shallow events recorded teleseismically demonstrates that radiation pattern information contained within the interference of the direct wave and surface reflections and the overall relative amplitude between P and SH waveforms is sufficient to discriminate between fault type (e.g., strike-slip versus dip-slip) and often agrees with well-constrained first-motion studies. Events studied are the 9 April 1968 Borrego Mountain, California; 20 June 1978 Thessaloniki, Greece; 13 August 1978 Santa Barbara, California; 20 May 1979 Alaska; and 6 August 1979 Coyote Lake, California, earthquakes. It is also shown using data from the 27 July 1980 Sharpsburg, Kentucky, earthquake that inclusion of pP/P and sP/P polarity and amplitude information to an otherwise unconstrained first-motion study can significantly improve the quality of the fault plane solution. Although there are many potential problems (source multiplicity, directivity, etc.) which can prohibit finding a good model with these techniques and inclusion of data from many stations is clearly desirable, the results of this study suggest that sparse, high-quality waveform data sets may be as or more useful for obtaining source mechanisms than standard first-motion studies. At a minimum, they should be performed together as a consistency check. This procedure would be most useful in the common situation where only a few receivers are available for a particular event.
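The grid-testing idea can be sketched as follows. This Python example searches strike, dip, and rake on a coarse grid against synthetic P-wave amplitudes computed from the standard double-couple radiation-pattern formula; the paper itself fits full P and SH waveforms including surface reflections, so this is only a schematic analogue with invented station geometry:

```python
import math
from itertools import product

def p_radiation(strike, dip, rake, ih, phi):
    """P-wave radiation coefficient of a double couple (angles in radians),
    for takeoff angle ih and station azimuth phi."""
    a = phi - strike
    return (math.cos(rake) * math.sin(dip) * math.sin(ih)**2 * math.sin(2*a)
            - math.cos(rake) * math.cos(dip) * math.sin(2*ih) * math.cos(a)
            + math.sin(rake) * math.sin(2*dip)
              * (math.cos(ih)**2 - math.sin(ih)**2 * math.sin(a)**2)
            + math.sin(rake) * math.cos(2*dip) * math.sin(2*ih) * math.sin(a))

# Synthetic "observations" from a known mechanism at a few stations.
stations = [(math.radians(30), math.radians(az)) for az in (0, 45, 120, 200, 300)]
true = tuple(math.radians(x) for x in (40, 60, -90))
observed = [p_radiation(*true, ih, phi) for ih, phi in stations]

def grid_search():
    """Trial-and-error sweep over mechanisms, keeping the lowest misfit."""
    best, best_misfit = None, float("inf")
    for s, d, r in product(range(0, 360, 10), range(10, 91, 10),
                           range(-90, 91, 10)):
        m = tuple(math.radians(x) for x in (s, d, r))
        misfit = sum((p_radiation(*m, ih, phi) - o)**2
                     for (ih, phi), o in zip(stations, observed))
        if misfit < best_misfit:
            best, best_misfit = m, misfit
    return best, best_misfit

best_mech, best_misfit = grid_search()
```

Note that, as in real fault-plane studies, the radiation pattern does not distinguish the fault plane from the auxiliary plane, so the best-fitting grid point need not equal the "true" angles even when the misfit is zero.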


2016 ◽  
Vol 60 (04) ◽  
pp. 219-238
Author(s):  
K. J. Spyrou ◽  
N. Themelis ◽  
I. Kontolefas

Ship "high-run" incidents in irregular following/quartering seas, in particular their correlation with broaching-to behavior, are investigated, aiming to produce statistical evidence about this connection. A simple linear mathematical model of the yaw and sway motions has been combined with a nonlinear surge equation that is often applied in surf-riding investigations, considering multifrequency wave excitation derived from a spectrum. Types of yaw instability are conjectured from the structure of the mathematical model. The concept of instantaneous celerity is used for formally recognizing the occurrence of high-run, and a simple method is proposed for its calculation. The problem's rarity (i.e., whether only a few or a large number of broaching-to incidents are recorded during long simulations) is controlled by using the rudder control gains as "rarity knobs." Expectations of broaching-to, with and without conditioning on high-run incidence, are presented. Histograms of cumulative time of high-run and of limiting yaw angle exceedance are produced, and the fitting of standard distributions, variants of the normal distribution, is discussed.
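The multifrequency wave excitation described above can be sketched as follows: a spectrum is discretized into harmonic components with random phases whose sum drives the surge equation. The Bretschneider-type spectrum and all parameters here are illustrative assumptions, not the paper's values:

```python
import math
import random

random.seed(2)

def spectrum(w, hs=4.0, tp=10.0):
    """Bretschneider-type wave spectrum S(w); hs, tp are illustrative values
    of significant wave height (m) and peak period (s)."""
    wp = 2 * math.pi / tp
    return (5.0 / 16.0) * hs**2 * wp**4 / w**5 * math.exp(-1.25 * (wp / w)**4)

N, DW = 40, 0.02
ws = [0.30 + DW * k for k in range(N)]               # frequency grid (rad/s)
# Component amplitude from the energy in each spectral bin: a = sqrt(2 S dw).
amps = [math.sqrt(2 * spectrum(w) * DW) for w in ws]
phases = [2 * math.pi * random.random() for _ in ws]

def wave_excitation(t):
    """Multifrequency excitation: sum of harmonics with random phases."""
    return sum(a * math.cos(w * t + p) for a, w, p in zip(amps, ws, phases))
```

Each fresh draw of the random phases yields one realization of the irregular seaway, which is how long-time Monte Carlo simulations of rare broaching-to events are typically built up.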


2012 ◽  
Vol 252 ◽  
pp. 134-139
Author(s):  
Jian Jun Hao ◽  
Shuai Shuai Ge ◽  
Xi Hong Zou ◽  
Xiao Hui Ding

To address the long power interruption during shifting and the severe wear of the clutch master and slave friction plates, which greatly shortens clutch life during the shifting process of an AMT, an overrunning AMT that requires no clutch separation during shifting is designed. This paper analyzes the structural characteristics and shift principle of the overrunning AMT. Through force analysis of the engagement process of the roller overrunning clutch, the mathematical model and dynamic model of the transmission system are established. Finally, the shifting impact during the shifting process is analyzed by computer simulation. The simulation results indicate that the vehicle's longitudinal jerk meets the requirement of ride comfort.
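Longitudinal jerk, the quantity used above to judge shift comfort, can be estimated from a simulated acceleration trace by finite differences. The acceleration trace and the 10 m/s³ comfort threshold below are illustrative assumptions, not values from the paper:

```python
import math

def jerk(accel, dt):
    """Central-difference jerk (m/s^3) from acceleration samples (m/s^2)."""
    return [(accel[i + 1] - accel[i - 1]) / (2 * dt)
            for i in range(1, len(accel) - 1)]

dt = 0.01  # sampling interval (s)
# Illustrative acceleration trace during a shift: a smooth dip and recovery
# around t = 0.5 s, mimicking the torque interruption of a gear change.
accel = [2.0 - 1.5 * math.exp(-((k * dt - 0.5) ** 2) / 0.02)
         for k in range(101)]

j = jerk(accel, dt)
comfortable = max(abs(x) for x in j) <= 10.0  # illustrative comfort limit
```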


Author(s):  
Ali Mekky

Highway 407 (a four- to six-lane freeway), in the Greater Toronto Area (GTA) (with a population of 5 million) has been considered for many years as a relief for Highway 401, the busiest highway in North America. Highway 407 is being planned as a toll highway. Ideally, the driver of each car should have a transponder in the car to identify the vehicle for electronic toll billing purposes. The value of the toll (variable toll) would depend on the number of kilometers traveled on the highway. However, to attract some of the drivers who do not want their origins and destinations to be tracked, a fixed-toll option might be available. A study was developed to estimate the changes in the travel and the revenue of Highway 407 if a fixed-toll option were allowed simultaneously with a variable one. The GTA mathematical model, within the EMME/2 environment, was used. Although the available transportation planning packages were not originally designed for evaluating these kinds of toll strategies, it is possible to do the evaluation using several multiclass generalized cost assignment runs with feedback loops. The mathematical model and the evaluation process used are described. One of the results of the evaluation is the finding that allowing fixed-toll operation does not increase the number of users of Highway 407 (operating under a variable toll scheme) but will increase the revenues in a marginal way. Therefore, it is concluded that using only a variable toll rate would maximize the net revenue for the year under consideration.
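A minimal sketch of the choice logic behind such an evaluation: each driver takes whichever toll is cheaper for the trip length. The study's multiclass assignment also captures drivers who choose the fixed toll for privacy reasons regardless of cost; the rates and trip lengths below are invented for illustration:

```python
def chosen_toll(dist_km, rate_per_km=0.10, fixed_toll=2.00, fixed_offered=True):
    """Toll paid for one trip under an all-or-nothing cheapest-option choice."""
    variable = rate_per_km * dist_km
    if fixed_offered and fixed_toll < variable:
        return fixed_toll
    return variable

trips = [5, 12, 25, 40, 60]  # trip lengths on the highway (km)

revenue_variable_only = sum(chosen_toll(d, fixed_offered=False) for d in trips)
revenue_with_fixed = sum(chosen_toll(d) for d in trips)
# Under this purely cost-driven choice model, offering a fixed option can
# only reduce (or leave unchanged) revenue, since drivers switch exactly
# when the flat rate is cheaper; capturing privacy-motivated switching and
# induced demand is what requires the full multiclass assignment runs.
```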


Author(s):  
Ali Mekky

Tolling strategies recently tested for Highway 407 in the Greater Toronto Area (GTA) are described and analyzed. The GTA is one of the fastest-growing urban areas in North America, with a population of about 5 million. Highway 407, a six-/four-lane freeway in the GTA, has been considered for many years as a relief for Highway 401. It is the busiest highway in North America and is used by more than 1 million vehicles per day. Highway 407 is being planned and constructed as a toll highway. Four strategies are compared. In the base strategy, the toll rate per kilometer is fixed and the value of the toll paid depends on the distance traveled on the highway. In Strategy 2, vehicles on Highway 407 are tolled on the entrance ramps as well as at some points on the highway (main “virtual” plazas). In Strategies 3 and 4, each driver has two choices. The first is to pay a toll depending on the distance traveled. The second is to pay a certain fixed toll once the driver crosses certain points on the highway (mainline plazas) and on the exit ramps. The strategies are compared from the points of view of the number of users, the vehicle-kilometers on the highway, the revenues, and the average toll paid. The GTA mathematical model, within the EMME/2 environment, is used. The mathematical model and the evaluation process are described.


2013 ◽  
Vol 380-384 ◽  
pp. 1473-1476
Author(s):  
Zhan Li ◽  
Da Shen Xue

For a long time, mathematical models have been used to describe system characteristics and obtain solutions; this practice has gradually developed into modern computer simulation technology, which can solve many complex problems that cannot be resolved by mathematical methods alone. The Witness modeling and simulation software is applied in this article, with an example analysis of inventory control in a logistics system. Models are then built for the relevant modules; furthermore, the data are analyzed and an optimal strategy is determined.
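As a hand-written analogue of the kind of inventory-control experiment run in a simulation package such as Witness, here is a minimal (s, S) reorder-policy simulation with invented demand and cost figures:

```python
import random

random.seed(3)

def simulate(s, S, days=365, hold_cost=0.1, order_cost=20.0, price=2.0):
    """Simulate one year of an (s, S) policy: when stock falls to the
    reorder point s, order up to S. Returns total profit."""
    stock, profit = S, 0.0
    for _ in range(days):
        demand = random.randint(0, 10)       # illustrative daily demand
        sold = min(stock, demand)
        stock -= sold
        profit += sold * price - stock * hold_cost
        if stock <= s:
            profit -= order_cost             # fixed cost per replenishment
            stock = S                        # instant delivery assumed
    return profit

# Compare a few candidate policies and keep the most profitable one --
# the same "run the model, compare strategies" loop the article performs.
policies = [(5, 30), (10, 50), (20, 80)]
best_policy = max(policies, key=lambda p: simulate(*p))
```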


1966 ◽  
Vol 56 (3) ◽  
pp. 755-773 ◽  
Author(s):  
Thomas V. McEvilly

abstract A sequence of more than 100 aftershocks with magnitudes as low as −0.1 was recorded following a magnitude 5.0 earthquake on November 16, 1964, in the San Andreas fault zone of central California. The sequence was monitored in detail by three temporary seismographic stations at distances less than 15 km and the surrounding telemetry array. Nearly all of the 35 earthquakes which could be located clustered in a focal region about 4 km in diameter at a depth near 12 km and exhibited uniform first motion radiation patterns. First motion fault plane solutions are consistent with the right lateral transcurrent motion characteristic of the San Andreas fault. Exceptions to this uniform radiation pattern in the concentrated focal region occurred near the times of two large aftershocks apparently on another fault about 5 km away.

