Simulation-based comparison of four site-response estimation techniques

1998 ◽  
Vol 88 (1) ◽  
pp. 30-42
Author(s):  
Fabien Coutel ◽  
Peter Mora

Abstract Recent earthquakes have renewed interest in better understanding earthquake site response. Most studies comparing techniques for estimating site response have been based on real data (from earthquakes, nuclear blasts, and seismic noise). Here, a theoretical approach using synthetic data generated with the pseudospectral method is used to compare four site-response estimation techniques. The limits of applicability of each method were determined by modeling microtremors and incoming SV waves (with different incidence angles) and analyzing the resulting site amplifications. The first two techniques consist of dividing the spectrum of the horizontal motion at a site by that of a reference site, using either incident S waves or microtremors. The latter was unable to reveal either the resonant frequencies or the peak amplitudes in any of the cases tested. The other two techniques are based on the horizontal-to-vertical (H/V) spectral ratio, using S waves or microtremors. These techniques were found to reveal at least the fundamental resonant frequency, and its amplitude (former method only), within 10% error in the case of simple geology (flat layers). However, the results show that these techniques cannot account for 2D effects, such as focusing and basin-edge effects, and yield unreliable or incorrect results in such cases.
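As a concrete illustration of the H/V spectral-ratio technique compared above, here is a minimal sketch of the computation (illustrative only: the smoothing window, synthetic signals, and planted 2 Hz resonance are assumptions, not the paper's pseudospectral setup):

```python
import numpy as np

def hv_ratio(east, north, vertical, dt, smooth=11):
    """Horizontal-to-vertical spectral ratio of a 3-component record.

    H is the quadratic mean of the two horizontal amplitude spectra;
    spectra are lightly smoothed with a moving average before division.
    """
    n = len(vertical)
    freqs = np.fft.rfftfreq(n, d=dt)
    box = np.ones(smooth) / smooth
    def spec(x):
        return np.convolve(np.abs(np.fft.rfft(x * np.hanning(n))), box, mode="same")
    h = np.sqrt((spec(east) ** 2 + spec(north) ** 2) / 2.0)
    v = spec(vertical)
    return freqs[1:], h[1:] / v[1:]  # drop the DC bin

# Synthetic check: a 2 Hz resonance present only on the horizontals.
dt, n = 0.01, 4096
t = np.arange(n) * dt
rng = np.random.default_rng(0)
east = 5 * np.sin(2 * np.pi * 2.0 * t) + rng.standard_normal(n)
north = 5 * np.cos(2 * np.pi * 2.0 * t) + rng.standard_normal(n)
vertical = rng.standard_normal(n)
f, hv = hv_ratio(east, north, vertical, dt)
print(round(f[np.argmax(hv)], 2))  # peak near 2 Hz
```

The H/V peak recovers the planted resonance frequency; as the abstract notes, its amplitude need not match the true amplification.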

2012 ◽  
Vol 594-597 ◽  
pp. 1840-1848 ◽  
Author(s):  
Wu Jian Yan ◽  
Yan Bin Wang ◽  
Yu Cheng Shi

Abstract: In this paper, we performed two-dimensional numerical simulations of strong ground motion in the Lanzhou basin using a hybrid scheme based on the pseudospectral method (PSM) and the finite-difference method (FDM). A source at 20 km depth and a five-layer profile were used as the model to analyze the site response and the peak displacement of strong ground motion. The results show that the hybrid PSM/FDM approach to seismic wavefield simulation combines the advantages of PSM and FDM and compensates for their individual shortcomings: it handles discontinuous medium interfaces well while retaining an accuracy comparable to PSM. The wavefield simulations show that the vertical peak ground displacement (PGD) is larger than the horizontal, and that surface-wave effects at the basin edge are more pronounced on the vertical component.
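The pseudospectral half of the hybrid scheme computes spatial derivatives in the wavenumber domain; a minimal 1-D sketch of that idea (not the authors' 2-D PSM/FDM code):

```python
import numpy as np

def psm_derivative(u, dx):
    """Spatial derivative via the pseudospectral (Fourier) method:
    multiply the spectrum by i*k and transform back. Exact (to machine
    precision) for band-limited periodic fields, which is why PSM needs
    far fewer grid points per wavelength than FDM."""
    n = len(u)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)  # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# Check on a periodic function: d/dx sin(x) = cos(x).
n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
du = psm_derivative(np.sin(x), x[1] - x[0])
print(np.max(np.abs(du - np.cos(x))))  # ~ machine precision
```

FDM, by contrast, uses local stencils, which is what lets the hybrid scheme handle discontinuous interfaces where the global Fourier basis struggles.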


Author(s):  
Arthur Frankel ◽  
Alex Grant

ABSTRACT Site response, sedimentary basin amplification, and earthquake stress drops for the Portland, Oregon area were determined using accelerometer recordings at 16 sites of 10 local earthquakes with MD 2.6–4.0. A nonlinear inversion was applied to calculate site response (0.5–10 Hz), corner frequencies, and seismic moments from the Fourier spectra of the earthquakes. Site amplifications at lower frequencies of 0.1–2.0 Hz were determined from Fourier spectra of four regional earthquakes with Mw 5.8–6.4. Amplifications were calculated relative to a stiff-soil site outside the Portland and Tualatin basins. Sites on artificial fill and Holocene alluvium show strong amplification peaks (factor of 5) around 1–2 Hz. Sites on the Portland Hills, consisting of thin soil over basalt, display spectral peaks at 4–5 Hz (factor of 4). Spectral peaks at both sites are similar to those predicted for vertically propagating S waves from VS profiles determined at these sites using a borehole and refraction microtremor analysis. The largest amplifications at 0.1–1 Hz were found at stiff-soil sites in the Tualatin basin, based on recordings of regional earthquakes. Amplifications of a factor of 10, at about 0.3 Hz, were observed for a site in the deeper portion of the Tualatin basin and a factor of 7 at 0.5–0.6 Hz for two adjacent sites closer to the border of that basin. Stiff-soil sites in the Portland basin exhibit amplifications of 2–3 at frequencies of about 0.3–0.8 Hz. The frequencies of the amplification peaks for the deep Tualatin basin site can be explained by S-wave resonance in the shallow sediments, but the observed amplification is underestimated. Earthquake stress drops determined from the inversion range from 3 to 11 MPa, with no overall dependence on seismic moment.
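The abstract does not state which source model underlies the conversion from corner frequency and moment to stress drop; assuming the commonly used Brune circular-source model, the relation can be sketched as:

```python
import math

def brune_stress_drop(m0, fc, beta=3500.0):
    """Stress drop (Pa) from seismic moment m0 (N*m) and corner
    frequency fc (Hz), assuming a Brune circular source:
    source radius r = 2.34*beta/(2*pi*fc), with beta the shear-wave
    velocity (m/s), and delta_sigma = 7*m0/(16*r**3)."""
    r = 2.34 * beta / (2.0 * math.pi * fc)
    return 7.0 * m0 / (16.0 * r ** 3)

# e.g. a moment of 4e13 N*m (roughly M 3) with an 8 Hz corner:
print(brune_stress_drop(4e13, 8.0) / 1e6)  # ~4 MPa
```

The example values are illustrative, not taken from the Portland dataset, but they land in the same few-MPa range the study reports.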


2020 ◽  
Vol 223 (1) ◽  
pp. 471-487
Author(s):  
Giulia Sgattoni ◽  
Silvia Castellaro

SUMMARY The vibration modes of the ground have been described both in the 1-D and 2-D case. The 1-D resonance is found on geological structures whose aspect ratio is low, that is on layers with a lateral width much larger than their thickness. A typical example is that of a horizontal soft sediment layer overlying hard bedrock. In this case, the 1-D resonance frequency, traditionally detected by means of the microtremor H/V (horizontal to vertical spectral ratio) technique, depends on the bedrock depth and on the shear wave velocity of the resonating cover layer. The H/V technique is thus used both to map the resonance frequencies in seismic microzonation studies and for stratigraphic imaging. When 2-D resonance occurs, generally on deep and narrow valleys, the whole sedimentary infill vibrates at the same frequency and stratigraphic imaging can no longer be performed by means of the 1-D resonance equation. Understanding the 1-D or 2-D resonance nature of a site is therefore mandatory to avoid wrong stratigraphic and dynamic interpretations, which is in turn extremely relevant for seismic site response assessment. In this paper, we suggest a procedure to address this issue using single-station approaches, which are much more common compared to the multistation synchronized approach presented by research teams in earlier descriptions of the 2-D resonances. We apply the procedure to the Bolzano sedimentary basin in Northern Italy, which lies at the junction of three valleys, for which we observed respectively 1-D-only, 1-D and 2-D, and 2-D-only resonances. We conclude by proposing a workflow scheme to conduct experimental measurements and data analysis in order to assess the 1-D or 2-D resonance nature of a site using a single-station approach.
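The 1-D resonance equation referred to above relates the fundamental frequency to the cover layer's shear-wave velocity and thickness, f0 = Vs/(4H); a worked sketch with illustrative values (not taken from the Bolzano survey):

```python
def resonance_frequency(vs, h):
    """Fundamental 1-D S-wave resonance frequency (Hz) of a soft layer
    of shear-wave velocity vs (m/s) and thickness h (m) over bedrock:
    f0 = vs / (4 * h)."""
    return vs / (4.0 * h)

def bedrock_depth(vs, f0):
    """The same relation inverted, as used for stratigraphic imaging:
    h = vs / (4 * f0)."""
    return vs / (4.0 * f0)

print(resonance_frequency(300.0, 75.0))  # 1.0 Hz
print(bedrock_depth(300.0, 1.0))         # 75.0 m
```

When a valley resonates in 2-D, the whole infill vibrates at one frequency regardless of local thickness, so this inversion for depth is no longer valid; that is precisely why the 1-D/2-D diagnosis matters.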


1997 ◽  
Vol 87 (3) ◽  
pp. 710-730 ◽  
Author(s):  
Luis Fabián Bonilla ◽  
Jamison H. Steidl ◽  
Grant T. Lindley ◽  
Alexei G. Tumarkin ◽  
Ralph J. Archuleta

Abstract During the months that followed the 17 January 1994 M 6.7 Northridge, California, earthquake, portable digital seismic stations were deployed in the San Fernando basin to record aftershock data and estimate site-amplification factors. This study analyzes data, recorded on 31 three-component stations, from 38 aftershocks ranging from M 3.0 to M 5.1 and depths from 0.2 to 19 km. Site responses for the 31 stations are estimated from coda waves, S waves, and ratios of horizontal to vertical (H/V) recordings. For the coda and the S waves, site response is estimated using both direct spectral ratios and a generalized inversion scheme. Results from the inversions indicate that the effect of Qs can be significant, especially at high frequencies. Site amplifications estimated from the coda of the vertical and horizontal components can be significantly different from each other, depending on the choice of the reference site. The difference is reduced when an average of six rock sites is used as the reference site. In addition, when using this multi-reference site, the coda amplification from rock sites is usually within a factor of 2 of the amplification determined from the direct spectral ratios and the inversion of the S waves. However, for nonrock sites, the coda amplification can be larger by a factor of 2 or more when compared with the amplification estimated from the direct spectral ratios and the inversion of the S waves. The H/V method for estimating site response is found to extract the same predominant peaks as the direct spectral ratio and the inversion methods. The amplifications determined from the H/V method are, however, different from the amplifications determined from the other methods. Finally, the stations were grouped into classes based on two different classifications: general geology, and a more detailed classification using a Quaternary geology map for the Los Angeles and San Fernando areas. Average site-response estimates using the site characterization based on the detailed geology show better correlation between amplification and surface geology than the general geology classification.
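For reference, the direct spectral ratio method used throughout the study can be sketched as follows (a minimal illustration; the windowing and smoothing choices are assumptions, not the authors' processing):

```python
import numpy as np

def direct_spectral_ratio(site, reference, dt, smooth=9):
    """Direct spectral ratio: amplitude spectrum of the record at the
    soil site divided by that at a reference (rock) site for the same
    event and component, lightly smoothed. The ratio estimates site
    amplification under the assumption that source and path effects
    cancel between the two nearby stations."""
    n = len(site)
    f = np.fft.rfftfreq(n, d=dt)
    win = np.hanning(n)
    box = np.ones(smooth) / smooth
    s = np.convolve(np.abs(np.fft.rfft(site * win)), box, mode="same")
    r = np.convolve(np.abs(np.fft.rfft(reference * win)), box, mode="same")
    return f[1:], s[1:] / r[1:]  # drop the DC bin

# Sanity check: a site record that is an exact 3x amplification of
# the reference should give a flat ratio of 3.
rng = np.random.default_rng(0)
reference = rng.standard_normal(2048)
site = 3.0 * reference
f, ratio = direct_spectral_ratio(site, reference, 0.01)
print(np.round(np.median(ratio), 6))  # 3.0
```

The generalized inversion scheme mentioned above solves for all site terms jointly across events instead of forming one ratio per station pair, which is how the Qs trade-off noted in the abstract enters.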


Author(s):  
P.L. Nikolaev

This article deals with a method for binary classification of images containing small text. The classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be rotated 180 degrees, so that the image must be turned in order to read it. This kind of text is found on the covers of a variety of books, so when recognizing covers it is necessary to determine the orientation of the text before recognizing the text itself. The article describes the development of a deep neural network for determining text orientation in the context of book-cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
João Lobo ◽  
Rui Henriques ◽  
Sara C. Madeira

Abstract Background Three-way data have started to gain popularity due to their increasing capacity to describe inherently multivariate and temporal events, such as biological responses, social interactions along time, urban dynamics, or complex geophysical phenomena. Triclustering, the subspace clustering of three-way data, enables the discovery of patterns corresponding to data subspaces (triclusters) with values correlated across the three dimensions (observations × features × contexts). With an increasing number of algorithms being proposed, effectively comparing them with the state of the art is paramount. These comparisons are usually performed on real data without a known ground truth, which limits the assessments. In this context, we propose a synthetic data generator, G-Tric, allowing the creation of synthetic datasets with configurable properties and the possibility to plant triclusters. The generator is prepared to create datasets resembling real three-way data from biomedical and social data domains, with the additional advantage of providing the ground truth (the triclustering solution) as output. Results G-Tric can replicate real-world datasets and create new ones that match researchers' needs across several properties, including data type (numeric or symbolic), dimensions, and background distribution. Users can tune the patterns and structure that characterize the planted triclusters (subspaces) and how they interact (overlapping). Data quality can also be controlled by defining the amount of missing values, noise, or errors. Furthermore, a benchmark of datasets resembling real data is made available, together with the corresponding triclustering solutions (planted triclusters) and generating parameters. Conclusions Triclustering evaluation using G-Tric makes it possible to combine intrinsic and extrinsic metrics, yielding more reliable comparisons of solutions. A set of predefined datasets, mimicking widely used three-way data and exploring crucial properties, was generated and made available, highlighting G-Tric's potential to advance the triclustering state of the art by easing the evaluation of new triclustering approaches.
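As a toy illustration of the planted-tricluster idea (in the spirit of G-Tric, but not its actual API or parameterization, which support far richer patterns and backgrounds):

```python
import numpy as np

def plant_tricluster(shape=(50, 30, 10), tric=(10, 5, 3), value=5.0, seed=0):
    """Minimal 3-way generator: Gaussian background with one constant
    tricluster planted in the leading observations/features/contexts.
    Returns the data tensor and the ground-truth membership mask."""
    rng = np.random.default_rng(seed)
    data = rng.standard_normal(shape)
    i, j, k = tric
    data[:i, :j, :k] = value          # the planted subspace
    mask = np.zeros(shape, dtype=bool)
    mask[:i, :j, :k] = True           # ground-truth triclustering solution
    return data, mask

data, truth = plant_tricluster()
print(data.shape, truth.sum())  # (50, 30, 10) 150
```

Because the mask is returned alongside the data, a triclustering algorithm's output can be scored extrinsically (recovery of the planted subspace) as well as intrinsically, which is exactly the evaluation combination the abstract argues for.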


2021 ◽  
Vol 40 (3) ◽  
pp. 1-12
Author(s):  
Hao Zhang ◽  
Yuxiao Zhou ◽  
Yifei Tian ◽  
Jun-Hai Yong ◽  
Feng Xu

Reconstructing hand-object interactions is a challenging task due to strong occlusions and complex motions. This article proposes a real-time system that uses a single depth stream to simultaneously reconstruct hand poses, object shape, and rigid/non-rigid motions. To achieve this, we first train a joint learning network to segment the hand and object in a depth image, and to predict the 3D keypoints of the hand. With most layers shared by the two tasks, computation cost is saved for the real-time performance. A hybrid dataset is constructed here to train the network with real data (to learn real-world distributions) and synthetic data (to cover variations of objects, motions, and viewpoints). Next, the depth of the two targets and the keypoints are used in a uniform optimization to reconstruct the interacting motions. Benefitting from a novel tangential contact constraint, the system not only solves the remaining ambiguities but also keeps the real-time performance. Experiments show that our system handles different hand and object shapes, various interactive motions, and moving cameras.


2021 ◽  
Vol 11 (9) ◽  
pp. 3863
Author(s):  
Ali Emre Öztürk ◽  
Ergun Erçelebi

A large amount of training image data is required for solving image classification problems using deep learning (DL) networks. In this study, we aimed to train DL networks with synthetic images generated using a game engine and to determine how such training affects performance on real-image classification problems. The study presents the results of using corner detection and nearest three-point selection (CDNTS) layers to classify bird and rotary-wing unmanned aerial vehicle (RW-UAV) images, provides a comprehensive comparison of two experimental setups, and highlights the significant performance improvements that the CDNTS layer brings to deep learning-based networks. Experiment 1 corresponds to training commonly used deep learning-based networks with synthetic data and testing image classification on real data. Experiment 2 corresponds to training the CDNTS layer and commonly used deep learning-based networks with synthetic data and testing image classification on real data. In experiment 1, the best area under the curve (AUC) value for image classification test accuracy was 72%. In experiment 2, using the CDNTS layer, the best AUC value was 88.9%. A total of 432 training combinations were investigated across the experimental setups: various DL networks were trained with four different optimizers, considering all combinations of the batch size, learning rate, and dropout hyperparameters. Test accuracy AUC values for the networks in experiment 1 ranged from 55% to 74%, whereas those for the experiment 2 networks with a CDNTS layer ranged from 76% to 89.9%. The CDNTS layer was thus observed to have a considerable effect on the image classification accuracy of deep learning-based networks. AUC, F-score, and test accuracy measures were used to validate the success of the networks.


Author(s):  
Alma Andersson ◽  
Joakim Lundeberg

Abstract Motivation Collection of spatial signals in large numbers has become a routine task in multiple omics fields, but parsing of these rich datasets still poses certain challenges. In whole or near-full transcriptome spatial techniques, spurious expression profiles are intermixed with those exhibiting an organized structure. To distinguish profiles with spatial patterns from the background noise, a metric that enables quantification of spatial structure is desirable. Current methods designed for similar purposes tend to be built around a framework of statistical hypothesis testing, hence we were compelled to explore a fundamentally different strategy. Results We propose an unexplored approach to analyze spatial transcriptomics data, simulating diffusion of individual transcripts to extract genes with spatial patterns. The method performed as expected when presented with synthetic data. When applied to real data, it identified genes with distinct spatial profiles, involved in key biological processes or characteristic for certain cell types. Compared to existing methods, ours seemed to be less informed by the genes' expression levels and showed better time performance when run with multiple cores. Availability and implementation Open-source Python package with a command line interface (CLI), freely available at https://github.com/almaan/sepal under an MIT licence. A mirror of the GitHub repository can be found at Zenodo, doi: 10.5281/zenodo.4573237. Supplementary information Supplementary data are available at Bioinformatics online.
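A toy version of the diffusion idea (not the sepal implementation; the grid, tolerance, and time step are illustrative assumptions): spatially structured expression takes longer to diffuse to uniformity than unstructured noise, so the time-to-uniformity itself serves as a spatial-structure score.

```python
import numpy as np

def diffusion_time(grid, dt=0.1, tol=1e-3, max_steps=10000):
    """Diffuse a gene's spatial expression on a 2-D grid and count
    steps until it is (nearly) uniform. Larger counts indicate more
    large-scale spatial structure."""
    u = grid.astype(float)
    for step in range(max_steps):
        if u.std() < tol:
            return step
        # 5-point Laplacian with reflecting (edge-padded) boundaries
        p = np.pad(u, 1, mode="edge")
        lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * u
        u = u + dt * lap
    return max_steps

rng = np.random.default_rng(0)
flat = rng.normal(1.0, 0.01, (20, 20))           # no spatial pattern
stripe = np.zeros((20, 20)); stripe[:, :10] = 1  # strong spatial pattern
print(diffusion_time(flat) < diffusion_time(stripe))  # True
```

This per-gene score avoids hypothesis testing entirely, which is the strategic departure the abstract describes.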


2021 ◽  
Vol 15 (4) ◽  
pp. 1-20
Author(s):  
Georg Steinbuss ◽  
Klemens Böhm

Benchmarking unsupervised outlier detection is difficult. Outliers are rare, and existing benchmark data contains outliers with various and unknown characteristics. Fully synthetic data usually consists of outliers and regular instances with clear characteristics and thus allows for a more meaningful evaluation of detection methods in principle. Nonetheless, there have only been few attempts to include synthetic data in benchmarks for outlier detection. This might be due to the imprecise notion of outliers or to the difficulty to arrive at a good coverage of different domains with synthetic data. In this work, we propose a generic process for the generation of datasets for such benchmarking. The core idea is to reconstruct regular instances from existing real-world benchmark data while generating outliers so that they exhibit insightful characteristics. We propose and describe a generic process for the benchmarking of unsupervised outlier detection, as sketched so far. We then describe three instantiations of this generic process that generate outliers with specific characteristics, like local outliers. To validate our process, we perform a benchmark with state-of-the-art detection methods and carry out experiments to study the quality of data reconstructed in this way. Next to showcasing the workflow, this confirms the usefulness of our proposed process. In particular, our process yields regular instances close to the ones from real data. Summing up, we propose and validate a new and practical process for the benchmarking of unsupervised outlier detection.
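One way to picture the core idea above, regular instances modeled from real data with outliers planted so their characteristics are known, is the following sketch (a toy, not one of the paper's actual instantiations; the inflated-covariance trick is an assumption):

```python
import numpy as np

def add_planted_outliers(regular, n_out=10, scale=3.0, seed=0):
    """Fit a Gaussian to the regular instances, then draw outliers
    from the same model with an inflated covariance so they lie in
    the data region but are rarer than regular points. Returns the
    combined dataset and ground-truth labels (1 = outlier)."""
    rng = np.random.default_rng(seed)
    mean = regular.mean(axis=0)
    cov = np.cov(regular, rowvar=False)
    outliers = rng.multivariate_normal(mean, scale**2 * cov, size=n_out)
    X = np.vstack([regular, outliers])
    y = np.r_[np.zeros(len(regular)), np.ones(n_out)]
    return X, y

rng = np.random.default_rng(1)
regular = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=200)
X, y = add_planted_outliers(regular)
print(X.shape, int(y.sum()))  # (210, 2) 10
```

Because the labels are known by construction, detection methods can be scored directly, which is the benchmarking advantage over real data with unknown outlier characteristics.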

