Ambiguities in AVO inversion of reflections from a gas‐sand

Geophysics ◽  
1995 ◽  
Vol 60 (1) ◽  
pp. 134-141 ◽  
Author(s):  
Giuseppe Drufuca ◽  
Alfredo Mazzotti

We examine the reflections from a thick sand layer embedded in shales deposited in an open marine environment of Miocene age. Borehole data indicate that the sand bed is gas saturated. Making the assumptions of single-interface reflections, plane-wave propagation in elastic and isotropic media, and correct amplitude recovery of the actual seismic data, we try to invert the amplitude variation with offset (AVO) response for the compressional velocity (Vp), shear velocity (Vs), and density (ρ) of the gas-sand layer, knowing the parameters of the upper layer and the calibration constant. The actual reflections reach incidence angles up to 54 degrees at the farthest offset. Notwithstanding the large range of incidence angles, the outcomes of the inversion are ambiguous, for we find many solutions that fit the observed AVO response equally well in a least-squares sense. We present the locus of the solutions as curves in (Vp, Vs, ρ) space. To gain a better understanding of the results, we also perform the same inversion experiment on synthetic AVO data derived from the borehole information. We find that when inverting the AVO response in the same range of incidence angles as in the real data case, the exact solution is found whichever starting point we choose; that is, we have no ambiguity. However, if we limit the incidence angle range, e.g., to 15 degrees, the inversion is no longer able to find a unique solution and the set of admissible solutions defines regular curves in (Vp, Vs, ρ) space. We infer that residual noise in the recorded data is responsible for the ambiguities of the solutions, and that because of numerical noise, a wide range of incidence angles is required to obtain a unique solution even in noise-free synthetic data.
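The narrow-angle ambiguity the authors describe can be illustrated with a simple forward model. The sketch below uses the two-term Shuey approximation of the reflection coefficient as a stand-in for the full elastic response inverted in the paper, with illustrative layer values (not the paper's borehole values): two different (Vp, Vs, ρ) triples for the gas sand produce almost identical AVO curves over a 0 to 15 degree range.

```python
import numpy as np

def shuey_rc(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
    """Two-term Shuey approximation of the P-P reflection coefficient."""
    theta = np.radians(theta_deg)
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    r0 = 0.5 * (dvp / vp + drho / rho)  # normal-incidence reflectivity
    g = 0.5 * dvp / vp - 2 * (vs / vp) ** 2 * (drho / rho + 2 * dvs / vs)  # AVO gradient
    return r0 + g * np.sin(theta) ** 2

# Illustrative shale-over-gas-sand contrasts (assumed values).
vp1, vs1, rho1 = 2900.0, 1400.0, 2.35
theta = np.linspace(0.0, 15.0, 16)  # the limited angle range discussed in the paper

# Two distinct candidate gas-sand models ...
obs = shuey_rc(vp1, vs1, rho1, 2700.0, 1600.0, 2.00, theta)
alt = shuey_rc(vp1, vs1, rho1, 2760.0, 1660.0, 1.95, theta)

# ... produce nearly the same narrow-angle AVO curve, i.e. an ambiguity:
rms = np.sqrt(np.mean((obs - alt) ** 2))
print(f"RMS difference between the two model responses: {rms:.4f}")
```

Over the full 0 to 54 degree range of the real data, the higher-order angle terms separate such models, which is consistent with the authors' finding that a wide angle range removes the ambiguity.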

Geophysics ◽  
2004 ◽  
Vol 69 (5) ◽  
pp. 1283-1298 ◽  
Author(s):  
Biondo Biondi ◽  
William W. Symes

We analyze the kinematic properties of offset‐domain common image gathers (CIGs) and angle‐domain CIGs (ADCIGs) computed by wavefield‐continuation migration. Our results are valid regardless of whether the CIGs were obtained by using the correct migration velocity. They thus can be used as a theoretical basis for developing migration velocity analysis (MVA) methods that exploit the velocity information contained in ADCIGs. We demonstrate that in an ADCIG cube, the image point lies on the normal to the apparent reflector dip that passes through the point where the source ray intersects the receiver ray. The image‐point position on the normal depends on the velocity error; when the velocity is correct, the image point coincides with the point where the source ray intersects the receiver ray. Starting from this geometric result, we derive an analytical expression for the expected movements of the image points in ADCIGs as functions of the traveltime perturbation caused by velocity errors. By applying this analytical result and assuming stationary raypaths (i.e., small velocity errors), we then derive two expressions for the residual moveout (RMO) function in ADCIGs. We verify our theoretical results and test the accuracy of the proposed RMO functions by analyzing the migration results of a synthetic data set with a wide range of reflector dips. Our kinematic analysis leads also to the development of a new method for computing ADCIGs when significant geological dips cause strong artifacts in the ADCIGs computed by conventional methods. The proposed method is based on the computation of offset‐domain CIGs along the vertical‐offset axis and on the “optimal” combination of these new CIGs with conventional CIGs. We demonstrate the need for and the advantages of the proposed method on a real data set acquired in the North Sea.


2020 ◽  
Author(s):  
Eleonora Diamanti ◽  
Inda Setyawati ◽  
Spyridon Bousis ◽  
Leticia Mojas ◽  
Lotteke Swier ◽  
...  

Here, we report on the virtual screening, design, synthesis and structure–activity relationships (SARs) of the first class of selective, antibacterial agents against the energy-coupling factor (ECF) transporters. The ECF transporters are a family of transmembrane proteins involved in the uptake of vitamins in a wide range of bacteria. Inhibition of the activity of these proteins could reduce the viability of pathogens that depend on vitamin uptake. Because of their central role in the metabolism of bacteria and their absence in humans, ECF transporters are novel potential antimicrobial targets to tackle infection. The hit compound’s metabolic and plasma stability, the potency (20, MIC Streptococcus pneumoniae = 2 µg/mL), the absence of cytotoxicity and a lack of resistance development under the conditions tested here suggest that this scaffold may represent a promising starting point for the development of novel antimicrobial agents with an unprecedented mechanism of action.


Author(s):  
P.L. Nikolaev

This article deals with a method of binary classification of images containing small text. Classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be rotated 180 degrees, in which case the image must be rotated before the text can be read. Text of this type can be found on the covers of a variety of books, so when recognizing covers it is necessary first to determine the orientation of the text before recognizing it. The article proposes a deep neural network for determining the text orientation in the context of book-cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.
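The core idea, training a binary 0-vs-180-degree classifier on synthetically rotated images, can be sketched without the article's full network. The toy below (an assumption, not the paper's architecture) generates synthetic "text crops" whose ink mass sits near the top, labels the 180-degree rotations as the positive class, and fits a plain logistic regression where the article's CNN would go.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sample(flipped):
    """Toy stand-in for a small-text crop: 'ink' mass in the upper rows.

    A 180-degree rotation (img[::-1, ::-1]) moves that mass to the lower
    rows; real training data would be rendered or photographed text.
    """
    img = rng.random((16, 16)) * 0.2
    img[2:6, :] += 0.8  # 'text' near the top
    return img[::-1, ::-1] if flipped else img

# Synthetic training set: label 1 = rotated 180 degrees.
X = np.array([make_sample(i % 2 == 1).ravel() for i in range(200)])
y = np.array([i % 2 for i in range(200)], dtype=float)

# Logistic regression by gradient descent (a CNN would replace this step).
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```

At recognition time, an image classified as "rotated" would simply be turned 180 degrees before being passed to the text recognizer.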


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
João Lobo ◽  
Rui Henriques ◽  
Sara C. Madeira

Abstract Background Three-way data started to gain popularity due to their increasing capacity to describe inherently multivariate and temporal events, such as biological responses, social interactions along time, urban dynamics, or complex geophysical phenomena. Triclustering, subspace clustering of three-way data, enables the discovery of patterns corresponding to data subspaces (triclusters) with values correlated across the three dimensions (observations × features × contexts). With an increasing number of algorithms being proposed, effectively comparing them with the state of the art is paramount. These comparisons are usually performed using real data, without a known ground truth, thus limiting the assessments. In this context, we propose a synthetic data generator, G-Tric, allowing the creation of synthetic datasets with configurable properties and the possibility to plant triclusters. The generator is prepared to create datasets resembling real three-way data from biomedical and social data domains, with the additional advantage of further providing the ground truth (triclustering solution) as output. Results G-Tric can replicate real-world datasets and create new ones that match researchers' needs across several properties, including data type (numeric or symbolic), dimensions, and background distribution. Users can tune the patterns and structure that characterize the planted triclusters (subspaces) and how they interact (overlapping). Data quality can also be controlled by defining the amount of missing values, noise, or errors. Furthermore, a benchmark of datasets resembling real data is made available, together with the corresponding triclustering solutions (planted triclusters) and generating parameters. Conclusions Triclustering evaluation using G-Tric makes it possible to combine both intrinsic and extrinsic metrics to compare solutions, producing more reliable analyses.
A set of predefined datasets, mimicking widely used three-way data and exploring crucial properties, was generated and made available, highlighting G-Tric's potential to advance the triclustering state of the art by easing the process of evaluating the quality of new triclustering approaches.
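The idea of planting a tricluster with a known ground truth can be illustrated in a few lines. The minimal sketch below (a toy analogue of what G-Tric generates, not its actual API) plants one constant tricluster in a Gaussian-background observations × features × contexts array.

```python
import numpy as np

rng = np.random.default_rng(42)

# Background: a 50 x 30 x 8 (observations x features x contexts) array.
data = rng.normal(0.0, 1.0, size=(50, 30, 8))

# Plant one constant tricluster with small noise (illustrative indices).
rows = np.arange(5, 15)   # observations in the tricluster
cols = np.arange(3, 9)    # features in the tricluster
ctxs = np.arange(2, 6)    # contexts in the tricluster
value = 5.0
data[np.ix_(rows, cols, ctxs)] = value + rng.normal(
    0.0, 0.1, (len(rows), len(cols), len(ctxs)))

# The ground truth is known by construction, so an extrinsic evaluation
# (did an algorithm recover exactly these rows/cols/contexts?) is possible.
sub = data[np.ix_(rows, cols, ctxs)]
print("planted subspace mean:", round(float(sub.mean()), 2))
```

G-Tric extends this idea with configurable pattern types, overlapping triclusters, symbolic data, and controlled missing values, noise, and errors.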


2021 ◽  
Vol 40 (3) ◽  
pp. 1-12
Author(s):  
Hao Zhang ◽  
Yuxiao Zhou ◽  
Yifei Tian ◽  
Jun-Hai Yong ◽  
Feng Xu

Reconstructing hand-object interactions is a challenging task due to strong occlusions and complex motions. This article proposes a real-time system that uses a single depth stream to simultaneously reconstruct hand poses, object shape, and rigid/non-rigid motions. To achieve this, we first train a joint learning network to segment the hand and object in a depth image, and to predict the 3D keypoints of the hand. With most layers shared by the two tasks, computation cost is saved for the real-time performance. A hybrid dataset is constructed here to train the network with real data (to learn real-world distributions) and synthetic data (to cover variations of objects, motions, and viewpoints). Next, the depth of the two targets and the keypoints are used in a uniform optimization to reconstruct the interacting motions. Benefitting from a novel tangential contact constraint, the system not only solves the remaining ambiguities but also keeps the real-time performance. Experiments show that our system handles different hand and object shapes, various interactive motions, and moving cameras.
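The abstract does not spell out the tangential contact constraint, but a common formulation of such a term penalizes relative sliding of the hand on the object surface while leaving normal-direction (pressing) motion free. The sketch below is a plausible reading of that idea, not necessarily the paper's exact energy term.

```python
import numpy as np

def tangential_sliding_penalty(v_hand, v_obj, normal):
    """Penalty on relative *tangential* motion at a hand-object contact.

    v_hand, v_obj: 3D frame-to-frame displacements of the contacting hand
    vertex and object surface point; normal: surface normal at the contact.
    A hypothetical formulation for illustration only.
    """
    n = normal / np.linalg.norm(normal)
    rel = v_hand - v_obj
    tangential = rel - np.dot(rel, n) * n  # project out the normal component
    return float(np.dot(tangential, tangential))

# Pure normal-direction motion (pressing) incurs no penalty ...
print(tangential_sliding_penalty(np.array([0., 0., 1.]), np.zeros(3),
                                 np.array([0., 0., 1.])))
# ... while sliding along the surface does.
print(tangential_sliding_penalty(np.array([1., 0., 0.]), np.zeros(3),
                                 np.array([0., 0., 1.])))
```

Summed over detected contact points and added to the reconstruction objective, a term of this shape constrains the otherwise ambiguous in-plane motion of occluded fingers without blocking plausible grasping motion.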


Author(s):  
Saheb Foroutaifar

Abstract The main objectives of this study were to compare the prediction accuracy of different Bayesian methods for traits with a wide range of genetic architectures using simulation and real data, and to assess the sensitivity of these methods to violations of their assumptions. For the simulation study, different scenarios were implemented based on two traits with low or high heritability and different numbers of QTL and distributions of their effects. For the real data analysis, a German Holstein dataset for milk fat percentage, milk yield, and somatic cell score was used. The simulation results showed that, with the exception of Bayes R, the methods were sensitive to changes in the number of QTL and the distribution of QTL effects. Having a distribution of QTL effects similar to what the different Bayesian methods assume for estimating marker effects did not improve their prediction accuracy. The Bayes B method gave accuracy higher than or equal to that of the rest. The real data analysis showed that, as in the simulation scenarios with a large number of QTL, there was no difference between the accuracies of the different methods for any of the traits.
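The simulation scenarios described (few vs. many QTL, low vs. high heritability) follow a standard recipe that can be sketched compactly. The snippet below is an illustrative reconstruction with assumed sizes and allele frequencies, not the study's actual simulation code.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trait(n_ind=500, n_markers=1000, n_qtl=50, h2=0.9):
    """Simulate a trait controlled by n_qtl of n_markers loci at heritability h2."""
    # Genotypes coded 0/1/2 at allele frequency 0.5 (a simplifying assumption).
    geno = rng.binomial(2, 0.5, size=(n_ind, n_markers)).astype(float)
    qtl = rng.choice(n_markers, size=n_qtl, replace=False)
    effects = rng.normal(0.0, 1.0, size=n_qtl)
    g = geno[:, qtl] @ effects                 # true genetic values
    var_e = g.var() * (1.0 - h2) / h2          # environmental variance for target h2
    y = g + rng.normal(0.0, np.sqrt(var_e), size=n_ind)
    return geno, y, g

# A "high heritability, many QTL" scenario:
geno, y, g = simulate_trait(n_qtl=50, h2=0.9)
print("realized h2 ~", round(float(g.var() / y.var()), 2))
```

Varying `n_qtl` (e.g., 50 vs. 500) and the distribution of `effects` (normal, heavy-tailed, or a mixture) reproduces the kind of architecture grid against which the Bayesian methods were compared.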


2021 ◽  
Vol 11 (9) ◽  
pp. 3863
Author(s):  
Ali Emre Öztürk ◽  
Ergun Erçelebi

A large amount of training image data is required for solving image classification problems using deep learning (DL) networks. In this study, we aimed to train DL networks with synthetic images generated using a game engine and to determine how they perform on real-image classification problems. The study presents the results of using corner detection and nearest three-point selection (CDNTS) layers to classify bird and rotary-wing unmanned aerial vehicle (RW-UAV) images, provides a comprehensive comparison of two experimental setups, and emphasizes the significant performance improvements in deep learning-based networks due to the inclusion of a CDNTS layer. Experiment 1 corresponds to training commonly used deep learning-based networks with synthetic data and testing image classification on real data. Experiment 2 corresponds to training the CDNTS layer and commonly used deep learning-based networks with synthetic data and testing image classification on real data. In experiment 1, the best area under the curve (AUC) value for image classification test accuracy was 72%. In experiment 2, using the CDNTS layer, the AUC value for image classification test accuracy was 88.9%. A total of 432 training combinations were investigated across the experimental setups. The networks were trained with various DL architectures using four different optimizers, considering all combinations of the batch size, learning rate, and dropout hyperparameters. The test accuracy AUC values for networks in experiment 1 ranged from 55% to 74%, whereas those for experiment 2 networks with a CDNTS layer ranged from 76% to 89.9%. The CDNTS layer thus has a considerable effect on the image classification accuracy of deep learning-based networks. AUC, F-score, and test accuracy measures were used to validate the success of the networks.
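The abstract does not detail how the CDNTS layer combines corner detection with nearest three-point selection; the sketch below is only an illustrative reading of its name, selecting the three detected corner points nearest to the point cloud's centroid as a compact geometric summary.

```python
import numpy as np

def nearest_three_points(corners):
    """Pick the three corner points nearest to the point cloud's centroid.

    `corners` is an (N, 2) array of detected corner coordinates (e.g., from
    any off-the-shelf corner detector). Hypothetical selection rule for
    illustration; the published layer may combine the steps differently.
    """
    corners = np.asarray(corners, dtype=float)
    centroid = corners.mean(axis=0)
    dist = np.linalg.norm(corners - centroid, axis=1)
    return corners[np.argsort(dist)[:3]]

# Toy detected corners: four outer corners plus a central cluster.
pts = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5], [6, 5], [4, 6]])
print(nearest_three_points(pts))
```

A fixed-size geometric feature of this kind could plausibly narrow the gap between synthetic renderings and real photographs, since corner layout is less sensitive to texture and lighting than raw pixels.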


2021 ◽  
Vol 13 (3) ◽  
pp. 1589
Author(s):  
Juan Sánchez-Fernández ◽  
Luis-Alberto Casado-Aranda ◽  
Ana-Belén Bastidas-Manzano

The limitations of self-report techniques (i.e., questionnaires or surveys) in measuring consumer response to advertising stimuli have necessitated more objective and accurate tools from the fields of neuroscience and psychology for the study of consumer behavior, resulting in the creation of consumer neuroscience. This recent marketing sub-field stems from a wide range of disciplines and applies multiple types of techniques to diverse advertising subdomains (e.g., advertising constructs, media elements, or prediction strategies). Due to its complex nature and continuous growth, this area of research calls for a clear understanding of its evolution, current scope, and potential domains in the field of advertising. Thus, this current research is among the first to apply a bibliometric approach to clarify the main research streams analyzing advertising persuasion using neuroimaging. Particularly, this paper combines a comprehensive review with performance analysis tools of 203 papers published between 1986 and 2019 in outlets indexed by the ISI Web of Science database. Our findings describe the research tools, journals, and themes that are worth considering in future research. The current study also provides an agenda for future research and therefore constitutes a starting point for advertising academics and professionals intending to use neuroimaging techniques.


Author(s):  
Alma Andersson ◽  
Joakim Lundeberg

Abstract Motivation Collection of spatial signals in large numbers has become a routine task in multiple omics fields, but parsing of these rich datasets still poses certain challenges. In whole or near-full transcriptome spatial techniques, spurious expression profiles are intermixed with those exhibiting an organized structure. To distinguish profiles with spatial patterns from the background noise, a metric that enables quantification of spatial structure is desirable. Current methods designed for similar purposes tend to be built around a framework of statistical hypothesis testing, hence we were compelled to explore a fundamentally different strategy. Results We propose an unexplored approach to analyzing spatial transcriptomics data, simulating diffusion of individual transcripts to extract genes with spatial patterns. The method performed as expected when presented with synthetic data. When applied to real data, it identified genes with distinct spatial profiles, involved in key biological processes or characteristic of certain cell types. Compared to existing methods, ours seemed to be less informed by the genes' expression levels and showed better time performance when run with multiple cores. Availability and implementation Open-source Python package with a command line interface (CLI), freely available at https://github.com/almaan/sepal under an MIT licence. A mirror of the GitHub repository can be found at Zenodo, doi: 10.5281/zenodo.4573237. Supplementary information Supplementary data are available at Bioinformatics online.
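The diffusion intuition can be demonstrated on a toy grid: a spatially organized expression pattern takes longer to "mix away" under simulated diffusion than an unstructured one, so time-to-homogeneity serves as a spatial-structure score. The sketch below illustrates that intuition with a simple lattice scheme and periodic boundaries; the sepal package's actual numerical scheme differs.

```python
import numpy as np

def diffusion_time(field, dt=0.1, tol=0.01, max_iter=10000):
    """Steps of lattice diffusion until the field is near-uniform.

    Structured patterns decay through slow, large-scale modes and thus
    need more steps than spatially random noise of the same range.
    """
    f = field.astype(float).copy()
    scale = field.max() - field.min() + 1e-12
    for it in range(max_iter):
        # 5-point Laplacian; np.roll gives periodic (wrap-around) edges.
        lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
               np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)
        f += dt * lap
        if f.max() - f.min() < tol * scale:
            return it
    return max_iter

rng = np.random.default_rng(0)
n = 20
noise = rng.random((n, n))                        # spatially unstructured "gene"
stripe = np.zeros((n, n)); stripe[:, :n // 2] = 1.0  # structured pattern
print("noise mixes in:", diffusion_time(noise), "steps")
print("stripe mixes in:", diffusion_time(stripe), "steps")
```

Ranking genes by such a score, rather than by a p-value from a hypothesis test, is the "fundamentally different strategy" the abstract refers to.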


2021 ◽  
Vol 15 (4) ◽  
pp. 1-20
Author(s):  
Georg Steinbuss ◽  
Klemens Böhm

Benchmarking unsupervised outlier detection is difficult. Outliers are rare, and existing benchmark data contains outliers with various and unknown characteristics. Fully synthetic data usually consists of outliers and regular instances with clear characteristics and thus, in principle, allows for a more meaningful evaluation of detection methods. Nonetheless, there have been only a few attempts to include synthetic data in benchmarks for outlier detection. This might be due to the imprecise notion of outliers or to the difficulty of arriving at a good coverage of different domains with synthetic data. In this work, we propose a generic process for the generation of datasets for such benchmarking. The core idea is to reconstruct regular instances from existing real-world benchmark data while generating outliers so that they exhibit insightful characteristics. We then describe three instantiations of this generic process that generate outliers with specific characteristics, like local outliers. To validate our process, we perform a benchmark with state-of-the-art detection methods and carry out experiments to study the quality of data reconstructed in this way. Next to showcasing the workflow, this confirms the usefulness of our proposed process. In particular, our process yields regular instances close to the ones from real data. Summing up, we propose and validate a new and practical process for the benchmarking of unsupervised outlier detection.
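The core idea, reconstructing regular instances from real data and planting outliers with a known, controllable characteristic, can be sketched in a few lines. The toy below (one hypothetical instantiation, not one of the paper's three) fits a Gaussian to stand-in "real" data, resamples regulars from the fit, and plants outliers at a guaranteed Mahalanobis distance from the center.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for real benchmark data: one blob of regular instances.
real = rng.normal(loc=[2.0, -1.0], scale=0.5, size=(500, 2))

# "Reconstruct" regulars: fit mean/covariance and resample from the fit.
mu, cov = real.mean(axis=0), np.cov(real.T)
regulars = rng.multivariate_normal(mu, cov, size=500)

# Plant outliers with a known characteristic: Mahalanobis distance of 5-8
# from the fitted center, in random directions.
directions = rng.normal(size=(25, 2))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
L = np.linalg.cholesky(cov)  # cov = L @ L.T, so mu + z @ L.T has distance ||z||
outliers = mu + (directions * rng.uniform(5.0, 8.0, (25, 1))) @ L.T

data = np.vstack([regulars, outliers])
labels = np.array([0] * 500 + [1] * 25)  # ground truth comes for free
print(data.shape, labels.sum(), "planted outliers")
```

Because the labels and the outlier-generating mechanism are both known, detection methods can be scored exactly, which is precisely what existing real-world benchmarks with unknown outlier characteristics cannot offer.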

