Estimating and correcting the amplitude radiation pattern of a virtual source

Geophysics ◽  
2009 ◽  
Vol 74 (2) ◽  
pp. SI27-SI36 ◽  
Author(s):  
Joost van der Neut ◽  
Andrey Bakulin

In the virtual source (VS) method, we crosscorrelate seismic recordings at two receivers to create a new data set as if one of these receivers were a virtual source and the other a receiver. We focus on the amplitudes and kinematics of VS data, generated by an array of active sources at the surface and recorded by an array of receivers in a borehole. The quality of the VS data depends on the radiation pattern of the virtual source, which in turn is controlled by the spatial aperture of the surface source distribution. Theory suggests that when the receivers are surrounded by multi-component sources completely filling a closed surface, the virtual source has an isotropic radiation pattern and the VS data possess true amplitudes. In practical applications, limited source aperture and deployment of a single source type create an anisotropic radiation pattern of the virtual source, leading to distorted amplitudes. This pattern can be estimated by autocorrelating the spatial Fourier transform of the downgoing wavefield in the special case of a laterally invariant medium. The VS data can be improved by deconvolving them with the estimated amplitude radiation pattern in the frequency-wavenumber domain. This operation alters the amplitude spectrum but not the phase of the data. We can also steer the virtual source by assigning it a new desired amplitude radiation pattern, provided that sufficient illumination exists in the desired directions. Alternatively, time-gating the downgoing wavefield before crosscorrelation, already common practice in implementing the VS method, can improve the radiation characteristics of a virtual source.
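A minimal sketch of the crosscorrelation step and a frequency-domain pattern deconvolution, under simplifying assumptions: `d_down` and `u_up` are hypothetical arrays of the downgoing and upgoing wavefields per surface source, and the stabilized division stands in for the paper's frequency-wavenumber deconvolution (here shown per frequency only).

```python
# Sketch of virtual-source redatuming by crosscorrelation, assuming
# d_down[s, t] = downgoing wavefield at the virtual-source receiver and
# u_up[s, t] = upgoing wavefield at a second receiver, for s surface
# sources; array names are illustrative, not from the paper.
import numpy as np

def virtual_source_trace(d_down, u_up):
    """Crosscorrelate per source in the frequency domain and stack."""
    D = np.fft.rfft(d_down, axis=-1)      # downgoing at VS location
    U = np.fft.rfft(u_up, axis=-1)        # upgoing at receiver
    vs = np.sum(np.conj(D) * U, axis=0)   # correlate, then sum over sources
    return np.fft.irfft(vs)

def deconvolve_pattern(vs_spectrum, gamma_amp, eps=1e-3):
    """Stabilized amplitude-only deconvolution: the division rescales the
    amplitude spectrum by the estimated radiation pattern |gamma_amp|
    while leaving the phase of vs_spectrum untouched."""
    g = np.abs(gamma_amp)
    return vs_spectrum / (g + eps * g.max())
```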

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Mansheng Xiao ◽  
Yuezhong Wu ◽  
Guocai Zuo ◽  
Shuangnan Fan ◽  
Huijun Yu ◽  
...  

Next-generation networks are data-driven by design but face uncertainty due to changing user-group patterns and the hybrid nature of the infrastructures running these systems. Meanwhile, the amount of data gathered in these systems keeps growing, and classifying and processing this massive data to reduce the volume of network transmission is a problem well worth solving. Recent research applies deep learning to these and related issues. However, deep learning faces problems such as overfitting that may undermine its effectiveness in solving different network problems. This paper addresses the overfitting problem of convolutional neural network (CNN) models in practical applications. An algorithm combining max-pooling dropout and weight decay is proposed to avoid overfitting. First, max-pooling dropout is designed into the pooling layer of the model to sparsify the neurons; then, weight-decay regularization is introduced to reduce model complexity when the gradient of the loss function is computed by backpropagation. Theoretical analysis and experiments show that the proposed method effectively avoids overfitting and reduces the classification error rate by more than 10% on average compared with other methods. The proposed method can improve the quality of deep-learning-based solutions for data management and processing in next-generation networks.
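A minimal PyTorch sketch of the two named ingredients, under stated assumptions: the paper's max-pooling dropout operates inside the pooling window, which is approximated here by standard dropout applied to pooled activations, and weight decay enters as the optimizer's L2 penalty. The layer sizes are illustrative, not the authors' architecture.

```python
# Sketch: dropout after max pooling plus weight decay via the optimizer.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10, p_drop=0.3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Dropout(p_drop),   # sparsify pooled activations (approximation
                                  # of in-window max-pooling dropout)
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Dropout(p_drop),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)  # for 28x28 inputs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
# weight_decay adds the L2 penalty term to every gradient step
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=5e-4)
```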


Geophysics ◽  
2007 ◽  
Vol 72 (4) ◽  
pp. V79-V86 ◽  
Author(s):  
Kurang Mehta ◽  
Andrey Bakulin ◽  
Jonathan Sheiman ◽  
Rodney Calvert ◽  
Roel Snieder

The virtual source method has recently been proposed to image and monitor below complex and time-varying overburden. The method requires surface shooting recorded at downhole receivers placed below the distorting or changing part of the overburden. Redatuming with the measured Green’s function allows the reconstruction of a complete downhole survey as if the sources were also buried at the receiver locations. There are still some challenges that need to be addressed in the virtual source method, such as limited acquisition aperture and energy coming from the overburden. We demonstrate that up-down wavefield separation can substantially improve the quality of virtual source data. First, it allows us to eliminate artifacts associated with the limited acquisition aperture typically used in practice. Second, it allows us to reconstruct a new optimized response in the absence of downgoing reflections and multiples from the overburden. These improvements are illustrated on a synthetic data set of a complex layered model modeled after the Fahud field in Oman, and on ocean-bottom seismic data acquired in the Mars field in the deepwater Gulf of Mexico.
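A hedged sketch of the up-down separation step for dual-sensor ocean-bottom data, assuming calibrated hydrophone and vertical-geophone traces; the half-sum/half-difference (PZ summation) form shown here is the textbook scheme, not necessarily the exact processing used in the paper.

```python
# Sketch of dual-sensor up-down wavefield separation (PZ summation),
# assuming p[t] = calibrated pressure and z[t] = vertical particle
# velocity at an ocean-bottom receiver; scaling is schematic.
import numpy as np

def separate_wavefields(p, z):
    down = 0.5 * (p + z)   # downgoing: pressure and velocity in phase
    up = 0.5 * (p - z)     # upgoing: opposite polarity on the geophone
    return down, up

# The virtual-source gather is then built only from the separated
# fields: crosscorrelate the (time-gated) downgoing wave at the VS
# location with the upgoing wave at each receiver and stack.
```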


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Yang Zhao ◽  
Tao Liu ◽  
Genyang Tang ◽  
Houzhu Zhang ◽  
Madhumita Sengupta

Abstract Based on seismic interferometry, the virtual source (VS) method is able to produce virtual gathers at buried receiver locations by crosscorrelating the direct-downgoing waves with the corresponding reflected-upgoing waves from surface-source gathers. Theoretically, the VS records can improve seismic quality with less negative impact from overburden complexity. However, shallow complex structures and weathering layers near the surface not only severely distort the wavepaths, but also introduce multiples, surface waves, scattering noise, and interference among different wave modes. These additional seismic responses contaminate both the direct-downgoing and reflected-upgoing wavefields. As a result, the VS gathers experience spurious events and unbalanced illumination associated with distorted radiation patterns. A conventional stacking operator can produce significant artifacts for sources associated with ineffective wavepath cancellation. We review three publications and summarize a comprehensive workflow to address these issues using data-driven offset stacking, wavelet-crosscorrelation filtering, and radiation-pattern correction. A data-driven offset-stacking scheme, in which each individual source contribution is weighted by quality measures, is applied over the available offsets. The wavelet crosscorrelation transforms time-offset data into local time-frequency and local time-frequency-wavenumber domains. Filters are designed from the power spectrum in each domain. The radiation-pattern correction spatially alters the contaminated direct wavefields using a zero-phase matched filter, such that the filtered wavefield is consistent with the model-based direct P-wavefields observed at the buried receiver locations. Our proposed workflow produces significant improvements, as demonstrated on 13 time-lapse field surveys that exhibited substantial repeatability problems across a 17-month survey gap.
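A minimal sketch of one named step, quality-weighted offset stacking: each source's contribution to a VS trace is weighted by a per-source quality measure before summation. The correlation-with-pilot weighting used here is an illustrative choice, not the authors' exact measure.

```python
# Sketch of data-driven (quality-weighted) stacking, assuming
# traces[s, t] holds one surface source's contribution to a VS trace.
import numpy as np

def weighted_stack(traces):
    pilot = traces.mean(axis=0)                        # simple pilot trace
    w = np.array([np.dot(t, pilot) for t in traces])   # per-source quality
    w = np.clip(w, 0.0, None)                          # drop anti-correlated sources
    w /= w.sum() + 1e-12                               # normalize weights
    return np.tensordot(w, traces, axes=1)             # weighted sum over sources
```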


2020 ◽  
Vol 71 (7) ◽  
pp. 868-880
Author(s):  
Nguyen Hong-Quan ◽  
Nguyen Thuy-Binh ◽  
Tran Duc-Long ◽  
Le Thi-Lan

Along with the strong development of camera networks, video analysis systems have become more and more popular and have been applied in various practical applications. In this paper, we focus on the person re-identification (person ReID) task, a crucial step of video analysis systems. The purpose of person ReID is to associate multiple images of a given person moving through a non-overlapping camera network. Many efforts have been devoted to person ReID. However, most studies on person ReID deal only with well-aligned bounding boxes that are detected manually and considered perfect inputs for person ReID. In fact, when building a fully automated person ReID system, the quality of the two preceding steps, person detection and tracking, may strongly affect person ReID performance. The contributions of this paper are twofold. First, a unified framework for person ReID based on deep learning models is proposed. In this framework, a deep neural network for person detection is coupled with a deep-learning-based tracking method. In addition, features extracted from an improved ResNet architecture are proposed for person representation to achieve higher ReID accuracy. Second, our self-built dataset is introduced and employed to evaluate all three steps of the fully automated person ReID framework.
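A minimal sketch of the ReID matching stage, under assumptions: a stock torchvision ResNet-50 backbone stands in for the paper's improved ResNet, inputs are hypothetical person crops from the detection/tracking stages, and cosine similarity ranks the gallery.

```python
# Sketch: ResNet features for person representation + cosine ranking.
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # keep the 2048-d pooled feature
backbone.eval()

@torch.no_grad()
def embed(batch):
    """batch: (N, 3, H, W) person crops -> L2-normalized features."""
    return F.normalize(backbone(batch), dim=1)

def rank_gallery(query_feat, gallery_feats):
    sims = gallery_feats @ query_feat            # cosine similarity
    return torch.argsort(sims, descending=True)  # best matches first
```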


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 558
Author(s):  
Anping Song ◽  
Xiaokang Xu ◽  
Xinyi Zhai

Rotation-Invariant Face Detection (RIFD) has been widely used in practical applications; however, the problem of adjusting for the rotation-in-plane (RIP) angle of the human face still remains. Recently, several methods based on neural networks have been proposed to solve the RIP-angle problem. However, these methods have various limitations, including low detection speed, large model size, and limited detection accuracy. To solve these problems, we propose a new network, called the Searching Architecture Calibration Network (SACN), which combines architecture search, a fully convolutional network (FCN), and bounding-box center clustering (CC). SACN was tested on the challenging Multi-Oriented Face Detection Data Set and Benchmark (MOFDDB) and achieved higher detection accuracy at almost the same speed as existing detectors. Moreover, the average angle error is reduced from the current 12.6° to 10.5°.
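A hedged sketch of one interpretation of bounding-box center clustering (CC): cluster the centers of overlapping candidate boxes and keep the highest-scoring detection per cluster. The abstract does not spell out the CC algorithm, so DBSCAN and its parameters are purely illustrative.

```python
# Sketch: merge duplicate face detections by clustering box centers.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_detections(boxes, scores, eps=12.0):
    # boxes: (N, 4) as [x1, y1, x2, y2]; scores: (N,) confidences
    centers = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)
    labels = DBSCAN(eps=eps, min_samples=1).fit_predict(centers)
    keep = [np.where(labels == k)[0][np.argmax(scores[labels == k])]
            for k in np.unique(labels)]       # best box per cluster
    return boxes[keep], scores[keep]
```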


Trials ◽  
2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Zhuoran Kuang ◽  
Xiaoyan Li ◽  
Jianxiong Cai ◽  
Yaolong Chen ◽  
...  

Abstract Objective To assess the registration quality of traditional Chinese medicine (TCM) clinical trials for COVID-19, H1N1, and SARS. Method We searched for clinical trial registrations of TCM in the WHO International Clinical Trials Registry Platform (ICTRP) and the Chinese Clinical Trial Registry (ChiCTR) on April 30, 2020. The registration quality assessment was based on the WHO Trial Registration Data Set (Version 1.3.1) plus extra items for TCM information, including TCM background, theoretical origin, specific diagnosis criteria, description of the intervention, and outcomes. Results A total of 136 records were examined, including 129 registrations for coronavirus disease 2019 (COVID-19) and 7 for H1N1 influenza (H1N1). The deficiencies in the registration of TCM clinical trials (CTs) mainly concern the low percentages reporting detailed information about interventions (46.6%), primary outcome(s) (37.7%), and key secondary outcome(s) (18.4%), and a lack of summary results (0%). For the TCM items, none of the clinical trial registrations reported the TCM background and rationale; only 6.6% provided the TCM diagnosis criteria or a description of the TCM intervention; and 27.9% provided TCM outcome(s). Conclusion Overall, although the number of registrations of TCM CTs increased, the registration quality was low. The registration quality of TCM CTs should be improved by more detailed reporting of interventions and outcomes, TCM-specific information, and sharing of result data.


2021 ◽  
Vol 9 (2_suppl) ◽  
pp. 2325967121S0001
Author(s):  
François Sigonney ◽  
Camille Steltzlen ◽  
Pierre Alban Bouché ◽  
Nicolas Pujol

Objectives: The Internet, especially YouTube, is an important and growing source of medical information, yet its content is poorly evaluated. The objective of this study was to analyze the quality of YouTube video content on meniscus repair. The hypothesis was that this source of information is not relevant for patients. Methods: A YouTube search was carried out using the keywords "meniscus repair". Videos had to have more than 10,000 views to be included. The videos were analyzed by two evaluators. Various features of the videos were recorded (number of views, date of publication, "likes", "dislikes", number of comments, source, type of content, and the origin of the video). The quality of the video content was analyzed using two validated information scores: the JAMA benchmark score (0 to 4) and the modified DISCERN score (0 to 5). A specific meniscus repair score (MRSS, scored out of 22) was developed for this study, in the same way that specific scores have been developed in similar studies (anterior cruciate ligament, spine, etc.). Results: Forty-four videos were included in the study. The average number of views per video was 180,100 (± 222,000), for a total of 7,924,095 views. The majority of the videos were from North America (90.9%). In most cases, the source (uploader) that published the video was a doctor (59.1%); a manufacturer, an institution, and a non-medical source were the other sources. The content actually contained information on meniscus repair in only 50% of the cases. The mean scores for the JAMA benchmark, modified DISCERN, and MRSS were 1.6/4 ± 0.75, 1.2/5 ± 1.02, and 4.5/22 ± 4.01, respectively. No correlation was found between the number of views and the quality of the videos. The quality of videos from medical sources was not superior to those from other sources. Conclusion: The content of YouTube videos on meniscus repair is of very low quality. Physicians should inform patients and, more importantly, contribute to the improvement of these contents.
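A minimal sketch of the views-versus-quality correlation check the abstract reports, under assumptions: a rank correlation (Spearman) is a common choice for skewed view counts, and the arrays below are placeholders, not the study's data.

```python
# Sketch: test for correlation between view counts and quality scores.
import numpy as np
from scipy.stats import spearmanr

views = np.array([180_100, 52_000, 310_000, 12_500])   # illustrative values
mrss = np.array([4.5, 2.0, 6.5, 3.0])                  # illustrative MRSS scores

rho, p_value = spearmanr(views, mrss)                  # rank correlation
print(f"Spearman rho={rho:.2f}, p={p_value:.3f}")      # study found no correlation
```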


Author(s):  
Raul E. Avelar ◽  
Karen Dixon ◽  
Boniphace Kutela ◽  
Sam Klump ◽  
Beth Wemple ◽  
...  

The calibration of safety performance functions (SPFs) is a mechanism included in the Highway Safety Manual (HSM) to adjust SPFs in the HSM for use in intended jurisdictions. Critically, the quality of the calibration procedure must be assessed before using the calibrated SPFs. Multiple resources to aid practitioners in calibrating SPFs have been developed in the years following the publication of the HSM 1st edition. Similarly, the literature suggests multiple ways to assess the goodness-of-fit (GOF) of a calibrated SPF to a data set from a given jurisdiction. This paper uses the calibration results of multiple intersection SPFs to a large Mississippi safety database to examine the relations between multiple GOF metrics. The goal is to develop a sensible single index that leverages the joint information from multiple GOF metrics to assess overall quality of calibration. A factor analysis applied to the calibration results revealed three underlying factors explaining 76% of the variability in the data. From these results, the authors developed an index and performed a sensitivity analysis. The key metrics were found to be, in descending order: the deviation of the cumulative residual (CURE) plot from the 95% confidence area, the mean absolute deviation, the modified R-squared, and the value of the calibration factor. This paper also presents comparisons between the index and alternative scoring strategies, as well as an effort to verify the results using synthetic data. The developed index is recommended to comprehensively assess the quality of the calibrated intersection SPFs.
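A minimal sketch of the approach as described: run a factor analysis on a table of GOF metrics from many calibrations, then form a single index from the loadings. The metric list matches the abstract, but the data and the communality-based weighting are illustrative, not the authors' final index.

```python
# Sketch: factor analysis over GOF metrics -> one composite index.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# rows = calibrated SPFs, columns = GOF metrics (CURE-plot deviation,
# mean absolute deviation, modified R-squared, calibration factor)
X = np.random.default_rng(0).normal(size=(50, 4))   # placeholder data
Z = StandardScaler().fit_transform(X)               # standardize metrics

fa = FactorAnalysis(n_components=3).fit(Z)          # three underlying factors
loadings = fa.components_                           # (3 factors x 4 metrics)

# one crude composite: weight each metric by its communality
weights = (loadings ** 2).sum(axis=0)
index = Z @ (weights / weights.sum())               # single score per SPF
```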


1960 ◽  
Vol 38 (1) ◽  
pp. 78-99 ◽  
Author(s):  
A. Ishimaru ◽  
G. Held

Part I considers the problem of determining the source distribution over a circular aperture required to produce a prescribed radiation pattern. In particular, the problem of optimizing the narrow broadside pattern from a circular aperture is discussed in detail, and an improved design method over Taylor's line-source method is devised. Numerical examples are given. Part II deals with the analysis of the radiation pattern from a circular aperture from γ1 to γ2 with traveling-wave-type source functions. Expressions suitable for both analysis and synthesis are obtained, and narrow-beam and shaped-beam synthesis are discussed.
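A minimal sketch of the classical baseline behind Part I: the far-field pattern of a uniformly illuminated circular aperture is the Airy function 2·J1(u)/u with u = k·a·sin θ; tapering the source distribution over the aperture reshapes this pattern, which is what the synthesis methods exploit. Parameter values are illustrative.

```python
# Sketch: far-field pattern of a uniform circular aperture (Airy pattern).
import numpy as np
from scipy.special import j1

def circular_aperture_pattern(theta, radius, wavelength):
    u = (2 * np.pi / wavelength) * radius * np.sin(theta)
    u = np.where(np.abs(u) < 1e-9, 1e-9, u)   # avoid 0/0 at broadside
    return 2 * j1(u) / u                       # normalized field pattern

theta = np.linspace(-np.pi / 2, np.pi / 2, 1001)
field = circular_aperture_pattern(theta, radius=5.0, wavelength=1.0)
pattern_db = 20 * np.log10(np.abs(field) + 1e-12)   # sidelobe levels in dB
```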


2021 ◽  
Vol 10 (7) ◽  
pp. 436
Author(s):  
Amerah Alghanim ◽  
Musfira Jilani ◽  
Michela Bertolotto ◽  
Gavin McArdle

Volunteered Geographic Information (VGI) is often collected by non-expert users. This raises concerns about the quality and veracity of such data. There has been much effort to understand and quantify the quality of VGI. Extrinsic measures, which compare VGI to authoritative data sources such as National Mapping Agencies, are common, but the cost and slow update frequency of such data hinder the task. On the other hand, intrinsic measures, which compare the data to heuristics or models built from the VGI data itself, are becoming increasingly popular. Supervised machine learning techniques are particularly suitable for intrinsic measures of quality, as they can infer and predict the properties of spatial data. In this article, we are interested in assessing the quality of semantic information, such as the road type, associated with data in OpenStreetMap (OSM). We have developed a machine learning approach which utilises new intrinsic input features collected from the VGI dataset. Specifically, using our proposed novel approach we obtained an average classification accuracy of 84.12%. This result outperforms existing techniques on the same semantic inference task. The trustworthiness of the data used for developing and training machine learning models is important. To address this issue, we have also developed a new trustworthiness measure using direct and indirect characteristics of OSM data, such as its edit history, along with an assessment of the users who contributed the data. An evaluation of the impact of data determined to be trustworthy within the machine learning model shows that the trusted data collected with the new approach improves the prediction accuracy of our machine learning technique. Specifically, our results demonstrate that the classification accuracy of our developed model is 87.75% when applied to a trusted dataset and 57.98% when applied to an untrusted dataset. Consequently, such results can be used to assess the quality of OSM and suggest improvements to the data set.
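A minimal sketch of intrinsic-feature classification of OSM semantics, under assumptions: a random forest stands in for the authors' model, and the feature table (e.g. geometry, edit history, contributor characteristics) is a placeholder with made-up values.

```python
# Sketch: predict OSM road type from intrinsic VGI features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))     # e.g. length, sinuosity, connectivity,
                                   # edit count, contributor reputation
y = rng.integers(0, 4, size=1000)  # placeholder road-type labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```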

