Minimization of self-potential survey mis-ties acquired with multiple reference locations

Geophysics ◽  
2008 ◽  
Vol 73 (2) ◽  
pp. F71-F81 ◽  
Author(s):  
Burke J. Minsley ◽  
Darrell A. Coles ◽  
Yervant Vichabian ◽  
Frank Dale Morgan

Self-potential (SP) surveys often involve many interconnected lines of data along available roads or trails, with the ultimate goal of producing a unique map of electric potentials at each station relative to a single reference point. Multiple survey lines can be tied together by collecting data along intersecting transects and enforcing Kirchhoff’s voltage law, which requires that the total potential drop around any closed loop equals zero. In practice, however, there is often a nonzero loop-closure error caused by noisy data; traditional SP processing methods redistribute this error evenly over the measurements that form each loop. The task of distributing errors and tying lines together becomes nontrivial when many lines of data form multiple interconnected loops because the loop-closure errors are not independent, and a unique potential field cannot be determined by processing lines sequentially. We present a survey-consistent processing method that produces a unique potential field by minimizing the loop-closure errors over all lines of data simultaneously. When there are no interconnected survey loops, the method is equivalent to traditional processing schemes. The task of computing the potential field is posed as a linear inverse problem, which easily incorporates prior information about measurement errors and model constraints. We investigate the use of both ℓ2 and ℓ1 measures of data misfit, the latter requiring an iterative-solution method with increased computational cost. The ℓ1 method produces more reliable results when outliers are present in the data, and is similar to the ℓ2 result when only Gaussian noise is present. Two synthetic examples are used to illustrate this methodology, which is subsequently applied to a field data set collected as part of a geothermal exploration campaign in Nevis, West Indies.
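The simultaneous ℓ2 solution the abstract describes can be sketched as a small linear inverse problem. Everything below is a hypothetical mini-survey (station count, connectivity, and voltages are invented for illustration): each measurement relates two station potentials, the reference station is fixed to zero, and one least-squares solve distributes the loop-closure error over all lines at once.

```python
import numpy as np

# Hypothetical mini-survey: four stations joined by five measured
# potential differences (p_to - p_from) that form interconnected
# loops. Solving all stations simultaneously by least squares is the
# l2 case described in the abstract: loop-closure error is spread
# over every line at once. Station 0 is the single reference.

measurements = [            # (from_station, to_station, difference in mV)
    (0, 1, 10.2),
    (1, 2, 5.1),
    (2, 3, -3.0),
    (3, 0, -12.0),          # closing the outer loop: sum = 0.3, not 0
    (1, 3, 2.4),
]

n_stations = 4
A = np.zeros((len(measurements), n_stations))
d = np.zeros(len(measurements))
for row, (i, j, v) in enumerate(measurements):
    A[row, j] = 1.0         # potential at "to" station
    A[row, i] = -1.0        # minus potential at "from" station
    d[row] = v

# Fix the reference by dropping its column (its potential is zero)
m, *_ = np.linalg.lstsq(A[:, 1:], d, rcond=None)
potentials = np.concatenate([[0.0], m])
print(potentials)           # unique potentials relative to station 0
```

Because the fitted potentials come from a single model vector, every closed loop of fitted values sums to zero exactly, which is the survey-consistency property the paper formalizes.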

Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. IM1-IM9 ◽  
Author(s):  
Nathan Leon Foks ◽  
Richard Krahenbuhl ◽  
Yaoguo Li

Compressive inversion uses computational algorithms that decrease the time and storage needs of a traditional inverse problem. Most compression approaches focus on the model domain, and very few, other than traditional downsampling, focus on the data domain for potential-field applications. To further the compression in the data domain, a direct and practical approach to the adaptive downsampling of potential-field data for large inversion problems has been developed. The approach is formulated to significantly reduce the quantity of data in relatively smooth or quiet regions of the data set, while preserving the signal anomalies that contain the relevant target information. Two major benefits arise from this form of compressive inversion. First, because the approach compresses the problem in the data domain, it can be applied immediately without the addition of, or modification to, existing inversion software. Second, as most industry software uses some form of model or sensitivity compression, the addition of this adaptive data sampling creates a complete compressive inversion methodology whereby the reduction of computational cost is achieved simultaneously in the model and data domains. We applied the method to a synthetic magnetic data set and two large field magnetic data sets; however, the method is also applicable to other data types. Our results showed that the relevant model information is maintained after inversion despite using 1%–5% of the data.
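The core idea of keeping dense sampling where the anomaly lives and sparse sampling in quiet regions can be illustrated with a minimal sketch. This is not the authors' algorithm; it is a simple gradient-threshold rule on an invented 1D profile, with all values (threshold, stride, anomaly shape) hypothetical.

```python
import numpy as np

# Illustrative sketch (not the paper's method): adaptively downsample
# a 1D magnetic profile by keeping every point where the local
# gradient magnitude exceeds a threshold (anomalous region) and only
# every k-th point in smooth, quiet regions.

def adaptive_downsample(x, data, grad_threshold, quiet_stride=10):
    grad = np.abs(np.gradient(data, x))   # local slope of the profile
    keep = grad >= grad_threshold         # dense where the signal varies
    keep[::quiet_stride] = True           # coarse coverage elsewhere
    idx = np.flatnonzero(keep)
    return x[idx], data[idx]

# Synthetic profile: flat background plus one compact anomaly
x = np.linspace(0.0, 100.0, 1001)
data = 50.0 * np.exp(-((x - 60.0) ** 2) / 10.0)  # nT, hypothetical
xs, ds = adaptive_downsample(x, data, grad_threshold=1.0)
print(len(xs), "of", len(x), "points retained")
```

The retained set is much smaller than the original, yet the anomaly peak and its flanks survive, which is the property that lets inversion recover the target from a few percent of the data.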


Molecules ◽  
2021 ◽  
Vol 26 (13) ◽  
pp. 3978
Author(s):  
Rocco Peter Fornari ◽  
Piotr de Silva

Discovering new materials for energy storage requires reliable and efficient protocols for predicting key properties of unknown compounds. In the context of the search for new organic electrolytes for redox flow batteries, we present and validate a robust procedure to calculate the redox potentials of organic molecules at any pH value, using widely available quantum chemistry and cheminformatics methods. Using a consistent experimental data set for validation, we explore and compare a few different methods for calculating reaction free energies, the treatment of solvation, and the effect of pH on redox potentials. We find that the B3LYP hybrid functional with the COSMO solvation method, in conjunction with thermal contributions evaluated from BLYP gas-phase harmonic frequencies, yields a good prediction of pH = 0 redox potentials at a moderate computational cost. To predict how the potentials are affected by pH, we propose an improved version of the Alberty-Legendre transform that allows the construction of a more realistic Pourbaix diagram by taking into account how the protonation state changes with pH.
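The pH dependence of a redox potential can be sketched with the standard Nernstian relation; note this is the textbook slope, not the paper's improved Alberty-Legendre transform, and the couple and its pH = 0 potential below are hypothetical.

```python
import math

# Standard Nernstian sketch (not the paper's modified transform): for
# a couple transferring n electrons and m protons, the potential
# shifts by -(R*T*ln10/F)*(m/n) volts per pH unit at 298.15 K.

R = 8.314462618      # J mol^-1 K^-1
T = 298.15           # K
F = 96485.33212      # C mol^-1

def potential_at_ph(e0_v, ph, n_electrons, m_protons):
    slope = (R * T * math.log(10.0) / F) * (m_protons / n_electrons)
    return e0_v - slope * ph

# Hypothetical 2e-/2H+ quinone-like couple with E(pH = 0) = 0.70 V
print(round(potential_at_ph(0.70, 7.0, 2, 2), 3))
```

A 2e-/2H+ couple loses about 59 mV per pH unit, so its potential at pH 7 sits roughly 0.41 V below the pH = 0 value; the paper's transform refines this picture by tracking how protonation states actually change with pH.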


2016 ◽  
Vol 311 (3) ◽  
pp. F539-F547 ◽  
Author(s):  
Minhtri K. Nguyen ◽  
Dai-Scott Nguyen ◽  
Minh-Kevin Nguyen

Because changes in the plasma water sodium concentration ([Na+]pw) are clinically due to changes in the mass balance of Na+, K+, and H2O, the analysis and treatment of the dysnatremias are dependent on the validity of the Edelman equation in defining the quantitative interrelationship between the [Na+]pw and the total exchangeable sodium (Nae), total exchangeable potassium (Ke), and total body water (TBW) (Edelman IS, Leibman J, O'Meara MP, Birkenfeld LW. J Clin Invest 37: 1236–1256, 1958): [Na+]pw = 1.11(Nae + Ke)/TBW − 25.6. The interrelationship between [Na+]pw and Nae, Ke, and TBW in the Edelman equation is empirically determined by accounting for measurement errors in all of these variables. In contrast, linear regression analysis of the same data set using [Na+]pw as the dependent variable yields the following equation: [Na+]pw = 0.93(Nae + Ke)/TBW + 1.37. Moreover, based on the study by Boling et al. (Boling EA, Lipkind JB. 18: 943–949, 1963), the [Na+]pw is related to the Nae, Ke, and TBW by the following linear regression equation: [Na+]pw = 0.487(Nae + Ke)/TBW + 71.54. The reasons for the disparities between the slopes and y-intercepts of these three equations have been unknown. In this mathematical analysis, we demonstrate that the disparities between the slope and y-intercept in these three equations can be explained by how the osmotically inactive Na+ and K+ storage pool is quantitatively accounted for. Our analysis also indicates that the osmotically inactive Na+ and K+ storage pool is dynamically regulated and that changes in the [Na+]pw can be predicted based on changes in the Nae, Ke, and TBW despite dynamic changes in the osmotically inactive Na+ and K+ storage pool.
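The three regression relations quoted in the abstract are easy to compare numerically. The coefficients below are taken directly from the abstract; the patient values (Nae, Ke, TBW) are hypothetical, chosen only to show how far the three predictions diverge for the same inputs.

```python
# The three [Na+]pw regressions from the abstract (coefficients as
# quoted there), evaluated on one hypothetical set of inputs.

def na_pw_edelman(na_e, k_e, tbw):
    return 1.11 * (na_e + k_e) / tbw - 25.6

def na_pw_regression(na_e, k_e, tbw):
    return 0.93 * (na_e + k_e) / tbw + 1.37

def na_pw_boling(na_e, k_e, tbw):
    return 0.487 * (na_e + k_e) / tbw + 71.54

# Hypothetical subject: Nae and Ke in mmol, TBW in liters
na_e, k_e, tbw = 2800.0, 3000.0, 40.0   # (Nae + Ke)/TBW = 145 mmol/L
for f in (na_pw_edelman, na_pw_regression, na_pw_boling):
    print(f.__name__, round(f(na_e, k_e, tbw), 2))
```

For the same (Nae + Ke)/TBW of 145 mmol/L the three fits give roughly 135, 136, and 142 mmol/L, which is the disparity the paper attributes to how each analysis handles the osmotically inactive storage pool.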


Author(s):  
Zhihui Yang ◽  
Xiangyu Tang ◽  
Lijuan Zhang ◽  
Zhiling Yang

Human pose estimation can be used in action recognition, video surveillance, and other fields, and has received a great deal of attention. Since the flexibility of human joints and environmental factors greatly influence pose estimation accuracy, related research faces many challenges. In this paper, we incorporate pyramid convolution and an attention mechanism into the residual block, and introduce a hybrid structure model that synthetically applies the local and global information of the image to keypoint detection. In addition, our improved structure model adopts grouped convolution, and the attention module used is lightweight, which reduces the computational cost of the network. Simulation experiments on the MS COCO human body keypoint detection data set show that, compared with the Simple Baseline model, our model is similar in parameters and GFLOPs (giga floating-point operations), but achieves better detection accuracy in multi-person scenes.


2021 ◽  
Vol 27 (3) ◽  
pp. 8-34
Author(s):  
Tatyana Cherkashina

The article presents the experience of converting non-targeted administrative data into research data, using as an example data on the income and property of deputies from local legislative bodies of the Russian Federation for 2019, collected as part of anticorruption operations. This particular empirical fragment was selected for the pilot study of administrative data, which includes assessing the possibility of integrating scattered fragments of information into a single database, assessing quality of data and their relevance for solving research problems, particularly analysis of high-income strata and the apparent trends towards individualization of private property. The system of indicators for assessing data quality includes their timeliness, availability, interpretability, reliability, comparability, coherence, errors of representation and measurement, and relevance. In the case of the data set in question, measurement errors are more common than representation errors. Overall the article emphasizes the notion that introducing new non-target data into circulation requires their preliminary testing, while data quality assessment becomes distributed both in time and between different subjects. The transition from created data to "obtained" data shifts the functions of evaluating its quality from the researcher-creator to the researcher-user. And though in this case data quality is in part ensured by the legal support for their production, the transformation of administrative data into research data involves assessing a variety of quality measurements — from availability to uniformity and accuracy.


Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. Q27-Q40 ◽  
Author(s):  
Katrin Löer ◽  
Andrew Curtis ◽  
Giovanni Angelo Meles

We have evaluated an explicit relationship between the representations of internal multiples by source-receiver interferometry and an inverse-scattering series. This provides a new insight into the interaction of different terms in each of these internal multiple prediction equations and explains why amplitudes of estimated multiples are typically incorrect. A downside of the existing representations is that their computational cost is extremely high, which can be a precluding factor especially in 3D applications. Using our insight from source-receiver interferometry, we have developed an alternative, computationally more efficient way to predict internal multiples. The new formula is based on crosscorrelation and convolution: two operations that are computationally cheap and routinely used in interferometric methods. We have compared the results of the standard and the alternative formulas qualitatively in terms of the constructed wavefields and quantitatively in terms of the computational cost using examples from a synthetic data set.
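The reason crosscorrelation and convolution suffice can be seen on spike "traces": convolution adds traveltimes, crosscorrelation subtracts them, so combining three primary events predicts the kinematics of a first-order internal multiple at t2 + (t2 − t1). The sketch below is a toy illustration of those two operations only, not the paper's formula; sampling and traveltimes are invented.

```python
import numpy as np

# Toy illustration: convolution of spike traces adds traveltimes,
# correlation (convolution with a time-reversed trace) subtracts one.
# A first-order internal multiple between reflectors with primary
# times t1 and t2 arrives at t2 + (t2 - t1).

dt = 0.004                  # s, hypothetical sampling interval
n = 500
t1, t2 = 0.4, 0.7           # s, primary traveltimes

trace = np.zeros(n)
trace[int(round(t1 / dt))] = 1.0    # shallow primary
trace[int(round(t2 / dt))] = 0.5    # deeper primary

conv = np.convolve(trace, trace)        # traveltime sums
pred = np.convolve(conv, trace[::-1])   # subtract one traveltime
lags = (np.arange(pred.size) - (n - 1)) * dt

t_mult = t2 + (t2 - t1)     # expected internal-multiple time: 1.0 s
k = int(round(t_mult / dt)) + (n - 1)
print(lags[k], pred[k])     # a spike appears at the predicted time
```

The predicted event at 1.0 s has amplitude 0.25 here (product of the two spike amplitudes), which echoes the paper's point that such schemes get the kinematics right while estimated multiple amplitudes are typically incorrect.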


Ocean Science ◽  
2016 ◽  
Vol 12 (5) ◽  
pp. 1067-1090 ◽  
Author(s):  
Marie-Isabelle Pujol ◽  
Yannice Faugère ◽  
Guillaume Taburet ◽  
Stéphanie Dupuy ◽  
Camille Pelloquin ◽  
...  

Abstract. The new DUACS DT2014 reprocessed products have been available since April 2014. Numerous innovative changes have been introduced at each step of an extensively revised data processing protocol. The use of a new 20-year altimeter reference period in place of the previous 7-year reference significantly changes the sea level anomaly (SLA) patterns and thus has a strong user impact. The use of up-to-date altimeter standards and geophysical corrections, reduced smoothing of the along-track data, and refined mapping parameters, including spatial and temporal correlation-scale refinement and measurement errors, all contribute to an improved high-quality DT2014 SLA data set. Although all of the DUACS products have been upgraded, this paper focuses on the enhancements to the gridded SLA products over the global ocean. As part of this exercise, 21 years of data have been homogenized, allowing us to retrieve accurate large-scale climate signals such as global and regional MSL trends, interannual signals, and better refined mesoscale features. An extensive assessment exercise has been carried out on this data set, which allows us to establish a consolidated error budget. The errors at mesoscale are about 1.4 cm2 in low-variability areas, increase to an average of 8.9 cm2 in coastal regions, and reach nearly 32.5 cm2 in high mesoscale activity areas. The DT2014 products, compared to the previous DT2010 version, retain signals for wavelengths lower than ∼ 250 km, inducing SLA variance and mean EKE increases of, respectively, +5.1 and +15 %. Comparisons with independent measurements highlight the improved mesoscale representation within this new data set. The error reduction at the mesoscale reaches nearly 10 % of the error observed with DT2010. DT2014 also presents an improved coastal signal with a nearly 2 to 4 % mean error reduction.
High-latitude areas are also more accurately represented in DT2014, with an improved consistency between spatial coverage and sea ice edge position. An error budget is used to highlight the limitations of the new gridded products, with notable errors in areas with strong internal tides.


1988 ◽  
Vol 254 (1) ◽  
pp. E104-E112
Author(s):  
B. Candas ◽  
J. Lalonde ◽  
M. Normand

The aim of this study is the selection of the number of compartments required for a model to represent the distribution and metabolism of corticotropin-releasing factor (CRF) in rats. The dynamics of labeled rat CRF were measured in plasma for seven rats after a rapid injection. The sampling schedule resulted from the combination of the two D-optimal sampling sets of times corresponding to both rival models. This protocol improved the numerical identifiability of the parameters and consequently facilitated the selection of the relevant model. A three-compartment model adequately fits the seven individual dynamics and better represents four of them compared with the lower-order model. It was demonstrated, using simulations in which the measurement errors and the interindividual variability of the parameters are included, that this four-to-seven ratio of data sets is consistent with the relevance of the three-compartment model for every individual kinetic data set. Kinetic and metabolic parameters were then derived for each individual rat, their values being consistent with the prolonged effects of CRF on pituitary-adrenocortical secretion.
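The model-comparison idea can be sketched with sums of exponentials: a mammillary n-compartment model produces an n-exponential plasma disappearance curve, and the extra compartment is justified when it reduces the residual enough. The sketch below is a hedged toy (rates, sampling times, and noise level are invented, and the rates are held fixed so the amplitude fit stays linear), not the paper's estimation procedure.

```python
import numpy as np

# Hedged sketch of the 2- vs 3-compartment comparison: fit the
# observed decay with sums of 2 or 3 exponentials (rates fixed,
# hypothetical) and compare residual sums of squares. The true curve
# is tri-exponential, so the third mode should lower the residual.

rng = np.random.default_rng(0)
t = np.array([1, 2, 4, 8, 15, 30, 60, 120], dtype=float)   # min
true = 60 * np.exp(-1.2 * t) + 30 * np.exp(-0.15 * t) + 10 * np.exp(-0.01 * t)
obs = true * (1 + 0.02 * rng.standard_normal(t.size))      # 2 % error

def fit_rss(rates):
    X = np.exp(-np.outer(t, rates))          # basis of decay modes
    a, *_ = np.linalg.lstsq(X, obs, rcond=None)
    resid = obs - X @ a
    return float(resid @ resid)

rss2 = fit_rss([1.2, 0.15])                  # two compartments
rss3 = fit_rss([1.2, 0.15, 0.01])            # three compartments
print(rss3 < rss2)
```

In practice the decision also weighs parameter identifiability, which is why the paper's D-optimal sampling schedule matters: with too few well-placed samples the third exponential cannot be resolved at all.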


2020 ◽  
Vol 7 ◽  
Author(s):  
James Garforth ◽  
Barbara Webb

Forests present one of the most challenging environments for computer vision due to traits, such as complex texture, rapidly changing lighting, and high dynamicity. Loop closure by place recognition is a crucial part of successfully deploying robotic systems to map forests for the purpose of automating conservation. Modern CNN-based place recognition systems like NetVLAD have reported promising results, but the datasets used to train and test them are primarily of urban scenes. In this paper, we investigate how well NetVLAD generalizes to forest environments and find that it out performs state of the art loop closure approaches. Finally, integrating NetVLAD with ORBSLAM2 and evaluating on a novel forest data set, we find that, although suitable locations for loop closure can be identified, the SLAM system is unable to resolve matched places with feature correspondences. We discuss additional considerations to be addressed in future to deal with this challenging problem.


Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. A25-A29
Author(s):  
Lele Zhang

Migration of seismic reflection data leads to artifacts due to the presence of internal multiple reflections. Recent developments have shown that these artifacts can be avoided using Marchenko redatuming or Marchenko multiple elimination. These are powerful concepts, but their implementation comes at a considerable computational cost. We have derived a scheme to image the subsurface of the medium with significantly reduced computational cost and artifacts. This scheme is based on the projected Marchenko equations. The measured reflection response is required as input, and a data set with primary reflections and nonphysical primary reflections is created. Original and retrieved data sets are migrated, and the migration images are multiplied with each other, after which the square root is taken to give the artifact-reduced image. We show the underlying theory and demonstrate the effectiveness of this scheme with a 2D numerical example.
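The final combination step lends itself to a one-line sketch: events present in both migration images survive the product, while an artifact present in only one is annihilated. The arrays below are invented stand-ins for migration images, not output of an actual migration, and the absolute value before the square root is an assumption added to keep the toy well defined for mixed signs.

```python
import numpy as np

# Toy sketch of the combination step from the abstract: multiply the
# image of the measured data by the image of the retrieved data
# sample by sample, then take the square root. The abs() guard for
# mixed-sign samples is our assumption, not from the paper.

img_measured = np.array([0.0, 0.9, 0.0, 0.4, 0.0])   # multiple artifact at index 3
img_retrieved = np.array([0.0, 1.1, 0.0, 0.0, 0.0])  # artifact absent here

combined = np.sqrt(np.abs(img_measured * img_retrieved))
print(combined)   # true reflector kept, artifact zeroed
```

The square root restores the amplitude scale after the product (for a reflector imaged with amplitude a in both images, the combined value is again about a), which is why the scheme yields an artifact-reduced rather than amplitude-distorted image.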

