Resistivity and offset error estimations for the small-loop electromagnetic method

Geophysics, 2008, Vol. 73(3), pp. F91-F95
Author(s): Yutaka Sasaki, Jeong-Sul Son, Changryol Kim, Jung-Ho Kim

Handheld frequency-domain electromagnetic (EM) instruments are being used increasingly for shallow environmental and geotechnical surveys because of their portability and speed of use in field operations. However, in many cases, the quality of data is so poor that quantitative interpretation is not justified. This is because the small-loop EM method is required to detect very weak signals (the secondary magnetic fields) in the presence of the dominant primary field, so the data are inherently susceptible to calibration errors. Although these errors can be measured by raising the instrument high above the ground so that the effect of the conducting ground is negligible, it is impracticable to do so for every survey. We have developed an algorithm that simultaneously inverts small-loop EM data for a multidimensional resistivity distribution and offset errors. For this inversion method to work successfully, the data must be collected at two heights. The forward modeling used in the inversion is based on a staggered-grid 3D finite-difference method; its solution has been checked against a 2.5D finite-element solution. Synthetic and real data examples demonstrate that the inversion recovers reliable resistivity models from multifrequency data that are contaminated severely by offset errors.
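As a rough illustration of how offset errors can enter such an inversion, one plausible least-squares objective (an assumption for illustration, not necessarily the authors' exact formulation) treats the offsets as extra unknowns estimated jointly with the resistivity model:

    \min_{\mathbf{m},\,\mathbf{o}}\ \big\| \mathbf{W}_d \left( \mathbf{d} - F(\mathbf{m}) - \mathbf{P}\,\mathbf{o} \right) \big\|^2 + \lambda \left\| \mathbf{W}_m \mathbf{m} \right\|^2

Here d stacks the multifrequency data measured at the two heights, F is the staggered-grid 3D finite-difference forward operator, o holds one offset per frequency and coil configuration, and P maps each offset to every observation made with that configuration. Because changing the instrument height changes F(m) but leaves P o unchanged, two-height data help separate the offsets from the ground response.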

2021, Vol. 15(4), pp. 1-20
Author(s): Georg Steinbuss, Klemens Böhm

Benchmarking unsupervised outlier detection is difficult. Outliers are rare, and existing benchmark data contain outliers with various and unknown characteristics. Fully synthetic data usually consist of outliers and regular instances with clear characteristics and thus, in principle, allow for a more meaningful evaluation of detection methods. Nonetheless, there have been only a few attempts to include synthetic data in benchmarks for outlier detection. This might be due to the imprecise notion of outliers or to the difficulty of achieving good coverage of different domains with synthetic data. In this work, we propose a generic process for generating datasets for such benchmarking. The core idea is to reconstruct regular instances from existing real-world benchmark data while generating outliers so that they exhibit insightful characteristics. We describe this generic process for benchmarking unsupervised outlier detection and then present three instantiations of it that generate outliers with specific characteristics, such as local outliers. To validate our process, we perform a benchmark with state-of-the-art detection methods and carry out experiments to study the quality of data reconstructed in this way. Besides showcasing the workflow, this confirms the usefulness of our proposed process. In particular, our process yields regular instances close to the ones from real data. Summing up, we propose and validate a new and practical process for benchmarking unsupervised outlier detection.
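As a toy illustration of the core idea (reconstruct regular instances from real data, then synthesize outliers with a chosen characteristic), the sketch below fits a Gaussian mixture to a real dataset and pushes some sampled points away from the data centre; this is one possible instantiation under our own assumptions, not the authors' generators.

    # Minimal sketch: reconstruct "regular" instances from real data with a
    # fitted generative model, then synthesize labelled outliers.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def make_benchmark(X_real, n_regular=1000, n_outliers=50, scale=3.0, seed=0):
        # Reconstruct regular instances: sample from a model fitted to the real data.
        gmm = GaussianMixture(n_components=5, random_state=seed).fit(X_real)
        X_regular, _ = gmm.sample(n_regular)
        # Generate outliers with a simple "global" characteristic: push samples
        # away from the data centre so they land in low-density regions.
        base, _ = gmm.sample(n_outliers)
        centre = X_real.mean(axis=0)
        X_outliers = centre + scale * (base - centre)
        X = np.vstack([X_regular, X_outliers])
        y = np.hstack([np.zeros(n_regular), np.ones(n_outliers)])  # 1 marks outliers
        return X, y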


Testing is essential in data warehouse systems for decision making because the accuracy, validity, and correctness of data depend on it. Considering the characteristics and complexity of data warehouses, this paper tries to show the scope of automated testing in assuring the best data warehouse solutions. Firstly, we developed a data set generator for creating synthetic but near-to-real data; then, in the synthesized data, anomalies were classified with the help of a hand-coded Extraction, Transformation and Loading (ETL) routine. To assure the quality of data for a data warehouse, and to illustrate how important Extraction, Transformation and Loading is, several important test cases were identified. After that, to ensure the quality of data, automated testing procedures were embedded in the hand-coded ETL routine. Statistical analysis revealed a large improvement in data quality when the automated testing procedures were applied, reinforcing the conclusion that automated testing gives promising results for data warehouse quality. For effective and easy maintenance of distributed data, a novel architecture was proposed. Although the desired results of this research were achieved and the objectives are promising, the results still need to be validated in a real-life environment, as this research was done in a simulated environment, which may not always reflect real-life behaviour. Hence, the overall potential of the proposed architecture cannot be seen until it is deployed to manage real data that is distributed globally.
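As an indication of what such embedded automated tests can look like, the sketch below checks a loaded warehouse table for row-count reconciliation, completeness, and referential integrity; the table and column names (staging_sales, fact_sales, dim_customer, customer_id) are hypothetical, and this is not the routine used in the study.

    # Minimal sketch of automated data-quality checks after an ETL load.
    import sqlite3

    def test_etl_load(conn: sqlite3.Connection) -> list:
        failures = []
        # Row-count reconciliation: every staged row should reach the target table.
        src = conn.execute("SELECT COUNT(*) FROM staging_sales").fetchone()[0]
        tgt = conn.execute("SELECT COUNT(*) FROM fact_sales").fetchone()[0]
        if src != tgt:
            failures.append("row count mismatch: staging=%d, fact=%d" % (src, tgt))
        # Completeness: business keys must not be NULL after transformation.
        nulls = conn.execute(
            "SELECT COUNT(*) FROM fact_sales WHERE customer_id IS NULL"
        ).fetchone()[0]
        if nulls:
            failures.append("%d rows with NULL customer_id" % nulls)
        # Referential integrity: every fact row must join to a dimension row.
        orphans = conn.execute(
            "SELECT COUNT(*) FROM fact_sales f "
            "LEFT JOIN dim_customer d ON f.customer_id = d.customer_id "
            "WHERE d.customer_id IS NULL"
        ).fetchone()[0]
        if orphans:
            failures.append("%d orphan fact rows" % orphans)
        return failures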


2021, Vol. 10(3), pp. 44-53
Author(s): Modar Abdullatif, Aya Banna, Duha El-Sahsah, Taher Wafa

This study aims to explore the application of analytical procedures (AP) as a major external auditing procedure in the developing country context of Jordan, a context characterised by the prevalence of closely held businesses and limited demand for a high-quality external audit (Abdullatif, 2016; Almarayeh, Aibar-Guzman, & Abdullatif, 2020). To do so, the researchers conducted semi-structured interviews with twelve experienced Jordanian external auditors. The main issues covered are the detailed use of AP as an audit procedure and the most significant issues that may limit the effectiveness and reliability of this procedure in the Jordanian context. The main findings of the study are that AP are generally used and favoured by Jordanian auditors, despite their recognition of several problems facing the application of AP and potentially limiting their reliability and effectiveness. These problems include weak internal controls at some clients, low quality of data provided by some clients, a lack of availability of specialised audit software for many auditors, and a lack of local Jordanian industry benchmarks that can be used to develop the expectations necessary for the proper application of AP. The study recommends the establishment of such industry benchmarks, along with better monitoring by the regulatory authorities of the quality of company data and increased efforts by these authorities to promote the auditors' use of specialised audit software in performing AP.


2021, Vol. 946(1), pp. 012023
Author(s): P Korolev, Yu Korolev, A Loskutov

Abstract. Three earthquakes occurred in the North Pacific in 2020, causing observable tsunamis. The tsunamis were not devastating. Numerical modelling of tsunami propagation was performed to reproduce operational forecasting (retrospective analysis) of waveforms at deep-water stations. Direct calculation of the tsunamis using USGS finite-fault source data was carried out on a GPU. Using a leap-frog scheme on an Arakawa staggered grid, the computation of 1440 min (1 day) of tsunami propagation across the Pacific Ocean on a regular grid with a spatial step of 0.5 arc minutes took approximately 90 min of computer time. With a hybrid cluster with several GPU accelerators and proper optimization of the simulation algorithm, this time can be reduced by tens of times. Consequently, the time for estimating the transfer function will be comparable to the travel time of a tsunami to the stations where forecast data are available. This will make it possible to forecast the shape of a tsunami at any point with enough lead time to decide on a tsunami alert at sites where a tsunami poses a real danger. The calculation results are in good agreement with real deep-ocean measurements, and the quality of the forecast is comparable to that of calculations by other methods.
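For orientation, the one-dimensional sketch below shows the kind of leap-frog update on a staggered grid that tsunami codes of this type typically use (assumptions: linear shallow-water equations, constant depth, reflective ends); it is illustrative only and not the code used in the study.

    # Minimal 1D staggered-grid leap-frog sketch for linear shallow water.
    import numpy as np

    g = 9.81            # gravity, m/s^2
    h = 4000.0          # constant ocean depth, m
    dx = 1000.0         # grid step, m
    c = np.sqrt(g * h)  # long-wave speed
    dt = 0.5 * dx / c   # time step chosen to satisfy the CFL condition

    nx, nt = 400, 2000
    eta = np.zeros(nx)        # surface elevation at cell centres (time level n)
    flux = np.zeros(nx + 1)   # depth-integrated velocity at cell faces (level n+1/2)
    eta[nx // 2 - 5: nx // 2 + 5] = 1.0   # initial hump standing in for the source

    for _ in range(nt):
        # Momentum update on faces: leap-frog in time, centred in space.
        flux[1:-1] -= g * h * dt / dx * (eta[1:] - eta[:-1])
        # Continuity update on centres using the freshly updated fluxes.
        eta -= dt / dx * (flux[1:] - flux[:-1])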


Author(s): Anh Duy Tran, Somjit Arch-int, Ngamnij Arch-int

Conditional functional dependencies (CFDs) have been used to improve the quality of data, including detecting and repairing data inconsistencies. Approximation measures are of significant importance for data dependencies in data mining. To adapt to exceptions in real data, such measures are used to relax the strictness of CFDs into more generalized dependencies, called approximate conditional functional dependencies (ACFDs). This paper analyzes the weaknesses of the dependency degree, confidence, and conviction measures for general CFDs (constant and variable CFDs). A new measure for general CFDs based on incomplete knowledge granularity is proposed to measure the approximation of these dependencies as well as the distribution of data tuples into the conditional equivalence classes. Finally, the effectiveness of stripped conditional partitions and the new measure is evaluated on synthetic and real data sets. These results are important for the study of the theory of approximate dependencies and the improvement of discovery algorithms for CFDs and ACFDs.
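For reference, the sketch below computes the classical confidence measure for a constant CFD, one of the measures whose weaknesses the paper analyzes; it is not the proposed granularity-based measure, and the attribute names are made up.

    # Minimal sketch of the confidence of a constant CFD: among tuples matching
    # the left-hand pattern, the largest fraction that agrees on the right-hand
    # attribute (i.e. the biggest subset on which the dependency holds exactly).
    from collections import Counter

    def cfd_confidence(rows, lhs_pattern, rhs_attr):
        """rows: list of dicts; lhs_pattern: dict of attribute -> required constant."""
        matching = [r for r in rows
                    if all(r.get(a) == v for a, v in lhs_pattern.items())]
        if not matching:
            return 1.0  # vacuously satisfied
        counts = Counter(r.get(rhs_attr) for r in matching)
        return counts.most_common(1)[0][1] / len(matching)

    rows = [
        {"country": "NL", "city": "Amsterdam", "area_code": "020"},
        {"country": "NL", "city": "Amsterdam", "area_code": "020"},
        {"country": "NL", "city": "Amsterdam", "area_code": "021"},  # exception
    ]
    print(cfd_confidence(rows, {"country": "NL", "city": "Amsterdam"}, "area_code"))  # 0.67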


Geophysics, 2017, Vol. 82(4), pp. E221-E232
Author(s): Zhenwei Guo, Hefeng Dong, Åge Kristensen

Marine electromagnetic (EM) inverse methods have recently been developed rapidly for offshore exploration. However, the EM method provides inverted resistivity results with low resolution. To improve the quality of the results, we have developed an image-guided regularization method for inversion of marine EM data. The method incorporates seismic constraints into EM inversion. The information extracted from seismic/geologic images consists of the metric tensor field and sampling of the geologic structure. In addition to the regularization, geologic horizons picked from the seismic images and samplings of the structure can be used to generate an irregular sparse mesh. Compared with an unstructured regular dense mesh, a coherence-based irregular sparse mesh can reduce computational costs. Furthermore, image-guided regularization represents an improvement over traditional regularization that is structurally based on seismic images, following geologic features more closely and handling anomalies better. We have determined that image-guided regularization improves the results of EM inversions with irregular sparse meshes. The image-guided regularized inversion can be applied to marine controlled-source electromagnetic (CSEM) data and magnetotelluric (MT) data, and it can be used for joint inversion of CSEM and MT data. Regarding its application to real data, image-guided inversion was successfully applied to CSEM data from the Troll area, using an anisotropic model.
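As a generic sketch of what structure-oriented, image-guided regularization usually looks like (an assumption for illustration, not necessarily the authors' exact operator), the inversion can be written as

    \min_{\mathbf{m}}\ \big\| \mathbf{W}_d\left(\mathbf{d} - F(\mathbf{m})\right)\big\|^2 + \lambda\,\big\| \mathbf{R}\,\mathbf{m} \big\|^2, \qquad (\mathbf{R}\,\mathbf{m})(\mathbf{x}) \approx \mathbf{D}(\mathbf{x})\,\nabla m(\mathbf{x})

where D(x) is built from the metric tensor field extracted from the seismic image, so the roughness penalty is weak along structural dips and strong across them. The resistivity model is then encouraged to vary smoothly along reflectors while still allowing sharp changes across geologic boundaries.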


Author(s): B. L. Armbruster, B. Kraus, M. Pan

One goal in electron microscopy of biological specimens is to improve the quality of data to equal the resolution capabilities of modern transmission electron microscopes. Radiation damage and beam-induced movement caused by charging of the sample, low image contrast at high resolution, and sensitivity to external vibration and drift in side-entry specimen holders limit the effective resolution one can achieve. Several methods have been developed to address these limitations: cryomethods are widely employed to preserve and stabilize specimens against some of the adverse effects of the vacuum and electron beam irradiation, spot-scan imaging reduces charging and associated beam-induced movement, and energy-filtered imaging removes the "fog" caused by inelastic scattering of electrons, which is particularly pronounced in thick specimens. Although most cryoholders can easily achieve a 3.4 Å resolution specification, information perpendicular to the goniometer axis may be degraded due to vibration. Absolute drift after mechanical and thermal equilibration, as well as drift after movement of a holder, may cause loss of resolution in any direction.


Author(s): Nur Maimun, Jihan Natassa, Wen Via Trisna, Yeye Supriatin

Accuracy in assigning diagnosis codes is an important matter for medical recorders, and the quality of data is central to health information management. This study aims to assess coder competency regarding the accuracy and precision of ICD-10 use at X Hospital in Pekanbaru. The study used a qualitative case-study design with five informants. The results show that medical personnel (doctors) have never received training on coding, doctors' handwriting is hard to read, errors occur in coding diagnoses or procedures, doctors use customary abbreviations that are not standard, some officers do not understand the nomenclature or have not mastered anatomical pathology, while the facilities and infrastructure support the accuracy and precision of the existing codes. Coding errors keep occurring because of human error. Accuracy and precision in coding strongly influence INA-CBGs costs; the medical committee handled most of the work in cases of severity level III, while medical records had a role in monitoring and evaluating coding implementation. If a resume is unclear, the case-mix team checks the required medical record file and confirms the diagnosis or coding for conformity. Keywords: coder competency, accuracy and precision of coding, ICD 10


2017, Vol. 4(1), pp. 25-31
Author(s): Diana Effendi

The Information Product Approach (IP Approach) is an information management approach that can be used to manage product information and analyze data quality. IP-Maps can be used by organizations to facilitate the management of knowledge in collecting, storing, maintaining, and using data in an organized way. The process of managing data on academic activities at X University has not yet used the IP approach, and X University has not paid attention to managing the quality of its information. So far, X University has been concerned only with the system applications used to support the automation of data management in academic activities. The IP-Map produced in this paper can be used as a basis for analyzing the quality of data and information. With the IP-Map, X University is expected to know which parts of the process need improvement in data and information quality management. Index terms: IP Approach, IP-Map, information quality, data quality.


1996, Vol. 33(9), pp. 101-108
Author(s): Agnès Saget, Ghassan Chebbo, Jean-Luc Bertrand-Krajewski

The first flush phenomenon of urban wet weather discharges is presently a controversial subject. Scientists do not agree on its reality, nor on its influence on the sizing of treatment works. These disagreements mainly result from the unclear definition of the phenomenon. The objective of this article is first to provide a simple and clear definition of the first flush and then to apply it to real data to obtain results about its frequency of appearance. The data originate from the French database on the quality of urban wet weather discharges. We use 80 events from 7 separately sewered basins and 117 events from 7 combined sewered basins. The main result is that the first flush phenomenon is very scarce, in any case too scarce to be used to elaborate a treatment strategy against pollution generated by urban wet weather discharges.
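As an illustration of how a first flush criterion can be applied to event data, the sketch below tests whether a given fraction of the pollutant mass is discharged within a given fraction of the runoff volume; the 80/30 threshold used here is a common illustrative choice, not necessarily the definition adopted by the authors.

    # Minimal sketch: dimensionless cumulative mass vs. cumulative volume
    # for one rainfall event, and a simple first flush test.
    import numpy as np

    def is_first_flush(volumes, loads, frac_volume=0.30, frac_mass=0.80):
        """volumes, loads: per-time-step runoff volume and pollutant load."""
        v = np.cumsum(volumes) / np.sum(volumes)   # cumulative volume fraction
        m = np.cumsum(loads) / np.sum(loads)       # cumulative mass fraction
        # Mass fraction discharged by the time frac_volume of the volume has passed.
        mass_at_frac = np.interp(frac_volume, v, m)
        return mass_at_frac >= frac_mass

    # Example event with pollutant load concentrated at the start of the runoff.
    volumes = np.array([10.0, 10, 10, 10, 10, 10, 10, 10, 10, 10])
    loads = np.array([40.0, 30, 10, 5, 5, 3, 3, 2, 1, 1])
    print(is_first_flush(volumes, loads))  # True for this synthetic event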

