An area matching process to estimate the hydraulic parameters using transient constant-head test data

2016 ◽  
Vol 47 (5) ◽  
pp. 919-931 ◽  
Author(s):  
A. Ufuk Şahin ◽  
Emin Çiftçi

A new parameter estimation methodology was established for interpreting the transient constant-head test to identify the hydrogeological parameters of an aquifer. The proposed method, referred to as the area matching process (AMP), links the field data to the theoretical type curve through a unique area computed above these curves, bounded by a user-specified integration interval. The method removes the need to superimpose theoretical type curves on the field data collected during the test, a step that may introduce unexpected errors in assessing aquifer parameters. The AMP approach was applied to a number of synthetically generated hypothetical test data sets augmented with several random noise levels, which mimic the uncertainty in site measurements together with porous-media heterogeneity, and to an actual field data set available in the literature. The estimation performance of the AMP method was also compared with existing traditional and recently developed techniques. As the test results demonstrate, the accuracy, reliability, robustness, and simplicity of the proposed technique provide significant flexibility in field applications.
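A minimal numerical sketch of the area-matching idea (not the authors' implementation): approximate the area under each curve over a user-specified integration interval with the trapezoidal rule, then select the type curve whose area best matches the field data. The function names and the `type_curves` dictionary (parameter label mapped to theoretical heads sampled at the field times) are illustrative assumptions.

```python
import numpy as np

def curve_area(times, values, t_start, t_end):
    """Area under a sampled curve over a user-specified interval,
    approximated with the trapezoidal rule."""
    mask = (times >= t_start) & (times <= t_end)
    return np.trapz(values[mask], times[mask])

def match_by_area(field_t, field_h, type_curves, t_start, t_end):
    """Pick the theoretical type curve whose area over the interval is
    closest to the field-data area; returns its parameter label."""
    target = curve_area(field_t, field_h, t_start, t_end)
    best = min(type_curves.items(),
               key=lambda kv: abs(curve_area(field_t, kv[1], t_start, t_end) - target))
    return best[0]
```

Because only a single scalar (the area) is compared per candidate curve, no graphical superimposition of field data and type curves is required.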

Geophysics ◽  
1973 ◽  
Vol 38 (2) ◽  
pp. 327-338 ◽  
Author(s):  
E. R. Kanasewich ◽  
C. D. Hemmings ◽  
T. Alpaslan

A nonlinear multichannel filter is developed that appears to be particularly useful for enhancing seismic refraction and teleseismic array data. The basic filter involves the extraction of the Nth root of each element in the matrix forming the data set, where N is any positive integer, and the Nth power of the summation over the channels. The filter is effective in reducing random noise, whereas identical signals that are in phase on all channels are retained at the expense of some distortion. The output from this nonlinear filter has far greater resolution in specifying phase velocity than any multichannel linear filter we have employed. Examples of theoretical and actual field seismograms are presented after various forms of filtering to illustrate their effectiveness.
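The Nth-root stack described above lends itself to a short sketch: take the signed Nth root of every sample, combine across channels (here an average, a scaled variant of the summation in the paper), and restore the Nth power with the sign preserved. With N = 1 this reduces to an ordinary linear stack.

```python
import numpy as np

def nth_root_stack(traces, N=4):
    """Nth-root stack of a (channels x samples) array: take the signed
    Nth root of every sample, average across channels, then restore the
    Nth power. Incoherent noise tends to cancel in the root domain,
    while in-phase signal survives (with some amplitude distortion)."""
    roots = np.sign(traces) * np.abs(traces) ** (1.0 / N)
    s = roots.mean(axis=0)
    return np.sign(s) * np.abs(s) ** N
```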


2021 ◽  
Vol 13 (21) ◽  
pp. 4250 ◽  
Author(s):  
Jordi Mahardika Puntu ◽  
Ping-Yu Chang ◽  
Ding-Jiun Lin ◽  
Haiyina Hasbia Amania ◽  
Yonatan Garkebo Doyoro

We aim to develop a comprehensive tunnel lining detection method and clustering technique for semi-automatic rebar identification in order to investigate the ten tunnels along the South-link Line Railway of Taiwan (SLRT). We used a Ground Penetrating Radar (GPR) instrument with a 1000 MHz antenna, mounted on a versatile antenna holder that adapts to the tunnel's condition; we call this the Vehicle-mounted Ground Penetrating Radar (VMGPR) system. We detected the tunnel lining boundaries according to the Fresnel Reflection Coefficient (FRC) in both A-scan and B-scan data, and then estimated the thinning of the tunnel lining. By applying the Hilbert Transform (HT), we extracted the signal envelope to obtain an overview of the energy distribution in our data. The filtered radargram was then used to estimate the parameters of a Two-dimensional Forward Modeling (TDFM) simulation. Specifically, we produced TDFM rebar models with different random noise levels (0–30%). The rebar model and the field data were identified with Hierarchical Agglomerative Clustering (HAC), a machine-learning method, and evaluated using the Silhouette Index (SI). Taken together, these results suggest three boundaries of the tunnel lining: the air–second lining boundary, the second–first lining boundary, and the first lining–wall rock boundary. Among the tunnels we scanned, the Fangye 1 tunnel is the only one in category B, with the highest percentage of thinning lining (13.39%), whereas the other tunnels are in category A, with thinning-lining percentages of 0–1.71%. Based on the clustered radargram, the TDFM model for rebar identification is consistent with the field data, and k = 2 is the best choice to represent our data set; the TDFM model visibly mimics the field data. The most striking result is that the TDFM model with 30% random noise describes our data well, the rebar response being rough because of the high noise level on the radargram.
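A hedged sketch of two of the processing steps described above, using standard SciPy/scikit-learn calls rather than the authors' code: envelope extraction via the Hilbert Transform, and HAC scored with the Silhouette Index to assess a choice such as k = 2. The `features` array (per-sample attribute vectors derived from the radargram) is an assumed input.

```python
import numpy as np
from scipy.signal import hilbert
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def trace_envelope(trace):
    """Instantaneous-amplitude envelope of one A-scan via the Hilbert
    transform; gives an overview of the energy distribution."""
    return np.abs(hilbert(trace))

def cluster_radargram(features, k=2):
    """Hierarchical agglomerative clustering of feature vectors,
    evaluated with the silhouette index (higher is better)."""
    labels = AgglomerativeClustering(n_clusters=k).fit_predict(features)
    return labels, silhouette_score(features, labels)
```

Running `cluster_radargram` for several k and comparing silhouette scores is one common way to justify a choice such as k = 2.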


2003 ◽  
Vol 42 (05) ◽  
pp. 564-571 ◽  
Author(s):  
M. Schumacher ◽  
E. Graf ◽  
T. Gerds

Summary Objectives: There is a lack of generally applicable tools for assessing predictions for survival data. Prediction error curves based on the Brier score, which have been suggested as a sensible approach, are illustrated by means of a case study. Methods: The concept of predictions made in terms of conditional survival probabilities given the patient's covariates is introduced. Such predictions are derived from various statistical models for survival data, including artificial neural networks. The idea of how the prediction error of a prognostic classification scheme can be followed over time is illustrated with data from two studies on the prognosis of node-positive breast cancer patients, one of them serving as an independent test data set. Results and Conclusions: The Brier score as a function of time is shown to be a valuable tool for assessing the predictive performance of prognostic classification schemes for survival data incorporating censored observations. Comparison with the prediction based on the pooled Kaplan–Meier estimator yields a benchmark value for any classification scheme incorporating the patient's covariate measurements. The problem of an overoptimistic assessment of prediction error caused by data-driven modelling, as is done, for example, with artificial neural nets, can be circumvented by an assessment in an independent test data set.
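For concreteness, a minimal sketch of the time-dependent Brier score with inverse-probability-of-censoring weights in the spirit of Graf et al. (1999); the censoring survival function `G` (e.g., a Kaplan–Meier estimate, assumed here to accept array arguments) and the predicted survival probabilities `surv_pred` are assumed inputs, and this is an illustration rather than the exact estimator used in the case study.

```python
import numpy as np

def brier_score_at(t, time, event, surv_pred, G):
    """Empirical Brier score at time t for censored survival data.
    time: observed times; event: True if the event was observed;
    surv_pred: each patient's predicted S(t | x);
    G: censoring survival function (Kaplan-Meier of the censoring times)."""
    time = np.asarray(time, float)
    event = np.asarray(event, bool)
    surv_pred = np.asarray(surv_pred, float)

    died = (time <= t) & event          # event by t: true outcome is 0
    alive = time > t                    # still event-free at t: outcome is 1
    w_died = died / np.maximum(G(time), 1e-12)   # weight 1/G(T_i)
    w_alive = alive / max(G(t), 1e-12)           # weight 1/G(t)
    # Patients censored before t receive weight 0 and drop out of the sum.
    return np.mean(w_died * surv_pred**2 + w_alive * (1.0 - surv_pred)**2)
```

Evaluating this score over a grid of t values traces out the prediction error curve; plugging in the pooled Kaplan–Meier survival estimate for `surv_pred` yields the benchmark mentioned above.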


Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. IM1-IM9 ◽  
Author(s):  
Nathan Leon Foks ◽  
Richard Krahenbuhl ◽  
Yaoguo Li

Compressive inversion uses computational algorithms that decrease the time and storage needs of a traditional inverse problem. Most compression approaches focus on the model domain, and very few, other than traditional downsampling, focus on the data domain for potential-field applications. To further the compression in the data domain, a direct and practical approach to the adaptive downsampling of potential-field data for large inversion problems has been developed. The approach is formulated to significantly reduce the quantity of data in relatively smooth or quiet regions of the data set, while preserving the signal anomalies that contain the relevant target information. Two major benefits arise from this form of compressive inversion. First, because the approach compresses the problem in the data domain, it can be applied immediately without the addition of, or modification to, existing inversion software. Second, as most industry software uses some form of model or sensitivity compression, the addition of this adaptive data sampling creates a complete compressive inversion methodology whereby the reduction of computational cost is achieved simultaneously in the model and data domains. We applied the method to a synthetic magnetic data set and two large field magnetic data sets; however, the method is also applicable to other data types. Our results showed that the relevant model information is maintained after inversion despite using only 1%–5% of the data.
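A rough sketch of the data-domain idea (not the authors' algorithm): retain every sample whose local gradient magnitude marks it as anomalous, plus a coarse regular backbone over the quiet regions. The threshold quantile and backbone stride are illustrative parameters.

```python
import numpy as np

def adaptive_downsample(grid, keep_frac=0.05, stride=8):
    """Crude data-domain compression of a gridded potential-field map:
    keep samples whose local gradient magnitude is in the top keep_frac
    (signal anomalies), plus a sparse regular grid over the smooth
    background. Returns (row, col) indices of retained data."""
    gy, gx = np.gradient(grid)
    g = np.hypot(gx, gy)
    thresh = np.quantile(g, 1.0 - keep_frac)
    mask = g >= thresh
    mask[::stride, ::stride] = True     # coarse backbone in quiet regions
    return np.argwhere(mask)
```

Because the selection happens purely in the data domain, the reduced data set can be fed to existing inversion software unchanged, which is the first benefit claimed above.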


Author(s):  
Joshua Auld ◽  
Abolfazl (Kouros) Mohammadian ◽  
Marcelo Simas Oliveira ◽  
Jean Wolf ◽  
William Bachman

Research was undertaken to determine whether demographic characteristics of individual travelers could be derived from travel pattern information when no information about the individual was available. This question is relevant in the context of anonymously collected travel information, such as cell phone traces, when used for travel demand modeling. Determining the demographics of a traveler from such data could partially obviate the need for large-scale collection of travel survey data, depending on the purpose for which the data were to be used. This research complements methodologies used to identify activity stops, purposes, and mode types from raw trace data and presumes that such methods exist and are available. The paper documents the development of procedures for taking raw activity streams estimated from GPS trace data and converting these into activity travel pattern characteristics that are then combined with basic land use information and used to estimate various models of demographic characteristics. The work status, education level, age, and license possession of individuals and the presence of children in their households were all estimated successfully, with substantial increases in performance versus null model expectations for both training and test data sets. The gender, household size, and number of vehicles proved more difficult to estimate, and performance was lower on the test data set, indicating overfitting in these models. Overall, the demographic models appear to have potential for characterizing anonymous data streams, which could extend the usability and applicability of such data sources to the travel demand context.
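A schematic of the evaluation design described above, assuming scikit-learn and placeholder data: fit a classifier on hypothetical activity-pattern features for a binary attribute such as work status, and compare its accuracy against a most-frequent-class null model on a held-out test set.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder stand-ins for activity-pattern features (trip counts,
# travel times, land-use mix at stops, ...) and a work-status label.
rng = np.random.default_rng(0)
X = rng.random((500, 6))
y = rng.integers(0, 2, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
null = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)

# The gain of the model over the null expectation is the quantity of
# interest; a large train/test gap would signal the overfitting noted above.
print("model:", model.score(X_te, y_te), "null:", null.score(X_te, y_te))
```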


2021 ◽  
pp. 004912412098618
Author(s):  
Tim de Leeuw ◽  
Steffen Keijl

Although multiple organizational-level databases are frequently combined into one data set, there is no overview of the matching methods (MMs) that are utilized, because the vast majority of studies do not report how this was done. Furthermore, it is unclear what the differences between the utilized methods are, and whether research findings might be influenced by the utilized method. This article describes four commonly used methods for matching databases and their potential issues. An empirical comparison of those methods, applied to combine regularly used organizational-level databases, reveals large differences in the number of observations obtained. Furthermore, empirical analyses of these different methods reveal that several of them produce both systematic and random errors. These errors can result in erroneous estimations of regression coefficients in terms of direction and/or size, and truly significant relationships might be found insignificant. This shows that research findings can be influenced by the MM used, which argues in favor of establishing a preferred method as well as more transparency on the utilized method in future studies. This article provides insight into the matching process and methods, suggests a preferred method, and should aid researchers, reviewers, and editors with both combining multiple databases and describing and assessing them.
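Since the article's own code is not reproduced here, the sketch below merely illustrates how two common matching methods can yield different observation counts: an exact join on the raw name versus a join on a normalized name (case-folded, punctuation and legal suffixes stripped). The pandas frames `a` and `b` and the column name are hypothetical.

```python
import pandas as pd

def normalize_name(s):
    """One typical matching step: case-fold, strip punctuation and
    common legal suffixes before joining on organization name."""
    s = s.str.upper().str.replace(r"[^A-Z0-9 ]", "", regex=True)
    return s.str.replace(r"\b(INC|CORP|LTD|LLC|CO)\b", "", regex=True).str.strip()

def match_counts(a, b, key="name"):
    """Compare observation counts from exact vs normalized-name matching."""
    exact = a.merge(b, on=key)
    norm = a.assign(k=normalize_name(a[key])).merge(
        b.assign(k=normalize_name(b[key])), on="k")
    return len(exact), len(norm)
```

Differences between the two counts are exactly the kind of method-dependent variation in sample size, and hence in downstream regression estimates, that the article documents.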


2021 ◽  
Author(s):  
David Cotton ◽  

Introduction

HYDROCOASTAL is a two-year project funded by ESA, with the objective to maximise exploitation of SAR and SARin altimeter measurements in the coastal zone and inland waters, by evaluating and implementing new approaches to process SAR and SARin data from CryoSat-2, and SAR altimeter data from Sentinel-3A and Sentinel-3B. Optical data from the Sentinel-2 MSI and Sentinel-3 OLCI instruments will also be used in generating River Discharge products.

New SAR and SARin processing algorithms for the coastal zone and inland waters will be developed, implemented, and evaluated through an initial Test Data Set for selected regions. From the results of this evaluation, a processing scheme will be implemented to generate global coastal zone and river discharge data sets.

A series of case studies will assess these products in terms of their scientific impacts.

All the produced data sets will be available on request to external researchers, and full descriptions of the processing algorithms will be provided.

Objectives

The scientific objectives of HYDROCOASTAL are to enhance our understanding of interactions between inland waters and the coastal zone, between the coastal zone and the open ocean, and of the small-scale processes that govern these interactions. The project also aims to improve our capability to characterize the variation at different time scales of inland water storage, exchanges with the ocean, and the impact on regional sea-level changes.

The technical objectives are to develop and evaluate new SAR and SARin altimetry processing techniques in support of the scientific objectives, including stack processing, filtering, and retracking. An improved Wet Troposphere Correction will also be developed and evaluated.

Project Outline

The project comprises four tasks:

- Scientific Review and Requirements Consolidation: review the current state of the art in SAR and SARin altimeter data processing as applied to the coastal zone and to inland waters.
- Implementation and Validation: new processing algorithms will be implemented to generate a Test Data Set, which will be validated against models, in-situ data, and other satellite data sets. Selected algorithms will then be used to generate global coastal zone and river discharge data sets.
- Impacts Assessment: the impact of these global products will be assessed in a series of Case Studies.
- Outreach and Roadmap: outreach material will be prepared and distributed to engage the wider scientific community, and recommendations will be provided for the development of future missions and future research.

Presentation

The presentation will provide an overview of the project, present the different SAR altimeter processing algorithms being evaluated in the first phase, and report early results from the evaluation of the initial test data set.


Author(s):  
Yanxiang Yu ◽  
Chicheng Xu ◽  
Siddharth Misra ◽  
Weichang Li ◽  
...  

Compressional and shear sonic traveltime logs (DTC and DTS, respectively) are crucial for subsurface characterization and seismic-well tie. However, these two logs are often missing or incomplete in many oil and gas wells. Therefore, many petrophysical and geophysical workflows include sonic log synthetization or pseudo-log generation based on multivariate regression or rock-physics relations. The SPWLA PDDA SIG hosted a contest, which started on March 1, 2020, and concluded on May 7, 2020, aiming to predict the DTC and DTS logs from seven “easy-to-acquire” conventional logs using machine-learning methods (GitHub, 2020). In the contest, a total of 20,525 data points with half-foot resolution from three wells were collected to train regression models using machine-learning techniques. Each data point had seven features, consisting of the conventional “easy-to-acquire” logs: caliper, neutron porosity, gamma ray (GR), deep resistivity, medium resistivity, photoelectric factor, and bulk density, as well as the two sonic logs (DTC and DTS) as the target. A separate data set of 11,089 samples from a fourth well was then used as the blind test data set. The prediction performance of each model was evaluated using the root mean square error (RMSE) as the metric, defined as

RMSE = \sqrt{\frac{1}{2m}\sum_{i=1}^{m}\left[\left(\mathrm{DTC}_{\mathrm{pred}}^{i}-\mathrm{DTC}_{\mathrm{true}}^{i}\right)^{2}+\left(\mathrm{DTS}_{\mathrm{pred}}^{i}-\mathrm{DTS}_{\mathrm{true}}^{i}\right)^{2}\right]}

In the benchmark model (Yu et al., 2020), we used a Random Forest regressor and applied minimal preprocessing to the training data set; an RMSE score of 17.93 was achieved on the test data set. The top five models from the contest, on average, beat the performance of our benchmark model by 27% in the RMSE score. In this paper, we review these five solutions, including their preprocessing techniques and machine-learning models, among them neural networks, long short-term memory (LSTM) networks, and ensemble trees. We found that data cleaning and clustering were critical for improving the performance of all models.
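A small sketch of the contest metric as defined by the equation above; variable names are illustrative.

```python
import numpy as np

def contest_rmse(dtc_pred, dtc_true, dts_pred, dts_true):
    """Joint RMSE over the two target logs: average the squared errors
    of DTC and DTS over all m samples, halve, then take the square root."""
    dtc_pred, dtc_true = np.asarray(dtc_pred), np.asarray(dtc_true)
    dts_pred, dts_true = np.asarray(dts_pred), np.asarray(dts_true)
    se = (dtc_pred - dtc_true) ** 2 + (dts_pred - dts_true) ** 2
    return np.sqrt(se.mean() / 2.0)
```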

