uncalibrated model
Recently Published Documents


TOTAL DOCUMENTS: 9 (five years: 1)
H-INDEX: 3 (five years: 0)

2021 · Vol 4 (1)
Author(s): Po-Chih Kuo, Cheng Che Tsai, Diego M. López, Alexandros Karargyris, Tom J. Pollard, ...

Abstract. Image-based teleconsultation using smartphones has become increasingly popular. In parallel, deep learning algorithms have been developed to detect radiological findings in chest X-rays (CXRs). However, the feasibility of using smartphones to automate this process has yet to be evaluated. This study developed a recalibration method to build deep learning models that detect radiological findings on CXR photographs. Two publicly available databases (MIMIC-CXR and CheXpert) were used to build the models, and four derivative datasets containing 6453 CXR photographs were collected to evaluate model performance. After recalibration, the model achieved areas under the receiver operating characteristic curve of 0.80 (95% confidence interval: 0.78–0.82), 0.88 (0.86–0.90), 0.81 (0.79–0.84), 0.79 (0.77–0.81), 0.84 (0.80–0.88), and 0.90 (0.88–0.92) for detecting cardiomegaly, edema, consolidation, atelectasis, pneumothorax, and pleural effusion, respectively. The recalibration strategy recovered 84.9%, 83.5%, 53.2%, 57.8%, 69.9%, and 83.0% of the performance losses of the uncalibrated model, respectively. We conclude that the recalibration method can transfer models from digital CXRs to CXR photographs, which is expected to support physicians' clinical work.
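The abstract does not specify the recalibration procedure. As a hedged illustration only, one common way to recalibrate a classifier for a shifted domain (here, photographs of CXRs rather than digital CXRs) is temperature scaling; the sketch below fits a single temperature on synthetic data, and all names and values are invented, not the authors' method:

```python
import numpy as np

def nll(logits, labels, T):
    # binary negative log-likelihood at temperature T
    p = 1.0 / (1.0 + np.exp(-logits / T))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))

def fit_temperature(logits, labels):
    # pick the temperature that minimizes NLL on a held-out photo set
    grid = np.linspace(0.25, 4.0, 151)
    return min(grid, key=lambda T: nll(logits, labels, T))

# toy data: over-confident logits from a model trained on digital CXRs,
# scored against labels for CXR photographs
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 500)
logits = 3.0 * (2 * labels - 1) + rng.normal(0.0, 2.0, 500)
T = fit_temperature(logits, labels)
```

Because the fitted temperature is chosen to minimize the held-out negative log-likelihood, the rescaled probabilities are never worse-calibrated (by that criterion) than the raw ones.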


Author(s): Kevin O'Flaherty, Zachary Graves, Lie Xiong, Mark Andrews

The paper presents an application of statistical calibration techniques to a bracket design fatigue model simulated in COMSOL Multiphysics®. The calibration tunes the bracket's material properties and fatigue characteristics. For illustrative purposes, the test data used to calibrate the simulation model are generated from the same simulation routine, with an intentionally applied bias and random noise added to simulate model-form and physical-testing errors. The accuracy and conclusions from the statistically calibrated model are compared with those of the uncalibrated model as well as of a model calibrated with conventional error-minimization methods. Multiple metrics that can be used for model validation are shown, including a discrepancy map that characterizes inadequacies in the simulation. The metrics used in the comparison also include results from optimization, sensitivity analysis, and propagation of uncertainties motivated by manufacturing variations during bracket fabrication. The results demonstrate the importance of calibrating a model before drawing design conclusions.
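The conventional error-minimization baseline that the abstract compares against can be sketched as follows. The fatigue model, parameter names, and numbers below are invented stand-ins (not the COMSOL model); the sketch reproduces the abstract's setup of generating "test data" from the same simulator plus a deliberate bias and noise, then recovering the parameter by least squares:

```python
import numpy as np

def simulate(load, k):
    # hypothetical stand-in for the simulation: cycles to failure vs. load
    return 1e6 / (k * load**2)

rng = np.random.default_rng(1)
loads = np.linspace(1.0, 5.0, 20)
k_true = 2.0
# "test data": the same model plus a deliberate bias and random noise,
# mimicking model-form and physical-testing errors
observed = simulate(loads, k_true) * 1.05 + rng.normal(0.0, 500.0, loads.size)

# calibrate k by minimizing the sum of squared errors over a grid
grid = np.linspace(0.5, 4.0, 701)
sse = [np.sum((simulate(loads, k) - observed) ** 2) for k in grid]
k_cal = grid[int(np.argmin(sse))]

# discrepancy map: residual structure the calibrated model cannot explain
discrepancy = observed - simulate(loads, k_cal)
```

Note how the deliberate multiplicative bias pulls the calibrated parameter away from the true value; the residual discrepancy is exactly what a statistical (Kennedy–O'Hagan-style) calibration models explicitly rather than absorbing into the parameters.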


2016 · Vol 20 (10) · pp. 4391-4407
Author(s): Esraa Tarawneh, Jonathan Bridge, Neil Macdonald

Abstract. This study uses the Soil and Water Assessment Tool (SWAT) model to quantitatively compare available input datasets in a data-poor dryland environment (Wala catchment, Jordan; 1743 km2). Eighteen scenarios combining the best available land-use, soil and weather datasets (1979–2002) are used to construct SWAT models. Data include local observations and global reanalysis products. Uncalibrated model outputs are used to assess the variability in model performance derived from input data sources alone. Model performance against discharge and sediment load data is compared using r2, Nash–Sutcliffe efficiency (NSE), the root mean square error to standard deviation ratio (RSR) and percent bias (PBIAS). The NSE statistic varies from 0.56 to −12 against observed discharge and from 0.79 to −85 against observed sediment data for the best- and poorest-performing scenarios, respectively. Global weather inputs yield considerable improvements over discontinuous local datasets, whilst local soil inputs perform considerably better than global-scale mapping. The methodology provides a rapid, transparent and transferable approach to aid selection of the most robust suite of input data.
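For reference, the performance statistics named in the abstract have standard closed forms, sketched below on made-up numbers (not the study's data):

```python
import numpy as np

def nse(obs, sim):
    # Nash–Sutcliffe efficiency: 1 is perfect, < 0 is worse than the mean
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rsr(obs, sim):
    # RMSE to standard-deviation ratio: 0 is perfect
    return np.sqrt(np.sum((obs - sim) ** 2)) / np.sqrt(np.sum((obs - obs.mean()) ** 2))

def pbias(obs, sim):
    # percent bias: positive values indicate model underestimation
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

# toy discharge series: observed vs. simulated
obs = np.array([3.1, 4.0, 5.2, 6.8, 4.4])
sim = np.array([2.9, 4.3, 5.0, 7.1, 4.0])
```

The three statistics are not independent: NSE = 1 − RSR², which is why large negative NSE values (such as the −85 reported above) correspond to RSR values far above 1.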




2013 · Vol 8 (3-4) · pp. 479-486
Author(s): C. Bonhomme, G. Petrucci

Nowadays, high-resolution geographical data are widely available for research and operational purposes. Including these data in hydrological models might be expected to yield a direct and clear improvement in their performance. Different configurations and model structures, incorporating increasing amounts of geographical information, are tested using the widely used Stormwater Management Model 5 on a 2.5 km2 pilot catchment (located in Sucy-en-Brie, close to Paris, France). The Nash–Sutcliffe criterion is used to estimate the goodness of fit between model simulations and available measurements. While including some basic spatial information on land use clearly improves the performance of the uncalibrated model, the gain is less obvious when the user continues to refine the geographical description of the catchment. Moreover, predictions are comparable between a model calibrated with an efficient calibration procedure and a more physical approach with a fine spatial description and no calibration. Finally, the quality of the data used for calibration and validation appears to be a key factor in obtaining a good fit between measurements and model predictions.


2012 · Vol 9 (8) · pp. 9687-9714
Author(s): I. Engelhardt, J. G. De Aguinaga, H. Mikat, C. Schüth, O. Lenz, ...

Abstract. A groundwater model characterized by a lack of field data for estimating hydraulic model parameters and boundary conditions, combined with many piezometric head observations, was investigated with respect to model uncertainty. Different conceptual models, with a stepwise increase from 0 to 30 adjustable parameters, were calibrated using PEST. Residuals, sensitivities, the Akaike Information Criterion (AIC), and the likelihood of each model were computed. As expected, residuals and standard errors decreased as the number of adjustable model parameters increased. However, the model with only 15 adjusted parameters was rated by AIC as the best option, with a likelihood of 98%, while the uncalibrated model obtained the worst AIC value. Computing the AIC yielded the most important information for assessing model likelihood. Comparing only the residuals of different conceptual models was less valuable and would result in an over-parameterization of the conceptual model. Sensitivities of piezometric heads were highest for the model with five adjustable parameters, which also reflected changes in extracted groundwater volumes. As the number of adjustable parameters increased, piezometric heads became less sensitive for the model calibration, and changes in pumping rates were no longer reflected in the sensitivity coefficients. Thus, when too many model parameters were adjusted, these parameters lost their impact on the model results. Additionally, using only sedimentological data to derive hydraulic parameters resulted in a large bias between measured and simulated groundwater levels.
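The AIC-based ranking of calibration runs described above can be sketched as follows. The observation count and residuals are invented for illustration, and the least-squares form of AIC is assumed (the abstract does not state which form was used):

```python
import numpy as np

def aic(n, sse, k):
    # least-squares form of AIC: n * ln(SSE/n) + 2k
    return n * np.log(sse / n) + 2 * k

def akaike_weights(aics):
    # relative likelihood of each model within the candidate set
    d = np.asarray(aics) - np.min(aics)
    w = np.exp(-0.5 * d)
    return w / w.sum()

# hypothetical calibration runs: more parameters -> smaller residuals,
# but beyond some point the 2k penalty outweighs the fit improvement
n = 200                                   # number of head observations
runs = {0: 80.0, 5: 30.0, 15: 9.0, 30: 8.5}   # parameters -> SSE
aics = [aic(n, sse, k) for k, sse in runs.items()]
weights = akaike_weights(aics)
```

With these toy numbers, the 15-parameter run wins almost all of the Akaike weight, mirroring the abstract's finding that a mid-sized parameterization dominates both the uncalibrated and the heavily parameterized models.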


Author(s): Sylvain Proulx, Geneviève Miron, Alexandre Girard, Jean-Sébastien Plante

Polymer-based binary robots and mechatronic devices can lead to simple, robust, and cost-effective solutions for magnetic resonance imaging (MRI)-guided medical procedures. A binary manipulator using 12 elastically averaged air muscles has been proposed for MRI-guided biopsy and brachytherapy procedures used for prostate cancer diagnosis and treatment. In this design, radially distributed air muscles position a needle guide relative to the MRI table. The system constitutes an active compliant mechanism in which the compliance relieves the over-constraint imposed by the redundant parallel architecture. This paper presents experimental results for the repeatability, accuracy, and stiffness of a fully functional manipulator prototype. Results show an experimental repeatability of 0.1 mm for point-to-point manipulation over a workspace diameter of 80 mm. Average manipulator accuracy is 4.7 mm when based on the nominal (uncalibrated) model and improves to 2.1 mm when using a calibrated model. The estimated stiffness at the end-effector is ∼0.95 N/mm, sufficient to withstand needle insertion forces without major deflection. Needle trajectories during state changes appear to be driven primarily by the system's elastic energy gradient. The study shows that the manipulator prototype meets its design criteria and has the potential to become an effective, low-cost manipulator for MRI-guided prostate cancer treatment.
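The distinction between repeatability (scatter of repeated moves about their own mean) and accuracy (offset of that mean from the commanded target), and why model calibration improves only the latter, can be sketched on synthetic data. All numbers below are illustrative, not the prototype's measurements:

```python
import numpy as np

rng = np.random.default_rng(2)
target = np.array([10.0, 5.0])   # commanded needle-guide position, mm

# ten repeated moves to the same binary state: tight scatter plus a
# constant offset that the nominal (uncalibrated) model does not predict
hits = target + np.array([2.0, -1.0]) + rng.normal(0.0, 0.05, (10, 2))
mean_hit = hits.mean(axis=0)

# repeatability: average spread of the hits about their own mean
repeatability = np.linalg.norm(hits - mean_hit, axis=1).mean()

# accuracy under the nominal model: offset of the mean hit from the target
accuracy_nominal = np.linalg.norm(mean_hit - target)

# calibration: estimate the offset from half the trials, score the rest;
# the systematic error shrinks while the scatter (repeatability) does not
offset_est = (hits[:5] - target).mean(axis=0)
accuracy_calibrated = np.linalg.norm(hits[5:].mean(axis=0) - (target + offset_est))
```

This mirrors the abstract's pattern: a repeatability far tighter than the nominal accuracy indicates a systematic, model-correctable error, which is exactly what calibration removes (4.7 mm improving to 2.1 mm above).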

