Model-based dose finding under model uncertainty using general parametric models

2013
Vol 33 (10)
pp. 1646-1661
Author(s):  
José Pinheiro ◽  
Björn Bornkamp ◽  
Ekkehard Glimm ◽  
Frank Bretz


Author(s):
Pavel Mozgunov ◽  
Rochelle Knight ◽  
Helen Barnett ◽  
Thomas Jaki

There is growing interest in Phase I dose-finding studies of several doses of more than one agent given simultaneously. A number of combination dose-finding designs have recently been proposed to guide escalation/de-escalation decisions during such trials. The majority of these proposals are model-based: a parametric combination-toxicity relationship is fitted as data accumulate. Various parametric forms have been considered, but the unifying theme for many of them is that typically between 4 and 6 parameters are to be estimated. While more parameters allow for more flexible modelling of the combination-toxicity relationship, this is a challenging estimation problem given the typically small sample size of between 20 and 60 patients in Phase I trials. These concerns gave rise to an ongoing debate on whether including more parameters in the combination-toxicity model leads to more accurate combination selection. In this work, we extensively study two variants of a 4-parameter logistic model with a reduced number of parameters to investigate the effect of modelling assumptions. A framework to calibrate the prior distributions for a given parametric model is proposed to allow for fair comparisons. Via a comprehensive simulation study, we find that the inclusion of the interaction parameter between the two compounds does not provide any benefit in terms of the accuracy of selection, on average, but does result in fewer patients being allocated to the target combination during the trial.
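For illustration, a minimal sketch of one plausible 4-parameter logistic combination-toxicity model follows; the abstract does not specify the exact parametrisation, so the linear-logit form with a single interaction term, and all names and values, are assumptions.

import numpy as np

# Assumed 4-parameter logistic combination-toxicity model:
# logit P(toxicity | d1, d2) = beta0 + beta1*d1 + beta2*d2 + beta3*d1*d2,
# where beta3 is the interaction parameter discussed in the abstract.
def toxicity_prob(d1, d2, beta0, beta1, beta2, beta3):
    eta = beta0 + beta1 * d1 + beta2 * d2 + beta3 * d1 * d2
    return 1.0 / (1.0 + np.exp(-eta))

# Toy example: toxicity surface over a 3 x 3 grid of standardised dose levels;
# setting beta3 = 0 gives a reduced (no-interaction) variant.
doses = np.linspace(0.2, 1.0, 3)
d1, d2 = np.meshgrid(doses, doses)
print(toxicity_prob(d1, d2, beta0=-3.0, beta1=1.5, beta2=1.2, beta3=0.5))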


2020
Vol 76 (10)
pp. 912-925
Author(s):  
Thomas C. Terwilliger ◽  
Oleg V. Sobolev ◽  
Pavel V. Afonine ◽  
Paul D. Adams ◽  
Randy J. Read

Density modification uses expectations about features of a map such as a flat solvent and expected distributions of density in the region of the macromolecule to improve individual Fourier terms representing the map. This process transfers information from one part of a map to another and can improve the accuracy of a map. Here, the assumptions behind density modification for maps from electron cryomicroscopy are examined and a procedure is presented that allows the incorporation of model-based information. Density modification works best in cases where unfiltered, unmasked maps with clear boundaries between the macromolecule and solvent are available, and where there is substantial noise in the map, both in the region of the macromolecule and the solvent. It is also most effective if the characteristics of the map are relatively constant within regions of the macromolecule and the solvent. Model-based information can be used to improve density modification, but model bias can in principle occur. Here, model bias is reduced by using ensemble models that allow an estimation of model uncertainty. A test of model bias is presented that suggests that even if the expected density in a region of a map is specified incorrectly by using an incorrect model, the incorrect expectations do not strongly affect the final map.
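As a rough illustration of the underlying idea (not the authors' procedure, which additionally weights individual Fourier terms by their estimated accuracy), a one-cycle solvent-flattening sketch in NumPy is shown below; the function name, mixing weight and toy data are assumptions.

import numpy as np

def flatten_solvent(map3d, solvent_mask, weight=0.5):
    # Impose the flat-solvent expectation, then blend the modified Fourier
    # terms with the originals so information is transferred between regions.
    modified = map3d.copy()
    modified[solvent_mask] = map3d[solvent_mask].mean()
    f_orig = np.fft.fftn(map3d)
    f_mod = np.fft.fftn(modified)
    f_new = weight * f_mod + (1.0 - weight) * f_orig
    return np.real(np.fft.ifftn(f_new))

# Toy usage: a noisy random "map" in which the upper half of the box is
# treated as solvent.
rng = np.random.default_rng(0)
toy_map = rng.normal(size=(32, 32, 32))
solvent = np.zeros(toy_map.shape, dtype=bool)
solvent[16:, :, :] = True
improved = flatten_solvent(toy_map, solvent)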


Proceedings
2019
Vol 33 (1)
pp. 16
Author(s):  
Ali Mohammad-Djafari

Signal and image processing have always been among the main tools in many areas, and in particular in medical and biomedical applications. Nowadays there are a great number of toolboxes, both general-purpose and very specialized, in which the classical techniques are implemented and can be used: all the transformation-based methods (Fourier, Wavelets, ...) as well as model-based and iterative regularization methods. Statistical methods have also shown their success in some areas when parametric models are available. Bayesian inference based methods have had great success, in particular when the data are noisy, uncertain, incomplete (missing values) or contain outliers, and where there is a need to quantify uncertainties. In some applications we nowadays have more and more data. To extract more knowledge from these "Big Data", Machine Learning and Artificial Intelligence tools have shown success and have become indispensable. However, even though these methods have shown success in many domains of Machine Learning such as classification and clustering, their use in real scientific problems is limited. The main reasons are twofold: first, the users of these tools cannot explain why they are successful and why they are not; second, in general, these tools cannot quantify the remaining uncertainties. Model-based and Bayesian inference approaches have been very successful in linear inverse problems. However, adjusting the hyperparameters is complex and the computational cost is high. Convolutional Neural Network (CNN) and Deep Learning (DL) tools can be useful for pushing these limits further. On the other side, model-based methods can be helpful for selecting the structure of CNN and DL architectures, which is crucial for ML success. In this work, I first provide an overview and then a survey of the aforementioned methods and explore the possible interactions between them.
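To make the linear inverse problem setting concrete, a minimal sketch of the Gaussian (conjugate) Bayesian treatment is given below; the noise level sigma and prior scale tau stand in for the hyperparameters whose tuning the abstract describes as complex, and all names and values are assumptions.

import numpy as np

def gaussian_posterior(H, y, sigma=0.1, tau=1.0):
    # Posterior mean and covariance of x for y = H x + N(0, sigma^2 I)
    # under the prior x ~ N(0, tau^2 I); the covariance quantifies the
    # remaining uncertainty mentioned in the text.
    n = H.shape[1]
    precision = H.T @ H / sigma**2 + np.eye(n) / tau**2
    cov = np.linalg.inv(precision)
    mean = cov @ (H.T @ y) / sigma**2
    return mean, cov

# Toy usage on a random forward operator
rng = np.random.default_rng(1)
H = rng.normal(size=(50, 20))
x_true = rng.normal(size=20)
y = H @ x_true + 0.1 * rng.normal(size=50)
x_map, x_cov = gaussian_posterior(H, y)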


2017
Vol 117 (3)
pp. 332-339
Author(s):  
Sharon B Love ◽  
Sarah Brown ◽  
Christopher J Weir ◽  
Chris Harbron ◽  
Christina Yap ◽  
...  

Trials
2015
Vol 16 (S2)
Author(s):  
Christina Yap ◽  
Lucinda Billingham ◽  
Charles Craddock ◽  
John O'Quigley

2015
Vol 807
pp. 89-98
Author(s):  
Jan Würtenberger ◽  
Sebastian Gramlich ◽  
Tillmann Freund ◽  
Julian Lotz ◽  
Maximilian Zocholl ◽  
...  

This paper gives an overview of how to locate uncertainty in product modelling within the development process. To this end, the process of product modelling is systematized with the help of characteristics of product models and typical working steps in developing a product model. On that basis, it is possible to distinguish between product modelling uncertainty, mathematical modelling uncertainty, parameter uncertainty, simulation uncertainty and product model uncertainty.


Author(s):  
Sugathevan Suranthiran ◽  
Suhada Jayasuriya

In an attempt to facilitate the design and implementation of memoryless nonlinear sensors, signal reconstruction schemes are analyzed and necessary modifications are proposed to improve accuracy and minimize errors in sensor measurements. The problem of recovering a chirp signal from the distorted nonlinear output is considered and an efficient reconstruction approach is developed. Model uncertainty is a serious issue with any model-based algorithm, and a novel technique is proposed that uses a nominal model instead of an accurate model and produces results that are robust to model uncertainty.
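As a rough illustration of reconstruction with a nominal sensor model (the paper's exact scheme is not given in the abstract, so the cubic nonlinearity, the table-inversion step and all names are assumptions), a NumPy sketch follows.

import numpy as np

def invert_nominal(y, nominal, grid):
    # Invert a monotone memoryless nonlinearity by tabulating the nominal
    # model on an input grid and interpolating sensor outputs back to inputs.
    table = nominal(grid)
    return np.interp(y, table, grid)

# Toy usage: recover a chirp passed through a slightly mismatched cubic sensor.
t = np.linspace(0.0, 1.0, 2000)
x = np.sin(2 * np.pi * (1.0 * t + 0.5 * 19.0 * t**2))  # 1 Hz -> 20 Hz linear chirp
true_sensor = lambda u: u + 0.35 * u**3                 # actual (uncertain) sensor
nominal_sensor = lambda u: u + 0.30 * u**3              # nominal model used for inversion
y = true_sensor(x)
grid = np.linspace(-1.5, 1.5, 1000)
x_hat = invert_nominal(y, nominal_sensor, grid)
print("max reconstruction error:", np.max(np.abs(x_hat - x)))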

