New Approaches for Quantitative Reconstruction of Radiation Dose in Human Blood Cells


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Shanaz A. Ghandhi ◽  
Igor Shuryak ◽  
Shad R. Morton ◽  
Sally A. Amundson ◽  
David J. Brenner

Abstract: In the event of a nuclear attack or large-scale radiation event, there would be an urgent need for assessing the dose to which hundreds or thousands of individuals were exposed. Biodosimetry approaches are being developed to address this need, including transcriptomics. Studies have identified many genes with potential for biodosimetry, but to date most have focused on classification of samples by exposure level rather than on dose reconstruction. We report here a proof-of-principle study applying new methods to select radiation-responsive genes and generate quantitative, rather than categorical, radiation dose reconstructions based on a blood sample. We used a new normalization method to reduce the effects of variability in signal intensity in unirradiated samples across studies; developed a quantitative dose-reconstruction method that is generally under-utilized compared with categorical methods; and combined these to determine a gene set as a reconstructor. Our dose-reconstruction biomarker was trained using two data sets and tested on two independent ones. It was able to reconstruct dose up to 4.5 Gy with a root mean squared error (RMSE) of 0.35 Gy on a test data set using the same platform, and up to 6.0 Gy with an RMSE of 1.74 Gy on a test set using a different platform.
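As a rough illustration of the quantitative (regression-style) reconstruction idea described in the abstract, the sketch below fits a least-squares dose reconstructor on synthetic gene signals and scores it by RMSE on a held-out set. The gene model, sample sizes, and noise levels are invented for illustration and do not reproduce the authors' gene set or normalization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a transcriptomic dose-reconstruction setting:
# rows are samples, columns are hypothetical radiation-responsive genes
# whose signal scales with dose plus noise.
n_train, n_test, n_genes = 40, 20, 5
dose_train = rng.uniform(0, 4.5, n_train)
dose_test = rng.uniform(0, 4.5, n_test)
coef = rng.uniform(0.5, 1.5, n_genes)
X_train = np.outer(dose_train, coef) + rng.normal(0, 0.3, (n_train, n_genes))
X_test = np.outer(dose_test, coef) + rng.normal(0, 0.3, (n_test, n_genes))

# Least-squares fit of dose on gene signals (with intercept).
A = np.column_stack([np.ones(n_train), X_train])
beta, *_ = np.linalg.lstsq(A, dose_train, rcond=None)

# Quantitative reconstruction on the independent test set, scored by RMSE.
pred = np.column_stack([np.ones(n_test), X_test]) @ beta
rmse = np.sqrt(np.mean((pred - dose_test) ** 2))
print(f"test RMSE: {rmse:.2f} Gy")
```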


2016 ◽  
Vol 8 (3) ◽  
pp. 321-339
Author(s):  
R. Pandey ◽  
K. Yadav ◽  
N. S. Thakur

The present paper provides alternative improved Factor-Type (F-T) estimators of the population mean in the presence of item non-response for practitioners. The proposed estimators are shown to be more efficient than four existing estimators, which are themselves more efficient than the usual ratio and mean estimators. Optimum conditions for minimum mean squared error are obtained for the new estimators. Empirical comparisons based on three different data sets establish that the proposed estimators record the least mean squared error, and hence a substantial gain in Percentage Relative Efficiency (P.R.E.), over these five contemporary estimators.
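The efficiency gain that auxiliary information can deliver over the plain sample mean can be illustrated with the classical ratio estimator, a simpler relative of the Factor-Type estimators above; the finite population below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite population with a positively correlated auxiliary variable x
# (illustrative data, not the paper's Factor-Type estimators).
N, n, reps = 1000, 50, 5000
x = rng.uniform(10, 20, N)
y = 2.0 * x + rng.normal(0, 2.0, N)
Y_bar, X_bar = y.mean(), x.mean()

se_mean, se_ratio = 0.0, 0.0
for _ in range(reps):
    idx = rng.choice(N, n, replace=False)
    yb, xb = y[idx].mean(), x[idx].mean()
    se_mean += (yb - Y_bar) ** 2
    se_ratio += (yb * X_bar / xb - Y_bar) ** 2   # classical ratio estimator

print(f"MSE(mean):  {se_mean / reps:.4f}")
print(f"MSE(ratio): {se_ratio / reps:.4f}")  # smaller when corr(x, y) is high
```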


2020 ◽  
Author(s):  
Ashkan Esmaeili ◽  
Mohammadamin Fakharian ◽  
Yasaman Amiri Abyaneh

The linear system with missing information is investigated in this paper. New methods are introduced to improve the Mean Squared Error (MSE) on the test set in comparison to state-of-the-art methods, through appropriate tuning of the bias-variance trade-off. The concept is to cluster the data and adapt the learning model to each cluster. Hence, we set forth a controlled bias in the problem and utilize it to enhance learning capability on the instances in a specific neighborhood. To deal with missing information, we propose a novel algorithm, "Missing-SCOP", based on the SCOP-KMEANS algorithm introduced by Wagstaff et al., which utilizes the missing pattern of the dataset to construct a soft-constraint matrix for clustering under missingness. It is shown that the controlled over-fitting introduced by our algorithm improves prediction accuracy in various cases; numerical experiments confirm its efficacy.
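The soft-constraint idea can be sketched as follows: each pair of rows receives a constraint weight based on how similar their missing patterns are. Here that weight is the Jaccard similarity of the missingness masks; this is an illustrative stand-in, not the Missing-SCOP construction itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data with missing entries (NaN): derive a soft-constraint matrix
# from how similar two rows' missing patterns are.
X = rng.normal(size=(6, 4))
X[rng.random(X.shape) < 0.3] = np.nan

mask = np.isnan(X)                      # True where a value is missing
n = X.shape[0]
S = np.zeros((n, n))                    # soft-constraint matrix in [0, 1]
for i in range(n):
    for j in range(n):
        union = (mask[i] | mask[j]).sum()
        # Jaccard similarity of the two missingness masks; two fully
        # observed rows are treated as having identical patterns.
        S[i, j] = 1.0 if union == 0 else (mask[i] & mask[j]).sum() / union

print(np.round(S, 2))
```

A constrained k-means variant can then treat large entries of `S` as soft must-link preferences when assigning rows to clusters.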


2017 ◽  
Author(s):  
Simone Lederer ◽  
Tjeerd M. H. Dijkstra ◽  
Tom Heskes

Abstract: High-throughput techniques allow for massive screening of drug combinations. To find combinations that exhibit an interaction effect, one filters for promising compound combinations by comparing to a response without interaction. A common principle for no interaction is Loewe Additivity, which is based on the assumptions that no compound interacts with itself and that doses of both compounds yielding a given effect are equivalent. For the model to be consistent, the doses of both compounds have to be proportional. We call this restriction the Loewe Additivity Consistency Condition (LACC). We derive explicit and implicit null reference models from the Loewe Additivity principle that are equivalent when the LACC holds. Of these two formulations, the implicit formulation is the known General Isobole Equation [1], whereas the explicit one is the novel contribution. The LACC is violated in a significant number of cases; in this scenario the models make different predictions. We analyze two non-interactive drug-screening data sets [2, 3] and show that the LACC is mostly violated and Loewe Additivity is then not defined. Further, we compare the measurements of the non-interactive cases of both data sets to the theoretical null reference models in terms of bias and mean squared error. We demonstrate that the explicit formulation of the null reference model leads to smaller mean squared errors than the implicit one and is much faster to compute.
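A minimal sketch of the implicit null reference, the General Isobole Equation: given single-compound Hill curves, the null effect at a dose pair (d1, d2) solves d1/D1(e) + d2/D2(e) = 1, found here by bisection. The Hill parameters are illustrative; with equal slopes the LACC holds.

```python
import numpy as np

# Hill dose-response curves for two single compounds (illustrative
# parameters): effect falls from 1 to 0 with increasing dose.
def hill(d, ec50, n):
    return 1.0 / (1.0 + (d / ec50) ** n)

def hill_inv(e, ec50, n):
    # Dose producing effect e for a single compound.
    return ec50 * ((1.0 - e) / e) ** (1.0 / n)

EC1, N1 = 1.0, 1.5   # compound 1
EC2, N2 = 2.0, 1.5   # compound 2 (equal slopes => LACC holds)

def isobole_effect(d1, d2, lo=1e-9, hi=1 - 1e-9, iters=80):
    """Implicit Loewe null effect: solve d1/D1(e) + d2/D2(e) = 1 for e."""
    f = lambda e: d1 / hill_inv(e, EC1, N1) + d2 / hill_inv(e, EC2, N2) - 1.0
    for _ in range(iters):             # bisection on the monotone residual
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Sanity check: with the second dose at zero, the null surface must
# reduce to compound 1's own dose-response curve.
e = isobole_effect(0.5, 0.0)
print(round(e, 4), round(hill(0.5, EC1, N1), 4))
```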


Author(s):  
Sofi Mudasir Ahad ◽  
Sheikh Parvaiz Ahmad ◽  
Sheikh Aasimeh Rehman

In this paper, Bayesian and non-Bayesian methods are used for parameter estimation of the weighted Rayleigh (WR) distribution. Posterior distributions are derived under the assumption of informative and non-informative priors. The Bayes estimators and associated risks are obtained under different symmetric and asymmetric loss functions. Results are compared on the basis of posterior risk and mean squared error using simulated and real-life data sets. The study indicates that, for estimating the scale parameter of the weighted Rayleigh distribution, the entropy loss function under a Gumbel type-II prior is preferable. Moreover, the Bayesian method of estimation, having the smallest mean squared error, gives better results than maximum likelihood estimation.
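The kind of MSE comparison described above can be sketched for the plain (unweighted) Rayleigh model, where an inverse-gamma prior on sigma^2 is conjugate. The prior below is illustrative and centered at the true value, so this is only the pattern of the study, not the paper's result for the weighted model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo comparison of maximum-likelihood and Bayes estimation of
# theta = sigma^2 for a plain Rayleigh model (simplified stand-in for
# the weighted Rayleigh; the IG(a, b) prior has mean b/(a-1) = 4).
sigma, n, reps = 2.0, 30, 4000
theta = sigma ** 2
a, b = 3.0, 8.0

mse_mle = mse_bayes = 0.0
for _ in range(reps):
    x = rng.rayleigh(scale=sigma, size=n)
    s = (x ** 2).sum() / 2.0
    mle = s / n                        # MLE of theta
    bayes = (b + s) / (a + n - 1.0)    # posterior mean under IG(a+n, b+s)
    mse_mle += (mle - theta) ** 2
    mse_bayes += (bayes - theta) ** 2

print(f"MSE MLE:   {mse_mle / reps:.3f}")
print(f"MSE Bayes: {mse_bayes / reps:.3f}")
```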


2018 ◽  
Vol 44 (1) ◽  
pp. 25-44
Author(s):  
Sandip Sinharay

The value-added method of Haberman is arguably one of the most popular methods to evaluate the quality of subscores. According to the method, a subscore has added value if the reliability of the subscore is larger than a quantity referred to as the proportional reduction in mean squared error of the total score. This article shows how well-known statistical tests can be used to determine the added value of subscores and augmented subscores. The usefulness of the suggested tests is demonstrated using two operational data sets.
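Haberman's criterion can be illustrated by simulation: a subscore has added value when the true subscore is predicted better from the observed subscore than from the observed total, where "better" is measured by the PRMSE of the best linear predictor, i.e. the squared correlation. The score model below is synthetic, not an operational data set.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic true and observed scores: one subscore plus the rest of
# the test, with weakly correlated true parts and noisy observations.
n = 20000
true_sub = rng.normal(0, 1, n)
true_rest = 0.3 * true_sub + rng.normal(0, 1, n)
obs_sub = true_sub + rng.normal(0, 0.7, n)
obs_total = obs_sub + true_rest + rng.normal(0, 0.7, n)

def prmse(true, pred):
    # PRMSE of the best linear predictor = squared correlation.
    return np.corrcoef(true, pred)[0, 1] ** 2

p_sub, p_tot = prmse(true_sub, obs_sub), prmse(true_sub, obs_total)
print(f"PRMSE(subscore): {p_sub:.3f}  PRMSE(total): {p_tot:.3f}")
print("subscore has added value" if p_sub > p_tot else "no added value")
```

With a strongly correlated remainder (e.g. replacing 0.3 by 0.8 above), the total can predict the true subscore better, and the subscore loses its added value.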


2018 ◽  
Vol 3 (1) ◽  
pp. 24-32
Author(s):  
Muhammad Ali ◽  
Muhammad Khalil ◽  
Muhammad Hanif ◽  
Nasir Jamal ◽  
Usman Shahzad

In this research study, modified family of estimators is proposed to estimate the population variance of the study variable when the population variance, quartiles, median and the coefficient of correlation of auxiliary variable are known. The expression of bias and mean squared error (MSE) of the proposed estimator are derived. Comparisons of the proposed estimator with the other existing are conducted estimators. The results obtained were illustrated numerically by using primary data sets. Theoretical and numerical justification of the proposed estimator was done to show its dominance.


Author(s):  
HENRIK BOSTRÖM

Probability estimation trees (PETs) generalize classification trees in that they assign class probability distributions instead of class labels to examples that are to be classified. This property has been demonstrated to allow PETs to outperform classification trees with respect to ranking performance, as measured by the area under the ROC curve (AUC). It has further been shown that the use of probability correction improves the performance of PETs. This has led to the use of probability correction also in forests of PETs. However, it was recently observed that probability correction may in fact deteriorate the performance of forests of PETs. A more detailed study of the phenomenon is presented and the reasons behind this observation are analyzed. An empirical investigation is presented, comparing forests of classification trees to forests of both corrected and uncorrected PETs on 34 data sets from the UCI repository. The experiment shows that a small forest (10 trees) of probability-corrected PETs gives a higher AUC than a similar-sized forest of classification trees, hence providing evidence in favor of using forests of probability-corrected PETs. However, the picture changes when increasing the forest size, as the AUC is no longer improved by probability correction. For accuracy and squared error of predicted class probabilities (Brier score), probability correction even leads to a negative effect. An analysis of the mean squared error of the trees in the forests and their variance shows that although probability correction results in trees that are more correct on average, the variance is reduced at the same time, leading to an overall loss of performance for larger forests. The main conclusions are that probability correction should only be employed in small forests of PETs, and that for larger forests, classification trees and PETs are equally good alternatives.
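The "probability correction" referred to above is commonly the Laplace correction of the class frequencies at a tree leaf; a minimal sketch (the leaf counts are hypothetical):

```python
# Class-probability estimate at a tree leaf, with and without the
# Laplace correction (add 1 to each class count).
def leaf_probs(counts, correction=True):
    k = len(counts)
    total = sum(counts)
    if correction:
        return [(c + 1) / (total + k) for c in counts]  # Laplace estimate
    return [c / total if total else 1 / k for c in counts]

print(leaf_probs([9, 1]))          # corrected: pulled toward uniform
print(leaf_probs([9, 1], False))   # raw relative frequencies
```

The correction pulls extreme leaf estimates (such as 0 or 1) toward the uniform distribution, which helps single trees and small forests but, as the study above shows, reduces tree variance and can hurt larger forests.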


Author(s):  
S. K. Yadav ◽  
Dinesh Sharma ◽  
Julius Alade

Introduction: Variation is an inherent phenomenon, whether in natural or man-made things, so it is important to estimate this variation. Various authors have worked on improved estimation of the population variance utilizing known auxiliary parameters for better policy making. Methods: In this article, a new Searls ratio-type class of estimators is suggested for improved estimation of the population variance of the main variable. As the suggested estimator is biased, its bias and mean squared error (MSE) are derived up to a first-order approximation. The optimum values of the Searls characterizing scalars are obtained, and the minimum MSE of the introduced estimator is attained at these optima. A theoretical comparison between the suggested estimator and the competing estimators is made through their mean squared errors, and the efficiency conditions of the suggested estimator over the competing estimators are obtained. These theoretical conditions are verified using natural data sets. R code computing the biases and MSEs of the suggested and competing estimators is developed and applied to three natural populations from Naz et al. (2019); the empirical study was carried out in R. Results: The MSEs of the competing and suggested estimators are obtained for three natural populations, and the estimator with the least MSE is recommended for practical applications. Discussion: The aim of more efficient estimation is fulfilled through proper use of auxiliary parameters obtained from the known auxiliary variable; the suggested estimator may be used for improved estimation of the population variance. Conclusion: The introduced estimator has the least MSE compared with the competing estimators of the population variance for all three natural populations. Thus it may be recommended for application in various fields.
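A textbook special case of the Searls idea: for normal data, the minimum-MSE multiple of the sample variance s^2 is (n-1)/(n+1) * s^2, i.e. the sum of squares divided by n+1. The simulation below checks this by Monte Carlo; unlike the proposed class, it uses no auxiliary information.

```python
import numpy as np

rng = np.random.default_rng(5)

# Searls-type shrinkage of the sample variance for normal data:
# lambda* = (n-1)/(n+1) minimizes MSE among multiples of s^2.
n, sigma2, reps = 20, 4.0, 20000
lam = (n - 1) / (n + 1)

mse_s2 = mse_searls = 0.0
for _ in range(reps):
    x = rng.normal(0, np.sqrt(sigma2), n)
    s2 = x.var(ddof=1)                   # unbiased sample variance
    mse_s2 += (s2 - sigma2) ** 2
    mse_searls += (lam * s2 - sigma2) ** 2

print(f"MSE(s^2):    {mse_s2 / reps:.3f}")
print(f"MSE(Searls): {mse_searls / reps:.3f}")
```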


Author(s):  
A. Usman ◽  
S. I. S. Doguwa ◽  
B. B. Alhaji ◽  
A. T. Imam

We introduce a new generalized Weibull-Odd Fréchet family of distributions with three extra parameters and derive some of its structural properties, including moments, the moment generating function, entropies, and order statistics. One member of this family, the new generalized Weibull-Odd Fréchet-Fréchet distribution, is used to fit two data sets using the MLE procedure. A Monte Carlo simulation is used to test the robustness of the parameter estimates of this distribution in terms of bias and mean squared error. The results of fitting this new distribution to two different data sets suggest that it outperforms its competitors.
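The bias/MSE Monte Carlo described above follows a standard pattern; here is a sketch for the shape parameter of a unit-scale Fréchet distribution, a one-parameter stand-in for (not a reproduction of) the paper's three-extra-parameter family.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)

# Monte Carlo check of bias and MSE of the MLE of the Fréchet shape
# parameter alpha, with the scale fixed at 1.
alpha, n, reps = 2.0, 100, 500

def sample_frechet(a, size):
    # Inverse transform: F(x) = exp(-x^(-a))  =>  x = (-ln U)^(-1/a)
    return (-np.log(rng.uniform(size=size))) ** (-1.0 / a)

def mle_shape(x):
    # Negative log-likelihood of unit-scale Fréchet, minimized over a:
    # ln f(x; a) = ln a - (a + 1) ln x - x^(-a)
    nll = lambda a: -(len(x) * np.log(a)
                      - (a + 1) * np.log(x).sum()
                      - (x ** -a).sum())
    return minimize_scalar(nll, bounds=(0.1, 20), method="bounded").x

est = np.array([mle_shape(sample_frechet(alpha, n)) for _ in range(reps)])
print(f"bias: {est.mean() - alpha:+.3f}  MSE: {((est - alpha) ** 2).mean():.3f}")
```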

