Predictive model in the presence of missing data: the centroid criterion for variable selection

2018 ◽  
Author(s):  
Jean Gaudart ◽  
Pascal Adalian ◽  
George Leonetti

Introduction: In many studies, covariates are not fully observed because of the missing-data process. Usually, subjects with missing data are excluded from the analysis, but when many subjects are removed the number of covariates can exceed the sample size. Subjective selection or imputation procedures are used instead, but these lead to biased or underpowered models. The aim of our study was to develop a method based on selecting the covariate nearest to the centroid of a homogeneous cluster of covariates. We applied this method to a forensic medicine data set to estimate the age of aborted fetuses.

Methods: We measured 46 biometric covariates on 50 aborted fetuses, but the covariates were complete for only 18 fetuses. First, to obtain homogeneous clusters of covariates, we used hierarchical cluster analysis. Second, for each cluster we selected the covariate nearest to the centroid of the cluster, maximizing the sum of correlations (the centroid criterion). Third, with the covariates selected this way, the sample size was sufficient to fit a classical linear regression model. We proved the almost sure convergence of the centroid criterion, and simulations were performed to build its empirical distribution. We compared our method to a subjective deletion method, two simple imputation methods, and the multiple imputation method.

Results: The hierarchical cluster analysis produced 2 clusters of covariates and 6 remaining covariates. After selecting the covariate nearest to the centroid of each cluster, we fitted a stepwise linear regression model. The model was adequate (R² = 90.02%) and cross-validation showed low prediction errors (2.23 × 10⁻³). The empirical distribution of the criterion yielded an empirical mean (31.91) and median (32.07) close to the theoretical value (32.03). The comparisons showed that the deletion and simple imputation methods produced models of lower quality than the multiple imputation method and the centroid method.

Conclusion: When the number of continuous covariates exceeds the sample size because of the missingness process, the usual procedures are biased. Our selection procedure based on the centroid criterion is a valid alternative for composing a set of predictors.
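The selection step described above — cluster the covariates, then keep the one nearest each cluster's centroid by maximizing the sum of correlations — can be sketched as follows. This is a minimal illustration on toy data; the variable names, the two-block correlation structure, and the correlation-based distance are assumptions, not the paper's dataset or exact pipeline.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Toy data: 30 subjects x 8 covariates, built as two correlated blocks.
base1 = rng.normal(size=(30, 1))
base2 = rng.normal(size=(30, 1))
X = np.hstack([base1 + 0.1 * rng.normal(size=(30, 4)),
               base2 + 0.1 * rng.normal(size=(30, 4))])

# 1) Hierarchical clustering of covariates on a correlation-based distance.
corr = np.corrcoef(X, rowvar=False)
dist = 1 - np.abs(corr)
Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
labels = fcluster(Z, t=2, criterion="maxclust")

# 2) Centroid criterion: in each cluster keep the covariate that
#    maximizes the sum of (absolute) correlations with its cluster mates.
selected = []
for c in np.unique(labels):
    idx = np.where(labels == c)[0]
    sums = np.abs(corr[np.ix_(idx, idx)]).sum(axis=1)
    selected.append(int(idx[np.argmax(sums)]))
print("selected covariates:", selected)
```

With one representative per cluster, the number of predictors drops below the complete-case sample size and an ordinary regression can be fitted.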

Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5947
Author(s):  
Liang Zhang

Building operation data are important for the monitoring, analysis, modeling, and control of building energy systems. However, missing data are one of the major data quality issues, making data imputation techniques increasingly important. There are two key research gaps for missing sensor data imputation in buildings: the lack of a customized and automated imputation methodology, and the difficulty of validating data imputation methods. In this paper, a framework is developed to address these two gaps. First, a validation data generation module is developed based on pattern recognition to create a validation dataset for quantifying the performance of data imputation methods. Second, a pool of data imputation methods is tested on the validation dataset to find an optimal single imputation method for each sensor; together these per-sensor choices are termed an ensemble method. The method can reflect the specific mechanism and randomness of missing data for each sensor. The effectiveness of the framework is demonstrated on 18 sensors from a real campus building. The overall accuracy of data imputation for those sensors improves by 18.2% on average compared with the best single data imputation method.
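The per-sensor selection idea — hide known values, score each candidate method on them, and keep the winner for that sensor — can be sketched as below. The two candidate imputers and the synthetic sine-wave "sensor" are illustrative stand-ins, not the framework's actual method pool or building data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Candidate single imputation methods (illustrative pool).
def impute_mean(x):
    out = x.copy()
    out[np.isnan(out)] = np.nanmean(x)
    return out

def impute_linear(x):
    out = x.copy()
    idx = np.arange(len(x))
    miss = np.isnan(x)
    out[miss] = np.interp(idx[miss], idx[~miss], x[~miss])
    return out

methods = {"mean": impute_mean, "linear_interp": impute_linear}

def best_method_for_sensor(series, mask):
    """Pick the method with the lowest RMSE on a validation mask of
    artificially hidden (but known) values."""
    hidden = series.copy()
    hidden[mask] = np.nan
    scores = {}
    for name, fn in methods.items():
        filled = fn(hidden)
        scores[name] = np.sqrt(np.mean((filled[mask] - series[mask]) ** 2))
    return min(scores, key=scores.get), scores

# A smooth sensor signal: interpolation should beat the global mean.
t = np.linspace(0, 4 * np.pi, 200)
signal = np.sin(t) + 0.05 * rng.normal(size=t.size)
mask = rng.random(t.size) < 0.1
name, scores = best_method_for_sensor(signal, mask)
print(name, scores)
```

Running this per sensor yields one winning method per stream, mirroring the ensemble idea of matching the imputer to each sensor's missingness pattern.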


RMD Open ◽  
2021 ◽  
Vol 7 (2) ◽  
pp. e001708
Author(s):  
Nasim A Khan ◽  
Karina D Torralba ◽  
Fawad Aslam

Objectives: To analyse the amount, reporting and handling of missing data, the approach to applying the intention-to-treat (ITT) principle, and the use of sensitivity analyses in randomised clinical trials (RCTs) of rheumatoid arthritis (RA), and to assess the trend in such reporting 10 years apart (2006 and 2016).

Methods: Parallel-group drug therapy RA RCTs with a clinical primary endpoint.

Results: 176 studies enrolling a median of 160 (IQR 62–339) patients were eligible. In terms of the actual analysis, 81 (46%) RCTs conducted ITT, 42 (23.9%) conducted modified ITT, and 53 (30.1%) conducted non-ITT analyses. Only 58 of 97 (59.8%) RCTs reporting an ITT analysis actually performed it. The median (IQR) proportions of participants completing the trial and included in the primary outcome analysis were 86% (74%–91%) and 100% (97.1%–100%), respectively. 53 (32.7%) and 65 (40.1%) RCTs had >20% and 10%–20% missing primary outcome data, respectively. Missing data handling was unreported by 58 of 171 (33.9%) RCTs; when reported, the vast majority used simple imputation methods. No significant trend towards improved reporting was seen between 2006 and 2016. Sensitivity analysis use numerically improved from 2006 to 2016 (14.7% vs 21.4%).

Conclusions: There is a significant discrepancy between the reported and the actually performed analyses in RA drug therapy RCTs. Nearly one third of RCTs had >20% missing data. The reporting and methods of missing data handling remain inadequate, with high usage of non-preferred simple imputation methods. Sensitivity analysis utilisation was low. No trend towards better missing data reporting and handling was seen.


2019 ◽  
Vol 6 (339) ◽  
pp. 73-98
Author(s):  
Małgorzata Aleksandra Misztal

The problem of incomplete data and its implications for drawing valid conclusions from statistical analyses is not tied to any particular scientific domain; it arises in economics, sociology, education, the behavioural sciences and medicine. Almost all standard statistical methods presume that every object has information on every variable to be included in the analysis, and the typical approach to missing data is simply to delete them. However, this leads to ineffective and biased analysis results and is not recommended in the literature. The state-of-the-art technique for handling missing data is multiple imputation. In this paper, selected multiple imputation methods were taken into account, with special attention paid to using principal component analysis (PCA) as an imputation method. The goal of the study was to assess the quality of PCA-based imputations compared with two other multiple imputation techniques: multivariate imputation by chained equations (MICE) and missForest. The comparison was made by artificially simulating different proportions (10–50%) and mechanisms of missing data using 10 complete data sets from the UCI repository of machine learning databases. Missing values were then imputed with MICE, missForest and the PCA-based method (MIPCA). The normalised root mean square error (NRMSE) was calculated as a measure of imputation accuracy. On the basis of the conducted analyses, missForest can be recommended as the multiple imputation method providing the lowest imputation error rates for all types of missingness. PCA-based imputation does not perform well in terms of accuracy.
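The accuracy measure used in the comparison, NRMSE, can be computed as in this sketch. Normalising by the standard deviation of the true values is one common convention, assumed here since the abstract does not state the exact formula; column-mean imputation serves as a stand-in baseline to score.

```python
import numpy as np

def nrmse(imputed, truth, mask):
    """Normalised RMSE over the imputed entries, normalised by the
    standard deviation of the true values at those entries
    (one common convention; an assumption, not the paper's formula)."""
    diff = imputed[mask] - truth[mask]
    return np.sqrt(np.mean(diff ** 2)) / np.std(truth[mask])

rng = np.random.default_rng(2)
truth = rng.normal(10.0, 2.0, size=(100, 5))
mask = rng.random(truth.shape) < 0.2        # 20% missing completely at random

# Column-mean imputation as a simple baseline.
with_na = truth.copy()
with_na[mask] = np.nan
col_means = np.nanmean(with_na, axis=0)
imputed = np.where(mask, col_means, with_na)

score = nrmse(imputed, truth, mask)
print(round(score, 3))
```

Under this normalisation, mean imputation of uncorrelated noise scores near 1, so methods that exploit structure (MICE, missForest, MIPCA) should land well below that.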


2004 ◽  
pp. 109-128 ◽  
Author(s):  
D. Strebkov

The paper analyses citizens' credit behavior based on sociological survey data collected in autumn 2002 in large Russian regional centers. The method of principal components is used to define the main factors characterizing attitudes to money, savings, debt and credit. The paper examines the influence of gender, age, education and other socio-demographic variables on the inclination to borrow. Hierarchical cluster analysis is used to define six groups (clusters) of people, each with its own specific attitude to saving and borrowing. The differences between the groups in the basic components of credit behavior (borrowing experience, choice of creditor, preferences regarding credit amount and repayment period, acceptable interest rate) are considered.


2021 ◽  
pp. 188-196 ◽  
Author(s):  
Lauren C. Benson ◽  
Carlyn Stilling ◽  
Oluwatoyosi B.A. Owoeye ◽  
Carolyn A. Emery

Missing data can influence calculations of accumulated athlete workload. The objectives were to identify the best single imputation methods and examine workload trends using multiple imputation. External (jumps per hour) and internal (session rating of perceived exertion; sRPE) workload were recorded for 93 (45 female, 48 male) high school basketball players throughout a season. Recorded data were simulated as missing and imputed using ten imputation methods based on the context of the individual, the team and the session. Both single imputation and machine learning methods were used to impute the simulated missing data. The difference between the imputed data and the actual workload values was computed as root mean squared error (RMSE). A generalized estimating equation determined the effect of imputation method on RMSE. Multiple imputation of the original dataset, with all known and actual missing workload data, was used to examine trends in longitudinal workload data. Following multiple imputation, a Pearson correlation evaluated the longitudinal association between jump count and sRPE over the season. A single imputation method based on the specific context of the session for which data are missing (team mean) was outperformed only by methods that combine information about the session and the individual (machine learning models). There was a significant and strong association between jump count and sRPE in the original data and in the datasets imputed using multiple imputation. The amount and nature of the missing data should be considered when choosing a method for single imputation of workload data in youth basketball. Multiple imputation using several predictor variables in a regression model can be used for analyses where workload is accumulated across an entire season.
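A team-mean imputation of the kind evaluated here — filling a missing session value from teammates' values in the same session — can be contrasted with a player-mean fill on toy data. The data-generating assumptions (session load shared across the team, small individual offsets) are illustrative, not the study's dataset; under them the session context carries most of the signal.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy session-by-player workload matrix (sessions x players).
n_sessions, n_players = 40, 10
session_effect = rng.normal(50, 15, size=(n_sessions, 1))   # shared session load
player_effect = rng.normal(0, 3, size=(1, n_players))       # individual offset
workload = session_effect + player_effect + rng.normal(0, 2, (n_sessions, n_players))

mask = rng.random(workload.shape) < 0.15                    # simulate missingness
observed = np.where(mask, np.nan, workload)

# Team-mean imputation: fill from the same session's other players.
team_mean = np.where(mask, np.nanmean(observed, axis=1, keepdims=True), observed)
# Player-mean imputation: fill from the same player's other sessions.
player_mean = np.where(mask, np.nanmean(observed, axis=0, keepdims=True), observed)

def rmse(imp):
    return np.sqrt(np.mean((imp[mask] - workload[mask]) ** 2))

print("team mean RMSE:  ", round(rmse(team_mean), 2))
print("player mean RMSE:", round(rmse(player_mean), 2))
```

Methods that combine both contexts, as the machine learning models in the study do, can in principle do better still by also modelling the individual offset.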


2018 ◽  
Vol 55 (2) ◽  
pp. 97-121 ◽  
Author(s):  
Anderson Cristiano Neisse ◽  
Jhessica Letícia Kirch ◽  
Kuang Hongyu

Summary: The presence of genotype-environment interaction (GEI) influences production, making the selection of cultivars a complex process. The two methods most used to analyze GEI and evaluate genotypes are AMMI and GGE Biplot, applied to multi-environment trial (MET) data. Despite their different approaches, the two models complement each other and strengthen decision making. However, both are based on biplots; consequently, biplot-based interpretation does not scale well beyond two-dimensional plots, which matters whenever the first two components do not capture enough variation. This paper proposes an approach to such cases based on cluster analysis combined with the concept of medoids. It also applies AMMI and GGE Biplot to the adjusted data in order to compare both models. The data are provided by the International Maize and Wheat Improvement Center (CIMMYT) and come from the 14th Semi-Arid Wheat Yield Trial (SAWYT), an experiment concerning 50 genotypes of spring bread wheat (Triticum aestivum) germplasm adapted to low rainfall, performed in 36 environments across 14 countries. The analysis produced 25 genotype clusters and 6 environment clusters. Both models were equivalent for evaluating the data, permitting increased reliability in the selection of superior cultivars and test environments.
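The medoid used in the proposed cluster-based approach is the cluster member minimizing total distance to the other members; unlike a centroid, it is always an actual observation (here, a real genotype or environment). A generic sketch, not tied to the SAWYT data:

```python
import numpy as np

def medoid(points):
    """Index of the medoid: the member minimizing the summed
    Euclidean distance to all members of the cluster
    (a generic illustration, not CIMMYT's actual pipeline)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return int(np.argmin(d.sum(axis=1)))

# Three tight points and one outlier: the outlier cannot be the medoid.
cluster = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
print(medoid(cluster))
```

Representing each cluster by its medoid keeps the follow-up AMMI/GGE comparison anchored to observed genotypes and environments rather than artificial averages.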


2013 ◽  
Vol 11 (7) ◽  
pp. 2779-2786
Author(s):  
Rahul Singhai

One relevant problem in data preprocessing is the presence of missing data, which leads to poor-quality patterns being extracted after mining. Imputation is one of the widely used procedures that replace the missing values in a data set with probable values. The advantage of this approach is that the missing data treatment is independent of the learning algorithm used, allowing the user to select the most suitable imputation method for each situation. This paper analyzes various imputation methods proposed in the field of statistics with respect to data mining. A comparative analysis of three different imputation approaches for missing attribute values in data mining is given, identifying the most promising method. An artificial input data file (of numeric type) of 1000 records is used to investigate the performance of these methods. A Z-test approach was used to test the significance of these methods.


Mathematics ◽  
2021 ◽  
Vol 9 (24) ◽  
pp. 3252
Author(s):  
Encarnación Álvarez-Verdejo ◽  
Pablo J. Moya-Fernández ◽  
Juan F. Muñoz-Rosas

Missing data are a common problem in any study, and a single imputation method is often applied to deal with them. The first contribution of this paper is to analyse the empirical performance of some traditional single imputation methods when applied to the estimation of the Gini index, a popular measure of inequality used in many studies. Various methods for constructing confidence intervals for the Gini index are also empirically evaluated. We consider several empirical measures to analyse the performance of estimators and confidence intervals, allowing us to quantify the magnitude of the non-response bias problem. We find extremely large biases under certain non-response mechanisms, and this problem gets noticeably worse as the proportion of missing data increases. For a large correlation coefficient between the target and auxiliary variables, the regression imputation method may notably mitigate this bias problem, yielding appropriate mean square errors. We also find that confidence intervals have poor coverage rates when the probability of data being missing is not uniform, and that the regression imputation method substantially improves the handling of this problem as the correlation coefficient increases.
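The contrast reported here — complete-case analysis biasing the Gini index under non-uniform non-response, while regression imputation from a correlated auxiliary variable mitigates it — can be reproduced on synthetic data. The income model, the missingness mechanism, and the `gini` helper below are all assumptions for illustration, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(4)

def gini(y):
    """Gini index via the sorted-values formulation
    G = sum((2i - n - 1) * y_(i)) / (n^2 * mean(y))."""
    y = np.sort(np.asarray(y, dtype=float))
    n = y.size
    return np.sum((2 * np.arange(1, n + 1) - n - 1) * y) / (n * n * y.mean())

# Income y correlates with an always-observed auxiliary variable x,
# and higher incomes are more likely to be missing (non-uniform mechanism).
n = 5000
x = rng.lognormal(mean=0.0, sigma=0.5, size=n)
y = 2.0 * x + rng.normal(0, 0.3, size=n)
miss = rng.random(n) < np.clip(y / y.max(), 0.05, 0.9)

true_gini = gini(y)
complete_case = gini(y[~miss])               # biased: drops high incomes

# Regression imputation from the auxiliary variable.
b, a = np.polyfit(x[~miss], y[~miss], 1)     # slope, intercept
y_imp = y.copy()
y_imp[miss] = a + b * x[miss]
reg_gini = gini(y_imp)

print(f"true {true_gini:.3f}  complete-case {complete_case:.3f}  regression {reg_gini:.3f}")
```

Because the non-response preferentially removes high incomes, the complete-case estimate understates inequality, while the regression fill recovers most of the upper tail from x.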


2020 ◽  
Vol 29 (9) ◽  
pp. 2647-2664
Author(s):  
Lili Yu ◽  
Liang Liu ◽  
Karl E Peace

Iterative multiple imputation is a popular technique for missing data analysis. It updates the parameter estimators iteratively using the multiple imputation method, and is convenient and flexible. However, the parameter estimators do not converge point-wise and are not efficient for a finite imputation size m. In this paper, we propose a regression multiple imputation method. It uses the parameter estimators obtained from the multiple imputation method to estimate the parameter estimators based on the expectation-maximization algorithm. We show that the resulting estimators are asymptotically efficient and converge point-wise for small m values as the iteration count k of the iterative multiple imputation goes to infinity. We evaluate the performance of the newly proposed method through simulation studies. A real data analysis is also conducted to illustrate the new method.
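For context, the mechanics of ordinary (non-iterative) multiple imputation with Rubin's pooling rules — the baseline these iterative schemes build on — look like this. The hot-deck draws and the mean as target parameter are illustrative choices, not the paper's regression multiple imputation estimator.

```python
import numpy as np

rng = np.random.default_rng(5)

# Data with 30% of values missing completely at random.
y = rng.normal(10, 2, size=200)
miss = rng.random(y.size) < 0.3
obs = y[~miss]

m = 20                                       # imputation size
est, var = [], []
for _ in range(m):
    filled = y.copy()
    # Impute by drawing from the observed values (hot-deck style draws).
    filled[miss] = rng.choice(obs, size=miss.sum(), replace=True)
    est.append(filled.mean())
    var.append(filled.var(ddof=1) / filled.size)

q_bar = np.mean(est)                         # pooled point estimate
u_bar = np.mean(var)                         # within-imputation variance
b = np.var(est, ddof=1)                      # between-imputation variance
total_var = u_bar + (1 + 1 / m) * b          # Rubin's total variance
print(round(q_bar, 2), round(np.sqrt(total_var), 3))
```

The (1 + 1/m) factor is exactly the finite-m penalty the paper targets: the pooled estimator is inefficient for small m, which motivates estimating the EM-algorithm limit instead of pooling directly.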


2012 ◽  
Vol 68 (6) ◽  
pp. 649-658 ◽  
Author(s):  
Rita Giordano ◽  
Ricardo M. F. Leal ◽  
Gleb P. Bourenkov ◽  
Sean McSweeney ◽  
Alexander N. Popov
