A resampling procedure for nonparametric combination of several dependent tests

1992 ◽  
Vol 1 (1) ◽  
pp. 87-101 ◽  
Author(s):  
Fortunato Pesarin

2021 ◽  
pp. 105477382110032
Author(s):  
Nurul Huda ◽  
Yun-Yen ◽  
Hellena Deli ◽  
Malissa Kay Shaw ◽  
Tsai-Wei Huang ◽  
...  

The purpose of this study was to test the mediating effects of coping on the relationships of psychological distress and stress with anxiety, depression, and quality of life. A cross-sectional, correlational design was used to recruit a sample of 440 patients with advanced cancer in Indonesia. A bootstrap resampling procedure was used to test the significance of the total and specific indirect effects of coping. Data analysis showed that problem-focused coping (PFC) mediated the relationships of psychological distress and stress with depression, anxiety, and functional well-being. PFC also mediated the relationship between stress and social well-being. Emotion-focused coping (EFC) mediated the relationships of stress with physical and emotional well-being. EFC also mediated the relationship between psychological distress and physical well-being. Thus, proper assessments and interventions should be tailored and implemented to help patients use coping strategies when needed in stressful situations.
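The bootstrap test of indirect effects described above can be sketched as follows. The variables, effect sizes, and two-step regression are illustrative stand-ins, not the study's actual mediation model:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_indirect_effect(x, m, y, n_boot=1000, alpha=0.05):
    """Percentile-bootstrap CI for the indirect effect a*b in a simple
    mediation model x -> m -> y (two OLS regressions per resample)."""
    n = len(x)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)               # resample cases with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]              # path a: slope of m on x
        X = np.column_stack([np.ones(n), xb, mb])
        b = np.linalg.lstsq(X, yb, rcond=None)[0][2]  # path b: m -> y given x
        estimates[i] = a * b
    lo, hi = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return estimates.mean(), (lo, hi)

# Synthetic data with a true indirect effect of 0.5 * 0.7 = 0.35.
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(scale=0.5, size=n)
y = 0.7 * m + 0.1 * x + rng.normal(scale=0.5, size=n)
est, (lo, hi) = bootstrap_indirect_effect(x, m, y)
print(f"indirect effect ~ {est:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

The indirect effect is judged significant when the percentile interval excludes zero, which is the criterion the study applies to the total and specific indirect effects.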


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Margherita Mottola ◽  
Stephan Ursprung ◽  
Leonardo Rundo ◽  
Lorena Escudero Sanchez ◽  
Tobias Klatte ◽  
...  

Abstract
Computed Tomography (CT) is widely used in oncology for morphological evaluation and diagnosis, commonly through visual assessment, often supported by semi-automatic tools. Well-established automatic methods for quantitative imaging offer the opportunity to enrich the radiologist's interpretation with a large number of radiomic features, which need to be highly reproducible before they can be used reliably in clinical practice. This study investigates feature reproducibility against noise, varying resolutions, and segmentations (achieved by perturbing the regions of interest) in a CT dataset with heterogeneous voxel sizes comprising 98 renal cell carcinomas (RCCs) and 93 contralateral normal kidneys (CK). In particular, first-order (FO) features and second-order texture features based on both 2D and 3D grey-level co-occurrence matrices (GLCMs) were considered. Moreover, the study carries out a comparative analysis of three of the most commonly used interpolation methods, one of which must be selected before any resampling procedure. Results showed that Lanczos interpolation is the most effective at preserving the original information during resampling, and that the median slice resolution coupled with the native slice spacing allows the best reproducibility, with 94.6% and 87.7% of features reproducible in RCC and CK, respectively. GLCMs show their maximum reproducibility when used at short distances.
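The choice of interpolation kernel is the crux of the comparison above. A minimal 1-D numpy sketch (not the study's 3-D CT pipeline, and using nearest vs. linear rather than Lanczos) shows why a smoother kernel preserves more of the original signal during resampling:

```python
import numpy as np

# A smooth signal sampled on a coarse grid, re-interpolated to a fine grid.
coarse = np.linspace(0, 2 * np.pi, 16)
fine = np.linspace(0, 2 * np.pi, 128)
signal = np.sin(coarse)

# Nearest-neighbour: copy the closest coarse sample; linear: np.interp.
nearest = signal[np.abs(coarse[None, :] - fine[:, None]).argmin(axis=1)]
linear = np.interp(fine, coarse, signal)

err_nearest = np.abs(nearest - np.sin(fine)).mean()
err_linear = np.abs(linear - np.sin(fine)).mean()
print(err_nearest, err_linear)   # the smoother kernel reconstructs better
```

Lanczos, a windowed-sinc kernel, pushes this further than linear interpolation, which is consistent with its favourable reproducibility in the study.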


2012 ◽  
Vol 51 (9) ◽  
pp. 1633-1638 ◽  
Author(s):  
Martin Hirschi ◽  
Christoph Spirig ◽  
Andreas P. Weigel ◽  
Pierluigi Calanca ◽  
Jörg Samietz ◽  
...  

Abstract
Monthly weather forecasts (MOFCs) have been shown to have skill in extratropical continental regions for lead times up to 3 weeks, particularly for temperature and when weekly averaged. This skill could be exploited in practical applications exhibiting some degree of memory or inertia toward meteorological drivers, potentially even at longer lead times. Many agricultural applications fall into this category because the temperature-dependent development of biological organisms allows simulations based on temperature sums. Most such agricultural models, however, require local weather information at daily or even hourly temporal resolution, preventing direct use of the spatially and temporally aggregated information of MOFCs, which may furthermore be subject to significant biases. Using the example of forecasting the timing of life-phase occurrences of the codling moth (Cydia pomonella), a major insect pest in apple orchards worldwide, the authors investigate the application of downscaled weekly temperature anomalies of MOFCs in an impact model requiring hourly input. The downscaling and postprocessing include a daily weather generator, a resampling procedure for creating hourly weather series, and a recalibration technique to correct for the original underconfidence of the forecast occurrences of codling moth life phases. Results show a clear skill improvement of up to 3 days in root-mean-square error over the full forecast range when incorporating MOFCs, compared with deterministic benchmark forecasts that use climatological information to predict the timing of codling moth life phases.
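The resampling step that turns a coarse temperature anomaly into hourly series can be sketched as an analog-day draw. The archive, climatology values, and candidate pool below are invented for illustration and are not the authors' actual weather generator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy hourly archive: 365 days x 24 hours, with seasonal and diurnal cycles.
days = np.arange(365)
hourly_archive = (10 + 8 * np.sin(2 * np.pi * days / 365))[:, None] \
    + 3 * np.sin(2 * np.pi * np.arange(24) / 24)[None, :] \
    + rng.normal(0, 1, (365, 24))

def resample_hourly(weekly_anomaly, climatology_daily_mean, n_days=7):
    """For each forecast day, draw the hourly profile of an archived day
    whose daily mean is close to climatology + forecast anomaly."""
    target = climatology_daily_mean + weekly_anomaly
    daily_means = hourly_archive.mean(axis=1)
    rows = []
    for _ in range(n_days):
        candidates = np.argsort(np.abs(daily_means - target))[:10]
        rows.append(hourly_archive[rng.choice(candidates)])
    return np.vstack(rows)          # (n_days, 24) hourly temperature series

series = resample_hourly(weekly_anomaly=1.5, climatology_daily_mean=10.0)
print(series.shape, series.mean())  # daily means track the forecast anomaly
```

Drawing whole observed days preserves realistic diurnal structure while steering the weekly mean toward the forecast anomaly, which is the role the resampling procedure plays ahead of the hourly impact model.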


2003 ◽  
Vol 60 (1) ◽  
pp. 97-103 ◽  
Author(s):  
Luciana Aparecida Carlini-Garcia ◽  
Roland Vencovsky ◽  
Alexandre Siqueira Guedes Coelho

Studying the genetic structure of natural populations is very important for the conservation and use of the genetic variability available in nature. This research addresses genetic population structure analysis using real and simulated molecular data. To obtain variance estimates of the pertinent parameters, the bootstrap resampling procedure was applied over different sampling units, namely: individuals within populations (I), populations (P), and individuals and populations simultaneously (I, P). The parameters considered were: the total fixation index (F or F_IT), the fixation index within populations (f or F_IS), and the divergence among populations or intrapopulation coancestry (theta or F_ST). The aim of this research was to verify whether the variance estimates of F̂, f̂, and θ̂ found by resampling over individuals and populations simultaneously (I, P) correspond to the sum of the respective variance estimates obtained from separate resampling over individuals and over populations (I+P). This equivalence was verified in all cases, showing that the total variance estimate of F̂, f̂, and θ̂ can be obtained by summing the variances estimated for each source of variation separately. Results also showed that this facilitates the use of the bootstrap method on data with a hierarchical structure and opens the possibility of quantifying the relative contribution of each source of variation to the total variation of the estimated parameters.
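The additivity being tested can be reproduced in a toy setting. The sketch below bootstraps the grand mean of hierarchical data (a simple stand-in for the fixation indices) over individuals (I), populations (P), and both (I, P):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hierarchical toy data: 20 populations x 30 individuals each.
data = rng.normal(0, 1, 20)[:, None] + rng.normal(0, 1, (20, 30))

def boot_var(data, over_pops, over_inds, n_boot=2000):
    """Bootstrap variance of the grand mean, resampling populations,
    individuals within populations, or both simultaneously."""
    stats = np.empty(n_boot)
    for b in range(n_boot):
        d = data[rng.integers(0, 20, 20)] if over_pops else data
        if over_inds:
            d = np.take_along_axis(d, rng.integers(0, 30, d.shape), axis=1)
        stats[b] = d.mean()
    return stats.var()

v_I = boot_var(data, over_pops=False, over_inds=True)
v_P = boot_var(data, over_pops=True, over_inds=False)
v_IP = boot_var(data, over_pops=True, over_inds=True)
print(v_I + v_P, v_IP)   # approximately equal
```

The near-equality of the two printed values mirrors the paper's finding: the (I, P) variance decomposes into per-source contributions, so each source's share of the total can be read off directly.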


2005 ◽  
Vol 225 (5) ◽  
Author(s):  
Sandra Gottschalk

Summary
Nonparametric resampling is a method for generating synthetic microdata and is introduced here as a procedure for microdata disclosure limitation. In theory, re-identification of individuals or firms is not possible with synthetic data. The resampling procedure creates datasets (the resamples) that have nearly the same empirical cumulative distribution functions as the original survey data and thus permit econometricians to obtain meaningful regression results. The central idea of nonparametric resampling is to draw from univariate or multivariate empirical distribution functions without having to estimate them explicitly. So far, the resampling procedure shown here is applicable only to variables with continuous distribution functions. Monte Carlo simulations and applications with data from the Mannheim Innovation Panel show that the results of linear and nonlinear regression analyses can be reproduced quite precisely from nonparametric resamples. A univariate and a multivariate resampling version are examined. Both the univariate version and the multivariate version, which uses the correlation structure of the original data as a scaling instrument, are able to retain the coefficients of model estimations. Furthermore, multivariate resampling reproduces regression results best when all variables are anonymised.
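The core idea of drawing from an empirical distribution function without estimating it explicitly can be sketched by inverse-transform sampling on the sorted data. This is a univariate toy, not the paper's anonymisation procedure:

```python
import numpy as np

rng = np.random.default_rng(4)
original = rng.lognormal(0, 0.5, 1000)  # stand-in for a continuous survey variable

# Inverse-transform draw from the empirical CDF: interpolate between the
# sorted observations, so the resample follows nearly the same distribution
# while no synthetic record need coincide exactly with an original one.
sorted_x = np.sort(original)
probs = (np.arange(1, len(sorted_x) + 1) - 0.5) / len(sorted_x)
u = rng.uniform(size=1000)
resample = np.interp(u, probs, sorted_x)

print(original.mean(), resample.mean())   # the two distributions nearly coincide
```

Because the interpolation requires a continuous support, this construction also illustrates why the procedure is restricted to continuously distributed variables.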


2020 ◽  
Vol 13 (12) ◽  
pp. 314
Author(s):  
José Manuel Cueto ◽  
Aurea Grané ◽  
Ignacio Cascos

In this paper, we propose multifactor models for the pan-European Equity Market using a block-bootstrap method and compare the results with those of traditional inferential techniques. The new factors are built from statistical measurements on stock prices—in particular, coefficient of variation, skewness, and kurtosis. Data come from Reuters, correspond to nearly 2000 EU companies, and span from January 2008 to February 2018. Regarding methodology, we propose a non-parametric resampling procedure that accounts for time dependency in order to test the validity of the model and the significance of the parameters involved. We compare our bootstrap-based inferential results with classical proposals (based on F-statistics). Methods under assessment are time-series regression, cross-sectional regression, and the Fama–MacBeth procedure. The main findings indicate that the two factors that better improve the Capital Asset Pricing Model with regard to the adjusted R2 in the time-series regressions are the skewness and the coefficient of variation. For this reason, a model including those two factors together with the market is thoroughly studied. We also observe that our block-bootstrap methodology seems to be more conservative with the null of the GRS test than classical procedures.
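A moving-block bootstrap of the kind used above to respect time dependence can be sketched as follows. The AR(1) return series is illustrative, not the paper's factor data, and the statistic here is just the mean rather than the GRS test:

```python
import numpy as np

rng = np.random.default_rng(5)

def block_bootstrap_means(series, block_len=20, n_boot=1000):
    """Moving-block bootstrap: concatenate randomly chosen overlapping
    blocks, preserving short-range serial dependence within each block."""
    n = len(series)
    n_blocks = -(-n // block_len)            # ceiling division
    stats = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, n_blocks)
        sample = np.concatenate([series[s:s + block_len] for s in starts])[:n]
        stats[b] = sample.mean()
    return stats

# AR(1) toy returns, then a bootstrap 95% interval for the mean.
e = rng.normal(0, 1, 1000)
r = np.empty(1000)
r[0] = e[0]
for t in range(1, 1000):
    r[t] = 0.5 * r[t - 1] + e[t]
boot = block_bootstrap_means(r)
print(np.percentile(boot, [2.5, 97.5]))
```

Because serially dependent data carry less information than i.i.d. data of the same length, block-bootstrap intervals tend to be wider than their classical counterparts, which is one way a bootstrap-based test can end up more conservative.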


2014 ◽  
Vol 2014 ◽  
pp. 1-7
Author(s):  
Shi-Ming Chen ◽  
Jun-Feng Yuan ◽  
Fang Zhang ◽  
Hua-Jing Fang

Considering the influence of uncertain map information on the multirobot SLAM problem, a multirobot FastSLAM algorithm based on landmark consistency correction is proposed. First, an electromagnetism-like mechanism is introduced into the resampling procedure of single-robot FastSLAM: each sampled particle is treated as a charged electron, and the attraction-repulsion mechanism of an electromagnetic field is used to simulate interactive forces between particles, improving the particle distribution. Second, when multiple robots observe the same landmarks, every robot is regarded as a node and a Kalman-consensus filter is used to update the landmark information, further improving the accuracy of localization and mapping. Finally, simulation results show that the algorithm is suitable and effective.
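A highly simplified sketch of the electromagnetism-like idea in the resampling step follows. The toy likelihood, the step size, and the attraction-only force are assumptions for illustration; the actual algorithm also models repulsion between particles and operates inside FastSLAM's particle filter:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy 2-D particle cloud with importance weights from a Gaussian-like
# likelihood centred at the origin (purely illustrative).
particles = rng.uniform(-5, 5, (50, 2))
weights = np.exp(-np.sum(particles ** 2, axis=1) / 4)
weights /= weights.sum()

best = particles[weights.argmax()].copy()
before = np.linalg.norm(particles - best, axis=1).mean()

# Attraction step: each particle is pulled toward the highest-weight
# particle, with a pull that grows as its own weight shrinks (0.3 is an
# arbitrary step size). This spreads samples into high-likelihood regions
# instead of simply duplicating the best particles.
pull = 0.3 * (1 - weights / weights.max())[:, None]
particles += pull * (best - particles)

after = np.linalg.norm(particles - best, axis=1).mean()
print(before, after)   # the cloud contracts toward the high-likelihood region
```

Compared with plain multinomial resampling, moving particles rather than copying them mitigates sample impoverishment, which is the motivation the abstract gives for the improved particle distribution.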


2017 ◽  
Vol 32 (1) ◽  
pp. 243-252 ◽  
Author(s):  
Christopher M. Godfrey ◽  
Chris J. Peterson

Abstract Enhanced Fujita (EF) scale estimates following tornadoes remain challenging in rural areas with few traditional damage indicators. In some cases, such as the 27 April 2011 tornadoes that passed through mostly inaccessible terrain in the Great Smoky Mountains National Park and the Chattahoochee National Forest in the southeastern United States, traditional ground-based tornado damage surveys are nearly impossible. This work presents a novel method to infer EF-scale categories in forests using levels of tree damage and a coupled wind and tree resistance model. High-resolution aerial imagery allows detailed analyses based on a field of nearly half a million trees labeled with their geographic location and fall direction. Ground surveys also provide details on the composition of tree species and tree diameters within each tornado track. A statistical resampling procedure randomly draws a sample of trees from this database of observed trees. The coupled wind and tree resistance model determines the percentage of trees in that sample that fall for a given wind speed. By repeating this procedure, each wind speed value corresponds with a distribution of treefall percentages in the sampled plots. Comparing these results with the observed treefall percentage in small subplots along the entire tornado track allows estimation of the most probable wind speed associated with each subplot. Maps of estimated EF-scale levels reveal the relationship between complex terrain and wind speeds and show the variability of the intensity of each tornado along both tracks. This approach may lead to methods for the straightforward estimation of EF-scale categories in remote or inaccessible locations.
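The resample-then-invert logic of this method can be sketched with a toy tree database. The critical wind speeds, sample sizes, and speed grid below are invented; the real analysis uses the coupled wind and tree resistance model rather than a fixed fall threshold per tree:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy tree database: the wind speed (m/s) at which each tree would fall,
# standing in for the species/diameter-dependent resistance model.
critical_speed = np.clip(rng.normal(45, 10, 100_000), 20, 90)

def treefall_distribution(wind_speed, sample_size=1000, n_rep=500):
    """Repeatedly resample trees and record the fraction that fall at the
    given wind speed, yielding a distribution of treefall percentages."""
    falls = np.empty(n_rep)
    for i in range(n_rep):
        sample = critical_speed[rng.integers(0, len(critical_speed), sample_size)]
        falls[i] = (sample <= wind_speed).mean()
    return falls

# Invert: which candidate wind speed best matches an observed 60% treefall
# in a subplot? Compare the observed fraction against each distribution.
observed = 0.60
speeds = np.arange(30, 71, 5)
gaps = [abs(np.median(treefall_distribution(v)) - observed) for v in speeds]
best = int(speeds[int(np.argmin(gaps))])
print(best)
```

Repeating this inversion for every subplot along the track yields the per-subplot wind speed estimates that are then mapped to EF-scale categories.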

