Wild Bootstrap Tests

2007 ◽  
Vol 24 (4) ◽  
pp. 31-37 ◽  
Author(s):  
Jurgen Franke ◽  
Siana Halim

2018 ◽  
Vol 21 (2) ◽  
pp. 87-113 ◽  
Author(s):  
H. Peter Boswijk ◽  
Yang Zu

2021 ◽  
Vol 28 (3) ◽  
pp. 519-552 ◽  
Author(s):  
Giuseppe Cavaliere ◽  
Anders Rahbek ◽  
A. M. Robert Taylor

Permanent-transitory decompositions and the analysis of the time series properties of economic variables at business cycle frequencies rely strongly on the correct detection of the number of common stochastic trends (co-integration). Standard techniques for determining the number of common trends, such as the well-known sequential procedure proposed in Johansen (1996), are based on the assumption that shocks are homoskedastic. This contrasts with empirical evidence documenting that many of the key macro-economic and financial variables are driven by heteroskedastic shocks. In a recent paper, Cavaliere et al. (2010, Econometric Theory) demonstrate that Johansen's likelihood ratio (LR) trace statistic for co-integration rank and both its i.i.d. and wild bootstrap analogues are asymptotically valid in non-stationary systems driven by heteroskedastic (martingale difference) innovations, but that the wild bootstrap performs substantially better than the other two tests in finite samples. In this paper we analyse the behaviour of sequential procedures, based on these tests, for determining the number of common stochastic trends. Numerical evidence suggests that the procedure based on the wild bootstrap tests performs best in small samples under a variety of heteroskedastic innovation processes.
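The sequential logic the abstract refers to — test the null of rank at most r for r = 0, 1, …, selecting the first r that is not rejected — can be sketched as follows. The p-value function here is a hypothetical stand-in for whichever trace test is used (asymptotic, i.i.d. bootstrap, or wild bootstrap); it is not Johansen's statistic itself:

```python
def select_cointegration_rank(trace_pvalue, p, alpha=0.05):
    """Johansen-style sequential rank determination for a p-variate system.

    trace_pvalue(r) returns the p-value of the test of H0: rank <= r
    (e.g. from a wild bootstrap of the LR trace statistic). The selected
    rank is the smallest r at which H0 is not rejected at level alpha.
    """
    for r in range(p):
        if trace_pvalue(r) > alpha:
            return r
    return p  # every null rejected: no common stochastic trends remain


# Illustrative (made-up) p-values: rank <= 0 and rank <= 1 are rejected,
# rank <= 2 is not, so the procedure selects rank 2.
pvals = {0: 0.001, 1: 0.020, 2: 0.310}
print(select_cointegration_rank(lambda r: pvals[r], p=3))  # prints 2
```

The abstract's comparison is then between versions of this procedure that differ only in how `trace_pvalue` is computed.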


2010 ◽  
Vol 28 (1) ◽  
pp. 128-144 ◽  
Author(s):  
Russell Davidson ◽  
James G. MacKinnon

Author(s):  
David Roodman ◽  
Morten Ørregaard Nielsen ◽  
James G. MacKinnon ◽  
Matthew D. Webb

The wild bootstrap was originally developed for regression models with heteroskedasticity of unknown form. Over the past 30 years, it has been extended to models estimated by instrumental variables and maximum likelihood and to ones where the error terms are (perhaps multiway) clustered. Like bootstrap methods in general, the wild bootstrap is especially useful when conventional inference methods are unreliable because large-sample assumptions do not hold. For example, there may be few clusters, few treated clusters, or weak instruments. The package boottest can perform a wide variety of wild bootstrap tests, often at remarkable speed. It can also invert these tests to construct confidence sets. As a postestimation command, boottest works after linear estimation commands, including regress, cnsreg, ivregress, ivreg2, areg, and reghdfe, as well as many estimation commands based on maximum likelihood. Although it is designed to perform the wild cluster bootstrap, boottest can also perform the ordinary (nonclustered) version. Wrappers offer classical Wald, score/Lagrange multiplier, and Anderson–Rubin tests, optionally with (multiway) clustering. We review the main ideas of the wild cluster bootstrap, offer tips for use, explain why it is particularly amenable to computational optimization, state the syntax of boottest, artest, scoretest, and waldtest, and present several empirical examples.
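The core wild cluster bootstrap recipe — fit under the null, then flip the signs of the restricted residuals cluster by cluster with Rademacher draws and recompute a cluster-robust t-statistic — can be sketched in a few lines. This is a minimal numpy illustration under our own function names, not boottest's optimized implementation, and it omits small-sample corrections:

```python
import numpy as np

def crv_tstat(y, X, clusters, j):
    """OLS t-statistic for coefficient j with a CRV1-style cluster-robust
    standard error (no small-sample correction)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta
    XtX_inv = np.linalg.inv(X.T @ X)
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(clusters):
        score_g = X[clusters == g].T @ u[clusters == g]
        meat += np.outer(score_g, score_g)
    V = XtX_inv @ meat @ XtX_inv
    return beta[j] / np.sqrt(V[j, j])

def wild_cluster_boot_p(y, X, clusters, j, B=999, seed=None):
    """Restricted wild cluster bootstrap p-value for H0: beta_j = 0,
    using Rademacher weights drawn once per cluster."""
    rng = np.random.default_rng(seed)
    t_obs = crv_tstat(y, X, clusters, j)
    # Restricted fit: impose beta_j = 0 by dropping column j.
    Xr = np.delete(X, j, axis=1)
    br, *_ = np.linalg.lstsq(Xr, y, rcond=None)
    fitted, resid = Xr @ br, y - Xr @ br
    groups = np.unique(clusters)            # sorted, so searchsorted works
    exceed = 0
    for _ in range(B):
        v = rng.choice([-1.0, 1.0], size=len(groups))   # one draw per cluster
        w = v[np.searchsorted(groups, clusters)]        # expand to observations
        y_star = fitted + resid * w
        if abs(crv_tstat(y_star, X, clusters, j)) >= abs(t_obs):
            exceed += 1
    return (exceed + 1) / (B + 1)
```

The ordinary (nonclustered) wild bootstrap mentioned in the abstract is the special case in which every observation is its own cluster, so each residual gets its own Rademacher draw.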


2005 ◽  
Vol 23 (4) ◽  
pp. 325-340 ◽  
Author(s):  
L. G. Godfrey ◽  
J. M. C. Santos Silva
Statistics ◽  
2016 ◽  
Vol 50 (4) ◽  
pp. 750-774
Author(s):  
Taeyoon Kim ◽  
Cheolyong Park ◽  
Jeongcheol Ha ◽  
Zhi-Ming Luo ◽  
Sun Young Hwang
