Inference in Regression Discontinuity Designs with a Discrete Running Variable

American Economic Review ◽  
2018 ◽  
Vol 108 (8) ◽  
pp. 2277-2304 ◽  
Author(s):  
Michal Kolesár ◽  
Christoph Rothe

We consider inference in regression discontinuity designs when the running variable only takes a moderate number of distinct values. In particular, we study the common practice of using confidence intervals (CIs) based on standard errors that are clustered by the running variable as a means to make inference robust to model misspecification (Lee and Card 2008). We derive theoretical results and present simulation and empirical evidence showing that these CIs do not guard against model misspecification, and that they have poor coverage properties. We therefore recommend against using these CIs in practice. We instead propose two alternative CIs with guaranteed coverage properties under easily interpretable restrictions on the conditional expectation function. (JEL C13, C51, J13, J31, J64, J65)
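The practice the abstract critiques, clustering standard errors by the discrete values of the running variable, can be sketched as follows. This is a toy illustration with invented data and function names, not the authors' code; the Liang–Zeger cluster-robust formula is the standard one this practice relies on.

```python
import numpy as np

def clustered_se(X, resid, clusters):
    """Liang-Zeger cluster-robust OLS variance; returns SEs of all coefficients."""
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(clusters):
        score = X[clusters == g].T @ resid[clusters == g]  # within-cluster score
        meat += np.outer(score, score)
    V = XtX_inv @ meat @ XtX_inv
    return np.sqrt(np.diag(V))

rng = np.random.default_rng(0)
n = 500
x = rng.integers(-5, 6, size=n)            # discrete running variable (11 values)
d = (x >= 0).astype(float)                 # treatment indicator, cutoff at 0
y = 0.5 * d + 0.1 * x + rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), d, x, d * x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
se = clustered_se(X, resid, clusters=x)    # cluster on each value of x
```

The jump estimate is `beta[1]` and `se[1]` is its clustered standard error; the paper's point is that a CI built this way does not actually protect against misspecification of the regression function.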

2018 ◽  
Vol 8 (1) ◽  
Author(s):  
Otávio Bartalotti

Abstract In regression discontinuity designs (RD), for a given bandwidth, researchers can estimate standard errors based on different variance formulas obtained under different asymptotic frameworks. In the traditional approach the bandwidth shrinks to zero as the sample size increases; alternatively, the bandwidth could be treated as fixed. The main theoretical results for RD rely on the former, while most applications in the literature treat the estimates as parametric, implementing the usual heteroskedasticity-robust standard errors. This paper develops the “fixed-bandwidth” alternative asymptotic theory for RD designs, which sheds light on the connection between both approaches. I provide alternative formulas (approximations) for the bias and variance of common RD estimators, and conditions under which both approximations are equivalent. Simulations document the improvements in test coverage that fixed-bandwidth approximations achieve relative to traditional approximations, especially when there is local heteroskedasticity. Feasible estimators of fixed-bandwidth standard errors are easy to implement and are akin to treating RD estimators as locally parametric, validating the common empirical practice of using heteroskedasticity-robust standard errors in RD settings. Bias mitigation approaches are discussed and a novel bootstrap higher-order bias correction procedure based on the fixed-bandwidth asymptotics is suggested.
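The "locally parametric" practice the abstract validates can be sketched as a local linear fit within a fixed bandwidth with heteroskedasticity-robust (HC0) standard errors. This is my own toy example under an assumed data-generating process, not the paper's implementation:

```python
import numpy as np

def rd_local_linear(y, x, h, cutoff=0.0):
    """Uniform-kernel local linear RD: jump estimate and HC0 SE within bandwidth h."""
    keep = np.abs(x - cutoff) <= h
    xs, ys = x[keep] - cutoff, y[keep]
    d = (xs >= 0).astype(float)
    X = np.column_stack([np.ones(xs.size), d, xs, d * xs])
    beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
    u = ys - X @ beta
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = X.T @ (X * (u ** 2)[:, None])   # HC0 sandwich "meat"
    V = XtX_inv @ meat @ XtX_inv
    return beta[1], np.sqrt(V[1, 1])       # jump estimate and its robust SE

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 2000)
# error variance differs across the cutoff: local heteroskedasticity
y = 1.0 * (x >= 0) + x + rng.normal(0, 0.5 + 0.5 * (x >= 0), 2000)
tau, se = rd_local_linear(y, x, h=0.5)
```

Treating the bandwidth `h` as fixed, this is exactly an OLS regression on the subsample, which is why the familiar robust sandwich estimator applies.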


2016 ◽  
Vol 5 (1) ◽  
Author(s):  
Patrick Button

Abstract Parametric (polynomial) models are popular in research employing regression discontinuity designs and are required when data are discrete. However, researchers often choose a parametric model based on data inspection or pretesting. These approaches lead to standard errors and confidence intervals that are too small because they do not incorporate model uncertainty. I propose using Frequentist model averaging to incorporate model uncertainty into parametric models. My Monte Carlo experiments show that Frequentist model averaging leads to mean square error and coverage probability improvements over pretesting. An application to [Lee, D. S. 2008. “Randomized Experiments From Non-Random Selection in US House Elections.” …]
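The model-averaging idea can be sketched by fitting several polynomial orders and combining their jump estimates with model weights. Smoothed-AIC weights are used here as one common choice; the paper's exact weighting scheme may differ, and all names and the toy data are my own assumptions:

```python
import numpy as np

def rd_poly_fit(y, x, order):
    """Global polynomial RD fit; returns jump estimate, Gaussian log-likelihood, #params."""
    d = (x >= 0).astype(float)
    cols = [np.ones_like(x), d]
    for p in range(1, order + 1):
        cols += [x ** p, d * x ** p]       # separate polynomial on each side
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)  # concentrated likelihood
    return beta[1], loglik, k

rng = np.random.default_rng(2)
x = rng.integers(-10, 11, 1000) / 10.0     # discrete running variable (21 values)
y = 0.8 * (x >= 0) + x - 0.3 * x ** 2 + rng.normal(0, 1, 1000)

fits = [rd_poly_fit(y, x, order) for order in (1, 2, 3, 4)]
aic = np.array([-2 * ll + 2 * k for _, ll, k in fits])
w = np.exp(-0.5 * (aic - aic.min()))
w /= w.sum()                               # smoothed-AIC model weights
tau_avg = sum(wi * t for wi, (t, _, _) in zip(w, fits))
```

Averaging over orders, rather than picking one order by pretesting, is what lets the resulting interval reflect model uncertainty.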


2019 ◽  
Vol 101 (3) ◽  
pp. 442-451 ◽  
Author(s):  
Sebastian Calonico ◽  
Matias D. Cattaneo ◽  
Max H. Farrell ◽  
Rocío Titiunik

We study regression discontinuity designs when covariates are included in the estimation. We examine local polynomial estimators that include discrete or continuous covariates in an additive separable way, but without imposing any parametric restrictions on the underlying population regression functions. We recommend a covariate-adjustment approach that retains consistency under intuitive conditions, and we characterize the potential for estimation and inference improvements. We also present new covariate-adjusted mean-squared error expansions and robust bias-corrected inference procedures, with heteroskedasticity-consistent and cluster-robust standard errors. We provide an empirical illustration and an extensive simulation study. All methods are implemented in R and Stata software packages.
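The additive-separable adjustment the abstract describes amounts to entering the covariate with a single common coefficient, not interacted with treatment. A minimal sketch on toy data (my own naming; the published implementations are the R/Stata `rdrobust` packages, not this code):

```python
import numpy as np

def rd_covariate_adjusted(y, x, z, h):
    """Local linear RD within bandwidth h, covariate z entering additively."""
    keep = np.abs(x) <= h
    xs, ys, zs = x[keep], y[keep], z[keep]
    d = (xs >= 0).astype(float)
    # z gets one common coefficient; it is NOT interacted with treatment
    X = np.column_stack([np.ones(xs.size), d, xs, d * xs, zs])
    beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
    return beta[1]                          # covariate-adjusted jump estimate

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 3000)
z = rng.normal(0, 1, 3000)                  # pretreatment covariate
y = 0.6 * (x >= 0) + x + 0.8 * z + rng.normal(0, 1, 3000)
tau_adj = rd_covariate_adjusted(y, x, z, h=0.4)
```

Because `z` soaks up outcome variation unrelated to treatment, the adjusted estimator can be more precise than the unadjusted one while targeting the same RD parameter.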


Econometrica ◽  
2014 ◽  
Vol 82 (6) ◽  
pp. 2295-2326 ◽  
Author(s):  
Sebastian Calonico ◽  
Matias D. Cattaneo ◽  
Rocio Titiunik

2020 ◽  
Vol 23 (2) ◽  
pp. 211-231
Author(s):  
Yang He ◽  
Otávio Bartalotti

Summary This paper develops a novel wild bootstrap procedure to construct robust bias-corrected valid confidence intervals for fuzzy regression discontinuity designs, providing an intuitive complement to existing robust bias-corrected methods. The confidence intervals generated by this procedure are valid under conditions similar to the procedures proposed by Calonico et al. (2014) and related literature. Simulations provide evidence that this new method is at least as accurate as the plug-in analytical corrections when applied to a variety of data-generating processes featuring endogeneity and clustering. Finally, we demonstrate its empirical relevance by revisiting Angrist and Lavy's (1999) analysis of the effect of class size on student outcomes.
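The resampling step at the heart of a wild bootstrap can be sketched for the simpler sharp-RD case. The paper's procedure covers fuzzy designs and bias correction; this toy version only illustrates the Rademacher-weighted residual resampling and the percentile interval, with invented data:

```python
import numpy as np

def fit_jump(y, X):
    """OLS fit; returns coefficients and residuals."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

rng = np.random.default_rng(4)
n = 800
x = rng.uniform(-1, 1, n)
d = (x >= 0).astype(float)
y = 0.7 * d + x + rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), d, x, d * x])

beta, resid = fit_jump(y, X)
fitted = X @ beta
taus = []
for _ in range(499):
    w = rng.choice([-1.0, 1.0], size=n)    # Rademacher wild-bootstrap weights
    y_star = fitted + w * resid            # perturbed outcomes, same design
    b_star, _ = fit_jump(y_star, X)
    taus.append(b_star[1])
lo, hi = np.percentile(taus, [2.5, 97.5])  # percentile CI for the jump
```

Flipping residual signs preserves each observation's error variance, which is why the wild bootstrap remains valid under heteroskedasticity.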


2020 ◽  
pp. 1-17
Author(s):  
Erin Hartman

Abstract Regression discontinuity (RD) designs are increasingly common in political science. They have many advantages, including a known and observable treatment assignment mechanism. The literature has emphasized the need for “falsification tests” and ways to assess the validity of the design. When implementing RD designs, researchers typically rely on two falsification tests, based on empirically testable implications of the identifying assumptions, to argue the design is credible. These tests, one for continuity in the regression function for a pretreatment covariate, and one for continuity in the density of the forcing variable, use a null of no difference in the parameter of interest at the discontinuity. Common practice can, incorrectly, conflate a failure to detect evidence of a flawed design with evidence that the design is credible. The well-known equivalence testing approach addresses these problems, but how to implement equivalence tests in the RD framework is not straightforward. This paper develops two equivalence tests tailored for RD designs that allow researchers to provide statistical evidence that the design is credible. Simulation studies show the superior performance of equivalence-based tests over tests of difference, as used in current practice. The tests are applied to the close elections RD data presented in Eggers et al. (2015b).
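The equivalence-testing logic can be sketched with a generic two one-sided tests (TOST) procedure applied to the estimated covariate jump at the cutoff. The paper's tests are tailored to RD; this is my own simplification, with an assumed equivalence margin and toy data:

```python
import math
import numpy as np

def tost_p(est, se, eps):
    """TOST p-value for H0: |jump| >= eps vs H1: |jump| < eps (equivalence)."""
    z_lower = (est + eps) / se              # one-sided test of jump > -eps
    z_upper = (eps - est) / se              # one-sided test of jump <  eps
    p_lower = 1 - 0.5 * (1 + math.erf(z_lower / math.sqrt(2)))
    p_upper = 1 - 0.5 * (1 + math.erf(z_upper / math.sqrt(2)))
    return max(p_lower, p_upper)            # reject only if both reject

# Toy pretreatment covariate with no true jump at the cutoff:
rng = np.random.default_rng(5)
x = rng.uniform(-1, 1, 2000)
z = x + rng.normal(0, 0.5, 2000)            # smooth in x across the cutoff
d = (x >= 0).astype(float)
X = np.column_stack([np.ones(x.size), d, x, d * x])
beta, *_ = np.linalg.lstsq(X, z, rcond=None)
resid = z - X @ beta
V = np.linalg.inv(X.T @ X) * (resid @ resid / (x.size - 4))
p = tost_p(beta[1], math.sqrt(V[1, 1]), eps=0.2)  # margin eps is an assumption
```

A small `p` here is affirmative evidence that any covariate jump is within the margin, which is exactly the reversal of the burden of proof that the abstract argues for.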


2017 ◽  
Vol 3 (2) ◽  
pp. 134-146
Author(s):  
Matias D. Cattaneo ◽  
Gonzalo Vazquez-Bare
