Why Experimenters Might Not Always Want to Randomize, and What They Could Do Instead

2016 ◽  
Vol 24 (3) ◽  
pp. 324-338 ◽  
Author(s):  
Maximilian Kasy

Suppose that an experimenter has collected a sample as well as baseline information about the units in the sample. How should she allocate treatments to the units in this sample? We argue that the answer does not involve randomization if we think of experimental design as a statistical decision problem. If, for instance, the experimenter is interested in estimating the average treatment effect and evaluates an estimate in terms of the squared error, then she should minimize the expected mean squared error (MSE) through choice of a treatment assignment. We provide explicit expressions for the expected MSE that lead to easily implementable procedures for experimental design.
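The idea of choosing (rather than randomizing) a treatment assignment can be illustrated with a toy search: under a smooth-prior model, expected MSE is driven by covariate imbalance, so a simple proxy is to enumerate assignments and pick the one minimizing imbalance between groups. This is an illustrative sketch, not the paper's exact expected-MSE expressions; `best_assignment` and the imbalance score are assumptions for illustration.

```python
import itertools
import numpy as np

def best_assignment(X, n_treat):
    """Enumerate all assignments of n_treat units to treatment and
    return the one minimizing squared covariate mean imbalance
    (a simple proxy for expected MSE under a smooth-prior model)."""
    n = len(X)
    best, best_score = None, np.inf
    for treat in itertools.combinations(range(n), n_treat):
        t = np.zeros(n, dtype=bool)
        t[list(treat)] = True
        score = np.sum((X[t].mean(axis=0) - X[~t].mean(axis=0)) ** 2)
        if score < best_score:
            best, best_score = t, score
    return best
```

Exhaustive enumeration only scales to small samples; the point is that the assignment is a deterministic function of the baseline data, with no randomization step.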

2013 ◽  
Vol 1 (1) ◽  
pp. 135-154 ◽  
Author(s):  
Peter M. Aronow ◽  
Joel A. Middleton

We derive a class of design-based estimators for the average treatment effect that are unbiased whenever the treatment assignment process is known. We generalize these estimators to include unbiased covariate adjustment using any model for outcomes that the analyst chooses. We then provide expressions and conservative estimators for the variance of the proposed estimators.
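A minimal sketch of the design-based idea, assuming known unit-level treatment probabilities: weight each observed outcome by the inverse of its assignment probability, Horvitz-Thompson style. The function name and interface are illustrative, not the paper's exact estimator class.

```python
import numpy as np

def horvitz_thompson_ate(y, treat, p):
    """Design-based (Horvitz-Thompson style) estimate of the ATE.
    p[i] is the known probability that unit i receives treatment;
    inverse-probability weighting makes the estimator unbiased over
    the assignment distribution."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    treat = np.asarray(treat, bool)
    n = len(y)
    return (np.sum(y[treat] / p[treat])
            - np.sum(y[~treat] / (1 - p[~treat]))) / n
```

With uniform p = 0.5 this reduces to the difference in group sums scaled by n, which coincides with the difference in means only when the groups happen to be equal-sized.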


2011 ◽  
Vol 19 (2) ◽  
pp. 205-226 ◽  
Author(s):  
Kevin M. Esterling ◽  
Michael A. Neblo ◽  
David M. J. Lazer

If ignored, noncompliance with a treatment or nonresponse on outcome measures can bias estimates of treatment effects in a randomized experiment. To identify and estimate causal treatment effects in the case where compliance and response depend on unobservables, we propose the parametric generalized endogenous treatment (GET) model. GET incorporates behavioral responses within an experiment to measure each subject's latent compliance type and identifies causal effects via principal stratification. Using simulation methods and an application to field experimental data, we show GET has a dramatically lower mean squared error for treatment effect estimates than existing approaches to principal stratification that impute, rather than measure, compliance type. In addition, we show that GET allows one to relax and test the instrumental variable exclusion restriction assumption, to test for the presence of treatment effect heterogeneity across a range of compliance types, and to test for treatment ignorability when treatment and control samples are balanced on observable covariates.
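As a baseline for what the GET model improves on, the standard instrumental-variable estimate of the complier average causal effect under principal stratification is the Wald ratio: the intent-to-treat effect on outcomes divided by the intent-to-treat effect on uptake. This sketch is the textbook estimator, not the GET model itself; the function name is an assumption.

```python
import numpy as np

def wald_iv_cace(y, d, z):
    """Wald/IV estimate of the complier average causal effect:
    ITT effect on the outcome divided by ITT effect on treatment
    uptake. Unlike GET, this imputes nothing about compliance type
    and relies on the exclusion restriction."""
    y, d, z = (np.asarray(a, float) for a in (y, d, z))
    itt_y = y[z == 1].mean() - y[z == 0].mean()
    itt_d = d[z == 1].mean() - d[z == 0].mean()
    return itt_y / itt_d
```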


2017 ◽  
Vol 84 (4) ◽  
pp. 1583-1605 ◽  
Author(s):  
Jeff Dominitz ◽  
Charles F. Manski

When designing data collection, crucial questions arise regarding how much data to collect and how much effort to expend to enhance the quality of the collected data. To make choice of sample design a coherent subject of study, it is desirable to specify an explicit decision problem. We use the Wald framework of statistical decision theory to study allocation of a budget between two or more sampling processes. These processes all draw random samples from a population of interest and aim to collect data that are informative about the sample realizations of an outcome. They differ in the cost of data collection and the quality of the data obtained. One process may incur lower cost per sample member but yield lower data quality than another. Increasing the allocation of budget to a low-cost process yields more data, while increasing the allocation to a high-cost process yields better data. We initially view the concept of “better data” abstractly and then fix attention on two important cases. In both cases, a high-cost sampling process accurately measures the outcome of each sample member. The cases differ in the data yielded by a low-cost process. In one, the low-cost process has non-response and in the other it provides a low-resolution interval measure of each sample member’s outcome. In these settings, we study minimax-regret sample design for prediction of a real-valued outcome under square loss; that is, design which minimizes maximum mean square error. The analysis imposes no assumptions that restrict the unobserved outcomes. Hence, the decision maker must cope with both the statistical imprecision of finite samples and the partial identification of the true state of nature.
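The budget-allocation tradeoff can be sketched with a toy grid search: for each budget share sent to the high-cost accurate process, combine a sampling-imprecision term (the 1/(4n) Bernoulli variance bound for outcomes in [0, 1]) with a partial-identification term (half the missing fraction, the worst-case midpoint error under unrestricted nonresponse). This criterion and the function `allocate_budget` are illustrative assumptions, not the paper's exact minimax-regret solution.

```python
import numpy as np

def allocate_budget(budget, cost_hi, cost_lo, nonresp, grid=101):
    """Grid search over the budget share given to a high-cost
    accurate process vs. a low-cost process with nonresponse rate
    nonresp. Returns (share, worst-case squared-error bound)."""
    best_share, best_bound = None, np.inf
    for share in np.linspace(0.0, 1.0, grid):
        n_hi = (share * budget) / cost_hi
        n_lo = ((1 - share) * budget) / cost_lo
        n_eff = n_hi + (1 - nonresp) * n_lo   # responding observations
        if n_eff == 0:
            continue
        # fraction of the pooled sample subject to nonresponse bias
        miss = nonresp * n_lo / (n_hi + n_lo)
        bound = 1.0 / (4.0 * n_eff) + (miss / 2.0) ** 2
        if bound < best_bound:
            best_share, best_bound = share, bound
    return best_share, best_bound
```

At the extremes the search behaves as expected: with no nonresponse the whole budget goes to the cheap process, and with total nonresponse it all goes to the accurate one.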


1966 ◽  
Vol 3 (2) ◽  
pp. 538-549 ◽  
Author(s):  
J. A. Bather

This paper discusses an optimization problem arising in the theory of inventory control. Much of the previous work in this field has been focused on the Arrow-Harris-Marschak model, [1], [2], in which the inventory level can be modified only at the instants of discrete time. Here, we shall be concerned with a continuous time analogue of the model, in an attempt to avoid the difficulties experienced in solving the basic integral equations. The approach was suggested by recent investigations of a statistical decision problem, [3], [5], which exploited the advantages of a continuous treatment. Although the ideas discussed here are relatively straightforward and involve strong assumptions as to the behavior of the inventory, the explicit character of the optimal policy is encouraging and particular solutions might nevertheless provide useful restocking procedures.


2021 ◽  
Author(s):  
Youmi Suk ◽  
Peter Steiner ◽  
Jee-Seon Kim ◽  
Hyunseung Kang

Regression discontinuity designs are commonly used for program evaluation with continuous treatment assignment variables. In practice, however, treatment assignment is frequently based on discrete or ordinal variables. In this study, we propose a regression discontinuity design with an ordinal running variable to assess the effects of extended time accommodations (ETA) for English language learners (ELLs). ETA eligibility is determined by the ordinal ELL English proficiency categories in National Assessment of Educational Progress data. We discuss the identification and estimation of the average treatment effect, the intent-to-treat effect, and the local average treatment effect at the cutoff. We also propose a series of sensitivity analyses to probe the effect estimates' robustness to the choice of scaling functions and cutoff scores, and to unmeasured confounding.
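With an ordinal running variable there is no continuum of points arbitrarily close to the cutoff, so the simplest local contrast compares the two proficiency categories adjacent to it. The sketch below assumes a sharp design where units below the cutoff are treated (e.g. ETA-eligible); it is an illustrative toy, not the paper's estimators, which also handle scaling-function choices and fuzziness.

```python
import numpy as np

def rdd_ordinal_cutoff_effect(y, level, cutoff):
    """Toy local effect at the cutoff of an ordinal running
    variable: difference in mean outcomes between the category
    just below the cutoff (treated side) and the category at the
    cutoff (control side)."""
    y, level = np.asarray(y, float), np.asarray(level)
    just_below = y[level == cutoff - 1]   # treated side
    just_above = y[level == cutoff]       # control side
    return just_below.mean() - just_above.mean()
```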

