Hi everyone!
This week at the Applied Statistics Workshop we will be welcoming *Luke
Miratrix*, Assistant Professor of Education at Harvard University. He will
be presenting work entitled *Estimating and assessing treatment effect
variation in large-scale randomized trials with randomization inference*.
This is joint work with Avi Feller and Peng Ding. Please find the abstract
below and on the website
<http://projects.iq.harvard.edu/applied.stats.workshop-gov3009/presentations/luke-miratrix-harvard-title-coming-soon>.
As usual, we will meet in CGIS Knafel Room 354 from noon to 1:30pm, and
lunch will be provided. See you all there! To view previous Applied
Statistics presentations, please visit the website
<http://projects.iq.harvard.edu/applied.stats.workshop-gov3009/videos>.
-- Aaron Kaufman
Title: Estimating and assessing treatment effect variation in large-scale
randomized trials with randomization inference
Recent literature has underscored the critical role of treatment effect
variation in estimating and understanding causal effects. This approach,
however, is in contrast to much of the foundational research on causal
inference; Neyman, for example, avoided such variation through his focus on
the average treatment effect (ATE) and his definition of the confidence
interval. We extend the Neymanian framework to explicitly allow both for
treatment effect variation explained by covariates, known as the systematic
component, and for unexplained treatment effect variation, known as the
idiosyncratic component. This perspective enables estimation and testing of
impact variation without imposing a model on the marginal distributions of
potential outcomes, with the workhorse approach of regression with
interaction terms being a special case. Our approach leads to two practical
results. First, estimates of systematic impact variation give sharp bounds
on overall treatment variation as well as bounds on the proportion of total
impact variation explained by a given model---this is essentially an R^2
for treatment effect variation. Second, by using covariates to partially
account for the correlation of potential outcomes, we sharpen the bounds on
the variance of the unadjusted average treatment effect estimate itself. As
long as the treatment effect varies across observed covariates, these
bounds are sharper than the current sharp bounds in the literature. We
demonstrate these ideas on the Head Start Impact Study, a large randomized
evaluation in educational research, showing that these results are
meaningful in practice.
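As a rough sketch of the decomposition the abstract describes (the notation below is illustrative, not taken from the paper): writing the unit-level treatment effect as a systematic part explained by covariates plus an idiosyncratic remainder,

\[
\tau_i = Y_i(1) - Y_i(0) = x_i^{\top}\beta + \varepsilon_i,
\qquad
\operatorname{Var}(\tau) = \operatorname{Var}(x^{\top}\beta) + \operatorname{Var}(\varepsilon),
\]

the "R^2 for treatment effect variation" mentioned above would then be the explained share,

\[
R^2_{\tau} = \frac{\operatorname{Var}(x^{\top}\beta)}{\operatorname{Var}(\tau)},
\]

which is bounded rather than point-identified, since \(\operatorname{Var}(\tau)\) depends on the unobservable joint distribution of the potential outcomes.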