Dear Applied Statistics Workshop Community,
Our next meeting of the semester will be on September 28 (12:00 ET). Luke
Miratrix and Dae Woong Ham will present "A devil’s bargain? Repairing a
Difference in Differences parallel trends assumption with an initial
matching step."
<Where>
In-person: CGIS K354
Bagged lunches are available for pick-up at 11:40 (CGIS K354).
Zoom:
https://harvard.zoom.us/j/99181972207?pwd=Ykd3ZzVZRnZCSDZqNVpCSURCNnVvQT09
<Abstract>
The Difference in Differences (DiD) estimator is a popular method built
on the "parallel trends" assumption that the treatment group, absent
treatment, would change "similarly" to the control group over time. To
increase the plausibility of this assumption, a natural idea is to match
treated and control units prior to a DiD analysis. In this paper, we
characterize the bias of matching under a class of linear structural models
with both observed and unobserved confounders that have time varying
effects. Given this framework, we find that matching on baseline covariates
generally reduces the bias associated with these covariates, when compared
to the original DiD estimator. We further find that additionally matching
on pre-treatment outcomes has both a cost and a benefit. First, matching on
pre-treatment outcomes will partially balance unobserved confounders, which
mitigates some bias. This reduction is proportional to the outcome's
reliability, a measure of how coupled the outcomes are with the latent
covariates. On the other hand, we find that matching on pre-treatment
outcomes also undermines the second "difference" in a DiD estimate by
forcing the treated and control group's pre-treatment outcomes to be equal.
This injects bias into the final estimate, creating a bias-bias tradeoff.
We extend our bias results to multivariate confounders with multiple
pre-treatment periods and find similar results. We summarize our findings
with heuristic guidelines on whether to match prior to a DiD analysis,
along with a method for roughly estimating the reduction in bias. We
illustrate our guidelines by reanalyzing a recent empirical study that used
matching prior to a DiD analysis to explore the impact of principal
turnover on student achievement.
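For readers less familiar with the estimator under discussion, the canonical two-period DiD estimate is the treated group's average change over time minus the control group's average change. A minimal sketch with made-up numbers (illustrative only, not the authors' code):

```python
from statistics import mean

def did_estimate(y_pre_treat, y_post_treat, y_pre_ctrl, y_post_ctrl):
    """Two-period difference-in-differences: the treated group's average
    change over time minus the control group's average change."""
    treated_change = mean(y_post_treat) - mean(y_pre_treat)
    control_change = mean(y_post_ctrl) - mean(y_pre_ctrl)
    return treated_change - control_change

# Toy data: treated units rise by 3 on average, controls by 1.
print(did_estimate([10.0, 12.0, 11.0], [13.0, 15.0, 14.0],
                   [9.0, 11.0, 10.0], [10.0, 12.0, 11.0]))  # → 2.0
```

Matching, as discussed in the talk, determines which control units enter the two control lists before this difference of differences is taken.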
<2022 Schedule>
GOV 3009 Website:
https://projects.iq.harvard.edu/applied.stats.workshop-gov3009
Calendar:
https://calendar.google.com/calendar/embed?src=c_3v93pav9fjkkldrbu9snbhned8…
Best,
Shusei
Dear Applied Statistics Workshop Community,
Our next meeting of the semester will be on September 21 (12:00 ET).
Matthew Blackwell will present "Difference-in-differences Designs for
Controlled Direct Effects."
<Where>
In-person: CGIS K354
Bagged lunches are available for pick-up at 11:40 (CGIS K354).
Zoom:
https://harvard.zoom.us/j/99181972207?pwd=Ykd3ZzVZRnZCSDZqNVpCSURCNnVvQT09
<Abstract>
Political scientists are increasingly interested in controlled direct
effects, which are important quantities of interest for understanding why,
how, and when causal effects will occur. Unfortunately, their
identification has usually required strong and often unreasonable
selection-on-observables assumptions for the mediator. In this paper, we
show how to identify and estimate controlled direct effects under a
difference-in-differences design where we have measurements of the outcome
and mediator before and after treatment assignment. This design allows us
to weaken the identification assumptions to allow for linear, time-constant
unmeasured confounding between the mediator and the outcome. Furthermore,
we develop a semiparametrically efficient and multiply robust estimator for
these quantities and apply our approach to a recent experiment evaluating
the effectiveness of short conversations at reducing intergroup prejudice.
An open-source software package implements the methodology with a variety
of flexible, machine-learning algorithms to avoid bias from
misspecification.
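As a point of reference for the talk, the controlled direct effect at a fixed mediator value m is the average outcome under treatment with the mediator held at m, minus the same average under control. A definitional sketch only; the talk's actual estimator is a multiply robust, DiD-based procedure, not this plug-in contrast:

```python
from statistics import mean

def controlled_direct_effect(y_treated_at_m, y_control_at_m):
    """CDE(m): mean potential outcome under treatment with the mediator
    fixed at m, minus the mean under control with the mediator fixed at m.
    Identifying these potential outcomes from data is the hard part the
    talk addresses; here they are simply given as hypothetical inputs."""
    return mean(y_treated_at_m) - mean(y_control_at_m)

# Hypothetical potential outcomes with the mediator fixed at some value m:
print(controlled_direct_effect([5.0, 7.0, 6.0], [4.0, 5.0, 3.0]))  # → 2.0
```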
<2022 Schedule>
GOV 3009 Website:
https://projects.iq.harvard.edu/applied.stats.workshop-gov3009
Calendar:
https://calendar.google.com/calendar/embed?src=c_3v93pav9fjkkldrbu9snbhned8…
Best,
Shusei
Dear Applied Statistics Workshop Community,
Our next meeting of the semester will be on September 14 (12:00 ET). Cory
McCartan will present "Individual and Differential Harm in Redistricting."
<When>
September 14, 12:00 to 1:30 PM ET
Bagged lunches are available for pick-up at 11:40 (CGIS K354).
<Where>
In-person: CGIS K354
Zoom:
https://harvard.zoom.us/j/99181972207?pwd=Ykd3ZzVZRnZCSDZqNVpCSURCNnVvQT09
<Abstract>
Social scientists have developed dozens of measures for assessing partisan
bias in redistricting. But these measures cannot be easily adapted to other
groups, including those defined by race, class, or geography. Nor are they
applicable to single- or no-party contexts such as local redistricting. To
overcome these limitations, we propose a unified framework of harm for
evaluating the impacts of a districting plan on individual voters and the
groups to which they belong. We consider a voter harmed if their chosen
candidate is not elected under the current plan, but would be under a
different plan. Harm improves on existing measures by both focusing on the
choices of individual voters and directly incorporating counterfactual
plans. We discuss strategies for estimating harm, and demonstrate the
utility of our framework through analyses of partisan gerrymandering in New
Jersey, voting rights litigation in Alabama, and racial dynamics of Boston
City Council elections.
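The abstract's definition of harm admits a very direct sketch: a voter is harmed when their chosen candidate loses under the enacted plan but would win under a counterfactual plan. Illustrative code only; the `elected_*` sets are hypothetical inputs, and the authors' estimation strategies (e.g., over ensembles of counterfactual plans) go well beyond this check:

```python
def harmed(chosen, elected_current, elected_counterfactual):
    """True if this voter's chosen candidate is not elected under the
    current plan but would be elected under the counterfactual plan."""
    return chosen not in elected_current and chosen in elected_counterfactual

# Voter whose candidate "A" loses under the enacted plan but wins
# under the comparison plan: harmed.
print(harmed("A", elected_current={"B"}, elected_counterfactual={"A"}))  # → True
# Voter whose candidate wins either way: not harmed.
print(harmed("B", elected_current={"B"}, elected_counterfactual={"A", "B"}))  # → False
```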
<2022 Schedule>
GOV 3009 Website:
https://projects.iq.harvard.edu/applied.stats.workshop-gov3009
Calendar:
https://calendar.google.com/calendar/embed?src=c_3v93pav9fjkkldrbu9snbhned8…
Best,
Shusei
Dear Applied Statistics Workshop Community,
Our next meeting of the semester will be on September 7 (12:00 ET).
Professor Xiang Zhou will present "Marginal Interventional Effects."
<When>
September 7, 12:00 to 1:30 PM ET
Bagged lunches are available for pick-up at 11:30 (CGIS K354).
<Where>
In-person: CGIS K354
Zoom:
https://harvard.zoom.us/j/99181972207?pwd=Ykd3ZzVZRnZCSDZqNVpCSURCNnVvQT09
<Abstract>
Conventional causal estimands, such as the average treatment
effect (ATE), reflect how the mean outcome in a population or subpopulation
would change if all units received treatment versus control. Real-world
policy changes, however, are often incremental, changing the treatment
status for only a small segment of the population who are at or near “the
margin of participation.” To capture this notion, two parallel lines of
inquiry have developed in economics and in statistics and epidemiology that
define, identify, and estimate what we call interventional effects. In this
article, we bridge these two strands of literature by defining
interventional effect (IE) as the per capita effect of a treatment
intervention on an outcome of interest, and marginal interventional effect
(MIE) as its limit when the size of the intervention approaches zero. The
IE and MIE can be viewed as the unconditional counterparts of the
policy-relevant treatment effect (PRTE) and marginal PRTE (MPRTE) proposed
in the economics literature. Unlike the PRTE and MPRTE, however, the IE and
MIE are defined without reference to a latent index model and, as we show,
can be identified either under unconfoundedness or through the use of
instrumental variables. For both scenarios, we show that MIEs are typically
identified without the strong positivity assumption required of the ATE,
highlight several “stylized interventions” that may be of particular
interest in policy analysis, discuss several parametric and semiparametric
estimation strategies, and illustrate the proposed methods with an
empirical example.
Paper link: https://arxiv.org/abs/2206.10717
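One natural reading of the abstract's "per capita effect of a treatment intervention" (my interpretation for illustration, not the paper's formal notation) is the change in the population mean outcome divided by the change in the treated share:

```python
from statistics import mean

def interventional_effect(y_status_quo, y_policy, share_status_quo, share_policy):
    """Sketch of a per capita interventional effect: the change in the mean
    outcome per unit change in the treated share. An interpretation of the
    abstract for illustration, not the paper's formal definition."""
    return (mean(y_policy) - mean(y_status_quo)) / (share_policy - share_status_quo)

# Expanding treatment from 20% to 30% of the population raises the mean
# outcome from 10.0 to 10.5, i.e. roughly 5 per newly treated person.
print(interventional_effect([10.0] * 4, [10.5] * 4, 0.20, 0.30))
```

The MIE would then be the limit of this ratio as the intervention's size shrinks toward zero.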
<2022 Schedule>
GOV 3009 Website:
https://projects.iq.harvard.edu/applied.stats.workshop-gov3009
Calendar:
https://calendar.google.com/calendar/embed?src=c_3v93pav9fjkkldrbu9snbhned8…
Best,
Shusei