Dear Applied Statistics Workshop Community,

Our next meeting of the semester will be on February 8 (12:00 EST). Yi Zhang will present "Safe Policy Learning under Regression Discontinuity Designs."

<Where>
CGIS K354
Bagged lunches will be available for pick-up at 11:40 AM (CGIS K354).
Zoom: https://harvard.zoom.us/j/99181972207?pwd=Ykd3ZzVZRnZCSDZqNVpCSURCNnVvQT09

<Abstract>
The regression discontinuity (RD) design is widely used for program evaluation with observational data. The RD design enables the identification of the local average treatment effect (LATE) at the treatment cutoff by exploiting a known, deterministic treatment assignment mechanism. The primary focus of the existing literature has been the development of rigorous estimation methods for the LATE. In contrast, we consider policy learning under the RD design. We develop a robust optimization approach to finding an optimal treatment cutoff that improves upon the existing one. Under the RD design, policy learning requires extrapolation. We address this problem by partially identifying the conditional expectation function of the counterfactual outcome under a smoothness assumption commonly used for the estimation of the LATE. We then minimize the worst-case regret relative to the status quo policy. The resulting new treatment cutoffs come with a safety guarantee, enabling policy makers to limit the probability that the new cutoffs yield a worse outcome than the existing cutoff. Going beyond the standard single-cutoff case, we generalize the proposed methodology to the multi-cutoff RD design by developing a doubly robust estimator. We establish asymptotic regret bounds for the learned policy using semiparametric efficiency theory. Finally, we apply the proposed methodology to empirical and simulated data sets.

<2022-2023 Schedule>
GOV 3009 Website: https://projects.iq.harvard.edu/applied.stats.workshop-gov3009
Calendar: https://calendar.google.com/calendar/embed?src=c_3v93pav9fjkkldrbu9snbhned8%40group.calendar.google.com&ctz=America%2FNew_York

Best,
Shusei