Hi everyone!
This week is the final workshop of the semester and we will be welcoming *James
Robins*, Professor of Epidemiology at the Harvard School of Public Health.
He will be presenting work entitled *Confidence Intervals for Causal
Effects with Propensity Score and Outcome Regression Estimated with Machine
Learning: When are They Valid?*. Please find the abstract below and on the
Applied Stats website here
<https://projects.iq.harvard.edu/applied.stats.workshop-gov3009>.
As usual, we will meet at noon in CGIS Knafel Room 354 and lunch will be
provided. See you all there!
-- Dana Higgins
*Title:* *Confidence Intervals for Causal Effects with Propensity Score and
Outcome Regression Estimated with Machine Learning: When are They Valid?*
*Abstract:* In estimation of causal effects (such as the average causal
effect or the variance weighted average causal effect) in the presence of
high dimensional covariates sufficient to control confounding, an
increasingly popular procedure is to estimate the causal effect using
doubly robust estimators with the propensity score and outcome regression
estimated by machine learning and then to construct Wald confidence
intervals based on an estimator of the standard error. The validity of these
intervals depends critically on the assumption that the bias of the estimator
is smaller than its standard error. If this assumption fails, the intervals
will undercover, perhaps dramatically. Can anything be done about this
problem, given that the bias of the estimator is unknown? Recently a number
of approaches to this problem have been offered. I will discuss these and
then offer my own approach, which generally improves greatly upon the
alternatives.
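To make the procedure in the abstract concrete, here is a minimal sketch (not from the talk itself) of a doubly robust (AIPW) estimate of the average causal effect on simulated data, with the propensity score and outcome regressions fit by a machine-learning method and a Wald confidence interval formed from the estimated standard error. All function and variable names are illustrative assumptions; a careful implementation would also use cross-fitting, which is omitted here for brevity.

```python
# Sketch: doubly robust (AIPW) estimation of the average causal effect
# with ML-estimated nuisance functions, plus a Wald confidence interval.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)
n, p = 2000, 5
X = rng.normal(size=(n, p))                    # covariates
ps_true = 1 / (1 + np.exp(-X[:, 0]))           # true propensity score
A = rng.binomial(1, ps_true)                   # treatment indicator
Y = A + X[:, 0] + rng.normal(size=n)           # outcome; true ATE = 1

# Nuisance estimates via machine learning (no cross-fitting in this sketch)
ps_hat = GradientBoostingClassifier().fit(X, A).predict_proba(X)[:, 1]
mu1_hat = GradientBoostingRegressor().fit(X[A == 1], Y[A == 1]).predict(X)
mu0_hat = GradientBoostingRegressor().fit(X[A == 0], Y[A == 0]).predict(X)

# AIPW influence-function values, point estimate, and Wald interval
phi = (mu1_hat - mu0_hat
       + A * (Y - mu1_hat) / ps_hat
       - (1 - A) * (Y - mu0_hat) / (1 - ps_hat))
ate_hat = phi.mean()
se_hat = phi.std(ddof=1) / np.sqrt(n)
ci = (ate_hat - 1.96 * se_hat, ate_hat + 1.96 * se_hat)
print(f"ATE estimate: {ate_hat:.3f}, 95% Wald CI: ({ci[0]:.3f}, {ci[1]:.3f})")
# This interval is valid only when the bias of ate_hat is small relative
# to se_hat -- exactly the assumption the talk examines.
```

The Wald interval here implicitly assumes the estimator's bias is negligible relative to its standard error; when the ML nuisance estimates converge too slowly, that assumption breaks and the interval undercovers, which is the problem the talk addresses.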