Hi all,
This week at the Applied Statistics workshop we will be welcoming Marc Ratkovic, an Assistant Professor of Politics at Princeton University. He will be presenting joint work with Dustin Tingley, entitled "Causal Inference through the Method of Direct Estimation." Please find the abstract below and on the website. The paper can be accessed here: http://scholar.harvard.edu/dtingley/files/mde.pdf
We will meet in CGIS Knafel Room 354 at noon and lunch will be provided. *Note that this is the last week we will be meeting this year.
Best,
Pam
"Causal Inference through the Method of Direct Estimation"
Marc Ratkovic and Dustin Tingley
Abstract: The intersection of causal inference and machine learning is a rapidly advancing field. We propose a new approach, the method of direct estimation, that draws on both traditions to obtain nonparametric estimates of treatment effects. The approach focuses on estimating the effect of fluctuations in a treatment variable on an outcome. A tensor-spline implementation enables rich interactions between functional bases, allowing the approach to capture treatment/covariate interactions. We show how new innovations in Bayesian sparse modeling readily handle the proposed framework, and then document its performance in simulation and applied examples. Furthermore, we show how the method of direct estimation can easily extend to structural estimators commonly used in a variety of disciplines, such as instrumental variables, mediation analysis, and sequential g-estimation.
Hi all,
This week at the Applied Statistics workshop we will be welcoming Doug Rivers, a Professor of Political Science at Stanford University. He will be presenting work entitled "What the Hell Happened? The Perils of Polling in the 2016 U.S. Election." Please find the abstract below and on the website.
We will meet in CGIS Knafel Room 354 at noon and lunch will be provided.
Best,
Pam
Title: What the Hell Happened? The Perils of Polling in the 2016 U.S. Election
Abstract: Most polls at the end of the 2016 U.S. Presidential election campaign showed Hillary Clinton leading Donald Trump and she did indeed win the popular vote by a margin of over two percent. However, several anomalies are apparent in 2016 polling:
1) Some polling exhibited "phantom swings" in support for Clinton and Trump.
2) There is clear evidence of bias in midwestern state polls.
3) Underestimates of mean squared polling error caused poll aggregators to overestimate Clinton's chances of winning.
4) Republican turnout in 2016 was underestimated by both likely voter screens and historical turnout models.
With the benefit of hindsight, most of these problems could have been avoided. Improved methods are discussed, along with speculations about the limits of campaign analytics.
Hi all,
This week at the Applied Statistics workshop we will be welcoming James Robins, the Mitchell L. and Robin LaFoley Dong Professor of Epidemiology at the Harvard Chan School of Public Health. He will be presenting work entitled "Variable Selection for Estimation of Causal Effects - Art or Science: What is Done in Practice, What Ought to be Done, and What Cannot be Done?" Please find the abstract below and on the website.
We will meet in CGIS Knafel Room 354 at noon and lunch will be provided.
Best,
Pam
Title: Variable Selection for Estimation of Causal Effects - Art or Science: What is Done in Practice, What Ought to be Done, and What Cannot be Done?
Abstract: I consider the problem of estimation of the average treatment effect (ATE) of a binary treatment in the presence of a possibly high-dimensional vector of potential pre-treatment confounding factors. How does one choose which confounders to adjust for, and once chosen, how should one adjust? Many methods have been proposed, and the number of such methods seems to be proliferating at a rapid rate. Authors often advocate for their proposed method based on simulation studies, often comparing their method to others using mean squared error and/or confidence interval length and coverage as criteria. However, any simulation study can explore only a small part of the "parameter space," leading to conflicting claims and recommendations. In this talk I try to begin to provide some small semblance of order to this disorder by reviewing both known and new results concerning (i) the statistical guarantees offered by each approach, (ii) mathematical relationships between different approaches, and (iii) more generally, the limits to the guarantees that any possible approach can offer.
Hi all,
This week at the Applied Statistics workshop we will be welcoming Isaiah Andrews, an Assistant Professor in the Department of Economics at MIT. He will be presenting work entitled "Identification of and correction for publication bias" (joint work with Maximilian Kasy). Please find the abstract below and on the website.
We will meet in CGIS Knafel Room 354 at noon and lunch will be provided.
Best,
Pam
Title: "Identification of and correction for publication bias"
Joint with Maximilian Kasy (Harvard)
Abstract:
Not all empirical results are published, and the probability that a given result is published may depend on the result. Such selective publication can lead to biased estimators and distorted inference. We discuss identification of selectivity, and in particular of the conditional probability of publication as a function of a study's results. We propose two approaches to identification, the first based on systematic replication studies, and the second based on meta-studies. Having identified the form of selectivity, we propose median-unbiased estimators and associated confidence sets. We apply our methods to recent large-scale replication studies in experimental economics and experimental psychology, where we find strong evidence of selection based on statistical significance. We also apply our methods to a meta-study of minimum wage effects, where we find larger publication probabilities for studies reporting a negative effect on employment, and to a meta-study of de-worming programs, where our findings are ambiguous.