Hi everyone!
This week at the Applied Statistics Workshop we will be welcoming *Eitan
Hersh*, Associate Professor at Tufts University. He will be presenting work
entitled *Behavioral Applications of Voter Files*. Please find the
abstract below and on the Applied Stats website here
<https://projects.iq.harvard.edu/applied.stats.workshop-gov3009>.
As usual, we will meet at noon in CGIS Knafel Room 354 and lunch will be
provided. See you all there!
-- Dana Higgins
*Title:* *Behavioral Applications of Voter Files*
*Abstract:* Over the last decade, state-level voter registration records
have become a regular source of data in research in political behavior. The
research value in voter files typically depends on the ability to link the
voter files to external sources of data. In this presentation, I will
describe several approaches to using voter files in behavioral research. I
will focus on the practice of record linkage and demonstrate a range of
substantive questions that can be answered with these voter files.
Substantive topics include the study of geography, family networks, the
effects of terrorist activities, partisan bias in medical treatment,
religious civic engagement, and voter suppression/access laws.
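The record-linkage step the abstract centers on can be illustrated with a minimal deterministic sketch: block on a normalized name plus birth year and keep the agreeing pairs. The field names (`last_name`, `first_name`, `birth_year`) and the toy data are illustrative assumptions, not the speaker's actual pipeline, which would use probabilistic matching on far richer fields.

```python
# Hedged sketch: deterministic record linkage between a voter file and an
# external dataset, keyed on normalized name and birth year.
# All field names and records here are illustrative assumptions.

def normalize(name: str) -> str:
    """Lowercase and strip punctuation so trivially different spellings match."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def link_records(voter_file, external):
    """Return (voter, external) pairs that agree on the blocking key."""
    index = {}
    for rec in external:
        key = (normalize(rec["last_name"]), normalize(rec["first_name"]),
               rec["birth_year"])
        index.setdefault(key, []).append(rec)
    matches = []
    for voter in voter_file:
        key = (normalize(voter["last_name"]), normalize(voter["first_name"]),
               voter["birth_year"])
        for rec in index.get(key, []):
            matches.append((voter, rec))
    return matches

voters = [{"last_name": "O'Neil", "first_name": "Ann", "birth_year": 1980},
          {"last_name": "Smith", "first_name": "Bo", "birth_year": 1975}]
donors = [{"last_name": "ONeil", "first_name": "ann", "birth_year": 1980,
           "amount": 50}]
pairs = link_records(voters, donors)  # one matched pair
```

Real applications replace exact blocking with probabilistic models that tolerate typos and name changes, but the index-then-match structure is the same.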
Hi everyone!
This week at the Applied Statistics Workshop we will be welcoming *Mohammad
Jalali*, research faculty at MIT Sloan School of Management. He will be
presenting work entitled *A Flexible Method for Aggregation of Prior
Statistical Findings*. Please find the abstract below and the full text here
<https://urldefense.proofpoint.com/v2/url?u=http-3A__journals.plos.org_ploso…>.
As usual, we will meet at noon in CGIS Knafel Room 354 and lunch will be
provided. See you all there!
-- Dana Higgins
*Title:* *A Flexible Method for Aggregation of Prior Statistical Findings*
*Abstract:* Rapid growth in scientific output requires methods for
quantitative synthesis of prior research, yet current meta-analysis methods
limit aggregation to studies with similar designs. Here we describe and
validate Generalized Model Aggregation (GMA), which allows researchers to
combine prior estimated models of a phenomenon into a quantitative
meta-model, while imposing few restrictions on the structure of prior
models or on the meta-model. In an empirical validation, building on 27
published equations from 16 studies, GMA provides a predictive equation for
Basal Metabolic Rate that outperforms existing models, identifies novel
nonlinearities, and estimates biases in various measurement methods.
Additional numerical examples demonstrate the ability of GMA to obtain
unbiased estimates from potentially mis-specified prior studies. Thus, in
various domains, GMA can leverage previous findings to compare alternative
theories, advance new models, and assess the reliability of prior studies,
extending the meta-analysis toolbox to many new problems.
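The abstract positions GMA as a generalization of standard quantitative synthesis. The familiar special case it generalizes, fixed-effect (inverse-variance) pooling of a single coefficient reported by several prior studies, can be sketched in a few lines. This is only the classic baseline, not the GMA procedure itself, and the numbers are made up for illustration.

```python
# Hedged sketch: fixed-effect (inverse-variance) pooling of one coefficient
# across prior studies. GMA goes far beyond this special case (it aggregates
# whole estimated models), but this is the familiar meta-analytic baseline.

def pool_estimates(estimates, std_errors):
    """Inverse-variance weighted mean of the estimates, and its standard error."""
    weights = [1.0 / se**2 for se in std_errors]   # precision of each study
    total = sum(weights)
    pooled = sum(w * b for w, b in zip(weights, estimates)) / total
    pooled_se = (1.0 / total) ** 0.5               # SE shrinks as studies accumulate
    return pooled, pooled_se

# Three hypothetical studies reporting the same coefficient:
est, se = pool_estimates([0.40, 0.50, 0.45], [0.10, 0.20, 0.10])
```

Note how the pooled standard error is smaller than any single study's, which is the payoff of synthesis; GMA aims to retain that payoff while dropping the requirement that the prior studies share a design.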
Hi everyone!
This week at the Applied Statistics Workshop we will be welcoming *Stephen
Raudenbush*, a Professor of Sociology at the University of Chicago. He will
be presenting work entitled *Estimands and Estimators for Multi-Site
Randomized Trials*. Please find the abstract below.
As usual, we will meet at noon in CGIS Knafel Room 354 and lunch will be
provided. See you all there!
-- Dana Higgins
*Title:* *Estimands and Estimators for Multi-Site Randomized Trials*
*Abstract:* In a multi-site randomized trial, sites such as schools or
hospitals are sampled; within each site, persons are assigned at random to
treatments. Such studies are increasingly common in social welfare,
medicine, and education. In this talk, I’ll first use potential outcomes
and a super-population framework to precisely describe different potential
populations and parameters of interest, which may diverge considerably when
treatment effects vary. Second, I’ll show that maximizing a weighted
two-level likelihood produces consistent estimators of all parameters,
but only after we introduce a correction for estimating between-site
variance components. Third, we’ll see that these weighted estimators, while
consistent, may be embarrassingly inefficient (to the point of being
improved by throwing out data). Precision weighting may help but
may introduce large-sample bias. In the interest of time, I will focus on
two parameters: (1) the average impact of treatment assignment (“intention
to treat effect”); (2) in trials with non-compliance, the average impact of
participation in the treatment on those induced by random assignment to
participate (“complier average causal effect”). I’ll illustrate with data
from the National Head Start Impact Study.
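The first estimand the talk focuses on, the intention-to-treat effect, has a simple site-weighted form: compute a difference in means within each site, then average across sites with chosen weights. The sketch below uses site sample size as the weight and invented toy data; the talk's corrections for between-site variance components and the efficiency issues it raises are beyond this illustration.

```python
# Hedged sketch: a site-weighted intention-to-treat estimator for a
# multi-site randomized trial. Weights and data are illustrative; the talk
# discusses why naive weighting can be consistent yet inefficient.

def site_ate(outcomes, assignments):
    """Difference in mean outcomes between treated (z=1) and control (z=0)."""
    treated = [y for y, z in zip(outcomes, assignments) if z == 1]
    control = [y for y, z in zip(outcomes, assignments) if z == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

def weighted_itt(sites):
    """Sample-size-weighted average of per-site treatment effects."""
    total_n = sum(len(s["y"]) for s in sites)
    return sum(len(s["y"]) / total_n * site_ate(s["y"], s["z"]) for s in sites)

# Two hypothetical sites with within-site random assignment:
sites = [{"y": [3, 5, 1, 2], "z": [1, 1, 0, 0]},   # site effect = 2.5, n = 4
         {"y": [4, 6, 2],    "z": [1, 1, 0]}]      # site effect = 3.0, n = 3
itt = weighted_itt(sites)  # (4 * 2.5 + 3 * 3.0) / 7
```

Choosing the weights is exactly where the estimand question bites: size weights, equal site weights, and precision weights all target different population quantities when treatment effects vary across sites.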
Hi everyone!
Due to a scheduling issue, Stephen Raudenbush will be presenting next
week. With less than 24 hours' notice, *Mayya Komisarchik and Aaron
Kaufman* have graciously offered to present their work in progress, entitled *How
to Measure Legislative Compactness If You Only Know It When You See It*.
Please find the abstract below.
As usual, we will meet at noon in CGIS Knafel, Room 354, and lunch will be
provided. See you there!
-- Dana Higgins
*Title:* *How to Measure Legislative Compactness If You Only Know It When
You See It*
*Authors:* Aaron Kaufman, Gary King, and Mayya Komisarchik
*Abstract:* The US Supreme Court, many state constitutions, and numerous
judicial opinions require that legislative districts be “compact,” a concept
assumed so simple that no definition is offered other than “you know it when
you see it.” Academics, in contrast, have concluded that the concept is so
complex that it has multiple theoretical dimensions requiring large numbers
of conflicting empirical measures. We hypothesize that both are correct: the
concept is complex and multidimensional, but one particular unidimensional
ordering represents a common understanding of compactness in the law and
across people. We develop a survey design to elicit this understanding,
without bias in favor of one’s own political views, and with high levels of
intracoder and intercoder reliability (even though the standard paired
comparisons approach fails). We then create a statistical model that
predicts, with high accuracy and solely from the geometric features of the
district, compactness evaluations by judges and other public officials from
many jurisdictions (as well as by redistricting consultants and expert
witnesses, law professors, law students, graduate students, undergraduates,
ordinary citizens, and Mechanical Turk workers). As a companion to this
paper, we offer data on compactness from our validated measure for 18,215 US
state legislative and congressional districts, as well as software to
compute this measure from any district shape. We also discuss what may be
the wider applicability of our general methodological approach to measuring
important concepts that you only know when you see them.
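The model the abstract describes predicts human compactness judgments from geometric features of the district shape. One classic such feature, offered here purely as an example of a geometric input and not as the paper's validated measure, is the Polsby-Popper score, 4πA/P², which equals 1 for a circle and falls toward 0 for sprawling shapes.

```python
# Hedged sketch: the Polsby-Popper compactness score for a simple polygon,
# one standard geometric feature of a district shape. This is NOT the
# paper's validated measure, just an illustrative input of the same kind.
import math

def polsby_popper(vertices):
    """4*pi*area / perimeter**2 for a polygon given as (x, y) vertex pairs."""
    n = len(vertices)
    area2 = 0.0   # twice the signed area (shoelace formula)
    perim = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area2 += x1 * y2 - x2 * y1
        perim += math.hypot(x2 - x1, y2 - y1)
    area = abs(area2) / 2.0
    return 4 * math.pi * area / perim**2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # scores pi/4, about 0.785
score = polsby_popper(square)
```

A long thin rectangle scores much lower than a square of equal area, which matches the intuition the measure is meant to capture; the paper's contribution is replacing any single such formula with an ordering validated against human judgments.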
Hi everyone!
Welcome to the Applied Statistics Workshop 2017-2018! Our first session
will be this Wednesday, September 6.
This week at the Applied Statistics Workshop we will be welcoming *Stephen
Raudenbush*, a Professor of Sociology at the University of Chicago. He will
be presenting work entitled *Estimands and Estimators for Multi-Site
Randomized Trials*. Please find the abstract below.
As usual, we will meet at noon in CGIS Knafel Room 354 and lunch will be
provided. See you all there!
-- Dana Higgins
*Title:* *Estimands and Estimators for Multi-Site Randomized Trials*
*Abstract:* In a multi-site randomized trial, sites such as schools or
hospitals are sampled; within each site, persons are assigned at random to
treatments. Such studies are increasingly common in social welfare,
medicine, and education. In this talk, I’ll first use potential outcomes
and a super-population framework to precisely describe different potential
populations and parameters of interest, which may diverge considerably when
treatment effects vary. Second, I’ll show that maximizing a weighted
two-level likelihood produces consistent estimators of all parameters,
but only after we introduce a correction for estimating between-site
variance components. Third, we’ll see that these weighted estimators, while
consistent, may be embarrassingly inefficient (to the point of being
improved by throwing out data). Precision weighting may help but
may introduce large-sample bias. In the interest of time, I will focus on
two parameters: (1) the average impact of treatment assignment (“intention
to treat effect”); (2) in trials with non-compliance, the average impact of
participation in the treatment on those induced by random assignment to
participate (“complier average causal effect”). I’ll illustrate with data
from the National Head Start Impact Study.