Dear workshop community,
We will convene for the Applied Statistics Workshop (Gov 3009) *TOMORROW*,
Wednesday (2/27).
The speaker is *Roland Neil* (Harvard Sociology), who will be presenting his
work, "Testing for Racial and Ethnic Discrimination in Police Stops."
*Where:* CGIS Knafel Building, Room K354 (see this link
<https://map.harvard.edu/?bld=04471&level=9> for directions).
*When:* Wednesday, February 27th, 12 noon–1:30 pm.
*Abstract:*
Whether police discriminate on the basis of race and ethnicity when making
stops is a topic of frequent debate among academics, in courts, and beyond.
However, due to implausible assumptions about police behavior, the most
commonly used tests are quite susceptible to indicating discrimination when
it is not present or to indicating a lack of discrimination when it is
present. This is true of research on the New York Police Department’s
(NYPD) practice of Stop, Question, and Frisk (SQF), a particularly
contentious case where findings have been mixed. Using data from 2008 to
2012 on over 700,000 NYPD weapon stops, I develop a three-step technique
with which to test for discrimination, one that uses machine learning
algorithms to estimate stop-level hit rates, which are then compared through
matching
and semi-parametric models. This technique addresses several of the most
worrisome inferential challenges faced when testing for discrimination in
police stops. I find that the NYPD was discriminatory against blacks, and
to a lesser extent against Hispanics, in its SQF stops for suspected
criminal possession of a weapon. However, differences in the contexts in
which stops happened—especially where they happened—account for most of the
raw disparities in hit rates.
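As a rough illustration of the matching idea in the abstract (this is not the speaker's actual pipeline; the stop records, the crude location-frequency stand-in for a machine learning model, and the greedy matching below are all hypothetical):

```python
# Illustrative sketch: (1) estimate stop-level hit rates, (2) match stops
# across groups on those estimates, (3) compare observed hits within pairs.
from collections import defaultdict

# (group, location, hit) -- hypothetical stop records
stops = [
    ("A", "precinct1", 1), ("A", "precinct1", 0), ("A", "precinct2", 0),
    ("B", "precinct1", 1), ("B", "precinct2", 0), ("B", "precinct2", 1),
]

# Step 1: crude stand-in for an ML model -- location-level hit rates.
by_loc = defaultdict(list)
for _, loc, hit in stops:
    by_loc[loc].append(hit)
hit_rate = {loc: sum(h) / len(h) for loc, h in by_loc.items()}

# Step 2: match each group-A stop to the group-B stop with the closest
# predicted hit rate (greedy nearest-neighbor, with replacement).
a_stops = [(hit_rate[loc], hit) for g, loc, hit in stops if g == "A"]
b_stops = [(hit_rate[loc], hit) for g, loc, hit in stops if g == "B"]
pairs = [(a, min(b_stops, key=lambda b: abs(b[0] - a[0]))) for a in a_stops]

# Step 3: average within-pair difference in observed hits; a gap that
# persists after matching on context is the kind of signal such tests seek.
gap = sum(a[1] - b[1] for a, b in pairs) / len(pairs)
```

A real analysis would match on many covariates and use semi-parametric models, as the abstract notes; this sketch only shows the shape of the comparison.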
*All are welcome! Lunch is provided!*
Best,
Connor Jerzak
Applied Statistics Workshop -- Graduate Student Coordinator
An anonymous feedback form for the workshop can be found at this link
<https://docs.google.com/forms/d/e/1FAIpQLScp4lPVBtp4Akf6K6ggmfcTUSIUHEJX89-…>.
Workshop listserv sign-up at this link
<https://lists.fas.harvard.edu/mailman/listinfo/gov3009-l>.
Dear workshop community,
We will convene for the Applied Statistics Workshop (Gov 3009) tomorrow,
Wednesday (2/20).
The speaker is *Ankur Pandya* (Harvard School of Public Health), who will be
presenting his work, "Modeling the Cost Effectiveness of Two Big League
Pay-for-Performance Policies."
*Where:* CGIS Knafel Building, Room K354 (see this link
<https://map.harvard.edu/?bld=04471&level=9> for directions).
*When:* Wednesday, February 20th, 12 noon–1:30 pm.
*Abstract:*
To date, evidence on pay-for-performance has been mixed. When
pay-for-performance policies improve health outcomes, researchers should
evaluate whether these health gains are worth the incremental costs
(financial incentives and increased utilization) needed to achieve them. We
used simulation modeling to evaluate the cost-effectiveness of two
pay-for-performance policies that were recently evaluated in major
journals: 1) a randomized controlled trial of financial incentives on
patients, physicians, or both for cholesterol control (Asch et al. JAMA
2015); and 2) a retrospective cross-country analysis of the United
Kingdom’s Quality and Outcomes Framework, the world’s largest primary care
pay-for-performance program (Ryan et al. Lancet 2016). We worked with the
authors of these studies to estimate the cost-effectiveness of these
programs and to identify the key drivers (e.g., levels of health effects,
levels of incentive payments, or modeling assumptions) of our model-based
results.
*All are welcome! Lunch is provided!*
Best,
Connor Jerzak
Applied Statistics Workshop -- Graduate Student Coordinator
An anonymous feedback form for the workshop can be found at this link
<https://docs.google.com/forms/d/e/1FAIpQLScp4lPVBtp4Akf6K6ggmfcTUSIUHEJX89-…>.
Workshop listserv sign-up at this link
<https://lists.fas.harvard.edu/mailman/listinfo/gov3009-l>.
Dear workshop community,
We will convene for the Applied Statistics Workshop (Gov 3009) tomorrow,
Wednesday (2/13).
The speaker is *Ilya Shpitser* (Johns Hopkins Computer Science), who will be
presenting his work, "Fair Inference on Outcomes."
*Where:* CGIS Knafel Building, Room K354 (see this link
<https://map.harvard.edu/?bld=04471&level=9> for directions).
*When:* Wednesday, February 13th, 12 noon–1:30 pm.
*Abstract:* Systematic discriminatory biases present in our society
influence the way data is collected and stored, the way variables are
defined, and the way scientific findings are put into practice as policy.
Automated decision procedures and learning algorithms applied to such data
may serve to perpetuate existing injustice or unfairness in our society.
We consider how to solve prediction and policy learning problems in a way
that "breaks the cycle of injustice" by correcting for the unfair
dependence of outcomes, decisions, or both, on sensitive features (e.g.,
variables that correspond to gender, race, disability, or other protected
attributes). We use methods from causal inference and constrained
optimization to learn outcome predictors and optimal policies in a way that
addresses multiple potential biases which afflict data analysis in
sensitive contexts. Our proposal comes equipped with the guarantee that
solving prediction or decision problems on new instances will result in a
joint distribution where the given fairness constraint is satisfied. We
illustrate our approach with both synthetic data and real criminal justice
data.
*All are welcome! Lunch is provided!*
Best,
Connor Jerzak
Applied Statistics Workshop -- Graduate Student Coordinator
An anonymous feedback form for the workshop can be found at this link
<https://docs.google.com/forms/d/e/1FAIpQLScp4lPVBtp4Akf6K6ggmfcTUSIUHEJX89-…>.
Workshop listserv sign-up at this link
<https://lists.fas.harvard.edu/mailman/listinfo/gov3009-l>.
Dear workshop community,
We will convene for the Applied Statistics Workshop (Gov 3009) TOMORROW,
Wednesday (2/6).
The speaker is *Maya Mathur* (Harvard Epidemiology), who will be presenting
her work, "Sensitivity analysis for publication bias and selective reporting
in meta-analysis."
*Where:* CGIS Knafel Building, Room K354 (see this link
<https://map.harvard.edu/?bld=04471&level=9> for directions).
*When:* Wednesday, February 6th, 12 noon–1:30 pm.
*Abstract:* We propose sensitivity analyses for selection in meta-analysis
due to publication bias, selective reporting, and "p-hacking". We consider
a publication process such that "statistically significant" positive
results are more likely to be published than negative or "nonsignificant"
results by an unknown ratio. Using inverse-probability weighting and robust
estimation that accommodates non-normal true effects, small meta-analyses,
and clustering, we develop sensitivity analyses that enable statements such
as: "For publication bias to shift the observed point estimate to the null,
'significant' positive results would need to be at least 30-fold more
likely to be published than negative or 'nonsignificant' results."
Comparable statements can be made regarding shifting to a chosen non-null
value or shifting the confidence interval. We show that a worst-case
meta-analytic point estimate under maximal publication bias can be obtained
simply by conducting a standard meta-analysis of only the negative and
"nonsignificant" studies; this method sometimes indicates that no amount
of publication bias could "explain away" the results. We illustrate the
proposed methods using real-life meta-analyses. An R package is
forthcoming.
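The worst-case bound described in the abstract has a simple mechanical form: pool only the studies that are not "significant" and positive. A minimal sketch using a fixed-effect (inverse-variance weighted) meta-analysis; the study estimates below are hypothetical, and a real analysis would use the robust random-effects machinery the abstract describes:

```python
# Hypothetical sketch of the worst-case bound: under maximal publication
# bias, a standard meta-analysis of only the negative and nonsignificant
# studies gives a worst-case pooled estimate.
import math

# (point estimate, standard error) for each study -- made-up numbers
studies = [(0.40, 0.10), (0.25, 0.12), (0.05, 0.15), (-0.02, 0.11), (0.10, 0.20)]

def pooled_fixed_effect(data):
    """Inverse-variance weighted (fixed-effect) pooled estimate and SE."""
    weights = [1.0 / se**2 for _, se in data]
    est = sum(w * y for (y, _), w in zip(data, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, se

def is_significant_positive(y, se, z=1.96):
    """Two-sided p < 0.05 with a positive point estimate."""
    return y > 0 and y / se > z

# Worst case: drop the "significant" positive studies, pool the rest.
retained = [(y, se) for y, se in studies if not is_significant_positive(y, se)]
worst_est, worst_se = pooled_fixed_effect(retained)
```

If even this worst-case pooled estimate stays away from the null, no amount of publication bias of the assumed form could "explain away" the result.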
*All are welcome! Lunch is provided!*
Best,
Connor Jerzak
Applied Statistics Workshop -- Graduate Student Coordinator
An anonymous feedback form for the workshop can be found at this link
<https://docs.google.com/forms/d/e/1FAIpQLScp4lPVBtp4Akf6K6ggmfcTUSIUHEJX89-…>.
Workshop listserv sign-up at this link
<https://lists.fas.harvard.edu/mailman/listinfo/gov3009-l>.