Dear Applied Statistics Workshop Community,
Our last meeting of this semester will be on November 29 (12:00 EST). Yi
Zhang presents "Individualized Policy Evaluation and Learning under
Clustered Network Interference."
<When>
November 29, 12:00 to 1:30 PM, EST
Lunch will be available for pick-up inside CGIS K354.
<Where>
In-person: CGIS K354
Zoom:
https://harvard.zoom.us/j/93217566507?pwd=elBwYjRJcWhlVE5teE1VNDZoUXdjQT09
<Abstract>
While there now exists a large literature on policy evaluation and
learning, much of the prior work assumes that the treatment assignment of one
unit does not affect the outcome of another unit. Unfortunately, ignoring
interference may lead to biased policy evaluation and yield ineffective
learned policies. For example, treating influential individuals who have
many friends can generate positive spillover effects, thereby improving the
overall performance of an individualized treatment rule (ITR). We consider
the problem of evaluating and learning an optimal ITR under clustered
network (or partial) interference where clusters of units are sampled from
a population and units may influence one another within each cluster. Under
this model, we propose an estimator that can be used to evaluate the
empirical performance of an ITR. We show that this estimator is
substantially more efficient than the standard inverse probability
weighting estimator, which does not impose any assumption about spillover
effects. We derive a finite-sample regret bound for a learned ITR, showing
that the use of our efficient evaluation estimator leads to improved
performance of learned policies. Finally, we conduct simulation
and empirical studies to illustrate the advantages of the proposed
methodology.
The most recent draft can be found here <https://arxiv.org/abs/2311.02467>.
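To see why the standard IPW estimator mentioned in the abstract is so inefficient under clustered interference, here is a minimal sketch (this is a generic cluster-level IPW estimator, not the paper's proposed estimator; the toy data-generating process, cluster sizes, and function names are all illustrative assumptions). Because outcomes may depend on the entire treatment vector within a cluster, the weight only fires when the whole cluster's realized assignment matches the policy, so the weights grow exponentially in cluster size:

```python
import numpy as np

rng = np.random.default_rng(0)

def ipw_policy_value(clusters, policy, p=0.5):
    """Cluster-level IPW estimate of a policy's value.

    Each cluster is a dict with arrays 'X' (covariates), 'A' (binary
    treatments assigned i.i.d. Bernoulli(p)), and 'Y' (outcomes).
    Since outcomes may depend on the whole within-cluster treatment
    vector, the indicator requires the *entire* cluster assignment to
    match the policy -- the source of this estimator's high variance.
    """
    vals = []
    for c in clusters:
        pi = policy(c["X"])                       # policy's assignment
        match = np.all(c["A"] == pi)              # whole cluster must match
        prob = np.prod(p ** c["A"] * (1 - p) ** (1 - c["A"]))
        vals.append(match / prob * c["Y"].mean())
    return float(np.mean(vals))

# Toy data: outcome rises with own treatment plus a within-cluster
# spillover term (0.5 * cluster treatment share).
def make_cluster(n):
    X = rng.normal(size=n)
    A = rng.binomial(1, 0.5, size=n)
    Y = X + A + 0.5 * A.mean() + rng.normal(scale=0.1, size=n)
    return {"X": X, "A": A, "Y": Y}

clusters = [make_cluster(4) for _ in range(500)]
treat_all = lambda X: np.ones_like(X, dtype=int)
print(ipw_policy_value(clusters, treat_all))
```

With clusters of size 4 the matching probability is already (1/2)^4, so only about one cluster in sixteen contributes a nonzero term; this is the inefficiency that an estimator exploiting the clustered-interference structure can avoid.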
<2023 Schedule>
GOV 3009 Website:
https://projects.iq.harvard.edu/applied.stats.workshop-gov3009
Calendar:
https://calendar.google.com/calendar/u/0?cid=Y18zdjkzcGF2OWZqa2tsZHJidTlzbm…
Best,
Jialu
--
Jialu Li
Department of Government
Harvard University
jialu_li(a)g.harvard.edu
Dear Applied Statistics Workshop Community,
We will not be meeting this Wednesday due to the holiday. Hope you have a
restful break and see you on Nov 29th for our last session this semester!
Best,
Jialu
Dear Applied Statistics Workshop Community,
Our next meeting will be on November 15 (12:00 EST). Ashesh Rambachan
presents "From Predictive Algorithms to Automatic Generation of Anomalies"
(joint with Sendhil Mullainathan).
<When>
November 15, 12:00 to 1:30 PM, EST
Lunch will be available for pick-up inside CGIS K354.
<Where>
In-person: CGIS K354
Zoom:
https://harvard.zoom.us/j/93217566507?pwd=elBwYjRJcWhlVE5teE1VNDZoUXdjQT09
<Abstract>
Economic theories often progress through the discovery of "anomalies."
Canonical examples of anomalies include the Allais Paradox and the
Kahneman-Tversky choice experiments, which are constructed menus of
lotteries that highlighted particular flaws in expected utility theory and
spurred the development of new theories for decision-making under risk. In
this paper, we develop algorithmic procedures to automatically generate
such anomalies. Our algorithmic procedures take as inputs an existing
theory and data it seeks to explain, and then generate examples on which we
would likely observe violations of our existing theory if we were to
collect data. As an illustration, we produce anomalies for expected utility
theory using simulated lottery choice data from individuals who behave
according to cumulative prospect theory. Our procedures recover known
anomalies for expected utility theory in behavioral economics and discover
novel anomalies based on the probability weighting function. We conduct
incentivized experiments to collect choice data on our algorithmically
generated anomalies, finding that participants violate expected utility
theory at similar rates to the Allais Paradox and Common Ratio Effect.
While this illustration is specific, our anomaly generation procedures are
general and can be applied in any domain where there exists a formal theory
and rich data that the theory seeks to explain.
The most recent draft can be found here
<https://economics.mit.edu/sites/default/files/inline-files/mr_anomalies.pdf>.
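As a rough illustration of the idea (not the authors' procedure), one can grid-search for menus of binary lotteries on which an existing theory and a richer data-generating model predict opposite choices. The sketch below contrasts expected utility with a prospect-theory-style probability weighting; the utility curvature, weighting function, prizes, and probability grid are all arbitrary illustrative assumptions:

```python
import itertools
import numpy as np

# The "existing theory": expected utility with a concave power utility.
def eu(probs, prizes, alpha=0.88):
    return float(np.sum(probs * prizes ** alpha))

# A Tversky-Kahneman-style probability weighting function, used here as
# the richer model the existing theory fails to capture.
def w(p, gamma=0.61):
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt(probs, prizes, alpha=0.88):
    return float(np.sum(w(probs) * prizes ** alpha))

# Search a grid of binary-lottery pairs for the menu on which the two
# models disagree most about which lottery is chosen: a candidate anomaly.
best, best_gap = None, -1.0
grid = [0.01, 0.05, 0.33, 0.5, 0.8, 0.99]
for pA, pB in itertools.product(grid, repeat=2):
    A = (np.array([pA, 1 - pA]), np.array([100.0, 0.0]))
    B = (np.array([pB, 1 - pB]), np.array([30.0, 0.0]))
    eu_pref = eu(*A) - eu(*B)        # > 0: EU predicts A is chosen
    cpt_pref = cpt(*A) - cpt(*B)     # > 0: CPT predicts A is chosen
    if eu_pref * cpt_pref < 0:       # the models predict opposite choices
        gap = abs(eu_pref - cpt_pref)
        if gap > best_gap:
            best, best_gap = (pA, pB), gap

print("candidate anomaly menu (pA, pB):", best)
```

The disagreements this search surfaces cluster at small probabilities, where the weighting function overweights rare prizes, echoing the classic Allais-style anomalies the abstract mentions.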
<2023 Schedule>
GOV 3009 Website:
https://projects.iq.harvard.edu/applied.stats.workshop-gov3009
Calendar:
https://calendar.google.com/calendar/u/0?cid=Y18zdjkzcGF2OWZqa2tsZHJidTlzbm…
Best,
Jialu
Dear Applied Statistics Workshop Community,
Our next meeting will be on November 8 (12:00 EST). Zeyang Jia presents
"Bayesian Safe Policy Learning with Chance Constrained Optimization:
Application to Military Security Assessment during the Vietnam War."
<When>
November 8, 12:00 to 1:30 PM, EST
Lunch will be available for pick-up inside CGIS K354.
<Where>
In-person: CGIS K354
Zoom:
https://harvard.zoom.us/j/93217566507?pwd=elBwYjRJcWhlVE5teE1VNDZoUXdjQT09
<Abstract>
Algorithmic and data-driven decisions and recommendations are commonly used
in high-stakes decision-making settings such as criminal justice, medicine,
and public policy. We investigate whether it would have been possible to
improve a security assessment algorithm employed during the Vietnam War,
using outcomes measured immediately after its introduction in late 1969.
This empirical application raises several methodological challenges that
frequently arise in high-stakes algorithmic decision-making. First, before
implementing a new algorithm, it is essential to characterize and control
the risk of yielding worse outcomes than the existing algorithm. Second,
the existing algorithm is deterministic, and learning a new algorithm
requires transparent extrapolation. Third, the existing algorithm involves
discrete decision tables that are common but difficult to optimize over. To
address these challenges, we introduce the Average Conditional Risk
(ACRisk), which first quantifies the risk that a new algorithmic policy
leads to worse outcomes for subgroups of individual units and then averages
this over the distribution of subgroups. We also propose a Bayesian policy
learning framework that maximizes the posterior expected value while
controlling the posterior expected ACRisk. This framework separates the
estimation of heterogeneous treatment effects from policy optimization,
enabling flexible estimation of effects and optimization over complex
policy classes. We characterize the resulting chance-constrained
optimization problem as a constrained linear programming problem. Our
analysis shows that compared to the actual algorithm used during the
Vietnam War, the learned algorithm assesses most regions as more secure and
emphasizes economic and political factors over military factors.
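To make the chance-constrained optimization step concrete, here is a heavily simplified sketch (not the paper's method): given posterior draws of subgroup treatment effects, choose treatment probabilities that maximize posterior expected gain subject to a cap on a crude risk proxy, solved as a linear program. The risk proxy used here (posterior probability of harm, averaged over subgroups) is a stand-in for the paper's ACRisk, and all numbers are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Posterior draws of the treatment effect tau for each of G subgroups
# (in practice these would come from a flexible Bayesian outcome model).
G, M = 5, 2000
tau = rng.normal(loc=[0.4, 0.1, -0.2, 0.3, -0.05], scale=0.3, size=(M, G))

gain = tau.mean(axis=0)           # posterior expected benefit of treating
risk = (tau < 0).mean(axis=0)     # posterior P(treatment harms), per group
delta = 0.05                      # cap on the average risk of treated groups

# LP: choose treatment probabilities z in [0,1]^G maximizing expected
# gain subject to mean risk <= delta (linprog minimizes, so negate).
res = linprog(
    c=-gain,
    A_ub=[risk / G],
    b_ub=[delta],
    bounds=[(0, 1)] * G,
)
print("treatment probabilities by subgroup:", np.round(res.x, 2))
```

Separating the Bayesian effect estimation (the `tau` draws) from the optimization (the LP) mirrors the modularity the abstract describes: any posterior can feed the same constrained program, and richer discrete policy classes lead to larger but still linear programs.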
<2023 Schedule>
GOV 3009 Website:
https://projects.iq.harvard.edu/applied.stats.workshop-gov3009
Calendar:
https://calendar.google.com/calendar/u/0?cid=Y18zdjkzcGF2OWZqa2tsZHJidTlzbm…
Best,
Jialu