Estimating the causal effect of dynamic treatment regimes in observational
studies. Miguel Hernan, MD, Instructor in Epidemiology, Harvard School of Public Health. Joint work
with James Robins and Stephen Cole.
Wednesday November 13 at noon
Center for Basic Research in the Social Sciences
34 Kirkland Street, Room 22
Lunch will be served.
Abstract:
Large randomized experiments with full adherence and no losses to
follow-up can be used to make causal inferences without further
assumptions. These ideal experiments may be used to evaluate the effect of
treatment regimes that a) fully describe each subject's treatment
experience during the study even before the study starts (e.g., always
treat, never treat, treat every other month), or b) do not fully describe
each subject's treatment experience during the study before the study
starts because the treatment actually received depends on the subject's
time-varying characteristics (e.g., treat until side effects appear, treat
only when some blood parameter reaches the value 200). We call the latter
dynamic treatment regimes. The analysis of ideal experiments is
straightforward: simply compare the mean outcome (or survival) in subjects
under each (dynamic or non-dynamic) treatment regime.
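For readers new to this idea, the "straightforward" analysis of an ideal experiment can be illustrated with a toy sketch. All data and regime names below are synthetic, invented purely for illustration:

```python
# Toy illustration: in an ideal randomized experiment, the causal analysis
# reduces to comparing mean outcomes across the assigned treatment regimes.
# All numbers below are synthetic.
outcomes = {
    "always treat": [4.1, 3.8, 4.4, 4.0],
    "never treat":  [2.9, 3.1, 2.7, 3.0],
}

def mean_outcome(values):
    """Average outcome among subjects assigned to one regime."""
    return sum(values) / len(values)

regime_means = {regime: mean_outcome(vals) for regime, vals in outcomes.items()}

# The causal effect estimate is simply the difference in regime means.
effect = regime_means["always treat"] - regime_means["never treat"]
```

The same comparison applies whether the regimes are dynamic or non-dynamic; what matters is that randomization, full adherence, and complete follow-up make the regime groups exchangeable.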
Observational studies can be used to make causal inferences only under the
assumption that they are able to provide the same results as a (possibly
hypothetical) randomized experiment. This assumption, also known as "the
assumption of no unmeasured confounders" or "of sequential randomization",
is not testable. But even if this assumption holds, and in the absence of
model misspecification, the standard analysis of observational studies
with time-varying treatments may yield estimates that cannot be endowed
with a causal interpretation. This limitation of standard methods applies
to the estimation of the effects of both dynamic and non-dynamic treatment
regimes. In contrast, methods based on marginal structural models and
structural nested models can yield valid causal estimates.
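The weighting idea behind marginal structural models can be sketched in a minimal point-treatment example. This is not the authors' implementation (the talk concerns a marginal structural Cox model and a structural nested accelerated failure time model with time-varying treatments); it only illustrates inverse-probability weighting on fully synthetic data:

```python
# Minimal sketch of inverse-probability-of-treatment weighting, the
# estimation strategy behind marginal structural models, for a single
# time point. Synthetic data: L is a binary confounder, A the treatment
# received, Y the outcome.
data = [
    # (L, A, Y)
    (0, 0, 1.0), (0, 0, 1.2), (0, 1, 2.0), (0, 1, 2.2),
    (1, 0, 2.8), (1, 1, 4.0), (1, 1, 4.1), (1, 1, 3.9),
]

def prob_treated(l):
    """Treatment model: P(A = 1 | L = l), estimated by stratum frequencies."""
    stratum = [a for (li, a, _) in data if li == l]
    return sum(stratum) / len(stratum)

def weighted_mean(target_a):
    """Weight each subject by 1 / P(A = observed a | L), which creates a
    pseudo-population in which treatment is independent of L, then average
    Y among subjects with A == target_a."""
    num = den = 0.0
    for l, a, y in data:
        if a != target_a:
            continue
        p = prob_treated(l) if a == 1 else 1 - prob_treated(l)
        w = 1.0 / p
        num += w * y
        den += w
    return num / den

# Weighted contrast of treated vs. untreated means.
msm_effect = weighted_mean(1) - weighted_mean(0)
```

With time-varying treatments the weights become products over time of the treatment probabilities given past covariate and treatment history, but the logic is the same.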
We have previously applied a marginal structural Cox model to estimate the
effects of non-dynamic treatments in HIV/AIDS epidemiology. The
subject-matter question we addressed was "should HIV-infected individuals
be treated with highly active antiretroviral therapy?" In this talk, I
will review our previous results and describe the application of a
structural nested accelerated failure time model for estimating the effect
of dynamic treatments among HIV-infected individuals. The question is
"should treatment with highly active antiretroviral therapy start when CD4
count drops to 200 or to 350?" Also, I will discuss computational issues
and different methods to handle censoring.
When the paper becomes available, a link will be sent to those requesting
it (email corr(a)fas.harvard.edu) and to those on the workshop email list.
Those unfamiliar with methods for causal inference from observational data
may find it useful to read about estimating the effects of non-dynamic
treatments, as described in
http://www.hsph.harvard.edu/causal/publications/joint-causal.pdf
The Research Workshop in Applied Statistics is a forum for graduate
students, faculty, and visiting scholars to present and discuss
statistical innovations and applications in the social sciences. For more
information or to subscribe to the permanent workshop list, email
corr(a)fas.harvard.edu.
Paper now available at:
http://www.courses.fas.harvard.edu/~gov3009/handouts/comovementinternet.pdf
Shrewd, Crude, or Simply Deluded? Market Classification and the Internet
Stock Phenomenon. Ezra Zuckerman (MIT, Sloan School of Management)
Wednesday November 6 at noon
Center for Basic Research in the Social Sciences
34 Kirkland Street, Room 22
Lunch will be served.
Abstract:
We analyze comovement among Internet and other categories of stocks during
the late 1990s and 2000 in an effort to assess the sophistication of
stock-market valuation. Prominent accounts of the Internet stock
phenomenon suggest that the prices of these stocks were determined by
simplistic thinking. In particular, investors were not discriminating as
they crudely grouped all Internet stocks into an undifferentiated and
highly attractive investment category. We find that, in fact, comovement
among Internet stocks was high throughout much of this period but did not
reach the very high levels assumed by prevailing accounts. We also
describe two further patterns that are problematic for such
interpretations. First, comovement is less characteristic of price
increases than of price drops. Second, Internet stocks exhibited moderate
to high period-to-period consistency in the manner by which category
members were distinguished by investors. Together, our evidence supports
our view that, rather than being anomalous, the Internet stock phenomenon
was symptomatic of the general way by which equity prices are determined:
valuation is driven by prevailing theories of value, which are reasonable
but quite fallible. This view has important implications for how scholars
and managers understand and react to stock market dynamics.
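One common way to quantify comovement within a stock category is the average pairwise correlation of returns. The sketch below uses entirely synthetic return series and invented ticker names, and is not the authors' measure, only an illustration of the general idea:

```python
# Hypothetical sketch of one comovement measure: the average pairwise
# correlation of period returns within a stock category. All return
# series and names below are synthetic.
import statistics

returns = {
    "net_a": [0.02, -0.01, 0.03, -0.02, 0.01],
    "net_b": [0.018, -0.012, 0.025, -0.015, 0.008],
    "net_c": [-0.005, 0.002, 0.001, -0.003, 0.004],
}

def correlation(x, y):
    """Pearson correlation of two equal-length return series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

names = list(returns)
pairs = [(i, j) for i in range(len(names)) for j in range(i + 1, len(names))]

# Category comovement: mean correlation over all distinct stock pairs.
comovement = statistics.mean(
    correlation(returns[names[i]], returns[names[j]]) for i, j in pairs
)
```

Higher values indicate that investors are treating the category's members as close substitutes; asymmetries between up-market and down-market comovement, as in the paper's first pattern, require computing the measure separately over rising and falling periods.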
The paper will be posted as soon as it becomes available on the workshop
website, www.courses.fas.harvard.edu/~gov3009/fall02/
The Research Workshop in Applied Statistics is a forum for graduate
students, faculty, and visiting scholars to present and discuss
statistical innovations and applications in the social sciences. For more
information, contact corr(a)fas.harvard.edu.