You are correct! Indeed, I wish that I had put it that way
myself. However, you still need to think (at least for your published
work and/or job market paper) about testing other definitions (or at
least other *models* for estimating your preferred definition). Recall
GK's discussion of including lagged incumbency in the regression.
If you fail to do this, then you will not have a good answer when a
reviewer or listener says, "What about this, that or the other
thing?". Sometimes the questioner will be someone who is an honest
scientist searching for the truth. Sometimes it will be someone who
just disagrees with your conclusions and is looking for a way to
discredit them. In either case, you need to consider the most
plausible alternate models (meaning alternate ways of estimating your
definition of the interesting thing). It is of secondary importance,
but still important, to consider alternate definitions of what is the
interesting thing.
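To make this concrete, here is a minimal sketch of comparing a baseline specification against one that adds lagged incumbency, in the spirit of GK's discussion. The data are simulated and the pure-numpy OLS is purely illustrative, not GK's actual specification:

```python
# Hypothetical sketch: does adding lagged incumbency change the
# estimated incumbency coefficient? (Simulated data, illustrative only.)
import numpy as np

rng = np.random.default_rng(0)
n = 500
lag_inc = rng.integers(0, 2, n)   # lagged incumbency (hypothetical regressor)
inc = rng.integers(0, 2, n)       # current incumbency
vote = 0.5 + 0.06 * inc + 0.03 * lag_inc + rng.normal(0, 0.05, n)

def ols(X, y):
    """Return OLS coefficients for a design matrix X that includes an intercept."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

X_base = np.column_stack([np.ones(n), inc])           # baseline model
X_full = np.column_stack([np.ones(n), inc, lag_inc])  # with lagged incumbency

b_base = ols(X_base, vote)
b_full = ols(X_full, vote)
print("incumbency coef, baseline model:", round(float(b_base[1]), 3))
print("incumbency coef, with lagged incumbency:", round(float(b_full[1]), 3))
```

Reporting both estimates side by side is exactly the kind of answer you want ready when a reviewer asks about an omitted variable.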
Thanks to Anna for taking the time to clarify this.
Dave
Anna Lorien Nelson writes:
I think Olivia is right about how GK defined
"uncontested" elections.
Moreover, GK point out that there are plausible arguments for *not*
dumping *any* uncontested elections from the data (see their section on
uncontested seats). The crucial thing, it seems to me, is to clearly
explain what your results mean.
I don't agree with Dave that it is absolutely necessary to test 3, 6,
or more mathematical definitions for "contested" elections. As best
I can tell, GK didn't do so in their article. The important thing is to
make an argument why defining incumbency advantage with a particular
meaning of "contested" elections is valid, then calculate your results
and tell readers precisely what your results mean. (They will mean
something different -- the whole phrase "incumbency advantage" means
something a little different -- depending on what data you use.)
This is not to say that testing multiple meanings of "uncontested"
elections is not worthwhile, just that I don't think it is absolutely
necessary to produce interesting and valid results.
Dave or anybody else, if I am wrong, please correct me!
Thanks,
Anna
On Wed, 8 Jan 2003, Olivia Lau wrote:
I actually think we were wrong on the problem
set. I looked at the
article closely and I think they only discard uncontested elections.
Please correct me if I'm wrong...preferably with a page number. I'm
confused on this point.
At any rate, isn't excluding Y < 0.3 and Y > 0.7 selecting on the dependent
variable, and a big no-no?
Olivia.
On Wed, 8 Jan 2003 dkane(a)latte.harvard.edu wrote:
> Stanislav Markus writes:
> > Dear Dave, Gary, Tao,
> >
> > I remember Dave pointing out in one of the homeworks that extreme
> > observations (say dprct below 0.3 and over 0.7) should be excluded from
> > the dataset during cleaning - I wonder why such reduction in n is
> > justifiable ex ante, before, say, looking at the distribution of
> > residuals.
>
> It is not necessarily justified. I *think* that we did it because that is what
> they did in GK and the first step was to replicate their results.
>
> Another issue to think about in this context (as always) is precisely what you
> are trying to estimate. For GK it was incumbency advantage in *contested*
> elections. We might all agree that any election in which no Republican runs is
> not "contested". But what about an election in which a Republican runs,
but he
> gets only 1% of the vote? Is that election meaningfully contested? Probably
> not. In most such cases, the Republican candidate isn't really a professional
> politician, just someone who thought it would be fun to see his name on the
> ballot.
>
> The difficult part is then deciding where to draw the line.
>
> In my mind, however, the key is to do the analysis with various definitions of
> "contested" and then report to the reader (either in a table or a footnote,
> depending on how interesting the result is) what impact this has on your key
> quantity of interest. In the articles we read for the last class, you could see
> lots of examples of this sort of stuff, where the authors said, "We tried this
> and that and the other thing, but the results were largely the same." Of
> course, you can only say this if it is true.
>
> Dave
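The robustness check described in the quoted message can be sketched as follows. The data are simulated, and the `dprct` column, cutoffs, and OLS fit are illustrative assumptions echoing the thresholds discussed above, not GK's actual data or model:

```python
# Hypothetical sketch: re-estimate the incumbency effect under several
# cutoffs for "contested" and see how the estimate and n move.
# (Simulated data, illustrative only.)
import numpy as np

rng = np.random.default_rng(1)
n = 1000
inc = rng.integers(0, 2, n)                              # incumbency indicator
dprct = np.clip(0.5 + 0.07 * inc + rng.normal(0, 0.12, n), 0, 1)

for lo, hi in [(0.0, 1.0), (0.2, 0.8), (0.3, 0.7)]:
    keep = (dprct > lo) & (dprct < hi)                   # definition of "contested"
    X = np.column_stack([np.ones(keep.sum()), inc[keep]])
    coef = np.linalg.lstsq(X, dprct[keep], rcond=None)[0][1]
    print(f"contested = ({lo}, {hi}): n = {keep.sum()}, "
          f"incumbency estimate = {coef:.3f}")
```

A table like the one this loop prints is a compact way to show readers how sensitive the quantity of interest is to where the "contested" line is drawn; note that truncating on the dependent variable, as Olivia warns, can itself shift the estimate.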
_______________________________________________
gov1000-list mailing list
gov1000-list(a)fas.harvard.edu
http://www.fas.harvard.edu/mailman/listinfo/gov1000-list
--
David Kane
Lecturer in Government
617-563-0122
dkane(a)latte.harvard.edu