Sorry for the delay, but we have finally finished up the final exam
grades. Here are a few comments:
1) Overall, we are *extremely* pleased with the exams. I firmly
believe that, for the vast majority of the students in the class, the
pedagogical goals of the course (see the syllabus for details) have
been accomplished. There are no more than a handful of your first-year
peers at other elite programs who could have done what you did (take a
zipped dataset, in a format that you had not seen before, process the
data, apply a regression model from a published article to that data,
and then write up your results in a professional manner). The fact
that you could also perform a matching exercise and produce some very
nice-looking graphics is icing on the cake.
2) As usual, we have ranked the papers from 1-18. But, as with the
second midterm, the top papers are largely indistinguishable --- in
the sense that the rankings might easily be made differently by other
reasonable observers. In this case, however, the top quality went a
lot deeper into the class than it did on the midterm. In fact, the top
10 papers all got A's. They were all excellent. Now, we did still rank
them (and the ranks matter very little), but a different grader
might easily have had paper number 6, for example, as the best in the
class.
3) As before, it mattered a lot to us that you followed the
instructions. We wanted to see graphics (including use of lattice). We
wanted to see some tables. We wanted you to discuss incumbency
advantage. We wanted to see clean and fully documented R code. And so
on. A failure to do the things that we specified was costly.
4) In terms of letter grades, we would say that 1-10 were A's; 11-14
were A-'s; 15-16 were B+'s; 17 was a B (failed to do the matching);
and 18 was an F (didn't even try).
5) I would be happy to discuss your paper in detail with you, should
you so desire. I could do this in person or on the phone. Those of you
who met with me about the midterms know the drill on this. In essence,
I made all sorts of scribbles in the margins as I did the grading. I
would be happy to offer my thoughts on what you did well and on what
you could have done better.
6) Please send your call sign to Tao (cc'ing Gary and me).
7) Here are the ranks:
1) Fluffy little hamster.
2) Mammalia Canivore . . .
3) Richard Winters
4) Manticore.
5) Herodotus
6) Vitalstatistix
7) Matchmaker
8) Dogjacket
9) Chico
10) Alice
11) Hedgehog
12) Baby Duke
13) Hopeful
14) J. Alfred Prufrock
15) Schmidt
16) Panoon
17) Zoolander
18) Shahriyar (Should feel free to contact either Gary or me in private
if you want to talk about the F, at your discretion.)
8) The answer was somewhere in the vicinity of 3%-5%, depending on the
assumptions that you made and the model that you used. Some people ended
up far away from this, probably because of errors reading in and/or
processing the data. As long as your R code was clean and
well-documented, we didn't penalize this much (if at all).
9) My wife thought that Zoolander was the coolest call sign.
10) I would urge you to read at least a few of these papers, especially
the top 5. This is probably the last chance in your career to see
how 5 smart minds independently tackled a problem that you have spent
a lot of time on. Although the best papers overwhelmingly came to the
same conclusion, they attacked the problem in a variety of ways and
made several interesting discoveries. There is something (different)
worth checking out in almost every paper, which is why we post them
on-line.
Congratulations, as a class, on a job well done.
Dave
--
David Kane
Lecturer in Government
617-563-0122
dkane@latte.harvard.edu