I found some recovered files when I did "ls -a". Apparently they were saved as
hidden files like this #filename#. I can't find the code from my R session -
the .Rdata file seems to contain something from homework 5... but I guess I'm in
a little better shape than I thought I was. I'll follow your SOPs in the future :p
-Phillip.
-------------------------------------------------
Phillip Y. Lipscy
Perkins Hall Room #129
35 Oxford Street
Cambridge, MA 02138
(617)493-4893
lipscy(a)fas.harvard.edu
Ph.D. Candidate
Harvard University, FAS, Department of Government
-------------------------------------------------
dhopkins(a)fas.harvard.edu writes:
> Dear Dave,
>
> Thanks as always for dealing with our questions.
What else would I spend time on?
> Mine is relatively dumb: as I
> work, every time I give R a new command, I get the following warning:
>
> (74) (warning/warning) Error caught in `font-lock-pre-idle-hook': (error
> Nesting too deep for parser)
>
> Various experiments have not gotten it to go away, and I don't want to try
> anything too crazy for fear of destroying important work--any suggestions?
Hmm. There are a lot of moving parts in xemacs and ESS, and I am no
expert. My guess is that there is too much stuff in your R buffer
(*R*) which you need to clean out.
So, I would quit R. Kill the *R* buffer. And then restart R.
Dave
> Best,
> Dan
>
>
>
--
David Kane
Lecturer in Government
617-563-0122
dkane(a)latte.harvard.edu
Dear all,
I just encountered some sort of fatal error, and all of my windows closed. Can
somebody instruct me on how to recover the lost files? I tried 'M-x
recover-session' but it only gave me one file back and it wasn't my R-session,
so all my code disappeared... Trying 'M-x recover-session' again returns a
message that says "no files can be recovered from this session now."
I guess the lesson for everybody is to make sure you save your code in a latex
document often or somewhere it won't get lost when emacs decides to bump you
off... My latex files contain the stuff I did yesterday but all the work I did
today was lost... :(
-Phillip.
---
Fatal error (1).
Your files have been auto-saved.
Use `M-x recover-session' to recover them.
If you have access to the PROBLEMS file that came with your
version of XEmacs, please check to see if your crash is described
there, as there may be a workaround available.
Otherwise, please report this bug by running the send-pr
script included with XEmacs, or selecting `Send Bug Report'
from the help menu.
As a last resort send ordinary email to `crashes(a)xemacs.org'.
*MAKE SURE* to include the information in the command
M-x describe-installation.
If at all possible, *please* try to obtain a C stack backtrace;
it will help us immensely in determining what went wrong.
To do this, locate the core file that was produced as a result
of this crash (it's usually called `core' and is located in the
directory in which you started the editor, or maybe in your home
directory), and type
gdb /usr/bin/xemacs core
then type `where' when the debugger prompt comes up.
(If you don't have GDB on your system, you might have DBX,
or XDB, or SDB. A similar procedure should work for all of
these. Ask your system administrator if you need more help.)
Lisp backtrace follows:
# (condition-case ... . error)
# (condition-case ... . error)
# (catch top-level ...)
# (catch top-level ...)
-------------------------------------------------
Phillip Y. Lipscy
Perkins Hall Room #129
35 Oxford Street
Cambridge, MA 02138
(617)493-4893
lipscy(a)fas.harvard.edu
Ph.D. Candidate
Harvard University, FAS, Department of Government
-------------------------------------------------
We realize that most of you are working hard on the midterm. So, the
only new reading for Monday's lecture is a brief article by Don Rubin
available at:
http://www.acponline.org/journals/annals/supplement/causalef.htm
Although the revised schedule for GOV 1000 will not permit us to get
to propensity scores (and therefore to replicate Kosuke's excellent
paper), we still need to be clear on the relationship between matching
and regression. This article helps with that. I will also try to cover
this in lecture.
To the extent that you have time to review anything else, you might
want to take a look at Chapter 5 of Neter. I will be spending some
time on matrix algebra.
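(For those reviewing the matrix algebra, here is a minimal R sketch of the matrix form of least squares, beta-hat = (X'X)^{-1} X'y, using simulated data with made-up variable names:)

```r
## Regression in matrix form: beta.hat = (X'X)^{-1} X'y
## Simulated data; variable names are invented for illustration.
set.seed(1)
n  <- 100
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 1 + 2 * x1 - 0.5 * x2 + rnorm(n)

X <- cbind(1, x1, x2)                          # design matrix with intercept
beta.hat <- solve(t(X) %*% X) %*% t(X) %*% y   # the matrix-algebra estimator

## This matches lm() up to numerical error:
coef(lm(y ~ x1 + x2))
```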
For next Monday (the 25th) and for the problem set that is due 12/5,
you should read these two articles by Gary.
http://gking.harvard.edu/files/abs/mist-abs.shtml
http://gking.harvard.edu/files/abs/making-abs.shtml
(Tao, could you please add links in the appropriate locations on the
web site for all three readings?)
All three articles will feature prominently in problem set 7.
As you can see, there has been a significant drop in the reading load
for this course. In retrospect, it might have been better to have the
same amount of reading, but spread more evenly throughout the
semester. Then again, papers and such for your other classes are
probably picking up steam in the second half of the semester, so maybe
the front loading was a good idea. You will have an opportunity to
comment on these (and other issues) when we distribute feedback forms
during the last class. (Although, as always, we welcome feedback
during the semester as well.)
Good luck with the midterm.
Dave
--
David Kane
Lecturer in Government
617-563-0122
dkane(a)latte.harvard.edu
I have a number of xemacs sessions running right now, greatly annoying people
who want to work on their mid-terms.
I am really sorry and want to say that all I did was accidental. Does anyone
know how to kill xemacs?
yongwook
-----------------------------
Yongwook Ryu
PhD Candidate
Department of Government
Harvard University
Tel:617-493-3397
Email: yryu(a)fas.harvard.edu
-----------------------------
A student writes:
> Another question about the GK article:
>
> For Figure 2, where "chi" is plotted over time, it does not seem that "chi" is
> significant at conventional levels for many of the early years (i.e. most of the
> 1920's - this is based on the code we used as a class so I'm not sure this was
> the case in the original).
That is correct.
> Is it still OK to plot the results like this?
Yes.
After all, GK do it! ;-)
> Even if the results are mostly insignificant, is it defensible to
> claim that incumbency seems to have a broadly positive effect
> (i.e. although insignificant, they seem to be mostly positive)?
Yes, although you should be very careful in your wording. That is, you
should make clear in the text what is going on. (Note that GK also
discuss averages of the incumbency effect over time. That is, they
claim that the average incumbency effect prior to 1950 was around
2%. They can make that claim, by averaging over several years, even if
no *single* year is statistically significant.)
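(As a rough R illustration of that averaging logic, with invented yearly numbers, not GK's actual estimates: under independence, averaging k yearly estimates shrinks the standard error by roughly 1/sqrt(k), so the average can be significant even when no single year is.)

```r
## Hypothetical yearly incumbency-effect estimates and standard errors
## (numbers invented to illustrate the averaging point, not GK's data).
chi <- c(1.8, 2.1, 1.9, 2.3, 1.7)   # yearly point estimates (in %)
se  <- c(1.2, 1.3, 1.1, 1.4, 1.2)   # yearly standard errors

## No single year is significant at the 5% level:
chi / se                             # all |t| below 1.96

## But the average over years is, assuming independent estimates:
avg    <- mean(chi)
avg.se <- sqrt(sum(se^2)) / length(se)
avg / avg.se                         # comfortably above 1.96
```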
> Alternatively, would it be best to plot only the results that are
> significant at, say the 95% confidence level?
It would depend on your purpose and conclusions. I have seen graphs in
which people plot both the mean effect and the 95% confidence
intervals over time, but since we have not practiced this in the
problem sets, I would not expect you to be able to do this ---
although it wouldn't be that hard.
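(If you did want to try it, here is one hedged sketch of such a plot in base R, with fabricated numbers; in practice `est` and `se` would come from your yearly fits:)

```r
## Plotting an estimated effect and its 95% confidence interval over time.
## All numbers below are fabricated for illustration.
year <- seq(1920, 1940, by = 2)
est  <- rnorm(length(year), mean = 2, sd = 0.5)   # yearly point estimates
se   <- runif(length(year), 0.8, 1.5)             # yearly standard errors

plot(year, est, pch = 19,
     ylim = range(est - 2 * se, est + 2 * se),
     xlab = "Year", ylab = "Estimated effect")
segments(year, est - 1.96 * se, year, est + 1.96 * se)  # 95% CIs
abline(h = 0, lty = 2)                                  # zero line for reference
```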
Dave
--
David Kane
Lecturer in Government
617-563-0122
dkane(a)latte.harvard.edu
More than one student has expressed a concern about the need to go
through the data "line by line" to check for errors.
Don't do this!
Please don't do this.
The purpose of the midterm is not to see who can crosscheck the most
of Michael Ting's (author as listed in docfiles.txt) coding
decisions. It is great if you remember the year that Richard Shelby
switched parties. It is even nice if you know who Richard Shelby
is. But it is not what the midterm is about.
So, do not "check" the data line by line.
Of course, as I said before, you do need to "check" the data
*programmatically* and poke around to understand things. Commands like
summary and table are quite useful. You need to make (and document)
decisions about what should be kept and what should be deleted and/or
modified. But you should only be using the data itself. Do not spend
time in the library reading books on "Famous Senate Races in the 1920s".
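(A minimal sketch of what "checking programmatically" looks like in R; the data frame below is made up, standing in for the Senate data:)

```r
## A made-up data frame standing in for the Senate data.
dat <- data.frame(year  = c(1920, 1922, 1922, 1924, 3924),  # note the typo
                  state = c(41, 41, 42, 42, 42),
                  vote  = c(0.52, 0.61, 0.48, 1.48, 0.55))  # vote share > 1?

summary(dat)       # ranges flag impossible values (year 3924, vote 1.48)
table(dat$state)   # counts per state reveal coding oddities

## Document any deletions you decide on:
dat <- dat[dat$year < 3000 & dat$vote <= 1, ]  # drop obvious data errors
```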
Dave
--
David Kane
Lecturer in Government
617-563-0122
dkane(a)latte.harvard.edu
Good question. The logic is that it is causally prior to the key causal
variable, I_2 (inc status at time 2) and correlated with I_1. If it also
predicts v_2 controlling for the other variables (which include V_1, the
vote at time 1, and P_2, the winner of the election at time 1 -- yes
that's time 1), then you'd need to control for it or you'd have omitted
variable bias. (Your explanations seem as plausible as any.) I discussed
this issue at the end of the last class (and I'll go into it in more
detail in a few weeks). We found that I_1 didn't have much of a
predictable effect over time, and so we dropped it. I'd be surprised if
you found an effect for it in the senate, but it is probably true that no
one in the history of the world has ever studied this question before!
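(For concreteness, a hedged sketch of the specification being discussed, in R with simulated data; the variable names are invented and this is not GK's actual code. Here I_1 is simulated with no effect, matching the finding above.)

```r
## Sketch of the specification: v2 on I2, I1, V1, and P2.
## Simulated data; names invented for illustration.
set.seed(2)
n  <- 200
I1 <- rbinom(n, 1, 0.5)             # incumbency status at time 1
I2 <- rbinom(n, 1, 0.5)             # incumbency status at time 2 (key variable)
V1 <- runif(n, 0.3, 0.7)            # vote at time 1
P2 <- rbinom(n, 1, 0.5)             # winner of the election at time 1
v2 <- 0.5 + 0.05 * I2 + 0.3 * V1 + rnorm(n, sd = 0.05)  # no true I1 effect here

## Including I1 guards against omitted variable bias if I1 both
## predicts v2 and is correlated with I2:
fit <- lm(v2 ~ I2 + I1 + V1 + P2)
summary(fit)
```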
Gary
On Sat, 16 Nov 2002, Phillip Y. Lipscy wrote:
> Question about the GK article:
>
> On page 1151-1152, it is noted that I_1 should be included in an
> application of this model to a different legislature, at least to see if
> it is significant. I do not understand the reasoning behind why I_1
> might be significant (and it is not explained in the paper). Is it
> because 1. there is some "afterglow" that a retired incumbent exerts
> after he is long-gone (I am not American politics so I don't know what
> the arguments are in the literature), 2. we would like to evaluate some
> cumulative effect of incumbency, 3. something else?
>
> Thanks,
> Phillip.
>
>
>
> Quoting "Phillip Y. Lipscy" <lipscy(a)fas.harvard.edu>:
>
> > General question:
> >
> > if we use plots to evaluate the data on a preliminary basis, can we/should
> > we
> > include the plot output in the appendix, or should we just include the code
> > (i.e. plot(blah)) and some explanation of what we see?
> >
> > Thanks,
> > Phillip.
> >
> >
> > Quoting Dave Kane <dkane(a)latte.harvard.edu>:
> >
> > > Not intended as a decoy. I am curious to see how many people figure
> > > out that there are not 51 states . . .
> > >
> > > ;-)
> > >
> > > Dave
> > >
> > > Phillip Y. Lipscy writes:
> > > > > b) The Internet is a useful tool. When I googled "ICPSR state
> > > > > codes", I got this:
> > > > > http://www.ipums.umn.edu/usa/hgeographic/stateicpb.html. Perhaps
> > > > > you will find it useful to know that Alaska is `82'. There is
> > > > > another site with similar information:
> > > > > http://homer.bus.miami.edu/~bbishin/adamscodebook.pdf
> > > >
> > > > I'm not sure whether you intended this as a decoy, but the first
> > > > internet site lists D.C. as 98, and the other has no information,
> > > > but D.C. is coded as 55 in the data. This information is available
> > > > in the GK code book for house elections, and looking at the
> > > > attributes for state# 55 in the data makes it pretty clear that
> > > > it's D.C. Apparently the ICPSR "standard" is not as standard as it
> > > > is made out to be... (I don't expect a response since this is not
> > > > a question).
> > > >
> > > > -Phillip.
> > > >
> > >
> > > --
> > > David Kane
> > > Lecturer in Government
> > > 617-563-0122
> > > dkane(a)latte.harvard.edu
> > >
> >
> >
>
>
It seems clear that some students are not looking as closely as they
should at the files that are a part of senate.zip. Besides the data
files themselves, there is information on exceptions (which you
probably shouldn't worry about) and on the timing of Senate elections
(which you may or may not find useful) and on the structure of the
data files (docfiles.txt). That last one is important (although not
without sloppiness of its own) and you should study it.
Good luck,
Dave
--
David Kane
Lecturer in Government
617-563-0122
dkane(a)latte.harvard.edu
A student writes:
> General question:
>
> if we use plots to evaluate the data on a preliminary basis,
An excellent idea. Every empirical project should begin this
way. Before you can model, you must poke around a fair amount.
> can we/should we
> include the plot output in the appendix, or should we just include the code
> (i.e. plot(blah)) and some explanation of what we see?
Neither. The appendix contains the code and documentation needed to
replicate all the results in the paper. It is not a diary of
everything that you did. It is, obviously, much cleaner and shorter
than a dump of all the history of your R session from now till
Thursday. Of course, a preliminary plot may show you something which
then causes you to delete some of the data as outliers. In that case,
you just need code in the Appendix which does the deletion as well as
documentation in the appendix explaining why that deletion is done. (If
the manipulations are "important" enough, then they would merit a
footnote in the paper itself.) You do not need to show the plot
commands that lead you to this discovery.
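(A minimal sketch of that appendix pattern, with simulated data and a hypothetical cutoff: the exploratory plot stays out of the appendix; the documented deletion goes in.)

```r
## Simulated data with one wild value; names and the cutoff are hypothetical.
set.seed(3)
dat <- data.frame(x = c(rnorm(50), 25))

## Exploratory step (need NOT appear in the appendix):
## plot(dat$x)

## Appendix-worthy code: the deletion itself, with its justification.
## We drop observations with |x| > 10, which a preliminary plot showed
## to be data-entry errors.
dat <- subset(dat, abs(x) <= 10)
```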
Dave
--
David Kane
Lecturer in Government
617-563-0122
dkane(a)latte.harvard.edu