Could someone offer a tip on how to code the change in unemployment variable?
I'm not sure how to get R to subtract the lagged unemployment from current
unemployment (mostly because I'm not sure how to create the lagged variable).
Thanks,
Siddharth
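One way to do this in R (a sketch; the series and variable names here are made up):

```r
# Hypothetical unemployment series, one value per period
unemp <- c(5.6, 5.4, 5.7, 6.1, 6.0)
n <- length(unemp)

# Lagged series: shift everything down one period; the first
# observation has no lag, so it becomes NA
lag.unemp <- c(NA, unemp[-n])

# Change in unemployment = current minus lagged
change.unemp <- unemp - lag.unemp

# Equivalently, in one step:
change2 <- c(NA, diff(unemp))
```

The NA in the first slot keeps the vectors the same length as the original series, which makes it easy to cbind them into the data frame; lm() will drop the NA row automatically.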
Dear List,
We used the F-statistic code in Professor Sekhon's email and our output is:
> pf(F, (lm2$df-lm1$df),lm1$df, lower.tail=FALSE)
Warning in pf(q, df1, df2, lower.tail, log.p) :
NaNs produced
[1] NaN
What does this mean? What is a NaN?
Thanks,
Political Theory (and Soft Comparativist) Group
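For what it's worth, NaN stands for "not a number": R returns it when a computation is mathematically undefined. One common cause here is that pf() received a non-positive degrees-of-freedom argument, which can happen if lm2$df - lm1$df comes out negative because the two residual degrees of freedom were subtracted in the wrong order. A minimal illustration (the numbers are made up):

```r
# A negative df1 makes the F distribution undefined, so pf() warns
# and returns NaN
bad <- suppressWarnings(pf(2.5, -3, 96, lower.tail = FALSE))
is.nan(bad)

# With the df difference taken in the right order, pf() returns a
# genuine p-value strictly between 0 and 1
ok <- pf(2.5, 3, 96, lower.tail = FALSE)
```

So the first thing to check is whether the df difference in your call is positive.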
Hello All,
Danny asked me a good question about the Koyck transformation which I
thought others may also find of interest.
> Also, MES on page 601 mention Koyck transformations. Are these different
> from simply including the effect of lagged variables? If so, how?
For us it is the same as having a lagged Y. This transformation
arises when we are interested in lagged values of X (not of Y per
se), but we don't want to use a lot of parameters. We could have,
for example, unemployment_t + unemployment_{t-1} + unemployment_{t-2}
+ ... + unemployment_{t-20}. But that would imply a lot of parameters.
It turns out that one nice way to estimate all of this, under the
assumption that past values decrease in importance geometrically, is
to just include a lagged Y.
See pages 132-145 of this document (these are the page numbers printed
on the page, not the Acrobat-reported page numbers; or simply search
for Koyck in the document):
http://web.bham.ac.uk/M.G.Ercolani/ITSA/Lectures2x1.pdf
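In symbols, the trick can be sketched like this (same setup as above, with geometric weights lambda^k):

```latex
% Infinite distributed lag with geometrically declining weights
Y_t = \alpha + \beta \sum_{k=0}^{\infty} \lambda^k X_{t-k} + \epsilon_t,
  \qquad 0 < \lambda < 1 .

% Lag the equation one period and multiply through by lambda
\lambda Y_{t-1} = \lambda\alpha
  + \beta \sum_{k=0}^{\infty} \lambda^{k+1} X_{t-1-k}
  + \lambda \epsilon_{t-1} .

% Subtracting kills the infinite sum: only X_t survives
Y_t = \alpha(1-\lambda) + \lambda Y_{t-1} + \beta X_t
  + (\epsilon_t - \lambda \epsilon_{t-1}) .
```

So an infinite distributed lag in X gets estimated with just three parameters, at the price of a lagged Y on the right-hand side (and a moving-average error term).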
Cheers,
Jas.
Hi guys-
Hope that the Gov 2000 preview yesterday was helpful. Once again, please
don't hesitate to contact me or Kevin if you'd like to talk about the
class. I can also try to put you in touch with students further along
within your subfield if you're interested in talking about the pros/cons of
investing in more statistics training at this stage. (I'm a comparativist...)
Meanwhile--a problem. Jacob didn't end up receiving the third textbook,
Cleveland's "Visualizing Data", after class. This is Professor
Quinn's copy of the textbook, so it's a real problem that it's gone
missing! Please let me know if you have seen it or if you accidentally
left class with the copy.
Alison
Hi,
You don't need to know about this. But since Ian asked, I thought
others may also be interested:
>In previous courses I have taken, there was a lot of hand-waving on
>this point but we were told to always use the
>heteroskedasticity-consistent standard errors
> A.) Does this type of correction adequately address the problem
In most situations, yes. The heteroscedasticity-corrected covariance
matrix goes back to a very deep and famous paper by Huber in the
1960s. And it is ALMOST a free lunch. If you DON'T have
heteroscedasticity and you use these standard errors anyway, they
are slightly larger than the usual ones (because they are inefficient
under homoscedasticity). But this is a small price to pay for
insurance.
> B.) How can you get the robust errors in R?
Look up the hccm() command in the "car" library; it returns the
heteroscedasticity-corrected covariance matrix. (There are a number of
correction methods; method "hc0" corresponds to the usual White
correction people talk about.)
Here's an example:
library(car)
data2 <-
read.table(file="http://www.courses.fas.harvard.edu/~gov1000/Data/approval.asc", header=TRUE)
lm1 <- lm(approval ~ lagApproval + inflationMonthly + unrate + eisenhower + kennedy + johnson + nixon + ford + carter + regan + bush,
data=data2)
# this returns the heteroscedasticity-corrected covariance matrix
foo <- hccm(lm1, type="hc0")
# I just want the standard errors:
hetero.se <- sqrt(diag(foo))
print(hetero.se)
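If you want to see what hccm() is doing under the hood, here is the "hc0" sandwich formula computed by hand on simulated data (a sketch; the data-generating process is made up, and the approval data above would work the same way):

```r
set.seed(7)
n <- 200
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n, sd = abs(x))   # heteroscedastic errors
fit <- lm(y ~ x)

X <- model.matrix(fit)                   # n x 2 design matrix
u <- residuals(fit)

# "Sandwich": (X'X)^{-1}  X' diag(u^2) X  (X'X)^{-1}
bread <- solve(t(X) %*% X)
meat  <- t(X) %*% (u^2 * X)              # same as t(X) %*% diag(u^2) %*% X
vc    <- bread %*% meat %*% bread

robust.se    <- sqrt(diag(vc))
classical.se <- sqrt(diag(vcov(fit)))
```

Comparing robust.se to classical.se shows how much the heteroscedasticity matters in a given data set.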
Jas.
We're trying to deal with the Vietnam War. Originally we coded it as a dummy
variable, but then we realized that method did not account for the change
within the War. So now we wish to make a linear function which increases as the
War goes on in order to represent the increasing unpopularity of the War. We cannot
figure out how to do this in R. Can someone help us?
Political Theory Group
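One way to build such a variable in R (a sketch; the years and variable names here are hypothetical, so substitute your own coding of when the War starts and ends):

```r
year <- 1960:1975                      # hypothetical sample period

# 0 before the war, then 1, 2, 3, ... for each year it continues,
# and 0 again afterwards
war.start <- 1964
war.end   <- 1973
vietnam <- ifelse(year >= war.start & year <= war.end,
                  year - war.start + 1, 0)
```

The resulting column can be cbind-ed onto the data frame and entered in lm() like any other regressor.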
We were wondering if anyone had found a reference to how the
second-to-last two rows in the table were computed: the rows labeled
"Sign. of infl., unemp.t" and "Sign. of infl.t+1, unemp.t+1". We can't find any
reference in the text or in the notes to how these are aggregated, save
that they are actually aggregates. It's even more confusing that infl
(without a t) and unempl.t (not change in unempl.t) are not included in
the models as independent variables (and therefore don't reference the
table's little ***s). They don't seem to be sums of the p-values either.
Any ideas?
Zoe and Nick yet again
I have updated the online lecture notes. I have added a new section
on page 144 (which goes until page 147) entitled "Derivation of
Multiple Least-Squares Parameter Estimates". Since we are not using
matrix algebra, to keep everything sane, I've limited consideration to
a model with two independent variables (like our model #3).
As the lecture notes mention, there is accompanying R code to do
multiple regression by "hand". See the mr1.R file at
http://jsekhon.fas.harvard.edu/gov1000/R.html.
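The two-regressor formulas in the new section reduce, in R, to a few lines of scalar arithmetic. Here is a sketch on simulated data (not the mr1.R file itself) that recovers the lm() coefficients from centered sums of squares and cross-products:

```r
set.seed(1)
n  <- 100
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 1 + 2 * x1 - 3 * x2 + rnorm(n)

# Centered sums of squares and cross-products
s11 <- sum((x1 - mean(x1))^2)
s22 <- sum((x2 - mean(x2))^2)
s12 <- sum((x1 - mean(x1)) * (x2 - mean(x2)))
s1y <- sum((x1 - mean(x1)) * (y - mean(y)))
s2y <- sum((x2 - mean(x2)) * (y - mean(y)))

# Least-squares slopes for a model with two independent variables
b1 <- (s22 * s1y - s12 * s2y) / (s11 * s22 - s12^2)
b2 <- (s11 * s2y - s12 * s1y) / (s11 * s22 - s12^2)
b0 <- mean(y) - b1 * mean(x1) - b2 * mean(x2)
```

These agree with coef(lm(y ~ x1 + x2)), which is a handy check when working through the derivation.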
I have also posted an R file which provides graphical diagnostics for
OLS. See the diagnostics1.R, diagnostics1.Rout and diagnostics1.pdf
files at http://jsekhon.fas.harvard.edu/gov1000/R.html.
I'll go over all of this stuff tomorrow. Because I've added new
material to the lecture notes, I'll have photocopies made for you so
you can follow along in class.
Cheers,
JS.