Hi guys-
I got this right before I had to step out the door for the evening. Can
someone advise this person? Otherwise I won't get to it until
tomorrow...
Thanks!
A
--and yes, we will have problem sets on a weekly basis throughout the term,
as long as we have class on the Monday. They will give you the tools
you need to actually do the papers.
>I'm not sure I understand this question correctly. We are to compare
>Muslim & non-Muslim, OPEC & non-OPEC, but how do those comparisons relate
>to the 4 variables we are to work with? Does the exclusion of one (i.e.,
>Muslim) mean it's non-Muslim? Here is what I have so far--am I on the
>right track?
> > library(foreign)
> > Fish <- read.spss("Fish_data.sav", to.data.frame=TRUE)
> > fish.clean.subset <- na.omit(data.frame(Fish$FHREVERS, Fish$GDP90LGN))
> > names(fish.clean.subset) <- c("FHREVERS", "GDP90LGN")
> > scatter.smooth(fish.clean.subset$FHREVERS, fish.clean.subset$GDP90LGN)
> > fish.clean.subset2 <- na.omit(data.frame(Fish$FHREVERS, Fish$OPEC))
> > names(fish.clean.subset2) <- c("FHREVERS", "OPEC")
> > scatter.smooth(fish.clean.subset2$FHREVERS, fish.clean.subset2$OPEC,
>+ family="gaussian", span=1.0, degree=1)
> > fish.clean.subset3 <- na.omit(data.frame(Fish$MUSLIM, Fish$OPEC))
> > names(fish.clean.subset3) <- c("MUSLIM", "OPEC")
> > scatter.smooth(fish.clean.subset3$MUSLIM, fish.clean.subset3$OPEC,
>+ family="gaussian", span=1.0, degree=1)
>Warning messages:
>1: pseudoinverse used at -0.005
>2: neighborhood radius 1.005
>3: reciprocal condition number 8.6533e-016
>4: There are other near singularities as well. 1.01
>
>What are these warning messages saying about my code?
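Those warnings come from loess itself, not from a bug in the code: in the third plot the x variable (MUSLIM) takes only the values 0 and 1, so the local regression has nothing but tied x values to fit through, and the fit is nearly singular. A minimal sketch with simulated 0/1 data (stand-ins for the Fish variables, not the real data) showing one way to quiet the warnings by jittering the ties apart:

```r
# Simulated stand-ins for the Fish variables: MUSLIM and OPEC as 0/1 codes.
set.seed(1)
muslim <- rbinom(100, 1, 0.5)
opec   <- rbinom(100, 1, 0.2)

# Calling scatter.smooth() on the raw 0/1 values reproduces the
# pseudoinverse / near-singularity warnings, because loess sees only
# two distinct x values. Jittering breaks the ties, so the loess fit
# is well behaved:
scatter.smooth(jitter(muslim), jitter(opec),
               family = "gaussian", span = 1.0, degree = 1)
```

Whether a smoother is even meaningful for two binary variables is a separate question; for 0/1 data a cross-tabulation (table(muslim, opec)) may say more than a scatterplot.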
>
>
>P.S. Will we still have problem sets each week while working on papers?
>
>
> > data(swiss)
> > swiss.clean.subset<na.omit(swiss)
> Error: syntax error
You're on the right track--the problem above is that the assignment
operator (<-) is not correct; there is some other character in place
of the -. Using <- should do what you want. Also, note that the
xyplot() function used below will drop NAs automatically, so you
don't need to call na.omit() manually here.
> > xyplot<-(Fertility~Education|Catholic, data=swiss.clean.subset),
> Error: syntax error
> > panel=function(x, y, ...) {
> + panel.xyplot(x, y, ...)
> + panel.loess(x, y, span=.95, degree=1, ...)
> + }
>
The problem here is that swiss.clean.subset doesn't exist because of the
syntax error above. Fixing that problem should fix this as well.
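Putting both fixes together, the intended call might look like the sketch below--note that xyplot() comes from the lattice package, and that the panel= function is an argument inside the xyplot() call, not a separate statement:

```r
library(lattice)
data(swiss)

# No na.omit() needed: xyplot() drops rows with missing values itself.
# The panel function goes inside the xyplot() call.
xyplot(Fertility ~ Education | Catholic, data = swiss,
       panel = function(x, y, ...) {
         panel.xyplot(x, y, ...)
         panel.loess(x, y, span = .95, degree = 1, ...)
       })
```

One caveat: Catholic in swiss is numeric, so conditioning on it directly gives roughly one panel per distinct value; equal.count() can collapse it into a few overlapping ranges if that is what you want.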
Best,
Kevin
I'm trying to clean the NAs out of the data file via:
library(foreign)
par(mfrow=c(1,1))
Fish <- read.spss("Fish_data.sav", to.data.frame=TRUE)
fish.clean.subset <- na.omit(data.frame(Fish$FHREVERS, Fish$GDP90LGN))
names(fish.clean.subset) <- c(FHREVERS, GDP90LGN)
scatter.smooth(Fish$FHREVERS, Fish$GDP90LGN)
Error: NA/NaN/Inf in foreign function call (arg 1)
I still receive the same error message that I did before trying to clean
the data. Any hints would be great! :)
Marie
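Two things seem to be going on in Marie's code above: the names() call needs quoted strings, and the scatter.smooth() call still points at the original Fish columns (which contain the NAs) rather than at the cleaned subset. A sketch of the intended sequence, using a small made-up data frame in place of the SPSS file:

```r
# Made-up stand-in for the Fish data (the real data come from read.spss()).
Fish <- data.frame(FHREVERS = c(1, 2, 3, NA, 5, 6, 7, 2, NA, 4),
                   GDP90LGN = c(2.1, 2.5, NA, 3.0, 3.5, 3.6, 4.0, 2.4, 3.1, 3.2))

# Drop rows with any NA in the two columns of interest.
fish.clean.subset <- na.omit(data.frame(Fish$FHREVERS, Fish$GDP90LGN))
names(fish.clean.subset) <- c("FHREVERS", "GDP90LGN")   # quoted strings

# Plot the *cleaned* columns, not the originals:
scatter.smooth(fish.clean.subset$FHREVERS, fish.clean.subset$GDP90LGN)
```

The "NA/NaN/Inf in foreign function call" error comes from handing loess the uncleaned Fish$ columns, so plotting the cleaned subset should make it go away.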
Hi all,
If I wanted to jitter the MUSLIM and OPEC variables within the splom
scatterplot matrix, where would I put the command in the code, and how
would I make sure that it didn't also jitter the GDP and political rights
variables?
That is, if I have
fish.clean<-na.omit(data.frame(fish$FHREVERSE, fish$GDP90LGN, fish$MUSLIM,
fish$OPEC))
names(fish.clean)<-c("FHREVERS","GDP90LGN","MUSLIM","OPEC")
splom(~fish.clean,
panel=function(x,y, ...){
panel.xyplot(x,y,...)
panel.loess(x,y,span=1.0,degree=1)})
I assume I need a command for the panels that involve OPEC and MUSLIM?
(How) would this work?
Thanks...
Lucy
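One simple way to get what Lucy is after (a sketch, not the only approach): jitter the two binary columns in the data frame itself before handing it to splom(), so the continuous GDP and political-rights columns are left untouched and no per-panel logic is needed. The data below are made up for illustration:

```r
library(lattice)

# Made-up stand-in for the cleaned Fish subset.
set.seed(2)
fish.clean <- data.frame(FHREVERS = runif(50, 1, 7),
                         GDP90LGN = runif(50, 2, 5),
                         MUSLIM   = rbinom(50, 1, 0.5),
                         OPEC     = rbinom(50, 1, 0.2))

# Jitter only the two binary columns; the others stay exactly as they are.
fish.jit <- fish.clean
fish.jit$MUSLIM <- jitter(fish.jit$MUSLIM, amount = .05)
fish.jit$OPEC   <- jitter(fish.jit$OPEC,   amount = .05)

splom(~fish.jit,
      panel = function(x, y, ...) {
        panel.xyplot(x, y, ...)
        panel.loess(x, y, span = 1.0, degree = 1)
      })
```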
>
> is there a way to display which plot is which?
> That is, which is for Muslim/OPEC, non-Muslim/Opec, etc etc?
>
When the conditioning variables are numeric (0/1 for instance) the
colored strip at the top will have a slightly darker vertical line in
it to let you know what the level of the conditioning variable is.
With a 0/1 conditioning variable the strip that has the darker line on
the far left will be the 0 level, and the one with the darker line on the
far right will be the 1 level. With a conditioning variable with
levels 0/1/2, the 0 plot will have the dark line on the far left, the
1 plot will have the dark line in the middle, and the 2 plot will have
the dark line on the far right. You get the idea.
The thing you can do to make this much easier to read is to recode the
numeric levels of the conditioning variable to character strings--
"Muslim" and "Non-Muslim", for instance. If you look at the recodes of
the Russian election study data in visualizing.R you will see some
examples of this.
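A minimal sketch of that recode, assuming MUSLIM is coded 0/1 as in the Fish data (the data frame here is made up for illustration):

```r
# Made-up stand-in for the cleaned Fish subset.
fish.clean <- data.frame(FHREVERS = runif(20, 1, 7),
                         MUSLIM   = rbinom(20, 1, 0.5))

# Recode the 0/1 indicator to a labeled factor; lattice will then print
# "Non-Muslim" / "Muslim" in the strips instead of the shaded line.
fish.clean$MUSLIM <- factor(fish.clean$MUSLIM,
                            levels = c(0, 1),
                            labels = c("Non-Muslim", "Muslim"))
```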
Hope this helps.
Best,
Kevin
Hi Everyone,
Just a reminder that we are getting near the middle of the semester.
If you plan to write a final paper for gov1000 you should set up
a time to meet with me to talk about your paper topic and to have it
approved.
Also, you should all feel free to talk with me about any questions you
might have on the course material. So far, relatively few students
have stopped by. I don't want anyone to feel that I'm not available
outside of class. I really don't mind helping students out with this
stuff.
Best,
Kevin
------------------------------------------------------
Kevin Quinn
Assistant Professor
Department of Government and
Center for Basic Research in the Social Sciences
34 Kirkland Street
Harvard University
Cambridge, MA 02138
Hi all-
I just wanted to let you know that I made some edits to the section
handout for this week (it now includes some of the relevant commands that
have been mentioned on the listserv). A new version is on the website.
happy reading-
Alison
Hi all,
I am having trouble making graphs when conditioning on continuous
variables. In particular, I'm having trouble controlling the group sizes
for my conditioning variable--for example, I get ~45 different little
plots with 1-2 data points each.
This is my code:
> Catholic <- equal.count(swiss$Catholic, number=3, overlap=.25)
> xyplot(Fertility~Education|Catholic, data=swiss)
I have tried changing the number and overlap, independently and together,
and nothing seems to change the large number of little graphs with not
many data points. Any ideas??
~Becky
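One likely culprit in Becky's code above (an educated guess from the code shown): the shingle is named Catholic, and data=swiss also contains a column named Catholic. Lattice looks variables up in data first, so xyplot() finds the raw numeric column and ignores the shingle entirely--which is exactly why number= and overlap= seem to have no effect, and why there is roughly one tiny panel per distinct Catholic value. Giving the shingle a name that does not clash with a column in the data frame avoids the masking:

```r
library(lattice)
data(swiss)

# Name the shingle something that is NOT a column of swiss, so xyplot()
# cannot pick up the raw numeric column instead of the shingle.
Catholic3 <- equal.count(swiss$Catholic, number = 3, overlap = .25)
xyplot(Fertility ~ Education | Catholic3, data = swiss)
```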
Hi all-
Hope the problem sets are going well.
For those of you attending the Thursday section, I just wanted to mention
that it would be very helpful if you could arrive by 4:07 tomorrow.
Someone will be coming from the Bok Center (our teaching resource center
on campus) to videotape the section.
All of you will no doubt serve as TFs in the future. I encourage you to
take advantage of the services offered by the Bok Center as you learn to
teach. In my case, one of their consultants is going to analyze the video
and then give me feedback on how to improve my teaching.
Have a good week-
Alison
Hi Michael,
Group work is not allowed on either the final paper or the take-home
final exam. Hope this helps.
Best,
Kevin
On Wed, 20 Oct 2004, Michael William Nitsch wrote:
> Hi Professor Quinn,
>
> I'm forgetting what was said early in the semester: are we
> allowed/encouraged to work in groups on the final paper, or not?
>
> Thanks so much,
> Michael