jackman.stanford.edu/blog

delivery flight?

Tuesday May 29, 2007

Filed under: flight nerdery — jackman @ 2:35 pm

Good luck to these people. Flying a Cessna 172 from Hayward to Honolulu, approximately 2400 miles non-stop. They must be attaching all kinds of auxiliary fuel tanks. I’ve seen goofy photos of internal fuel bladders in RJs on their delivery flights to Australia etc, but a C172? Yikes. From FlightAware (click on the thumbnails below):

[FlightAware screenshot thumbnails]

Comments Off

The Condi and Alexander Downer show

Friday May 25, 2007

Filed under: Australian Politics,general — jackman @ 11:40 am

Condi Rice (my former provost) has been showing Australian Foreign Minister Alexander Downer around the local area. A San Francisco Giants game on Wednesday night, a look around Stanford and a visit up to HP. I saw a lot of CHP cars in town yesterday, and the “standard package” cruising campus this morning (two black SUVs, unmarked Ford in front, two CHP cruisers parallel blocking traffic from behind). From Downer’s remarks at the HP visit:

we’re reminded of the enormous importance of investment in education when I hear the size of the endowment at Stanford University. Its annual revenue by Australian standards, it’s staggering.

Stanford’s endowment was reported to be $12.2B in 2005 and $14.1B in 2006 (15.4% growth), the 3rd largest in the country, according to a 2007 report from the National Association of College and University Business Officers. Harvard leads the pack by a long way, with $29B in 2006. The University of Texas system (public) had $13.2B in endowment in 2005; Texas A&M is not in the UT system and had $5.6B in 2005; so “big endowments” are not necessarily a private-university-only phenomenon (my implication: “just because Australia has a public university system…”). And Texas has about the same population as Australia.

And from Rice’s remarks:

We’ve talked a little bit about the need for further education of our engineering and math science graduates – something that I’m particularly interested in as a professor, former professor, soon to be future professor again.

She’s coming back?

Comments (2)

Therese watch #2; Amnesty report

Wednesday May 23, 2007

Filed under: Australian Politics — jackman @ 5:11 pm

As predicted(?), expect this kind of media focus (News Ltd; ABC) on Therese Rein as the Coalition looks for a way to turn the tide…

And in other news… I wonder if putting John Howard in the same league as Mugabe does Amnesty any good? The paragraph in question (full report here), with sidebar “Fear Thrives on Myopic and Cowardly Leadership”:

The Howard government portrayed desperate asylum-seekers in leaky boats as a threat to Australia’s national security and raised a false alarm of a refugee invasion. This contributed to its election victory in 2001. After the attacks of 11 September 2001, US President George W Bush invoked the fear of terrorism to enhance his executive power, without Congressional oversight or judicial scrutiny. President Omar al-Bashir of Sudan whipped up fear among his supporters and in the Arab world that the deployment of UN peacekeepers in Darfur would be a pretext for an Iraq-style, US-led invasion. Meanwhile, his armed forces and militia allies continued to kill, rape and plunder with impunity. President Robert Mugabe of Zimbabwe played on racial fears to push his own political agenda of grabbing land for his supporters.

Amnesty has influence to the extent that it shapes public opinion in Western democracies. They should be savvy enough to know how a paragraph like the above will be reported: e.g., “Howard is just like Mugabe” (SMH), “Howard stooping to Mugabe lows: Amnesty” (ninemsn), etc. Howard and Mugabe bookend this paragraph, but the lead will always be “Howard in same league as Mugabe” and the like. Attention-getting, for sure. But also easily batted away as ridiculous criticism, hysteria etc., and hence likely to change no one’s mind on the topic.

Comments Off

desktop wars

Filed under: computing — jackman @ 11:38 am

A profile of Al Gore showed his Mac setup with 3×30″ monitors. Some interesting comparisons here. My own effort (2×30″, for 2×2560×1600 = 8.192 million pixels) is shown here, at 20% of actual size (the 550px-wide thumbnail is 11% of actual).

[Screenshot thumbnail of the 2×30″ desktop]

Comments (1)

predictive densities, conditionally conjugate models

Sunday May 20, 2007

Filed under: statistics — jackman @ 1:08 am

So in the throes of textbook writing I came across this bit of Bayesian arcana. Suppose

[tex]\theta | \mu, \sigma^2 \sim N(\mu,\sigma^2)[/tex]

where

[tex]\mu \sim N(b_0, B_0)[/tex]

and

[tex]\sigma^2 \sim \mbox{inverse-Gamma}(\nu/2, \nu\sigma^2_0/2),[/tex]

and we want to know the marginal density of [tex]\theta[/tex],

[tex] p(\theta) = \int_0^\infty \int_{-\infty}^\infty p(\theta | \mu, \sigma^2) p(\mu) p(\sigma^2) d\mu d\sigma^2.[/tex]

This model differs from the usual conjugate set-up in that we have independence between [tex]\mu[/tex] and [tex]\sigma^2[/tex], instead of the usual [tex]p(\mu,\sigma^2) = p(\mu|\sigma^2)p(\sigma^2)[/tex]; when we use the latter version of the prior and we have a normal model for [tex]\theta[/tex] as above, then the marginal density for [tex]\theta[/tex] is a [tex]t[/tex] density.
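To fix ideas on the conjugate case, here's a quick method-of-composition sketch (my addition: the hyperparameter values, the seed, and [tex]\kappa[/tex] are made up, and base R’s 1/rgamma stands in for an inverse-Gamma sampler). With [tex]\mu | \sigma^2 \sim N(b_0, \sigma^2/\kappa)[/tex], draws of [tex]\theta[/tex] centered at [tex]b_0[/tex] and scaled by [tex]\sigma_0\sqrt{1+1/\kappa}[/tex] should look exactly [tex]t_\nu[/tex]:

```r
## Conjugate case: mu | sigma^2 ~ N(b0, sigma^2/kappa), so the marginal
## for theta is an exact t_nu density. Hyperparameter values are made up.
set.seed(1234)
m <- 1e6
nu <- 5; sigma2.0 <- .005; b0 <- .225; kappa <- 4

## inverse-Gamma(nu/2, nu*sigma2.0/2) draws via 1/rgamma (base R)
sigma2 <- 1/rgamma(m, shape=nu/2, rate=nu*sigma2.0/2)
mu <- rnorm(m, mean=b0, sd=sqrt(sigma2/kappa))
theta <- rnorm(m, mean=mu, sd=sqrt(sigma2))

## theory: (theta - b0)/(sigma0*sqrt(1 + 1/kappa)) ~ t_nu
z <- (theta - b0)/sqrt(sigma2.0*(1 + 1/kappa))
round(quantile(z, c(.025, .975)), 2)   ## close to qt(c(.025,.975), df=nu)
```

The standardized quantiles line up with the [tex]t_\nu[/tex] quantiles, which is the benchmark against which the independent-priors model below departs.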

But what happens when the model is changed subtly, as above? I could see that the marginal for [tex]\theta[/tex] would be something like a [tex]t[/tex] density, but I stalled when I reached

[tex]p(\theta) \propto \displaystyle\int_0^\infty (\sigma^2 + B_0)^{-1/2} \exp \left[ \displaystyle\frac{-(\theta - b_0)^2}{2 (\sigma^2 + B_0)}\right] p(\sigma^2) d\sigma^2, [/tex]

where everything ahead of [tex]p(\sigma^2)[/tex] is (up to a constant of proportionality) [tex]p(\theta | \sigma^2)[/tex], what you get after marginalizing with respect to [tex]\mu[/tex]: a normal density over [tex]\theta[/tex] with mean [tex]b_0[/tex] and variance [tex]\sigma^2 + B_0[/tex] (i.e., nothing more than the familiar result from analysis of variance that the total variance is the sum of the between and within variances).

Ok, so now we have to marginalize with respect to [tex]\sigma^2[/tex]. In the conjugate case this last step is easy, because [tex]\sigma^2[/tex] usually enters multiplicatively (i.e., as [tex]\sigma^2 k[/tex]) rather than additively in the denominator, as above. I’m guessing that (a) the result I’m after is in a book in my office that I’ll find on Monday; and (b) the answer is some kind of mixture of a [tex]t[/tex] and a normal?

I simulated draws from the marginal for [tex]\theta[/tex] using a Monte Carlo method of composition (three lines of R); I seem to be getting back something more leptokurtic than the normal, but actually fitting a [tex]t[/tex] density to the simulated values produced weird results, even with [tex]10^7[/tex] simulated values. See the figure below (clickable thumbnail), generated with this R code:

library(pscl)   ## for rigamma()
m <- 1e7        ## number of Monte Carlo draws
nu <- 14
b0 <- .225
B0 <- (.15/4)^2
omega2.0 <- .005
## method of composition: draw sigma^2 and mu from their priors,
## then theta given those draws
omega2 <- rigamma(n=m, alpha=nu/2, beta=nu*omega2.0/2)
mu0 <- rnorm(n=m, mean=b0, sd=sqrt(B0))
theta <- rnorm(n=m, mean=mu0, sd=sqrt(omega2))

pdf(file="logHist.pdf",
    width=11,
    height=10)
par(mfrow=c(1,2))

hist(theta, breaks=501,
     prob=TRUE,
     col=gray(.85),
     lwd=.5,
     ylab="density",
     main=NULL,
     xlab=expression(theta))
mynorm2 <- function(x)dnorm(x,mean=mean(theta),sd=sd(theta))
curve(mynorm2,
      from=min(theta),to=max(theta),n=1001,
      add=TRUE,
      col="red",
      lwd=1)
axis(1)
legend(x="topleft",
       bty="n",
       col=c("black","red"),
       lwd=c(1,1),
       legend=c("histogram estimate",
         "normal MLE"))

blah <- hist(theta, breaks=501, plot=FALSE)
plot(blah$mids, blah$density, type="l", log="y",
     xlab=expression(theta),
     ylab="density (log scale)",
     axes=FALSE)
lines(blah$mids,
      dnorm(blah$mids,mean=mean(theta),sd=sd(theta)),
      col="red",
      lwd=1)
axis(1)
legend(x="topleft",
       bty="n",
       col=c("black","red"),
       lwd=c(1,1),
       legend=c("histogram estimate",
         "normal MLE"))
dev.off()

[Figure: histogram of simulated theta with normal MLE overlay, on linear and log density scales]
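Whatever the marginal turns out to be, its variance is pinned down. Here is a sanity check on the composition draws I’d run alongside the code above (my addition; same hyperparameters, but with base R’s 1/rgamma substituted for pscl’s rigamma, a smaller m, and a seed so it reproduces): the law of total variance gives [tex]\mbox{Var}(\theta) = B_0 + E(\sigma^2) = B_0 + \nu\sigma^2_0/(\nu-2)[/tex].

```r
## Law of total variance check: Var(theta) = Var(mu) + E(sigma^2)
##                                         = B0 + nu*omega2.0/(nu - 2).
## Same hyperparameters as the simulation above; 1/rgamma replaces
## pscl's rigamma so this runs in base R.
set.seed(314)
m <- 1e6
nu <- 14; b0 <- .225; B0 <- (.15/4)^2; omega2.0 <- .005

omega2 <- 1/rgamma(m, shape=nu/2, rate=nu*omega2.0/2)
mu0 <- rnorm(m, mean=b0, sd=sqrt(B0))
theta <- rnorm(m, mean=mu0, sd=sqrt(omega2))

c(simulated=var(theta),
  theory=B0 + nu*omega2.0/(nu - 2))   ## these two should nearly agree

mean(scale(theta)^4)   ## sample kurtosis: above 3 = fatter than normal tails
```

The match on the variance, together with sample kurtosis above 3, is consistent with the leptokurtic shape visible in the figure.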

Comments (2)

JAGS binaries

Wednesday May 16, 2007

Filed under: computing,statistics — jackman @ 6:39 pm

Claudia Engel (the Academic Technology Specialist over in AnthroSci) has JAGS version 0.98 binaries for Mac (Intel and PPC) and Windows. Links from her blog. Nice follow-through on some earlier hackery I cobbled together (aren’t computing professionals wonderful?).

Comments Off

14 point margins? Nah, just having a laugh.

Tuesday May 15, 2007

Filed under: Australian Politics — jackman @ 8:43 am

John Howard has suggested that his government’s poor showing in recent polls might stem from respondents having a laugh. From the 7.30 Report:

KERRY O’BRIEN: But that still raises the question: why are you doing so badly in the polls if the people believe you’re a good Government?

JOHN HOWARD: Well ultimately, we’ll all find out whether it’s not all been a you know, an interesting exercise by the Australian public’s with its innate sense of humour, and we’ll find that out on election day won’t we?

What else can you say when you’re down 57-43 etc etc? The line that “people aren’t really paying attention when they answer the poll question” is either patronizing or contradicts the message that the Budget went down a treat with voters. Maybe whimsical survey respondents are the only way to reconcile 57-43 (Labor/Coalition) 2PP poll results with 1.85/1.92 (Coalition/Labor) prices on Centrebet (as Andrew Leigh and Justin Wolfers remind us, we’re probably less whimsical when having a punt than when answering survey questions). Then again, Morgan’s “who do you think will win” question has been producing splits around 32-54-14 (Coalition/Labor/DK) for 2 or 3 months now.

So someone’s having a laugh, or will, on election day. But who?

Comments (5)

“Horrendous” United flight UAL863 SFO-SYD (er, BNE)

Monday May 14, 2007

Filed under: flight nerdery — jackman @ 3:16 pm

Having flown UAL863 more times than I can begin to count, this does indeed sound “horrendous”, particularly since I’m often trying to get to BNE anyway…! United has no presence at BNE, so this would have been, well, interesting. Surely they must have kept someone on the plane with the pax for the 5- or 6-hour wait until the relief crew showed. And the only thing worse than this that I can think of would be doing it with kids…

Fog wreaks airline havoc – National – smh.com.au

Comments Off

listwise deletion calculation

Sunday May 13, 2007

Filed under: statistics — jackman @ 10:08 am

Jacob Felson (PhD Candidate in Sociology at Penn State University) writes:

Professor Jackman:

I had a question related to expected values and missing data that I really wanted an answer to, and thus, that I hope you might find useful to post on your blog(!) It has to do with the relationship between the total number of missing cells in a data matrix and the number of cases left after listwise deletion. For example, take a data matrix X of dimensions n x p, with 20% missing cells. The missingness is completely at random. What is the expected number of missing (and observed) cells in any given row? And with listwise deletion, what is the expected number of cases after listwise deletion?

Of course, this has to do with the extent to which missing cases happen to fall in the same column rather than different columns, or the extent to which different respondents fail to answer the same questions, rather than different questions. Knowing about the relationship between the total percentage of missing data, and the number of cases present after listwise deletion might tell you useful information about the survey.

If the missingness is completely at random (as you stipulate), then maybe you’re prepared to make an additional simplifying assumption: that missingness is independent across cells, with probability [tex]\alpha \in [0,1][/tex]. The story is more complicated with different probabilities of missingness across variables and/or observations. Under the (simple!) conditions I’ve stipulated, if missingness knocks out [tex](100 \times \alpha)\%[/tex] of the data, then the probability that any given cell is missing is [tex]\alpha.[/tex] Binomial theory now does the work. The probability that a row of length p has any missing data (and thus the row will be knocked out under listwise deletion) is 1 minus the probability that the row has no missing data: under independence, this is the probability of observing p successes in p independent trials, each with probability [tex]1-\alpha[/tex], i.e.,

[tex]P(p \, \mbox{independent successes}) = (1-\alpha)^p.[/tex]

We can now assign probability mass over the number of rows that survive listwise deletion using the binomial probability mass function: with row survival probability [tex](1-\alpha)^p[/tex], the probability that r out of n rows survive is

[tex]\displaystyle\binom{n}{r} (1-\alpha)^{pr} [1-(1-\alpha)^p]^{n-r}[/tex]

and the expected number of rows that survive is [tex]E(r; \alpha, p, n) = n(1-\alpha)^p[/tex].
To take an example, if [tex]\alpha=.2[/tex] and you have [tex]p=10[/tex] variables, then the probability that a row is not listwise deleted is

[tex](1-.2)^{10} = .8^{10} = .107[/tex]

(i.e., you had to get lucky 10 times in a row), and so you lose about 89% of your observations. Since we have MCAR (missing completely at random), there is no question of bias from the missingness, just an efficiency loss (i.e., confidence intervals are wider than they would be if you could have kept all or more of the data). In particular, running an analysis with a sample size equal to [tex].11 \times n[/tex] will produce confidence intervals [tex]1/\sqrt{.11} \approx 3[/tex] times as large as the analysis with [tex]n[/tex] observations.

With just [tex]p=3[/tex] variables and [tex]\alpha=.2[/tex] then you keep about [tex].8^3 = .51[/tex] or half of your data. All these calculations presuppose independence in the missingness, and the specific comparison I just made above turns on [tex]\alpha[/tex] remaining constant as the number of variables changes (which is unlikely in any real setting).
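The arithmetic above is easy to verify by brute force. A sketch (my addition, base R, using the [tex]\alpha=.2[/tex], [tex]p=10[/tex] example; n and the seed are arbitrary):

```r
## Listwise deletion under MCAR with independent missingness:
## P(row survives) = (1 - alpha)^p; expected survivors = n*(1 - alpha)^p.
set.seed(42)
alpha <- .2; p <- 10; n <- 1e5

(1 - alpha)^p                  ## analytic survival probability, about .107

## brute force: an n x p indicator matrix of missing cells
miss <- matrix(runif(n*p) < alpha, nrow=n, ncol=p)
mean(rowSums(miss) == 0)       ## simulated survival proportion

## efficiency loss: CIs widen by 1/sqrt(survival fraction)
1/sqrt((1 - alpha)^p)          ## about 3
```

The simulated survival proportion sits right on the analytic [tex].8^{10} \approx .107[/tex], and the last line reproduces the roughly threefold widening of confidence intervals.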

Comments Off

Howard has “buckled” on industrial relations

Friday May 4, 2007

Filed under: Australian Politics — jackman @ 5:24 am

From the SMH:

Australia’s peak business group [the Australian Chamber of Commerce and Industry] has accused Prime Minister John Howard of buckling to a scare campaign with his major backdown on Work Choices laws.

Mr Howard today admitted public concern was behind the decision to soften the controversial laws, although he refused to concede workers were worse off under Work Choices.

Under the changes, a new fairness test will apply to Australian Workplace Agreements (AWAs), ensuring workers earning less than $75,000 in areas covered by awards will be compensated if they trade away penalty rates.

….Australian Chamber of Commerce and Industry (ACCI) chief executive Peter Hendy was also scathing of Mr Howard’s backflip, saying the government should have toughed out the criticism.

Frankly, the louder the Chamber of Commerce complains about this, the better…for the re-election prospects of the Howard government…

Comments (1)