As it turns out, stats golden boy Nate Silver's forecasts for last night's Oscars were right on only 4 of 6 counts. He missed on Best Supporting Actress (he picked Taraji P. Henson; Penélope Cruz won) and on Best Actor (he picked Mickey Rourke; Sean Penn won). He blogs about the outcome on FiveThirtyEight.com, and delivers a few gems in the category of sour grapes:
On Mickey Rourke losing Best Actor: "It probably doesn't help to be a huge jackass (like Mickey Rourke)... But is this information helpful for model-building? Probably not. (Unless perhaps we had some way to quantify someone's jackassedness: Days spent at the Betty Ford Center?)"
Liveblogging 13 minutes after missing Best Supporting Actress: "Remember when the Oscars used to be interesting and the Super Bowl used to be boring?"
Incidentally, the two categories he missed were ones I had guessed correctly, but my thought process in coming up with those guesses had absolutely nothing to do with statistics.
This is in contrast to presidential elections, which are an inherently statistical process, and where anyone's guess about who might win inevitably involves some form of probabilistic reasoning...
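To make that contrast concrete, here is a minimal sketch of what probabilistic reasoning looks like in the electoral case: simulate many elections from state-level win probabilities and count how often a candidate clears 270 electoral votes. To be clear, this is not Silver's model; the states, probabilities, and safe-vote tally below are invented purely for illustration.

```python
import random

# Hypothetical state-level win probabilities and electoral votes.
# These numbers are made up for illustration, not from any real forecast.
STATES = {
    "Ohio":         (0.55, 20),
    "Florida":      (0.50, 27),
    "Pennsylvania": (0.60, 21),
    "Virginia":     (0.52, 13),
    "Colorado":     (0.58, 9),
}
SAFE_EV = 200    # electoral votes assumed safe for the candidate (invented)
EV_TO_WIN = 270

def simulate_once():
    """One simulated election: flip a weighted coin in each swing state."""
    ev = SAFE_EV
    for p_win, votes in STATES.values():
        if random.random() < p_win:
            ev += votes
    return ev

def win_probability(n_trials=100_000):
    """Fraction of simulated elections that reach 270 electoral votes."""
    wins = sum(simulate_once() >= EV_TO_WIN for _ in range(n_trials))
    return wins / n_trials

if __name__ == "__main__":
    print(f"Estimated win probability: {win_probability():.3f}")
```

The point is only that an election forecast naturally decomposes into per-state probabilities you can aggregate, which is exactly the kind of structure an Oscar pick lacks.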
Along these lines, it would be interesting to know more about Nate Silver's Oscar methodology and how it compares to his electoral model, but I haven't been able to find this information. Would welcome any leads...