Sunday, February 22, 2015

Finally, after weeks of anticipation, the Academy Awards were doled out Sunday. There weren’t too many surprises — no huge upset in the big six categories at least — and once again, the gamblers proved that crowds of people with money on the line are among the most reliable forecasters in the business.
How’d we do? When it came to best director, Alejandro G. Iñárritu beat out our model’s slight favorite, Richard Linklater. But otherwise, our simple model performed well, going 5/6. We definitely got lucky — this best picture race was one of the closest in the past 25 years — but the model did what it was supposed to do, showing us the general lay of the land.
We’ve been using a model that looks at how nominated films, directors and performers did in earlier prizes — from guilds, such as the Directors Guild Awards and Screen Actors Guild Awards; from members of the press, such as the Golden Globes; and from critics, such as the Critics’ Choice Movie Awards. Some of these prizes have tracked closely with the Academy Awards, some less so.
For example, here’s how the model worked this year for best picture. “Birdman or (The Unexpected Virtue of Ignorance),” which won the Oscar, also won top honors at the DGA and PGA awards, the two most predictive pre-Oscar awards.
[Table: how each best picture nominee fared in the pre-Oscar awards]
Add it all up, and here was the state of the race going into tonight:
[Chart: the state of the best picture race going into Oscar night]
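Under the hood, the model is little more than a weighted tally: each nominee gets credit for every earlier award it won, and awards that have historically tracked the Oscars more closely count for more. Here’s a minimal sketch of that kind of calculation in Python; the weights, win lists and scores below are placeholders for illustration, not the model’s actual numbers.

```python
# A minimal sketch of the weighted-tally idea. The weights and win lists
# below are invented for illustration; the real model derives each award's
# weight from how often it has matched the Oscars historically.

# Hypothetical weights: how much each pre-Oscar award counts.
AWARD_WEIGHTS = {
    "DGA": 1.0,             # Directors Guild -- historically very predictive
    "PGA": 0.9,             # Producers Guild
    "BAFTA": 0.6,
    "Golden Globe": 0.5,
    "Critics' Choice": 0.4,
}

# Hypothetical pre-Oscar wins for two best picture nominees.
PRE_OSCAR_WINS = {
    "Birdman": ["DGA", "PGA"],
    "Boyhood": ["Golden Globe", "BAFTA", "Critics' Choice"],
}

def score(nominee):
    """Sum the weights of every pre-Oscar award the nominee won."""
    return sum(AWARD_WEIGHTS[award] for award in PRE_OSCAR_WINS.get(nominee, []))

for film, wins in PRE_OSCAR_WINS.items():
    print(f"{film}: {score(film):.2f} ({', '.join(wins)})")
# With these made-up weights, Birdman edges out Boyhood, 1.90 to 1.50.
```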
Pretty simple (here’s a full methodological description). So let’s look at the results by difficulty:
The predictable categories: 3/3
J.K. Simmons won for best supporting actor, Patricia Arquette won for best supporting actress and Julianne Moore won for best actress. Nice. Now on to the interesting ones.
The tough categories: 2/3
I lost the most sleep over the model’s preference for Eddie Redmayne winning best actor — it was one of the closer races, I personally didn’t like the film all that much, and the lack of data for Bradley Cooper had me worried. But when in doubt, trust the SAG award. Definitely notching that as a win for the methodology.
Our model agreed with the betting markets that “Birdman” was the favorite to win best picture, so that’s a win too.
And as I said above, we missed on Iñárritu winning best director; we had Linklater by a hair.
[Chart: pre-Oscar award results in the best director race]
The betting markets had the right idea on the top directing prize. If anything, this means the top honor from the Directors Guild of America — which Iñárritu won — probably needs an even higher weight. Linklater took most of the other directing awards in our dataset — the Golden Globe, BAFTA, Critics’ Choice and Satellite awards — but it seems like the DGA award (which had matched the Oscar 84 percent of the time over the past 25 years before the Iñárritu win) really is the only one that matters.
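To make that concrete, here’s a toy calculation in Python. The weights are invented for illustration, not the model’s real numbers; the point is just that bumping the DGA’s weight is enough to flip the best director call from Linklater to Iñárritu.

```python
# A toy calculation (invented weights, not the model's) showing how a
# heavier DGA weight flips the best director call.

DIRECTOR_WINS = {
    "Linklater": ["Golden Globe", "BAFTA", "Critics' Choice", "Satellite"],
    "Iñárritu": ["DGA"],
}

def tally(weights):
    """Score each director by the summed weight of the awards they won."""
    return {d: sum(weights.get(a, 0) for a in wins)
            for d, wins in DIRECTOR_WINS.items()}

# Treat the DGA as only a bit more valuable than the other awards...
print(tally({"DGA": 0.75, "Golden Globe": 0.25, "BAFTA": 0.25,
             "Critics' Choice": 0.25, "Satellite": 0.25}))
# {'Linklater': 1.0, 'Iñárritu': 0.75}  -> Linklater by a hair

# ...then weight it closer to its historical hit rate, and the call flips.
print(tally({"DGA": 1.5, "Golden Globe": 0.25, "BAFTA": 0.25,
             "Critics' Choice": 0.25, "Satellite": 0.25}))
# {'Linklater': 1.0, 'Iñárritu': 1.5}  -> Iñárritu
```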
The betting markets on the rest:
The prevailing odds were right on “Citizenfour” winning best documentary, “Glory” from “Selma” winning best original song and “The Imitation Game” winning best adapted screenplay.
There were some opportunities for lucky bettors to beat the bookies: “Big Hero 6” beat the favored “How to Train Your Dragon 2” in the best animated feature category, and “Birdman” won best original screenplay over “The Grand Budapest Hotel.”
Otherwise, the betting markets nailed the big six awards.
In gamblers we trust.

So, what next? There are two things on my mind about the next iteration of this model.
I was pretty worried about how the model would handle a film like “American Sniper,” one that came out too late to really get a fair shake at a lot of the earlier award shows. This isn’t to say that it necessarily should have, but we had absolutely no data on Bradley Cooper going into this show. This was pretty concerning — sort of the opposite of the problem we had back in 2013 predicting best director, when Ben Affleck sucked up all the pre-Oscar awards for best director of “Argo” but, er, was not nominated by the Academy.
Also, I’m curious whether the model is doing enough to capture the electoral idiosyncrasies of the instant-runoff voting method the Academy uses to pick best picture. Essentially, it’s too simple to capture the victory-by-coalition that goes into picking the winner. Could that be a problem? Is including data in the model from before the Oscars vote system switched in 2009 throwing us off when it comes to which awards are predictive?
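For context, the best picture winner is chosen by a preferential (instant-runoff) ballot: the film with the fewest first-place votes is eliminated each round, and its ballots transfer to each voter’s next surviving choice, until something clears a majority. Here’s a bare-bones sketch of that count with invented ballots; it shows how a broad second-choice favorite can beat the film with the most first-place votes — the victory-by-coalition dynamic a simple tally can miss.

```python
from collections import Counter

def instant_runoff(ballots):
    """Eliminate the film with the fewest first-place votes each round,
    transferring its ballots to the voter's next surviving choice,
    until one film holds a majority."""
    remaining = {film for ballot in ballots for film in ballot}
    while True:
        firsts = Counter(next(f for f in ballot if f in remaining)
                         for ballot in ballots)
        leader, votes = firsts.most_common(1)[0]
        if votes > len(ballots) / 2:                      # majority reached
            return leader
        remaining.discard(min(firsts, key=firsts.get))    # drop last place

# Invented ballots: "A" has the most first-place votes, but "B" is the
# broad second choice and wins once "C" is eliminated.
ballots = ([["A", "B", "C"]] * 4 +
           [["B", "A", "C"]] * 3 +
           [["C", "B", "A"]] * 2)
print(instant_runoff(ballots))  # -> B
```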
But we’ve got about a year to iron these out. Our simplistic model went 5/6 this year; the gamblers went 6/6. A pretty good year overall.
