Covering elections isn't only about math. Just ask Carl Diggler.
The data journalists promised us Donald Trump wasn’t really a thing, and yet, here he is. (Jim Urquhart)
He went from being a hostage of Russian security forces to predicting the exact results of the Iowa presidential caucuses, right down to the third- and fourth-place finishers. He called Bernie Sanders’s upset win in this past week’s Indiana primary, when his competitors all said Hillary Clinton had it locked down. He has correctly predicted the results of 77 out of 87 races in this year’s primaries, an 89 percent accuracy rating that equals that of FiveThirtyEight’s Nate Silver while tackling nearly twice as many contests.
And he’s a fictional character.
Carl “The Dig” Diggler is a parody of political pundits written by Felix Biederman and me for CAFE. Carl exists to satirize all that is vacuous, elitist and ridiculous about the media class. From his sycophantic love of candidates in uniform to his hatred of Bernie Bros, from his reverence for “the discourse” to his constant threats of suing the people who troll him on Twitter, Carl is predicated on being myopic, vain and - frankly - wrong.
But something funny happened along the way. Biederman and I, who are neither statisticians nor political scientists, started making educated guesses for our parody about the results of the primaries. And we were right. A lot.
We beat the hacks at their own game by predicting every Democratic winner on Super Tuesday. We told readers who would win in the unpredictable caucuses that FiveThirtyEight didn’t even try to forecast, such as those in Minnesota, Wyoming and even American Samoa. We called 19 out of the past 19 contests. FiveThirtyEight, whose model cannot work without polling, accurately predicted 13.
Unlike professional forecasters, who maintain the pretense of objectivity, Diggler’s approach is proudly based on gut instinct and personal bias. Take his Wisconsin prediction:
Wisconsinites are mostly a simple people. They eat their three lunches, kiss their often enormous children on their often featureless faces, and go to church so they can pray for the 2 Broke Girls.
Or his New Hampshire call:
This state is practically built on the idea of filming police officers and pestering them about maritime law, so they see the harassment Sanders’ campaign is built on as inherently patriotic.
The success of two amateurs writing a fictional pundit who relies on “gut” and bogus “racial science” highlights the collapse of the expert prognosticators this year. For months, professional data journalists at FiveThirtyEight, Vox, the New York Times, et al. proclaimed that Trump and Sanders had no chance. They checked the numbers and reassured readers that the Trump bubble would pop, that Sanders would win two states and then go home.
In particular, Silver, who correctly predicted the results of the 2008 and 2012 presidential elections, enjoys the imprimatur of scientific authority. His site is the granddaddy of a strain of data journalism - er, “empirical journalism” - that insists that hard mathematical analysis is more objective and more accurate than old-school pundits with their capricious instincts. “DataLab,” the name of FiveThirtyEight’s blog, conjures up wonks in white lab coats chalking authoritative equations.
Critics of Silver’s prognostications are routinely dismissed as partisans who are just angry about a reality that disfavors their candidate (e.g., the “unskew the polls” Republicans in 2012). But are Silver’s models truly scientific? Do they deserve any more credence than Carl Diggler’s gut instinct?
Good science is falsifiable. Silver’s horserace predictions are not. When he says that Clinton has a 95 percent chance of winning the California primary if it were held today, you can’t prove or disprove him (because the primary won’t be held today, and even if it were, it would be held only once, not 100 times so you could see whether she lost five of them). It’s an untestable assertion of who’s ahead and who’s behind that relies on the model’s past outcomes to be credible.
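The arithmetic behind that complaint can be sketched in a few lines of code (a hypothetical illustration, not anything FiveThirtyEight publishes): a single “95 percent” forecast is consistent with either outcome, so one race can never falsify it, but a long run of such forecasts can be scored for calibration - the “95 percent” calls of a trustworthy forecaster should come true about 95 percent of the time.

```python
import random

random.seed(0)

def calibration(forecasts):
    """Share of events that actually happened, among forecasts
    all issued at the same stated probability."""
    hits = sum(outcome for _, outcome in forecasts)
    return hits / len(forecasts)

# One forecast proves nothing: whether the event happens (1) or
# not (0), a "95 percent chance" claim survives either way.
single = [(0.95, 0)]  # the 5 percent outcome occurred - so what?

# Many forecasts can be checked. Here we simulate 1,000 races where
# the 95 percent favorite really does win 95 percent of the time.
many = [(0.95, 1 if random.random() < 0.95 else 0) for _ in range(1000)]

print(round(calibration(many), 2))  # close to 0.95 for a calibrated forecaster
```

The catch, and the article’s point, is that this test needs many comparable predictions made in advance; a model that hedges across several versions of itself never accumulates a clean scorable record.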
Yet Silver’s Election Day models have not been vindicated by actual results. FiveThirtyEight’s homebrew “Polls-Plus” model, which weights several factors based on a secret formula, has been worse at predicting outcomes than a weighted average of the most recent polls. It’s hard to pin down Silver’s actual success rate when these two models compete with his third demographic model, which in turn might contradict his blog posts and podcasts. Benchmark Politics, an upstart competitor that claims its record is better than FiveThirtyEight’s, likens Silver’s hedging to “telling a basketball team they can shoot four free throws instead of two.”
Even when all of Silver’s models for a given race turn up wrong, it never seems to be FiveThirtyEight’s fault. When the site badly whiffed on last year’s British election, it was the pollsters who erred. Their mea culpa after Michigan’s Democratic primary, which Sanders won by 1.5 percentage points even though Silver’s model gave Clinton a greater than 99 percent chance of winning, was titled “Why the Polls Missed Bernie Sanders’s Michigan Upset.” After the Indiana primary, Diggler’s tongue-in-cheek victory lap was met with scoffs from Silver fans who explained that Silver gave Sanders a 10 percent chance of winning, and that things with a 10 percent chance of happening do happen from time to time.
But what’s the point of a prediction if you can’t stand by it, and if it doesn’t, well, predict? This ridiculous backpedaling has a glib, Brechtian tone to it: “The models are right. It’s the voters who are wrong.”
Despite the pretense of scientific detachment, Silver’s models are hardly unbiased. The moment you decide to weight some data sets over others, you’ve introduced bias. Silver’s failed Polls-Plus model incorporated indicators that had virtually no predictive value this year, like endorsements and fundraising totals. Why? Because of Silver’s dogmatic adherence to “The Party Decides,” a thesis in political science that says nominees are chosen by party establishment elites. That theory is currently buried under a pile of Trump signs. Silver’s sanctimonious claims that Trump could never be the nominee - complete with FiveThirtyEight’s invention of an “Endorsement Primary” - were a pretty clear case of using Diggler-style gut, not science, to guide your predictions.
If it seems I’m being too hard on Silver now, that’s because I am. But we should all feel bamboozled. If the quants had not ignored Trump’s soaring popularity all last year, perhaps the GOP establishment would not have sat on their hands as he waltzed to the nomination. And if the same pundits had not been writing Sanders’s obituary before any votes were cast, perhaps that race would be even closer. Maybe a more subjective form of analysis, such as going out and listening to voters, would have understood their passions better than the data journalists’ models.
I’m not an obscurantist. Voters ought to have some idea of the relative standings of candidates, expressed through polls or fundraising or endorsements or whatever combination of those things and more, to help them make informed decisions. But we should do away with the fantasy that the model that worked last election applies to this one. We should treat any given prediction not as objective science but as one person’s subjective guess about which variables could be more predictive than others. Forecasters should show their work, and pundits who are consistently proven wrong should be ignored in the future. In short, we should be skeptical of anyone who says they can predict the future.
Except for Carl Diggler, of course. Carl is always right.