The 2022 midterms, polls and forecasting models
As the midterms approach (the elections, not the exams; relax, students), it is time for some posts on what the hell will happen. I do not know how many such posts there will be, but I want to begin by thinking back to that most traumatic moment in the history of mathematical political science: 2016. So, yeah. 2016 happened. My commentaries on 2016 were on The Unmutual Political Blog, which is no more, but here is the short version, along with what I said in the aftermath. The polls got 2016 wrong, but unbeknownst to most, the political science forecasting models actually got it right. Yes, that's right, the forecasting models were fine. But, and as we say, everything before the "but" is irrelevant... here's how it happened.
Consider the model I reference most frequently for presidential elections: "Time for a Change." Abramowitz gets his predictive power from three variables: GDP growth in the second quarter of the election year, the president's approval rating (regardless of whether the incumbent president is running), and a penalty that kicks in once the incumbent party has won two previous elections. The model predicted a GOP victory in 2016 because while the economy was decent, it was not good enough to overcome the two-term penalty the Dems faced. That was merely one example, but generally speaking, the models (which were published in the October issue of PS: Political Science & Politics) leaned towards a generic Republican victory. Yet Abramowitz himself said that Trump was too far from "generic." He was too noxious, too obviously unqualified, and this would be the year his model failed. So said most political scientists when the polls went against the forecasting models, which was, shall we say, unusual.
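To make the structure concrete, here is a minimal sketch of a "Time for a Change"-style forecast. The three-variable linear structure follows the description above; the function name, intercept, and coefficients are illustrative placeholders, not Abramowitz's actual published estimates.

```python
# Sketch of a three-variable incumbent-party forecast, in the style
# described above. All numeric weights are HYPOTHETICAL, chosen only
# to illustrate the mechanics, not Abramowitz's actual estimates.

def forecast_incumbent_vote(q2_gdp_growth: float,
                            net_approval: float,
                            two_term_penalty: bool) -> float:
    """Predicted incumbent-party share of the two-party vote (%).

    q2_gdp_growth    -- annualized Q2 GDP growth in the election year
    net_approval     -- president's net approval (approve - disapprove)
    two_term_penalty -- True once the incumbent party has won the two
                        previous presidential elections
    """
    intercept = 51.0                           # hypothetical baseline
    return (intercept
            + 0.6 * q2_gdp_growth              # hypothetical weight
            + 0.1 * net_approval               # hypothetical weight
            - 4.0 * int(two_term_penalty))     # hypothetical penalty

# A 2016-like scenario: decent but unspectacular growth, middling
# approval, and the two-term penalty working against the Democrats.
print(forecast_incumbent_vote(1.4, 3.0, True))
```

The point of the sketch is the shape of the argument: a decent economy adds a little, but the two-term penalty subtracts more, so the incumbent party lands under 50 percent of the two-party vote.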
As it turned out, the forecasting models were right, and the polls were wrong. Polling is hard. Political science... well, political science is hard too, but some of it replicates. Not much, but the forecasting models replicate. The touchy-feely crap and armchair revolutionary bullshit that the ideologues, activists and mathematically inept people do? That doesn't replicate, and it is useless, but the forecasting models keep getting replicated. They got replicated again in 2016 (and 2020!). After the 2016 debacle, when I, too, deferred to the polls and rejected the forecasting models, I scraped a bunch of egg off my face, suffered through a bout of indigestion because as it turns out, crow does not agree with me (still doesn't, after a steady diet of it year after year), and made the following vow. No, not a vow of silence, as peaceful as that might be. I would defer to the forecasting models. I sinned against the one, true math. I repented my sins, came back, said my Hail Alans, and decided that I'd do the math the right way.
So here we are. Pretty soon, PS: Political Science & Politics will publish its October issue, which will contain the forecasting models for the 2022 midterm elections. The articles went into submission months ago, based mostly on data from roughly six months before the election, because that's generally how we do things. Why? The whole endeavor is based on the notion that a six-month-out prediction is political science, but a day-before prediction is journalistic, talking-head blather.
What will the models say? I have not been given a sneak preview, and I was not a reviewer (although I couldn't tell you if I were, so... whatever), but I know the models. The basic mechanics of the models are as follows. Midterm elections go against the president's party, with the effect mediated by the state of the economy, presidential approval, and so forth, depending on who constructs the model. There are a lot of models, and I've tried my hand at some improvements along with a colleague who found some perplexing patterns, but at the end of the day, or cycle, or whatever, the models are going to say that the Democrats lose seats in the House. How many? Models will vary.
But Biden's approval rating just shot up! Do I take that into account? Another week, another press interview in which I am asked whether Dobbs v. Jackson Women's Health changes things. After all, did you see those numbers for women's voter registration? Trump continues to hold the center of the national conversation, because of course he does. Any day the country discusses Trump is a day the country is not discussing inflation and the economy. Does that alter the dynamic that would otherwise put the forecasting models into effect? Does it at least blunt the effect of the economy? And what about Trump's growing legal troubles, now far more serious than anything he has faced before? Those can cut both ways, of course, helping to motivate Republicans, but also motivating Democrats, keeping independents aware of the GOP's attachment to Trump, and so forth.
Shall I keep going?
Notice that the previous paragraph devolves into, "but Trump!"
And so I return to the forecasting models. There is a lot of political science which does not replicate. See: ideologues, armchair revolutionaries and other associated bullshit artists [as he looks around himself...]. What does replicate? The forecasting models, just for example. Where I went wrong in 2016 was letting the noise of 2016 chaos obscure that fact.
There is noise. There will always be noise. Some of that noise will come in the form of supposedly learned individuals blathering endlessly about their latest scholarly books and articles, which will be read by few beyond their peer reviewers, and that's assuming the peer reviewers read them at all. At the end of the day, one thing matters, and one thing only. Replication. Are there unique factors going on in 2022?
Well, the thing is, "noise" has a statistical definition: random error. Random error cancels out in the aggregate, and what is left is the underlying pattern. Which replicates.
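That cancellation is easy to demonstrate. A minimal sketch, with made-up numbers: treat each "poll" as the true value plus zero-mean random error, and watch the average of many of them land right back on the signal.

```python
# Demonstration that zero-mean random error cancels in the aggregate.
# The "true value" and error spread here are illustrative, not real
# polling data.
import random

random.seed(0)

true_value = 48.0                    # the underlying signal
noise_sd = 3.0                       # spread of the random error

# 10,000 noisy "polls" of the same underlying quantity.
polls = [true_value + random.gauss(0, noise_sd) for _ in range(10_000)]

estimate = sum(polls) / len(polls)   # the noise averages away
print(round(estimate, 1))
```

Any single draw can miss by several points, but the mean of many draws sits within a few hundredths of the true value. That is the statistical sense in which the pattern, not the noise, is what survives aggregation.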
Ignore the noise. Find the pattern, through what replicates, because everything else is bullshit.
GoGo Penguin, "Signal In The Noise," from their self-titled album.