Polling is kind of like throwing darts, only... it's totally different.
by Tony Petrangelo
Dec 5, 2012, 7:00 AM

Pollster performance in Minnesota in 2012

Last week I discussed 2012 election polling in Minnesota on a macro level: how all the polls did in the four statewide races. In that post I did not get into how specific pollsters fared against each other; that’s what’s happening in this post.

Some individual pollsters did a really good job in Minnesota this year. Some pollsters did a really bad job in Minnesota this year. This is pretty typical for Minnesota, the some-good-polling, some-bad-polling thing. That’s not the point of this post, though. The point of this post is to quantify the goodness or badness of the pollsters relative to the final results and to each other.

What follows are a bunch of tables with a bunch of numbers in those tables. You have been warned.

The first table shows the average error of each pollster in the Presidential race in Minnesota (as with the tables in the previous post, “Total” is the average error of all the polls the pollster released, “Last 4” covers polls from the last four weeks, “Last 2” covers polls from the last two weeks, and “n” is the total number of polls by that pollster):

Pollster Total Last 4 Last 2 n
PPP 2.38 1.30 0.80 5
SurveyUSA 2.06 2.10 2.00 5
Mason-Dixon 6.20 4.70 4.70 2
Rasmussen 2.70 2.70 2.70 1
YouGov 0.43 0.50 0.70 3
St. Cloud St. 0.30 0.30 0.30 1
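
To make the bookkeeping concrete, here is a minimal sketch of how those windowed averages could be computed. It assumes a poll’s error is the absolute difference between its margin and the final margin, which is just shorthand for the error metric defined in last week’s post, and the polls, dates and margins below are hypothetical.

```python
from datetime import date

# Assumption: a poll's "error" is the absolute difference between its margin
# and the final margin; the exact metric is the one from last week's post.
ELECTION_DAY = date(2012, 11, 6)

def average_error(polls, final_margin, window_days=None):
    """Average error of a pollster's polls, optionally limited to polls
    conducted within window_days of the election."""
    errors = [
        abs(p["margin"] - final_margin)
        for p in polls
        if window_days is None or (ELECTION_DAY - p["date"]).days <= window_days
    ]
    return sum(errors) / len(errors) if errors else None

# Hypothetical example: two polls of the Presidential race, scored against
# Obama's final Minnesota margin of roughly +7.7 points.
polls = [
    {"date": date(2012, 10, 8), "margin": 10.0},
    {"date": date(2012, 11, 4), "margin": 8.0},
]
print(average_error(polls, 7.7))                  # "Total"  -> ~1.3
print(average_error(polls, 7.7, window_days=28))  # "Last 4" -> ~0.3
print(average_error(polls, 7.7, window_days=14))  # "Last 2" -> ~0.3
```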

St. Cloud St. fares the best in the Presidential race. YouGov also did really well, and they issued three polls to St. Cloud St.’s one, although they used the same group of respondents for all three. Mason-Dixon didn’t do so well.

I don’t like that table though. I want the error scores to be on a different, more readable scale, such as the one used in baseball stats like OPS+ and wRC+.

It’s the “+” part of the stat that indicates the stat is on a relative scale. On this scale, 100 is average, 125 is 25% better than average, and 75 is 25% worse than average. This scale renders the above results into an easier-to-read table.

Observe:

Pollster Total Last 4 Last 2
PPP 105 124 153
SurveyUSA 118 78 84
Mason-Dixon -47 -74 -74
Rasmussen 92 43 43
YouGov 183 171 159
St. Cloud St. 188 183 183
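
The post doesn’t spell out the arithmetic behind Error+, but one construction consistent with the “25% better / 25% worse” description, and one that roughly reproduces the Presidential “Total” column if the baseline is about 2.5 points of error, is sketched below. Treat the formula and the 2.5-point baseline as assumptions for the sketch, not the exact method behind the tables.

```python
def error_plus(pollster_error, baseline_error):
    """Put an error score on a 100-is-average scale where lower error is
    better: 25% less error than the baseline scores 125, 25% more scores 75,
    and more than double the baseline goes negative."""
    return round(200 - 100 * pollster_error / baseline_error)

# Rough check against the Presidential "Total" column, assuming a baseline
# (average error of all Presidential polls) of about 2.5 points.
baseline = 2.5
for name, err in [("PPP", 2.38), ("Mason-Dixon", 6.20), ("St. Cloud St.", 0.30)]:
    print(name, error_plus(err, baseline))  # ~105, ~-48, ~188
```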

Looking at the Error+ of all the polls conducted on the Minnesota Presidential race, you can see that PPP and Rasmussen were right around average, YouGov and St. Cloud St. were well above average, and Mason-Dixon was just terrible.

There were two observations I made in my post last week that will help in looking at these numbers and the ones to follow. The first observation is that the polls in all of the races got more accurate the closer to the election they were conducted, so the polls from just the “Last 2” weeks were generally the most accurate.

Looking at just the “Last 2” weeks numbers from the above table, rather than all the polls, St. Cloud St. had the best results, with YouGov and PPP also performing well. SurveyUSA was a bit below average, Rasmussen was bad, and Mason-Dixon was downright dreadful.

The second observation is that the results in the Presidential contest had the least amount of error of any of the races. In fact, the average error of all the polls conducted on the Presidential race was less than the average error of any other group of polls. So it’s perhaps more informative to look at how the individual pollsters fared in the other races.

Here are the results from the US Senate race between Amy Klobuchar and Kurt Bills:

Pollster Total Last 4 Last 2
PPP 96 110 131
SurveyUSA 115 133 138
Mason-Dixon 125 89 78
Rasmussen 75 43 28
YouGov 64 77 101
St. Cloud St. 118 97 87

SurveyUSA leads the pack here, with PPP again posting results well above average. YouGov was average, St. Cloud St. and Mason-Dixon below average and Rasmussen, again, was bad.

Here are the results from the polling on the Marriage amendment:

Marriage Total Last 4 Last 2
PPP 139 141 109
SurveyUSA 55 84 103
Mason-Dixon 116 65 76
St. Cloud St. 144 100 109

There wasn’t much daylight between the pollsters on this one; PPP and St. Cloud St. led the way, but they are both just barely above average. SurveyUSA also managed to get good results at the end, although if you look at the entirety of the pollster’s work, they consistently had the most pessimistic polls for opponents of the amendment.

Of the four pollsters to poll all four of the statewide races, Mason-Dixon is the only one that didn’t have an Error+ of over 100 on any of the races in their last poll.

In fairness to Mason-Dixon and St. Cloud St. on this one, their polls were released before the bottom fell out on the Photo Voter ID amendment. SurveyUSA polling released in the same time frame showed results similar to the Mason-Dixon and St. Cloud St. results.

This is the one race where I think an argument could be made to not hold it against the pollsters who got it wrong. A poll is only a snapshot in time, and it’s likely that when Mason-Dixon and St. Cloud St. were in the field, Photo Voter ID was still passing.

But here are the results from the polling on the Photo Voter ID amendment anyway:

Voter ID Total Last 4 Last 2
PPP 118 148 193
SurveyUSA 78 94 101
Mason-Dixon 123 75 68
St. Cloud St. 94 48 38

PPP and SurveyUSA were the last to release polls, and PPP’s final poll specifically was the only poll released by anyone that came anywhere close to the final results.

The following table is an average of the numbers from the above four tables.

Pollster Total Last 4 Last 2
PPP 114 131 146
SurveyUSA 92 97 106
Mason-Dixon 79 39 37
St. Cloud St. 136 107 104
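
As a quick sketch, the “Total” column of that combined table can be reproduced by taking the simple mean of each pollster’s race-level “Total” scores from the four tables above:

```python
# Simple mean of each pollster's "Total" Error+ across the four races
# (Presidential, US Senate, Marriage amendment, Photo Voter ID amendment).
totals = {
    "PPP":           [105,  96, 139, 118],
    "SurveyUSA":     [118, 115,  55,  78],
    "Mason-Dixon":   [-47, 125, 116, 123],
    "St. Cloud St.": [188, 118, 144,  94],
}
for pollster, scores in totals.items():
    print(pollster, round(sum(scores) / len(scores)))  # 114, 92, 79, 136
```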

Overall PPP comes out the best, with SurveyUSA and St. Cloud St. coming in slightly above average. Mason-Dixon was awful.

If you want to know who did the best when the results on the Photo Voter ID amendment are ignored, you can consult the following table, which is the same as the above table, only with the results from the Photo Voter ID amendment excluded.

Pollster Total Last 4 Last 2
PPP 113 125 131
SurveyUSA 96 98 108
Mason-Dixon 65 27 27
St. Cloud St. 150 126 126

St. Cloud St. fares better when you don’t look at the Photo Voter ID results and PPP fares worse, but it’s not enough of a swing for PPP to get dislodged from the top spot.

As they did in 2010, PPP and SurveyUSA turned in good results. Unlike in 2010, St. Cloud St. did well and Mason-Dixon was terrible.
