So what to do for 2020?

This is the sixth and LAST in a series of blog posts for folks who are into the geeky mathematical details of how Election Graphs state polling averages have compared to the actual election results from 2008, 2012, and 2016. If this isn’t you, feel free to skip this series. Or feel free to skim forward and just look at the graphs if you don’t want or need my explanations.

If you just want 2020 analysis, stay tuned, that will be coming soon.

You can find the earlier posts here:

The Electoral College Trend Chart

In the last few posts, I spent a lot of time looking at various ways of determining what counts as a "close state". This is because in the past Election Graphs has defined three classifications:

  • "Weak": Margin < 5% – States that really are too close to call. A significant polling error or rapid last minute movement before election day could flip the leader easily.
  • "Strong": 5% < Margin < 10% – States where one candidate has a substantial lead, but where a big event could still move the state to "Weak" and put it into play.
  • "Solid": Margin > 10% – States where one candidate's lead is substantial enough that nobody should take seriously the idea of the leader not actually winning.

The "main" chart on Election Graphs has been the Electoral College Trend Chart. The final version on Election Day 2016 looked like this:

The "band" representing the range of possibilities goes from all the Weak states being won by the Democrat, to all the Weak states being won by the Republican.

One of the reasons for all the analysis in this series is of course that this method yielded a "best case" for Trump of a 66 EV margin over Clinton. But the actual earned margin (not counting faithless electors) was 74 EV.

So the nagging question was whether these bounds were too narrow. Would some sort of more rigorous analysis (as opposed to just choosing a round number like 5%) lead to a really obvious "oh yeah, you should use 6.7% as your boundary instead of 5%" realization, or something like that?

After digging in and looking at this, the answer seems to be no.

As I said in several venues in the week prior to the 2016 election, a Trump win, while not the expected or most likely result given the polling, should not have been surprising. It was a close race. Trump had a clear path to victory.

But the fact he won by 74 EV (77 after faithless electors) actually was OK to be surprised about.

Specifically, the fact that he won in Wisconsin, where the Election Graphs poll average had Clinton up by 7.06%, is an outlier based on looking at all the poll average vs actual result deltas from the last three cycles. It is the only state in 2016 where the result was actually surprising. Without Wisconsin, Trump would have won by 54 EV, which was within the "band".

Advantages of simplicity

So after all of that, and this will be very anti-climactic, I've decided to keep the 5% and 10% boundaries that I've used for 2008, 2012, and 2016.

Several of the ways of defining close states that I looked at in this series are actually quite tempting. I could just use the 1σ boundaries of one of the methods to replace my 5% boundary between "weak" and "strong" states, and the 2σ numbers to replace the 10% boundary between "strong" and "solid" states.

I could even use one of the asymmetrical methods that reflect that things may be different on the two sides.

But frankly, I keep coming back to the premise of Election Graphs being that something really simple can do just as well as fancy modeling.

From the 2016 post mortem, here is a list of where a bunch of the election tracking sites ended up:


  • Clinton 323 Trump 215 (108 EV Clinton margin) – Daily Kos
  • Clinton 323 Trump 215 (108 EV Clinton margin) – Huffington Post
  • Clinton 323 Trump 215 (108 EV Clinton margin) – Roth
  • Clinton 323 Trump 215 (108 EV Clinton margin) – PollyVote
  • Clinton 322 Trump 216 (106 EV Clinton margin) – New York Times
  • Clinton 322 Trump 216 (106 EV Clinton margin) – Sabato
  • Clinton 307 Trump 231 (76 EV Clinton margin) – Princeton Election Consortium
  • Clinton 306 Trump 232 (74 EV Clinton margin) – Election Betting Odds
  • Clinton 302 Trump 235 (67 EV Clinton margin) – FiveThirtyEight
  • Clinton 276 Trump 262 (14 EV Clinton margin) – HorsesAss
  • Clinton 273 Trump 265 (8 EV Clinton margin) – Election Graphs
  • Clinton 272 Trump 266 (6 EV Clinton margin) – Real Clear Politics
  • Clinton 232 Trump 306 (74 EV Trump margin) – Actual result


The only site (that I am aware of) that came closer to the actual result than I did was RCP… who, like me, just used a simple average, not a fancy model.

This says something about sticking with something simple.

Or maybe I was just lucky.

To be fair, there was a lot of movement just in the last day of poll updates. Before that, I had a 108 EV margin for Clinton as my expected case and would have been one of the worst sites instead of one of the best in terms of final predicted margin. Noticing that last-minute Trump surge in the final polls in some critical states was important, and the fact that Election Graphs uses a "last 5 polls" methodology meant our numbers were able to pick up that change quickly.

But even aside from how close we got, a regular person who doesn't follow these things that closely could come to Election Graphs and just say "oh, close states are under 5%, they could go either way". More complex models have their places, but it hasn't been Election Graphs' niche. One of the main points of this site was always doing something relatively simple, and still getting decent results.

So. I'm sticking to 5% and 10%. Even though they are just nice round numbers, without a mathematical justification.

Because they are nice round numbers that are still reasonable for these purposes, and not too far out from numbers you COULD pick with some sort of mathematical hand waving if you wanted to.

So. Less than 5% is a weak state, between 5% and 10% is a strong state, and over 10% is solid.

Just like before.

What about the tipping point?

OK, with everything I have said about nice round boundaries, and keeping it simple, I think I will actually allow myself to tighten the limits of what I show as "close" on the chart of the tipping point. Maybe 5% is too close to call at the state level, but if the tipping point is at 5%, that is a more substantial lead.

Having said that, 2016 did see 6% swings in the tipping point within two-week periods. It can move quite a bit, quite quickly. So of course, just watch: 2020 will see someone with a 6% lead in the tipping point on election day proceed to lose the race. But for now, I feel OK tightening these bounds.

I'll be using the 2.36% and 3.45% levels described in the last post to really emphasize that if you are in that zone, you have a super close race. Regardless of what the electoral college center line is, or the "best case" scenarios for the two candidates, if we see a 1% tipping point margin again, it would be crazy not to emphasize that you are looking at a race that is too close to call.

[Note added 2019-03-01: Once I started actually building out the 2020 site, I tried changing the limits for the tipping point as described above, but with everything else left at 5% and 10%, it looked out of place, so I actually left them at 5% and 10% as well. So alas, all this analysis of other ways to define limits that were not nice round numbers ended up with me just using the nice round numbers from before.]

What about that Monte Carlo thing?

Well, once again ignoring everything I said above about simplicity, I've never quite liked the fact that the "band" is generated by swinging ALL the close states back and forth, which is actually not very likely. The fact that a bunch of states are close and could go either way, does not imply that it would be easily possible for them to ALL flip the same direction at the same time. (Although yes, if polling assumptions are all wrong the same way, all the polling may be off in the same direction.)

Election Graphs shows that whole range of possibility, with no way of showing that some outcomes within the range are more likely than others, or that some outcomes outside the range are still possible, just less likely. It would be nice to add some nuance to that.

And I'll be honest, I've been slowly introducing more complexity over the last three cycles, and I kind of enjoy it. For instance, the logic for how to determine which polls to include in the "5 poll average" that I used in 2016 has a lot more going on than what I did in 2008 or 2012. And for that matter, in 2016 everything was generated automatically from the raw poll data, while in previous cycles I did everything by hand. Progress!

So… while I am going to keep the main display using the 5% and 10% boundaries, I am actually kind of excited to now have a structured way to also do a Monte Carlo style model…

I would use the data from the Polling Error vs Final Margin post to do some simulations and show win odds and electoral college probability distributions as they change over time as well as the current numbers. I have a vision in my head for how I would want it all to look.

But that would be an alternative view, not the main one… if I actually have time.

The plan

I had originally intended to have the 2020 site up by the day after the 2018 midterms. Then I'd hoped to be done by the end of November. Then December. Then January. But life and other priorities kept getting in the way.

I'd also intended to launch with a variety of changes and refinements over the 2016 site, including perhaps changing the 5% and 10% bounds, but also other things. Some changes to how some of the charts look. Additional changes to how the average itself was calculated. A completely different alternative view to switch to if a third party was actually strong enough to win electoral votes. Or the Monte Carlo view. Or making the site mobile friendly. Or a bunch of other things.

But frankly, I've just run out of time. I now know of seven state level general election matchup polls for 2020 that are already out, and there are probably more I have missed. And the pace is increasing rapidly now that candidates are announcing. So there are already results I could be showing.

(Yes, I am quite aware that general election matchup polls this far out are not predictive of the actual election at all, but they still tell you something about where things are NOW.)

So at this point my priority is to just get the site up and running as fast as possible, which means making all the logic and visuals an exact clone of 2016, just with 2020 data. At least to start with.

After that, I'll start layering in changes or additions if and when I have time to do so. I still hope to be able to do a variety of things, but that depends on many factors, so I'm not making any promises at this point. I'll do what I can.

So that's the plan.

Conclusion

I have been dragging my feet working off and on (mostly off) on collecting the data, making the graphs, and writing my little commentary on this series of posts for literally more than six months. Maybe more than nine months. I forget exactly when I started.

If there are any of you who have actually read all of this to the end, thank you. I don't expect there are many of you, if any. That's just the way it goes.

But I felt like I needed to get all this done and out before starting to set up the 2020 site. I wanted to see what the results of looking at this old data would show, and I wanted to share it. Maybe I didn't really need to and it was just an excuse to procrastinate on doing the actual site.

But I have no more excuses left. Time to start getting the 2020 site ready to go… I'll hopefully have the basics up very soon.

Stay tuned!

You can find all the posts in this series here:

Criticism and Tipping Points

This is the fifth in a series of blog posts for folks who are into the geeky mathematical details of how Election Graphs state polling averages have compared to the actual election results from 2008, 2012, and 2016. If this isn’t you, feel free to skip this series. Or feel free to skim forward and just look at the graphs if you don’t want or need my explanations.

If you just want 2020 analysis, stay tuned, that will be coming soon.

You can find the earlier posts here:

Criticism

So, after the Predicting 2016 by Cheating post went up, Patrick Ruffini decided to quote tweet it, after which Nate Silver replied saying "whoever did that is incompetent".

That was exciting.

In any case, despite being incompetent, I will soldier on.

A reminder here though that I am indeed an amateur doing this sort of thing for fun in my spare time. I am not a professional statistician, data scientist, or even pundit. (Although, like everybody else on the planet, I do have a podcast.)

This is not my day job. I make no money off this. I never expect to make any money off this. I just enjoy doing it. I am always happy to take constructive criticism. I've changed things on the site based on reader feedback before, and undoubtedly will again.

Also though, in this series of blog posts specifically, I have been exploring different ideas and ways of looking at the 2008-2016 data. The Monte Carlo simulation in the last post was NEVER a valid prediction for 2016, because it used the actual results of 2016 in the model. Which I said repeatedly in that post. It was just a proof of concept that using that data in that way would provide something reasonable looking.

I'm not sure if Nate actually read the posts describing how I was modeling things and all the caveats about how running that simulation was cheating since I was using 2016 data to predict 2016. Maybe he did. Maybe he didn't.

He is right of course that the Monte Carlo graph he was reacting to does give a much narrower distribution than his model did. The Polling Error vs Final Margin post shows how I got the probabilities that led it to be that narrow. The distribution is actually narrower than I expected coming in. But that particular way of looking at the data leads there. It may or may not be a good way of looking at things. I am experimenting.

Having said that, the results gave Trump win odds near what FiveThirtyEight had, but with the median being further toward Trump than their model, and with a narrower distribution. Looking at some other folks who showed distributions for 2016 on their sites (and still have them easily findable today in 2019), it looks like this distribution would not have been out of place. It didn't match any of them of course, since the methodology is different from all of them. But it isn't wildly out of line.

Running this on 2016 data is bogus of course, as I explained in the last post, and again a few paragraphs ago. But the results are interesting enough that using the data from the analysis in the Polling Error vs Final Margin post to do some Monte Carlo simulations for 2020 would at least be fun to look at.

OK, enough of that unintended detour. Now back to the originally intended topic for this post…

Tipping Points

All of the previous posts have been looking exclusively at the state poll averages as they compared to the actual election results in 2008 through 2016. But for the last couple of cycles, Election Graphs has also looked at the "tipping point". I borrowed the idea from the "meta-margin" Sam Wang at Princeton Election Consortium uses. Basically, it is the margin in the state that would put the winning candidate over the edge if you sorted the states by margin.

The tipping point essentially gives a measure of the overall margin in the national race, similar to a popular vote margin, but modified to account for the structure of the electoral college. It is a nice way of looking at who is ahead and who is behind in a way that isn't (quite) as volatile as looking directly at the center line of the electoral college estimates.
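
As a concrete illustration, here is a minimal sketch of one way to compute a tipping point from a set of poll-average margins. The function name and the toy numbers are mine, not from the actual Election Graphs code, which may handle details like ties and EV totals differently.

```python
def tipping_point(states, needed=270):
    # states: list of (name, electoral_votes, margin), where margin is
    # positive if the Democrat leads the poll average, negative if the
    # Republican does.
    dem_ev = sum(ev for _, ev, margin in states if margin > 0)
    sign = 1 if dem_ev >= needed else -1  # which side currently "wins"
    # Sort from the winner's strongest state to their weakest.
    ordered = sorted(states, key=lambda s: sign * s[2], reverse=True)
    total = 0
    for name, ev, margin in ordered:
        total += ev
        if total >= needed:
            return name, margin  # the state that puts the winner over the top

# Toy example: 10 EV total, 6 needed to win.
demo = [("Alpha", 4, 9.0), ("Beta", 3, 2.0), ("Gamma", 2, -1.0), ("Delta", 1, -8.0)]
print(tipping_point(demo, needed=6))  # -> ('Beta', 2.0), a D+2.0% tipping point
```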

So how did the final Election Graphs tipping point numbers based on our state poll averages do compared to the actual tipping point as measured by the final vote?

For this, since there is only one tipping point per election, we unfortunately only have three data points:

In 2016, I used the same 5% boundary to determine what was "close" for the tipping point as I did for state poll averages. Once again just a round number, with nothing specific behind it other than a gut feel that less than 5% seemed close.

We only have three data points, but even with just that, we can produce a very VERY rough estimate of the 1σ and 2σ levels. Basically, for 1σ, you use the 2 closest of the 3 data points, and for 2σ you use all 3. This is ballpark only (at best) due to the low number of data points, but it gives an idea.
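
In code terms, that back-of-the-envelope estimate amounts to something like the sketch below. The three error values here are placeholders, not the actual tipping point errors; the 2.36% and 3.45% figures quoted next came from the real three numbers.

```python
# Placeholder tipping point errors (poll average vs actual), NOT the real
# three values from 2008/2012/2016.
errors = sorted(abs(e) for e in [1.2, -2.4, 3.5])
one_sigma = errors[1]  # a bound covering 2 of the 3 points (~68%)
two_sigma = errors[2]  # a bound covering all 3 points (standing in for ~95%)
print(f"1σ ≈ {one_sigma:.2f}%, 2σ ≈ {two_sigma:.2f}%")
```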

So to be 68.27% sure the current leader will actually win, you want a tipping point margin greater than 2.36%.

For 95.45% confidence, you want a tipping point margin lead of more than 3.45%.

OK, OK, that is kind of pathetic. I know. But there is only so much you can do with only three data points.

Anyway…

Clinton's final tipping point margin in 2016 was only 1.59% in Pennsylvania. Even assuming you only knew the 2008 and 2012 results, it should have been clear that a 1.59% tipping point represented an incredibly close race, far closer than either 2008 or 2012, and well within the realm where it could have gone either way.

The 5% boundary Election Graphs used in 2016 also indicated a close race of course, but narrowing that boundary based on the results of the last three elections seems like it would give a better impression of how close things need to be before we should consider that things really do look like a toss-up where anything could reasonably happen.

So, what, if anything, will Election Graphs actually do differently for the 2020 cycle compared to 2016?

I'll talk about that in the next post…

You can find all the posts in this series here:

Predicting 2016 by Cheating


This is the fourth in a series of blog posts for folks who are into the geeky mathematical details of how Election Graphs state polling averages have compared to the actual election results from 2008, 2012, and 2016. If this isn’t you, feel free to skip this series. Or feel free to skim forward and just look at the graphs if you don’t want or need my explanations.

You can find the earlier posts here:

The 2016 states we got wrong

In the last post I used the historical deltas between the final Election Graphs polling averages and the actual results in 2008-2016 to construct a model that, given a value for a poll average, would produce an average and standard deviation for what we could expect the actual election result to be. So what can we do with that?

I don't have another election year with data handy to test this model on. No 2020, no 2004, no 2000, no earlier cycles either. So I'm going to look at 2016, even though I shouldn't.

Just as an example, let's look at what odds this model would have given to the states Election Graphs got wrong in 2016… This technically isn't something you should do, since we are using a model on data that was used to construct the model, which isn't cool, but this is just to get a rough idea, so…

         Final Avg   Dem Win%   Rep Win%   Actual
WI       D+7.06%     98.76%     1.24%      R+0.77%
MI       D+2.64%     70.59%     29.41%     R+0.22%
ME-CD2   D+2.04%     67.92%     32.08%     R+10.54%
PA       D+1.59%     66.27%     33.73%     R+0.71%
NV       R+0.02%     45.85%     54.15%     D+2.42%

The only one that is really surprising is Wisconsin, just as it was on Election night in 2016. Every other state was clearly a close race, where nobody should have been shocked about it going either way.

Wisconsin though? It was OK to be surprised on that one.

OK, and maybe the margin in ME-CD2, but not that Trump won it.

Doing some Monte Carlo

Let's go a bit farther than this though. One thing Election Graphs has never done is calculate odds. The site has provided a range of likely electoral college results, but never a "Candidate has X% chance of winning". But with the model we developed in the last post, we now have a way to generate the chance each candidate has of winning a state based on the margin in the poll average, and with that, you can run a Monte Carlo simulation on the 50 states, DC, and five congressional districts.
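
To make the idea concrete, here is a stripped-down sketch of this kind of simulation. The contest list and win probabilities are invented placeholders; a real run would cover all 56 contests, with probabilities taken from the win-chance curve developed in the last post.

```python
import random

TRIALS = 10_000
NEEDED = 270

# (electoral votes, Democratic win probability) for a few "close" contests.
# These numbers are made up for illustration only.
close_contests = [(10, 0.99), (16, 0.71), (20, 0.66), (6, 0.46), (11, 0.30)]
DEM_SAFE_EV = 240  # pretend the remaining contests are certain: 240 D, 235 R

dem_wins = 0
for _ in range(TRIALS):
    dem_ev = DEM_SAFE_EV
    for ev, p in close_contests:
        if random.random() < p:  # weighted coin flip for each close contest
            dem_ev += ev
    if dem_ev >= NEEDED:
        dem_wins += 1

print(f"Dem win odds ≈ {dem_wins / TRIALS:.2%}")
```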

Now, once again, it is kind of bogus to do this for 2016 since 2016 data was used to construct the model, but we're just trying to get an idea here, and we'll just recognize this isn't quite a legitimate analysis.

So, here is a one off running the simulation 10,000 times to generate some odds. I'd probably want a bit larger number of trials if I was doing this "for real". I might also smooth the win chances curve in the last post to get rid of some of the jaggy bits before using it as the source of probabilities for the simulation. And obviously if you ran this again, you'd get slightly different results. But here is the result of that one run with 10,000 trials…

Well, that is a fun graph. It puts the win odds for Trump at 25.38%.

Now, I emphasize again that this is cheating. Because the facts of Trump's win are baked into the model. We're testing on our training data. That's not really OK. Having said that though…

How does this compare to where other folks were at the end of 2016? I looked at this in my last regular update prior to the results coming in on election night, so here is my summary from then:

So this Monte Carlo simulation using the numbers calculated as I have described would have given Trump better odds than anybody other than FiveThirtyEight. Again though, I am cheating here. A lot.

But here is the thing. Even though I would be giving Trump pretty good odds with this model, the chance of him winning by as much as he did (or more) is actually still tiny at 0.29%. With these odds a Trump win should not have been a surprise, but a Trump win by as much as he actually won by… that still should have been very surprising.

Comparisons

In this series of posts, we've been looking at a whole bunch of different ways of answering the basic question "what is a close state?". One reason I am looking at this is that the way Election Graphs has done our "range of possibilities" in the past is just to define what a close state is, and then let all of them swing either to one candidate or the other, and see what the range of electoral college results would be.
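
Mechanically, that "swing all the close states" range is a very small calculation. A minimal sketch, with my own function name and toy numbers rather than anything from the actual site code:

```python
def ec_range(states, close=5.0):
    # states: list of (electoral_votes, margin) with Dem-positive margins.
    # "Close" states (|margin| < close) swing as a bloc to each side.
    dem_solid = sum(ev for ev, m in states if m >= close)
    swing = sum(ev for ev, m in states if abs(m) < close)
    return dem_solid + swing, dem_solid  # Dem best case, Dem worst case

demo = [(10, 12.0), (6, 3.0), (4, -2.0), (8, -9.0)]
print(ec_range(demo))  # -> (20, 10): Dem best/worst case EV out of 28
```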

So let's see what electoral college ranges we would have gotten in 2016 with each of the methods I've gone over in the last few blog posts:

The two showing the ranges from the Monte Carlo simulation are dimmed out because they are determined by a completely different method, not swinging all close states back and forth.

It is interesting that both the one-sided and two-sided histogram 1σ boundaries would end up with the exact same boundaries as my current 5% bounds. But as you can see, there are a ton of different ways to define "too close to call", which result in a huge variation in how the range of possibilities gets described.

So what to do for 2020? How will I define close states?

You'll have to wait a little longer for that.

Before I get to that, it is also worth looking at the national race as opposed to just states. On Election Graphs I have used the "tipping point" to measure that. What tipping point values should be considered "too close to call"?

I'll look at that in the next post….

You can find all the posts in this series here:

Polling Error vs Final Margin

This is the third in a series of blog posts for folks who are into the geeky mathematical details of how Election Graphs state polling averages have compared to the actual election results from 2008, 2012, and 2016. If this isn’t you, feel free to skip this series. Or feel free to skim forward and just look at the graphs if you don’t want or need my explanations.

You can find the earlier posts here:

Error vs Margin scatterplot

In the last post I ended by mentioning that assuming the error on poll averages was independent of the value of the poll average might not be valid. There are at least some reasonable stories you could tell that would imply a relationship. So we should check.

I've actually looked at this before for 2012. That analysis showed the error on the polls DID vary based on the margin of the poll average. But it wasn't "close states are more accurate". But maybe that pattern was unique to that year.

So I looked at this relationship again now with all the data I have for 2008, 2012, and 2016:

That is just a blob, right? Not a scatterplot we can actually see much in? Wrong. There is a bottom-left to upper-right trend hiding in there.

Interpreting the shape of the blob

Before going further, let's talk a bit about what this chart shows, and how to interpret it. Here are some shapes this distribution could have taken:

Pattern A would indicate the errors did not favor either Republicans or Democrats, and the amount of error we should expect did not change depending on who was leading in the poll average or how much.

Pattern B would show that Republicans consistently beat the poll averages… so the poll averages showed Democrats doing better than they really were, and the error didn't change substantially based on who was ahead or by how much.

Pattern C would show the opposite, that Democrats consistently beat the poll averages, or the poll averages were biased toward the Republicans. The error once again didn't depend on who was ahead or by how much.

Pattern D shows no systematic bias in the poll averages toward either Republicans or Democrats, but the polls were better (more likely to be close to the actual result) in the close races, and more likely to be wildly off the mark in races that weren't close anyway.

Pattern E would show that when Democrats were leading in the polls, Republicans did better than expected, and when Republicans were leading in the polls, Democrats did better than expected. In other words, whoever was leading, the race was CLOSER than the polls would have you believe.

Finally, Pattern F would show that when the polls show the Democrats ahead, they are actually even further ahead than the polls indicate, and when the Republicans are ahead, they are also further ahead than the polls indicate. In other words, whoever is leading, the race is NOT AS CLOSE as the polls would indicate.

In all of these cases the WIDTH of the band the points fall in also matters. If you have a really wide band, the impact of the shape may be less, because the variance overwhelms it. But as long as the band isn't TOO wide the shape matters.

Also, like everything in this analysis, remember this is about the shape of errors on the individual states, NOT on the national picture.

Linear regressions

Glancing at the chart above, you can guess which of these is at play. But let's be systematic and drop some linear regressions on there…

2008 and 2012 were similar.

2016 had a steeper slope and is shifted to the left (indicating that Republicans started outperforming their polls not near 0%, but for poll averages where the Democrats led by less than about 11%). But even 2016 has the same bottom-left to top-right shape.

I haven't put a line on there for a combination of the three election cycles, but it would be in between the 2008/2012 lines and the 2016 line.

Of the general classes of shapes I laid out above, Pattern F is closest.

Capturing the shape of the blob

But drawing a line through these points doesn't capture the shape here. We can do better. There are a number of techniques that could be used here to get insight into the shape of this distribution.

The one I chose is as follows (a rough code sketch follows the list):

  1. At each value for the polling average (at 0.1% intervals), collect all of the 163 data points that are within 5% of the value under consideration. For instance, if I am looking at a 3% Democratic lead, I look at all data points that were between an 8% Democratic lead and a 2% Republican lead (inclusive).
  2. If there are fewer than 5 data points, don't calculate anything. The data is too sparse to reach any useful conclusions.
  3. If there are 5 or more points, calculate the average and standard deviation, and use those to define boundaries for the shape.
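
Here is roughly what that procedure looks like in code. The data array below is random noise standing in for the real 163 (poll average, error) points, and the exact endpoint and interpolation details of the real analysis may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
# Columns: final poll average margin, error (actual margin - poll average).
# Random placeholders standing in for the real 163 data points.
data = np.column_stack([rng.uniform(-40, 40, 163), rng.normal(0, 6, 163)])

results = []
for center in np.arange(-40, 40.01, 0.1):
    # 1. Collect all points whose poll average is within 5% of this value.
    window = data[np.abs(data[:, 0] - center) <= 5]
    # 2. With fewer than 5 points, the data is too sparse; skip.
    if len(window) < 5:
        continue
    # 3. Otherwise record the mean and standard deviation of the errors.
    errs = window[:, 1]
    results.append((center, errs.mean(), errs.std()))
```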

Here is what you get:

This is a more complex shape than any of the examples I described. Because it is real life messy data. But it looks more like Pattern F than anything else.

It does flatten out a bit as you get to large polling leads, even reversing a bit, with the width increasing like Pattern D, and there are some flatter parts too. But roughly, it is Pattern F with a pretty wide band.

Fundamentally, it looks like there IS a tendency within the state level polling averages for states to look closer than they really are.

Is this just 3P and undecided voters?

All of my margins are just "Republican minus Democrat" out of everybody polled, including people who say they are undecided or support 3P candidates. But those undecideds eventually pick someone. And many people who support third parties in polls end up voting for the major parties in the end. Could this explain the pattern?

As an example, assume the poll average had D's at 40%, R's at 50%, and 10% undecided; that's a 10% R margin. Then split the undecideds at the same ratio as the R/D results to simulate a final result where you can't vote "undecided", and you would end up with D's at 44.4% and R's at 55.6%, which is an 11.1% margin… making the actual margin larger than the margin in the poll average, just as happens in Pattern F.

Would representing all of this based on share of the 2-party results make this pattern go away?

To check this, I repeated the entire analysis using 2-party margins.
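
For reference, the conversion itself is tiny. This sketch (my naming, not actual Election Graphs code) redoes the worked example above as a share of the two-party vote:

```python
def two_party_margin(dem_pct, rep_pct):
    # Margin as a share of the two-party vote only, which is equivalent to
    # splitting undecided/3P support proportionally between the two parties.
    return 100.0 * (rep_pct - dem_pct) / (dem_pct + rep_pct)

# The example above: D 40%, R 50%, 10% undecided.
print(two_party_margin(40, 50))  # ≈ 11.11, vs the raw R+10% margin
```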

Here, animated for comparison, is the same chart using straight margins and two-party margins.

While the pattern is dampened, it does not go away.

It may still be the case that if we were looking at more than 3 election cycles, this would disappear. I guess we'll find out once 2020 is over. But it doesn't seem to be an illusion caused simply by the existence of undecided and 3P voters.

Does this mean anything?

Now why might there be a tendency that persists in three different election cycles for polls to show results closer than they really are? Maybe close races are more interesting than blowouts so pollsters subconsciously nudge things in that direction? Maybe people indicate a preference for the underdog in polls, but then vote for the person they think is winning in the end? I don't know. I don't have anything other than pure speculation at the moment. I'd love to hear some insights on this front from others.

Of course, this is all based on only 3 elections and 163 data points. It would be nice to have more data and more cycles to determine how persistent this pattern is, vs how much may just be seeing patterns in noise and/or something specific to these three election cycles. After all, 2016 DID look noticeably different than 2008 and 2012, but I'm just smushing it all together.

It is quite possible that the patterns from previous cycles are not good indicators of how things will go in future cycles. After all, won't pollsters try to learn from their errors and compensate? And in the process introduce different errors? Quite possibly.

But for now, I'm willing to run with this as an interesting pattern that is worth paying some attention to.

Election Result vs Final Margin

Before determining what to do with this information, let's look at this another way. After all, while the amount and direction of the error are interesting, in terms of projecting election results, we only really care if the error gives us a good chance of getting the wrong answer.

Above are the actual vote margins vs the final Election Graphs margins, with the means and standard deviations for the deltas calculated earlier plotted as well. Essentially, the first graph is this new second graph with the y=x line (which I have added in light green) subtracted out.

The first view makes the deviation from "fair" more obvious by making an unbiased result horizontal instead of diagonal, but this view makes it easier to see when this bias may actually make a difference.

Let's zoom in on the center area, since that is the zone of interest.

Accuracy rate

Out of the 163 poll averages, there were only actually EIGHT that got the wrong result. Those are the data points in the upper left and lower right quadrants on the chart above. That's an accuracy rate of 155/163 ≈ 95.09%. Not bad for my little poll averages overall.

The polls that got the final result wrong range from a 7.06% Democratic lead in the polls (Wisconsin in 2016) to a Republican lead of 3.40% (Indiana in 2008).

For curiosity's sake, here is how those errors were distributed:

        D's lead poll avg        R's lead poll avg   Total
        but R's win              but D's win         wrong
2008    1 (MO)                   2 (NC, IN)          3
2012    0                        0                   0
2016    4 (PA, MI, WI, ME-CD2)   1 (NV)              5
Total   5                        3                   8

So, less than 5% wrong out of all the poll averages in three cycles, but at least in 2016, some of the states that were wrong were critical. Oops.

Win chances

Anyway, once we have averages and standard deviations for election results vs poll averages, if we assume a normal distribution based on those parameters at each 0.1% for the poll average, we can produce a chart of the chances of each party winning given the poll average.
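
With a normal distribution assumed, each win chance is just a CDF lookup. A minimal sketch (the example mean and standard deviation are invented, not values from the actual analysis):

```python
from statistics import NormalDist

def dem_win_chance(mean_error, stdev, poll_margin):
    # Errors are (actual - poll average), Dem-positive, assumed to be
    # Normal(mean_error, stdev) at this poll margin. The Democrat wins when
    # poll_margin + error > 0, i.e. when error > -poll_margin.
    return 1 - NormalDist(mean_error, stdev).cdf(-poll_margin)

# Invented example: at a D+2% poll average, suppose the windowed analysis
# gave a mean error of -1% (Republicans beat their polls) and a 5% stdev.
print(f"{dem_win_chance(-1.0, 5.0, 2.0):.2%}")  # ≈ 57.93%
```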

Here is what you get:

Alternately, we could recolor the graph and express this in terms of the odds the polls have picked the right winner:

You can see that the odds of "getting it wrong" get non-trivially over 50% for small Democratic leads. The crossover point is a 0.36% Democratic lead. With a Democratic lead less than that, it is more likely that the Republican will win. (If, of course, this analysis is actually predictive.)

You can also work out how big a lead each party would need to have to be 1σ or 2σ sure they were actually ahead:

              68.27% (1σ) win chance   95.45% (2σ) win chance
Republicans   Margin > 1.11%           Margin > 4.87%
Democrats     Margin > 2.32%           Margin > 6.42%
Average       Margin > 1.72%           Margin > 5.64%

Democrats again need a larger lead than Republicans to be sure they are winning.

These bounds are the narrowest of the various methods we have looked at, though.

Can we do anything to try to understand what this would mean for analyzing a new race? We obviously don't have 2020 data yet, and I don't have 2004 or earlier data lying around to look at either. So what is left?

Using the results of an analysis like this to look at a year that provided data for that analysis is not actually legitimate. You are testing on your training data. It is self-referential in a way that isn't really OK. You'll match the results better than you would looking at a new data set. I know this.

But it may still give an idea of what kind of conclusions you might be able to draw from this sort of data.

So in the next post we'll take the win odds calculated above and apply them to the 2016 race, and see what looks interesting…

You can find all the posts in this series here:

Win Chances from Poll Averages

This is the second in a series of blog posts for folks who are into the geeky mathematical details of how Election Graphs state polling averages have compared to the actual election results from 2008, 2012, and 2016. If this isn't you, feel free to skip this series. Or feel free to skim forward and just look at the graphs if you don't want or need my explanations.

Last time we looked at some basic histograms showing the distributions for how far off the actual election results were from the state polling averages, but pointed out at the end that we don't just care about how far off the average is, we care about whether it will actually make a difference to who wins.

For instance if we knew the poll average overestimated Democrats by 5%, this means something very different depending on the value of the poll average:

  • If the Democrat was ahead by 15% the overestimation wouldn't matter, because it wouldn't be enough to change the outcome. The Democrat would just win by a smaller margin.
  • If the poll average showed the Democrat ahead by 3% the overestimation would mean the Republican is actually ahead. This is the case where the overestimation would actually make a difference.
  • If the poll showed the Republicans ahead the overestimation of the Democrats wouldn't change the outcome, it would just mean that the Republicans would win by a bigger margin.

Two-sided by individual results

So let's look at this same data we did in the last post, but in a different way to try to take this into account… Instead of bundling up the polling deltas into a histogram, we consider all 163 results, and try to figure out, at each possible margin, how many poll results had a big enough error that they could have changed the result.

As an example, if you look at all 163 data points for cases where the Democrats beat the poll average by more than 15%, you only find 2 cases out of the 163. (For the record that would be DC in 2008 and 2016.) That lets you infer that if the Republicans have a 15% lead, you can estimate the Democratic chances of winning given previous polling errors is about 2/163 ≈ 1.23%, which of course means in those cases Republicans have a 98.77% chance of winning.
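
In code, this empirical calculation is just counting. A sketch, with random placeholder deltas standing in for the real 163:

```python
import numpy as np

rng = np.random.default_rng(1)
# Signed deltas (actual - poll average, Dem-positive); placeholders only.
signed_errors = rng.normal(-0.7, 6.0, 163)

def dem_win_chance(poll_margin):
    # The Democrat wins when poll_margin + error > 0, i.e. error > -poll_margin,
    # so count the fraction of historical errors at least that big.
    return np.mean(signed_errors > -poll_margin)

# The example above: Republicans lead by 15% (a poll margin of -15).
print(f"{dem_win_chance(-15.0):.2%}")  # ≈ 2/163 ≈ 1.23% with the real data
```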

Here is the chart you get when you repeatedly do this calculation:

As you would expect, as the polls move toward Republican leads, the Democratic chance of winning diminishes, and the Republican chance of winning increases. The break-even point is almost exactly at the 0% margin point. (It is actually at approximately a 0.05% Republican lead based on my numbers, but that is close enough to zero for these purposes.)

This is good, because it means even if there are some differences in the shape of the distribution on the two sides, at least the crossover point is basically centered.

Taking the exact same graph, but coloring it differently, we can look at not which party wins based on the polling average at a certain place, but instead at the chances that the polling average is RIGHT or WRONG in terms of picking the winner.

For instance, at the same "Republicans lead by 15% in the polling average" scenario used above, there is a 98.77% chance the polling average has picked the right winner, and a 1.23% chance the polling average is picking the wrong side.

Here is that chart: 

Looking at it this way, and again treating 1σ as getting things right about 68.27% of the time, and 2σ as getting things right about 95.45% of the time, we can construct the table below. With the asymmetry, it is different for Republicans and Democrats:

              68.27% (1σ) win chance   95.45% (2σ) win chance
Republicans   Margin > 1.76%           Margin > 8.26%
Democrats     Margin > 3.01%           Margin > 12.56%
Average       Margin > 2.38%           Margin > 10.41%

Basically, given the results of the last three election cycles, Democrats need a bit larger lead to have the same level of confidence that they are really ahead.

Now, doing something for Election Graphs based on this asymmetry is tempting, as it has now shown up in two different ways of looking at this data, but…

A) It is certainly possible that this asymmetry is something that just happens to show up this way after these three elections, and it would be improper to generalize. 163 data points really is not very much compared to what you would really want, and it is very possible that this pattern is an illusion caused by limited data and/or there are changes over time that will swamp the patterns seen here.

B) A big part of the appeal of Election Graphs has always been having a really simple easy to understand model… including having nice symmetric bounds… "a close race is one where the margin is less than X%"… without that being different depending on which party is ahead… is a big part of that, even if it is a massive oversimplification.

So let's look at this in a one-sided way again…

One-sided by individual results

Once again we calculate a "chance of being right" number, as we did earlier, but now just looking at the absolute error. We will assume we only know the magnitude of the current margin and the previous errors, not the directions, and that errors have an equal chance of going in either direction.

Running through an example:

At the 5% mark, there were 107 poll averages out of 163 that were off by less than 5%. These 107 would all give a correct winner no matter which direction the error was in.

The remaining 56 polls had an error of more than 5%, so could have resulted in getting the wrong winner. But since we are assuming a symmetric distribution of the errors now, we would only expect half of these errors to be in the direction that would change the winner. So another 28.

107+28 = 135. So we would expect at the 5% mark the poll averages would be right 135/163 ≈ 82.82% of the time, and wrong 17.18% of the time.
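
As a sketch, the one-sided version of the calculation looks like this (again with placeholder data in place of the real deltas):

```python
import numpy as np

rng = np.random.default_rng(2)
# Unsigned error magnitudes; placeholders for the real 163 deltas.
abs_errors = np.abs(rng.normal(0, 6.0, 163))

def chance_right(lead):
    n = len(abs_errors)
    safe = np.sum(abs_errors < lead)  # errors too small to flip the winner
    risky = n - safe                  # errors big enough to flip the winner,
    return (safe + risky / 2) / n     # but only half point the wrong way

print(f"{chance_right(5.0):.2%}")  # the 5% example gives ≈ 82.82% with the real data
```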

Repeat this over and over again, and you get the following graph: 

So if you are looking to be 68.27% (1σ) confident that a state poll average will be indicating the right winner, the leader needs to be ahead by at least 2.49%. To be 95.45% (2σ) confident though, you need an 11.36% margin.

         68.27% (1σ)   95.45% (2σ)
Margin   2.49%         11.36%

These last two ways of looking at things have the 68.27% level considerably narrower than the 5% that is the current first boundary on Election Graphs. If we are thinking of this as the boundary for "really close state that could easily go either way", this seems counterintuitive.

If we kept the same "Weak/Strong/Solid" names for the categorizations, then in the final state of the 2016 poll averages Michigan would get reclassified as "Strong Clinton" instead of "Weak Clinton", while Ohio and Georgia would move from "Weak Trump" to "Strong Trump". Given that Michigan ended up being won by Trump, this seems like it might not be a good move.

Or maybe it would just be even more important to be clear that "Strong" states are actually states that have a non-trivial chance of flipping. The "Solid" category was intended to be the states that really should not be expected to go the other way. "Strong" was originally intended to indicate a significant lead, but not that a state was completely out of play. "Weak" was supposed to be states that really were close enough that you shouldn't count on the lead being real.

If we changed the categories in a way that moved the first boundary inward, it would be important to make this very clear, perhaps by changing the names.

If, on the other hand, we moved the "close state boundary" outward to the 95.45% (2σ) level, it just seems way too wide. A state where one candidate is ahead by 11.36% just isn't a close state. Maybe it is true that there is a 4.55% chance that the poll average is picking the wrong winner. This seems to imply that. But even so, it seems misleading to say that big a lead is still a close race.

There is something else to think about as well. The analysis above assumes that the difference between the election result and the final poll average is not dependent on the value of the poll average itself.

But is that reasonable?

One might think (for instance) that maybe poll averages in the close states would be more accurate than polls in the states where nobody expects a real contest. The theory here would be that because close states get more frequent polling the average can catch last minute moves better than in less competitive states that are more sparsely polled. (At least this logic might make sense for a poll average like the Election Graphs average that uses the "last X polls", other ways of calculating a poll average may differ.)

Or maybe there is some other factor at play that makes it so we can say more about how far off a poll may be if we take into account what that poll average is in the first place.

And that's what we will look into in the next post…

You can find all the posts in this series here:

Polling Averages vs Reality

2018 is over. Multiple candidates have announced they are at least investigating running for President in 2020, and a few are even past that stage. But before Election Graphs starts posting new graphs and charts for 2020, one more look back at the past.

This will be the first in a series of blog posts for folks who are into the geeky mathematical details of how Election Graphs state polling averages have compared to the actual election results from 2008, 2012, and 2016. If this isn't you, feel free to skip this series. Or feel free to skim forward and just look at the graphs if you don't want or need my explanations.

For those of you who just want to know about 2020… Keep checking in… actual new graphs and charts and analysis for 2020 will be here before too much longer! How much longer? I'm not sure. But after this series of posts is done, getting up the basic framework of the 2020 site is my next priority!

Now, for the small group of you who may be left… any thoughts, advice, or checks on my math are welcome. While this is all interesting on its own, some of this is just me thinking aloud as I figure out what (if anything) I am going to do differently for 2020. Please email me at feedback@electiongraphs.com for longer discussions, or just leave comments here. Raw data, Excel spreadsheets, etc., are available on request to anybody who wants them, although fair warning, they aren't all cleaned up and annotated to be scrutable to anyone other than me without some explanation or effort.

Anyway, one of the key elements I called out in my 2016 Post Mortem was the need to "trust the uncertainty" and look at the range of possibilities, not just the prediction's center line. Part of this is just being vigilant in avoiding the temptation to reduce things to a single point estimate rather than a range of possibilities. Another part is repeating over and over again that a 14% chance of something happening isn't the same as 0%. Although pretty much everybody doing poll analysis did explain these things to some extent in 2016, in retrospect it is clear that there was still too much emphasis on that center line by most people, including me.

But another important element is defining what that uncertainty is. Ever since I started doing presidential election tracking in 2008, I have used "margin less than 5%" to define states I was going to categorize as close enough you should take seriously the possibility they could go either way. I also used 5% as the limit of "too close to call" for the tipping point metric. I had a second boundary at 10% on the state polls to mark off the outer boundaries that you could even imagine being in play if a candidate made a huge surge.

Those numbers were just arbitrary round numbers though. With three election cycles of data behind me now, it is time to do some actual analysis of the real-life differences between the final polling averages and the actual election results, to get a better idea of what kind of differences are reasonable to expect.

Over the next few blog posts I will look at this in several different ways, then decide if anything about Election Graphs should change for 2020.

One-sided histogram

First, let's just look at a simple histogram showing how far off the poll averages were from the actual margins.

For each of the three election cycles, we have 50 states and DC. For 2012 and 2016, we also have the five congressional districts in Maine and Nebraska. (I didn't track those separately in 2008 unfortunately.) For each one of those results, I look at the unsigned delta between the final poll average margin and the margin in the actual election results, and show a histogram for each of the three election cycles, and a combined line using all 163 data points.

Somewhat improperly using the "Nσ" notation… with 1σ being about 68.27% of the time, and 2σ being about 95.45% of the time, we see that 68.27% (1σ) of the poll averages were within 5.49% of the actual results. But to get to 95.45% (2σ) you have to move out to 13.39%.

         68.27% (1σ)   95.45% (2σ)
Margin   5.49%         13.39%
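
Extracting those two numbers is a simple percentile lookup over the unsigned deltas. A sketch, with placeholder data standing in for the real 163 points:

```python
import numpy as np

rng = np.random.default_rng(3)
abs_deltas = np.abs(rng.normal(0, 6.0, 163))  # placeholders for the real deltas

one_sigma = np.percentile(abs_deltas, 68.27)   # ≈ 5.49% with the real data
two_sigma = np.percentile(abs_deltas, 95.45)   # ≈ 13.39% with the real data
print(f"1σ: {one_sigma:.2f}%  2σ: {two_sigma:.2f}%")
```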

[Note for sticklers #1: There is no way this kind of analysis is significant to 0.01%, so showing two digits after the decimal point is false precision and the value depends on my choices for how to interpolate… among a variety of other things. But I've standardized on two digits after the decimal for everything in this series of posts anyway because… well, just because. Feel free to round to the nearest 0.1% or even 1% if you prefer.]

[Note for sticklers #2: Each election cycle I made some modifications to the fiddly details of how I calculated the averages, including things like what I did if there were fewer than 5 polls, how I dealt with polls that included more than one version of the result (for instance registered vs likely voters, or with and without third party candidates), and whether I used the end or middle of the field dates as the date used to determine poll recency. These differences may technically make it improper to do calculations that combine data from these three cycles without recalculating everything based on the same rules. I contend that the differences in my methodology over the three cycles were minor enough that they wouldn't substantially change this analysis, but given the amount of work that would be involved, I have NOT spent the time to convert 2008 and 2012 to match my 2016 methodology in order to confirm this.]

It is very tempting in the context of Election Graphs to just move my boundary between "Weak" and "Strong" states from 5% to 5.49%, and the boundary between "Strong" and "Solid" from 10% to 13.39%. Both of those numbers are kind of close to where the old boundaries are, just expanded a bit to show a bit more uncertainty than before, which seems intuitively right after the 2016 election cycle.

Two-sided histogram

But wait, why just look at the magnitude of the errors? Isn't the direction of the errors important too? Are the polls systematically favoring one side or the other? Very possible. Time to do that histogram again, but taking into account which direction the polls were off:

The pattern isn't symmetrical, although it is certainly possible (perhaps even likely) that if I had data for a few more election cycles it would become more so. At the moment though, while the peak looks to be very slightly on the side of Democrats doing better than the poll average (in other words the polls showed Republicans doing better than they actually were), when you average out all the polls, the bias is actually that the Republicans beat the poll averages by 0.69%. (In other words, the poll averages showed Democrats doing slightly better than they actually did.)

The asymmetry is notable here. When the poll averages overestimate the Republicans, most of the time the error is 6% or less, but when the poll average overestimates the Democrats it is often by quite a bit more.

I didn't put it on the plot because it was already pretty busy, but you can also use this to get the ranges for the central 1σ and 2σ:

            Middle 68.27% (1σ)    Middle 95.45% (2σ)
Range       D+4.61% to R+7.45%    D+12.21% to R+13.32%
Avg Limit   6.03%                 12.77%

Maybe these numbers could be used to define category boundaries? You would have to either explicitly have different category boundaries for the two parties, or use the averages of the R and D boundaries to make it symmetric. The median is also not quite at the center… it is at a 0.01% Democratic lead. But that is probably small enough to just count as zero.

But looking at how many polls are a certain amount off favoring one party or another doesn't really hit exactly what we want.

See, for what we care about on a site like Election Graphs, if we have a poll average at a certain point, we don't actually care that much if the actual result is that the leading candidate wins by an even bigger margin. We only really care if the polls are wrong in the direction that leads the opposite candidate to win.

I'll look into that in the next post…

You can find all the posts in this series here: