Frequently Asked Questions

General Questions

Who are you anyway? Are you even qualified to be looking at any of this?

My name is Samuel Minter. A long time ago I was a Physics major. Then somehow I got into the world of the web, and eventually ended up doing product management at a major tech company in Seattle. You've probably heard of it. Roughly speaking, I'm in the abuse prevention space, specifically using audits to measure how abuse prevention efforts are going and providing data to improve those efforts. This election stuff is purely a hobby. I do it for fun.

I do not have professional credentials in this area. I don't claim to be an expert. I'm just an amateur putting this together in his spare time. I think the results have been pretty good so far. If you ARE an expert and have constructive criticism for me, I'd love to hear it and learn from you. Just don't be mean. I don't need to be part of the acrimonious Twitter fights over election models I sometimes see. :-)

Shouldn’t you maybe get a life? Maybe go outside or something?

Nah, I’m good.

Why are you doing this?

Well, mainly because I have fun doing it.  I don't have any illusions about changing careers and making a living doing this or anything.  But I think my little version of this adds yet another perspective to look at the presidential race, and more is better, right?  Beyond that, I'm a news junkie, and presidential elections are an every-four-year frenzy for that sort of thing. And I enjoy crunching numbers, doing analysis, and making graphs. So this is a natural fit. Mainly though, I enjoy it.

Seriously, we already have 538, The Economist, and all kinds of other folks doing this sort of thing.  What do you add?

Well, all those places are great. I eagerly consume everything they do and respect all of those efforts quite a bit.  I’ve added to and enhanced what I do here based on things I’ve learned from them.  All those models can get quite complex though. They factor in many things, and they do much more robust mathematics than I do.  One of my “points” in doing this the way I have since 2008 is that even with a very simple model, just a plain old average, nothing fancy, you can do very well.

Well, at least that was true when I started. Over the years there has been feature creep and things have gotten a bit more complex here too, but the point still stands: you can do pretty decently with a relatively simple approach.

Also, while a few of the others do this too, I concentrate a bit more on monitoring the ebb and flow of the race and looking at the trends over time rather than just where things are today.

And of course, my views and analysis are always purely electoral college based for the general election and delegate based for the nomination races.  When I started in 2008, it was because I had been frustrated in 2004 that it was hard to get views of the race based exclusively on the electoral college; instead, everybody kept talking about the national polls.  But that isn't how we elect presidents. Then later I added a view of the nomination race I thought was helpful as well. Now lots of people are doing this, but I still keep going because mine does things slightly differently, so it still adds a different view that I hope at least some people find interesting or useful.

What exactly do you do here?

The site looks at two major things during election season. The major focus is tracking state level polls in order to show how the race for the electoral college is going. But during the primary races, we also track the delegate races for both parties.

Doesn’t this take a lot of work?

Yes. But referring back to an earlier question, it is fun for me, so it isn't so bad.

Have you done the same thing since 2008 when you started all this?

No. Every four years I have added more complexity and the details have changed a bit. This FAQ represents how everything is as of the end of September 2020. Some of the details were slightly different in 2016, 2012, and 2008. I like to think the changes I have made over the years are improvements, although perhaps some would be debatable.

Aside from brand new things like the probabilistic views, most of the differences are minor adjustments to how edge cases are handled, nothing that changes the overall picture. But if you have specific questions about some of the older election cycles, contact me and I'll do my best to remember how I used to do it.

What do I do if I have questions or comments on all this?

Well, just add a comment to one of the blog posts about the election here on the site.  Or email feedback@electiongraphs.com.  Or tweet at me on @ElectionGraphs.

How do I follow along with what is changing with this analysis?

For up to the minute updates when polls are added, when states change categories, when tipping points change, or information of that sort, follow @ElecCollPolls on Twitter.  For less frequent updates, check @ElectionGraphs. And of course there are periodic blog posts here on Election Graphs discussing the trends as things progress. You can also follow the Election Graphs Facebook Page, but it just links to the blog when there are updates there.

This is awesome, I’d like to share this or mention it in my own online space, can I?

All of the Election Graphs pages, as well as my blog post commentaries, are released with a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license. Feel free to reuse this material within those constraints, although I would appreciate a heads up that you are planning to do so.

For other possible uses or to discuss anything else about this site, please contact Election Graphs via Twitter. That is probably the best way to get my attention quickly rather than having an email lost in my inbox or spam folders.

In terms of sharing on social media (or other media)… of course! That would be great! The more the better. Thanks!

This sucks, what a waste of time, you are obviously just a shill for <insert candidate you don’t like>… How can I tell you how awful you are?

I welcome and would love to hear from people who have constructive criticism about the site, want to discuss methodology, or just want to chat civilly about the implications of the numbers shown on the site. But if you just want to yell at me… do you really have to? It just wastes everybody's time. If you don't like the site, don't visit. But if you do need to make comments like the above, there are places to do that. The links for Facebook, Twitter, and email feedback are above. Comments are also open on the blog posts here as well.

Do you come at this with some sort of political bias? 

Well, of course. Everybody has some sort of opinion, and I do have feelings on the presidential race, both in the primaries and in the general election. When it comes to Election Graphs and the commentaries I post when I make updates to either the Delegate Race numbers or the Electoral College estimates, I try my best to be objective and talk mostly about the numbers and their implications for the results. If you want to hear my less objective and more opinionated thoughts on what is going on, and what should be going on, listen to the podcast I cohost: Curmudgeon's Corner. I often talk about the objective analysis there too of course, but my cohost Ivan and I are not constrained by that. In my actual charts and graphs though, and in the update commentary blog posts, I try to stick to what the numbers are saying.

Do you take tips?

Well, ah, shucks. Do you really like this all that much? Sure. Of course. Go here. This site costs me more in hosting fees, let alone my time, than I will ever make from it. I know this. But a little tip to offset that or buy me an occasional snack or something is always welcome. If you leave me something, thank you in advance for your support.

Do you do other things that might be interesting?

Well, I don't know what you like, so I can't really tell you that. But the other things I spend time on outside of work and family at the moment are:

  • Curmudgeon's Corner – A weekly current events podcast I cohost with Ivan Bou, a college friend of mine. Our show started as a public affairs show on WRCT Pittsburgh roughly from 1991 to 1994. (Hey, it was a long time ago, I don't remember the exact years any more.) Many eons later, we decided we missed it and started doing it again as a weekly podcast in 2007 and have been going ever since.
  • Wiki of the Day – A family of three completely automated daily podcasts that read the summary sections of English Wikipedia articles using Amazon Polly. Each of the three podcasts chooses its daily episodes differently. One chooses random articles, one highlights Wikipedia's featured article of the day, and the last chooses a popular article from the previous day.
  • ALeXMXeLA.com – My 11 year old son Alex's YouTube channel. He doesn't post as much as he used to, and when he does it is usually videos he originally recorded when he was 6. Honestly, he mostly does all the work editing and posting videos himself now, but I figure I'll still plug it. Subscribe today!

In addition, if you want to track random things I'm reading or commenting on, not just this election stuff, my personal Twitter is @abulsme.

Electoral College

Are the numbers you show a prediction?

No!  This site does NOT do a forecast. It does a "nowcast".  Everything here is “if the election was held today”. Until we actually get to election day, it is not election day. Things will change. News will happen. Election Graphs does not try to model how much things can change over time. It is simply a snapshot of what things look like today, which can still tell you quite a lot. Lots of other sites do try to model how much change is reasonable to expect given X days left until the election. I don't.

OK, but by Election Day it is a prediction, right? How has your site fared?

Well yes, I guess it is at that point. How has it done? Well, that depends on how you measure.

Just looking at individual states and if the averages have predicted the correct winner, out of the 163 poll averages, there were only actually EIGHT that got the wrong result. That's an accuracy rate of 155/163 ≈ 95.09%. Not bad for my little poll averages overall.

Of course most of those 163 weren't states that were even remotely close. So getting those right wasn't exactly hard. I mean, you would have to actively TRY to get the District of Columbia or Wyoming wrong.

The averages that got the final result wrong ranged from a 7.06% Democratic lead in the polls (Wisconsin in 2016) to a 3.40% Republican lead (Indiana in 2008).

I looked into how those errors were distributed in a January 2019 blog post which actually goes into tons of detail looking at how my polling averages differed from actual election results. But the summary of states that were wrong is here:

            D's lead poll avg        R's lead poll avg   Total
            but R's win              but D's win         wrong
  2008      1 (MO)                   2 (NC, IN)            3
  2012      0                        0                     0
  2016      4 (PA, MI, WI, ME-CD2)   1 (NV)                5
  Total     5                        3                     8

So, less than 5% wrong out of all the poll averages in three cycles, but at least in 2016, some of the states that were wrong were critical. Oops.

Another way to look at it is via the three "cases" I provide to show the range of expected outcomes based on which states were close:

         Dem Best Case   Expected Case   Rep Best Case   Actual   Δ
  2008   D+274           D+138           D+18            D+192    D+54
  2012   D+158           D+126           R+82            D+126    EXACT
  2016   D+210           D+8             R+66            R+77     R+85

The 77 electoral vote actual in 2016 includes the faithless electors. It would have been 74 otherwise.

Now, given that, I can't claim to be exactly spot on. Especially in 2016 when Trump didn't even come in within what I thought was a really wide range of possibilities at the time. It isn't just that Trump won… my "Trump Best Case" allowed for that. It was that he won by more than the best case. (This is because he won Wisconsin, where the poll average had Clinton up by more than 5%.)

However, compared to the competition, we haven't done half bad on the "expected case". I don't have comparisons for the previous cycles, but for 2016, right after the election was over, I did a post-mortem and logged the following as the final electoral college predictions from a bunch of sites:

  • Clinton 323 Trump 215 (108 EV Clinton margin) – Daily Kos
  • Clinton 323 Trump 215 (108 EV Clinton margin) – Huffington Post
  • Clinton 323 Trump 215 (108 EV Clinton margin) – Roth
  • Clinton 323 Trump 215 (108 EV Clinton margin) – PollyVote
  • Clinton 322 Trump 216 (106 EV Clinton margin) – New York Times
  • Clinton 322 Trump 216 (106 EV Clinton margin) – Sabato
  • Clinton 307 Trump 231 (76 EV Clinton margin) – Princeton Election Consortium
  • Clinton 306 Trump 232 (74 EV Clinton margin) – Election Betting Odds
  • Clinton 302 Trump 235 (67 EV Clinton margin) – FiveThirtyEight
  • Clinton 276 Trump 262 (14 EV Clinton margin) – HorsesAss
  • Clinton 273 Trump 265 (8 EV Clinton margin) – Election Graphs
  • Clinton 272 Trump 266 (6 EV Clinton margin) – Real Clear Politics
  • Clinton 232 Trump 306 (74 EV Trump margin) – Actual "earned" result

So of all these sites' "final" numbers, only RCP got closer to the actual results than Election Graphs, although of course nobody outright had a Trump win. We clearly had the election in "too close to call" territory though, with Clinton's lead being extremely narrow.

I attribute how close both RCP and Election Graphs came to the fact that both sites use relatively simple trailing averages, which responded quickly when polls in the last few days to a week showed a Trump surge in some key states. Many of the more complex models used by the other sites just didn't move as much in response to that last-minute data.

Of the states that Election Graphs got "wrong" in 2016, all but Wisconsin were in our "Weak" categories, meaning it was reasonable to expect those states COULD go either way. Wisconsin was the only real surprise, with Trump massively outperforming his polling average.

A third way of looking at this is comparing the "tipping points" based on the final polling averages to the actual tipping point based on election results, which I did in another January 2019 blog post. Since there is only one tipping point per election cycle, I only have three data points. But at a very high level this represents how far off the polls were in the critical states.

  • 2008: Obama's actual tipping point was 3.45% better than predicted by polling.
  • 2012: Obama's actual tipping point was 0.89% better than predicted by polling.
  • 2016: Trump's actual tipping point was 2.36% better than predicted by polling.

So in the last 3 cycles, the tipping point has been off by up to 3.5%. Anything less than that, and you can say one side is favored to win, but you would be on thin ice saying it was a sure thing, because the polls have been further off within very recent history.

What kinds of data go into your model?

Polls. Just polls. This isn't some fancy model where I look at all kinds of factors and use them to get the results. It is basically just averaging polls, and seeing where that gets you. If the polls are wrong, this site will be wrong too. I'm not factoring anything else in here. We're just taking in the polls and seeing what they say.

How do you decide what polls to include?

I basically am not picky. I include just about everything I find.  I’d have to have a really good reason to exclude something. I don’t try to evaluate pollsters by their methodology, previous accuracy, or anything like that.  I just throw everything into the mix and see what comes out.

Obviously lots of aggregators apply strict criteria for what they include and what they do not include, and many also adjust the weighting of pollsters in their averages based on previous accuracy, or even "correct" pollster results based on previously observed bias. I am not that sophisticated. On the one hand, this makes things a lot simpler. On the other hand, this makes the site more vulnerable to an outlier from a bad pollster unduly influencing the averages.

Tradeoffs. So far I have opted for simplicity.

Where do you find all of these polls?

Look, 538 does a great job. They have a very rapidly updated list here. I don't use their API to automatically ingest polls (although I keep thinking maybe I should) because of how I deal with polls with multiple results and a few other things. But I watch that page regularly. I also watch a Twitter list I made full of pollsters and people who talk about polls, and sometimes I catch things there before 538 has them, or things that 538 has decided not to include for one reason or another.

So wait, you enter the polls manually?

Uh. Yeah. For the moment. Maybe I will reconsider this in 2024 depending what sources are available. In the past I haven't done this because A) I start showing polls each cycle before the other places tend to spin up their efforts, and B) It ties me to the source's choices of what to include and what not to. So, for now, when I see a poll, yes, I type the results into my own system myself. Sometimes there are typos. I fix them as soon as I notice.

How do you determine the averages for each state?

So, the basic premise of the site starting back in 2008 was to use a "last five poll average" for each state. Most of the time, the average in a state is just the simple arithmetic mean of the last five poll results in the state.
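As a sketch of that core calculation (the margins here are made up for illustration; positive means a Democratic lead):

```python
# Hypothetical poll margins for one state (Dem minus Rep, in percentage
# points), sorted oldest to newest.
polls = [2.0, -1.5, 3.0, 0.5, 4.0, 1.0, 2.5]

def five_poll_average(margins):
    """Plain arithmetic mean of the five most recent poll margins."""
    last_five = margins[-5:]
    return sum(last_five) / len(last_five)

print(five_poll_average(polls))  # mean of [3.0, 0.5, 4.0, 1.0, 2.5] = 2.2
```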

Isn't a simple average too simple?

Lots of other aggregators do more complicated things. Weighting polls by recency, or previous accuracy, or even the sizes of their samples. Or they aren't doing a straight average at all, but rather curve fitting to a scatter plot of polls. Sometimes when there is a lack of state polling, they augment with data on national trends. Etc.

I don't do any of that.

Initially when I started in 2008 I didn’t consider doing any of those fancy things because I was calculating everything with a simple Excel spreadsheet and it was a little daunting to think about.  Now that I’ve automated all sorts of things, it wouldn’t be that hard to plug in whatever logic I wanted for computing the “average”.  It has occasionally tempted me, but then I remember that one of my points in doing this is to show how very simple logic can do just as well as the complicated models.

I haven't been able to resist the temptation to do more complicated things each election cycle than the one before, but so far I've stuck with simple averages as the core calculation on the site.

OK, but how about a median instead of a mean?  Wouldn’t that be better? And still simple?

Maybe.  Medians are more resilient to outliers. They definitely have some advantages.  I have thought about it, although there are probably downsides too.  In the end though, I've used means since 2008 and it has worked pretty well, and I'd like to be able to compare results across election cycles to some degree, so I'll stick with the mean.

Why do you use a five poll average? Why not six? Or ten?

This is fairly arbitrary. It was chosen to be high enough that you could get a reasonable average out of it, but low enough that the time frames covered by the average are short enough to be responsive to changes in the campaign…  at least close to the election in the close states…  further out in time, or in sparsely polled states, the average can cover so much time that it won't respond quickly, just because there is no new data.

But what happens is that as the election approaches, close states are polled more and more often, so by the time of the election, the five poll averages in the close states generally only cover a few days and the averages respond quite quickly.

But why a number of polls? Why not an amount of time? 

With any timeframe, you will often hit the situation where you have no polls at all in your chosen timeframe.  Then what do you do?  There are obviously answers to that question, but you have to start having special cases. Having an “X poll average” is just simpler and more straightforward and automatically adapts to looking at shorter timeframes as the election approaches and there is more polling.

Just how do you determine what is most recent?

Unlike other sites that use the END DATE of a poll as the date of the poll, we use the MID DATE for sorting our polls. So, for instance, a poll that was in the field from September 11 to 20 has a middate of September 16th, and so ends up sorted BEFORE a poll that was out from Sep 14 to 19 (middate September 17th), even though the last day it was in the field was a day later.

The reason for this choice is that using the end date alone puts all the weight of that poll at the time it ended, but especially for longer running polls, when there are events changing people's opinions, it is important to recognize that a lot of the polling was done well before the end date. Sorting by the middate puts the poll at the location where half of the sample occurred before that date, and half after. (Roughly speaking, that is, since of course pollsters don't actually release timing details of when they reached people.)
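A sketch of that sorting rule; the round-up on even-length field periods is my assumption, chosen to match the examples above:

```python
from datetime import date, timedelta

def mid_date(start, end):
    """Middate of a poll's field period, rounding up for even-length periods."""
    return start + (end - start + timedelta(days=1)) // 2

# The example from the text: a Sep 11-20 poll (middate Sep 16) sorts
# before a Sep 14-19 poll (middate Sep 17), despite ending a day earlier.
earlier = mid_date(date(2020, 9, 11), date(2020, 9, 20))
later = mid_date(date(2020, 9, 14), date(2020, 9, 19))
print(earlier < later)  # True
```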

But wait, I've noticed sometimes the average has more than five polls. Why?

Yeah, of course things can't be as simple as just taking the last five polls. There are a few situations that result in us using more than five polls.

  1. If you sort the polls by middate, and there is a tie for the "5th oldest poll" I just include all of the polls with the tied middate. So it could be a 6 or 7 poll average or whatever to include all of those polls with the same middate.
  2. The original conceit of this site was to classify every state into categories based on the current margin in those states. This has a problem whenever the average ends up exactly on the line between two categories. Now, you can solve part of this by just picking which category applies when you are on those boundaries. But deciding which category to put exact ties in is still problematic without giving a slight advantage to one side or the other. So instead, if the average falls exactly on one of the boundaries, I just include older polls until it doesn't. While this was fine for the categorization views, I know this is actually a little problematic for the probabilistic views, but for the moment it is what it is.
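Those two expansion rules might be sketched like this (hypothetical data; polls are (middate, margin) pairs, and I am treating exact ties and the ±5%/±10% lines as the boundaries):

```python
def expanded_average(polls):
    """Average of the last five polls by middate, expanded to include
    middate ties and to step off exact category boundaries."""
    polls = sorted(polls, key=lambda p: p[0])  # oldest first
    n = min(5, len(polls))
    while True:
        # Rule 1: pull in older polls tied on middate with the oldest in window
        while n < len(polls) and polls[-n - 1][0] == polls[-n][0]:
            n += 1
        window = [margin for _, margin in polls[-n:]]
        avg = sum(window) / len(window)
        # Rule 2: if the average sits exactly on a boundary, add an older poll
        if abs(avg) in (0.0, 5.0, 10.0) and n < len(polls):
            n += 1
        else:
            return avg
```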

How do you deal with polls that give more than one result?

Most aggregators have defined rules to pick one result from each poll. For instance, if a pollster reports results with and without 3rd parties, one of those would be included, not both. They just define a hierarchy for how preferred the different ways of reporting are, and then pick the one they like best.

Instead, we just include all of them. However, we don't want a poll that reports its results three different ways to count three times as much as another poll that just shows one result. So when a poll reports multiple results, we weight it so that all the reported results together count as one poll.

In other words, if a poll reports results in three ways, each of those results count as one third of a poll in our averages.

How do you deal with tracking polls?

So, just as a poll that gives three different results out of the same set of responses to questions needs to be weighted to be fair, similarly if a pollster releases daily results based on the last X days of ongoing polling that continues over an extended period of time, counting each daily result as a separate poll would give that pollster undue influence in the averages.

One way to deal with this is to only record the tracking poll when the new poll doesn't overlap at all with the last one that was included. So a daily tracking poll that always contains the last five days of results would only be shown every 5th day. This is actually what this site did in 2016.

In 2020 though, I am including every day's results, but weighted so that if the poll is an X day trailing result, each day's result counts as 1/Xth of a poll.

Since the logic for the average requires "at least 5" polls, this sometimes means you will end up with the average being based on a non-integer number of polls. Which is a little odd, but is a result of this logic.
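Putting the fractional weights together, here is a sketch of the weighted average; the "accumulate newest polls until the total weight reaches five" rule and the sample data are my reading of the description above:

```python
# Hypothetical recent polls for one state, newest first, as
# (weight, margin) pairs: a normal poll weighs 1.0, each of three
# results from one poll weighs 1/3, each day of a 5-day tracker 0.2.
polls = [
    (1.0, 2.0),
    (1 / 3, 1.0), (1 / 3, 1.5), (1 / 3, 2.5),
    (0.2, 3.0), (0.2, 2.0),
    (1.0, 4.0),
    (1.0, -1.0),
    (1.0, 0.5),
    (1.0, 3.5),
]

def weighted_average(polls, min_weight=5.0):
    """Weighted mean over the newest polls totaling at least 'five polls'."""
    total_weight = weighted_sum = 0.0
    for weight, margin in polls:
        total_weight += weight
        weighted_sum += weight * margin
        if total_weight >= min_weight:
            break
    return weighted_sum / total_weight
```

With this data the total weight used ends up at 5.4 "polls", which illustrates the non-integer poll counts mentioned above.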

What is the "Categorization View"?

When this site started in 2008, the idea was simple. Determine which states were close enough that they were in play, and generate a range of possible outcomes based on those classifications. These categories are the following:

  • "Weak" – The margin in the state is less than 5%
  • "Strong" – The margin in the state is more than 5% but less than 10%
  • "Solid" – The margin in the state is more than 10%
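As a function, this is a trivial sketch; per the averaging rules described earlier, the margin never lands exactly on a boundary:

```python
def classify(margin):
    """Category for a state's polling average margin (in percentage points).
    The averaging rules ensure the margin never sits exactly on the
    5% or 10% lines."""
    m = abs(margin)
    if m < 5:
        return "Weak"
    if m < 10:
        return "Strong"
    return "Solid"

print(classify(3.2), classify(-7.0), classify(12.5))  # Weak Strong Solid
```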

Aren’t those categories completely arbitrary?

Yes.  Yes they are.  5% just seemed like a reasonable amount by which you could imagine the polls being wrong, or by which the race could move at the last minute, too fast for polls to catch.

Similarly 10% seemed like a reasonable number for "even with huge polling error or a stunning news event these states probably won't flip the other way".

I have occasionally considered changing these definitions, but so far have just stuck with them. While these are not boundaries that are empirically defined by mathematical analysis, they are easy to grasp at a glance and feel nice and clean.

What is the "Expected Case"?

The "expected case" is when each candidate wins exactly the set of states where they lead the averages, even by a tiny bit.

What are the "Best Cases"?

The "best case" scenarios are when the candidate in question wins every state they lead, plus all of the states where they are behind by less than 5%.
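Combining the two definitions, a sketch with hypothetical states (electoral votes and Dem-minus-Rep margins are made up):

```python
# Hypothetical states: name -> (electoral_votes, dem_margin).
states = {"A": (20, 6.0), "B": (10, -0.8), "C": (5, -6.0), "D": (3, 4.9)}

def cases(states):
    """Expected case and both best cases from the per-state averages."""
    dem_expected = sum(ev for ev, m in states.values() if m > 0)
    dem_best = sum(ev for ev, m in states.values() if m > -5)  # leads + trails by <5%
    rep_best = sum(ev for ev, m in states.values() if m < 5)   # same, from the Rep side
    return {"Dem expected": dem_expected, "Dem best": dem_best, "Rep best": rep_best}

print(cases(states))  # {'Dem expected': 23, 'Dem best': 33, 'Rep best': 18}
```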

What is the "Tipping Point"?

The "tipping point" is the margin in the state that would put the winning candidate over the edge. This can essentially be viewed as a "pseudo popular vote" number adjusted for the structure of the electoral college. Basically, if polls everywhere in the country moved identically, how big of a change would it take to flip the winner.
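In code form, a sketch (hypothetical states, and a tiny 10-EV "country" so the walk is easy to follow):

```python
def tipping_point(states, total_ev=538):
    """states: name -> (electoral_votes, dem_margin). Walks from the
    winner's strongest state toward their weakest, and returns the state
    (and its margin) that puts them over the winning line."""
    needed = total_ev // 2 + 1
    dem_wins = sum(ev for ev, m in states.values() if m > 0) >= needed
    # Winner's strongest states first: most-Dem first if Dem wins, else most-Rep
    ordered = sorted(states.items(), key=lambda kv: kv[1][1], reverse=dem_wins)
    running = 0
    for name, (ev, margin) in ordered:
        running += ev
        if running >= needed:
            return name, margin

# 10 total EVs, 6 needed to win; Dem leads A and B for 7 EVs.
states = {"A": (4, 8.0), "B": (3, 1.0), "C": (3, -2.0)}
print(tipping_point(states, total_ev=10))  # ('B', 1.0) -- B is the tipping point
```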

What are the "Probabilistic Views"?

On each of the two "probabilistic views" I do what is called a "Monte Carlo Simulation".

Basically, I run 1,000,001 simulated elections based on the current polling data. Once you sort all the results from best for the Democrat to best for the Republican, the median is the result of the single simulation in the middle… roughly speaking the Republican does better in half the simulations, the Democrat does better in the other half. The "σ" ranges show the range of outcomes when you look at the middle 68.27%, 95.45%, and 99.73% of the simulations. And finally the odds just represent what fraction of the simulations gave each of the three possible outcomes (Democratic Win, 269-269 tie, Republican win).
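A sketch of that summary step, given a list of simulated Dem electoral vote totals (the band fractions match the σ percentages above; an odd simulation count like 1,000,001 guarantees a single middle element):

```python
def summarize(dem_ev_results, total_ev=538):
    """Median, middle-68.27/95.45/99.73% bands, and win odds from a list
    of simulated Democratic electoral vote totals."""
    s = sorted(dem_ev_results)
    n = len(s)
    median = s[n // 2]              # the single middle simulation (n odd)
    bands = {}
    for label, frac in (("1σ", 0.6827), ("2σ", 0.9545), ("3σ", 0.9973)):
        cut = int(n * (1 - frac) / 2)
        bands[label] = (s[cut], s[n - 1 - cut])
    tie = total_ev // 2             # 269 of 538
    odds = {
        "Dem win": sum(ev > tie for ev in s) / n,
        "Tie": sum(ev == tie for ev in s) / n,
        "Rep win": sum(ev < tie for ev in s) / n,
    }
    return median, bands, odds
```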

This gives a more subtle view of what is going on in the race than simply classifying some states as "weak" states that could go either way. The movement of averages within categories makes a difference. A 0.1% lead in a state is different than a 4.9% lead. These views recognize that, where the categorization view can't distinguish between those situations.

These views are brand new in 2020, so are somewhat experimental. I welcome feedback from those who want to engage with me on the detailed methodology.

How do you come up with the probabilities?

This lovely blog post from January 2019 goes into all kinds of detail on how I constructed mappings from the polling average margins to percentage chances of each candidate winning. To summarize, I analyzed how the difference between the final polling average and the actual election results varied based on the polling average margin, aggregated over the three election cycles I have done polling averages for (2008, 2012, and 2016).

Using that historical data and some math, I generated a mapping that basically said "When the state polling average margin is X%, the actual election results have averaged Y% with a standard deviation of Z%, which translates into a W% chance of winning the state."
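The final step of that mapping can be sketched with a normal distribution; the normal assumption and the example numbers here are mine, not the site's exact method:

```python
from statistics import NormalDist

def win_probability(expected_margin, stddev):
    """If history says the actual margin averages Y with standard deviation Z
    for a given polling margin, the win chance is the probability mass
    above a zero margin."""
    return 1 - NormalDist(expected_margin, stddev).cdf(0)

# e.g. an expected margin of 2.0 points with a 3.0 point standard
# deviation works out to roughly a 3-in-4 chance of winning the state.
print(win_probability(2.0, 3.0))
```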

I am sure that there are valid criticisms of my methodology. I'd love to hear from folks who know about such things and can give me constructive criticism. A few I can think of myself right off the bat:

  • I only have three election cycles of data, and that really isn't a lot
  • I may be over-fitting based on that limited data
  • I have left in "jitters" in the data rather than using a smoothed curve
  • My way of using "windowed" means and standard deviations is a little odd

Nevertheless, it seemed to provide an interesting way of looking at things that was worth experimenting with in 2020. We'll see how it works out.

What is the "Independent States" view?

The polling average margin to win odds mapping I generated has to be applied to states when doing a simulation.

One way of doing that is to essentially just roll the dice separately for each state. So when one candidate has a 36% chance of winning Georgia and a 55% chance of winning Ohio, what happens in one state has zero influence over what happens in the other.  If one candidate pulls an upset in a close state, that is very likely to be compensated for by an upset in the opposite direction by the other candidate.
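A sketch of this fully independent approach, with hypothetical states and probabilities:

```python
import random

def simulate_independent(states, runs=10_000, seed=42):
    """states: list of (electoral_votes, dem_win_probability).
    Each state gets its own independent random draw in every simulation."""
    rng = random.Random(seed)
    results = []
    for _ in range(runs):
        dem_ev = sum(ev for ev, p in states if rng.random() < p)
        results.append(dem_ev)
    return results

# e.g. Georgia-ish and Ohio-ish chances from the text, plus a safe state
states = [(16, 0.36), (18, 0.55), (20, 0.99)]
results = simulate_independent(states)
```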

This way of doing things generates some nice looking results, and was the first probabilistic view I added to Election Graphs.

Wait, aren't polling errors correlated between states? 

Well, OK.

You've got me.

I'd kind of hoped that the analysis I did of the previous differences between Election Graphs averages and actual election results showed that you most often did NOT see a consistent correlation between the results in all the states. That is, you didn't see a situation where one year all the polls were biased toward Democrats, and another they were biased toward Republicans, etc. Looking at the scatter plots, polls were off in both directions, and the bigger pattern seemed to be a bias toward making the race look closer than it was, rather than a correlated error toward one party over the other.

But experts have looked at this a lot. There is definitely some correlation. And more specifically, there can be strong regional correlations, even if there is less of a pattern nationwide. If polls make a certain kind of error in one Midwest rust-belt state, it is decently likely that they will make the same kind of error in other similar states.

So the "Independent States" way of looking at things underestimates the chance of critical key states all having errors in the same direction at the same time.

There was a big blow up on "Election Twitter" in September 2020 criticizing a prominent election forecaster for releasing a model that treated the states as completely independent from each other. Lots of people pointed out how unrealistic this was, how no serious model could make that assumption, and how in the context of 2020 that meant the model way overestimated Biden's odds. That model was retracted and sent back for more work to add consideration of correlation between states.

Whoops. The probabilistic view on Election Graphs had the same issue. I had waved off Election Graphs consistently giving Biden 99%+ chances of winning by noting it was a nowcast rather than a forecast, but it was getting close enough to the election to doubt that excuse, especially when the tipping point was dipping to levels low enough that it had been wrong in the last three election cycles, and Biden's chances were still super high.

So clearly the probabilistic view I was showing was, at the very least, not OK to represent the full picture.

So I had to do something.

What is the "Uniform Swing" view?

This is the "something" mentioned in the previous question.

My site has simple poll averages, and I used some analysis to generate a mapping from polling average margins to percent win probabilities.

I do not have, nor am I likely to generate any time soon, a model that tries to use all sorts of other data and assumptions to model a specific level of correlation between the states, let alone a more complicated model that identifies groups of states that are likely to move together.

The Independent States view shows the extreme case where the states are completely uncorrelated. Where knowing what happens in one state gives you no valuable information to help predict what will happen in any other state.

I don't have a good way at the moment to produce a view that has some degree of partial correlation between the states.

But I was pretty easily able to go to the other extreme where I assume the states are actually perfectly correlated with each other.

Rather than rolling the dice separately for each state to see which ones are won and lost, I roll the dice ONCE for each of the 1,000,001 simulations I do, and that single random number determines how ALL the states go. So, if I generate a random number between 0% and 100% and get a 57%, then each candidate wins every state where they had a greater than 57% chance of winning.

This is the "Uniform Swing" view.

So the Independent States and Uniform Swing views give the two extremes for the odds of winning.

The fully correlated Uniform Swing view produces a much broader range of outcomes than Independent States, because if one candidate is exceeding their polls in a particular simulation, they exceed them everywhere.

So the Independent States view shows the low end for the odds of either an upset win or a landslide by the leader, while the Uniform Swing view shows the high end probabilities for both of those options.

The truth is somewhere in between. I don't know exactly where in between though, so I just show both extremes.
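To make the difference between the two extremes concrete, here is a minimal sketch of both kinds of simulation. The state names, win probabilities, electoral vote counts, and the 36-vote winning threshold are all made-up illustrative numbers, not this site's actual data, and the simulation count is reduced from the 1,000,001 runs described above.

```python
import random

# Hypothetical per-state win probabilities for the leading candidate,
# and each state's electoral votes (illustrative numbers only).
states = {
    "A": (0.90, 20),
    "B": (0.70, 15),
    "C": (0.55, 10),
    "D": (0.45, 25),
}
EV_TO_WIN = 36  # majority of the 70 electoral votes above

def simulate(correlated, trials=100_000, seed=42):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        if correlated:
            # Uniform Swing: one roll decides every state at once.
            roll = rng.random()
            ev = sum(v for p, v in states.values() if p > roll)
        else:
            # Independent States: a separate roll for each state.
            ev = sum(v for p, v in states.values() if rng.random() < p)
        if ev >= EV_TO_WIN:
            wins += 1
    return wins / trials

print("Independent States win probability:", simulate(correlated=False))
print("Uniform Swing win probability:     ", simulate(correlated=True))
```

With these toy numbers the two views give noticeably different odds, which is exactly the point: the only change is whether each state gets its own random number or all states share one.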

Come on, that is a cop-out, can't you do better and make one estimate with the right amount of correlation between states?

Maybe for 2024. It is way too late for 2020.

Of course that would be making the model on this site even more complicated, when the original point was simplicity. It is already so much more complicated than it was in 2008! So I'm not sure if I actually even want to do that.

However, if you are someone who has some knowledge of ways to do this and wants to give me some advice, drop me a note!

Delegate Race

Why are you doing delegate tracking in addition to the electoral college stuff?

Fundamentally, because I have fun with it, just like the other. But in addition I think I’ve added a couple things beyond the usual delegate tracking you find everywhere in the primary season that are useful and interesting.

OK, what are those things?

First up is tracking the percentage of remaining delegates needed to win. I think this gives a better measure of how the race is going than just looking at the delegate totals alone, because it captures how much harder it is to change the outcome as the race progresses and the frontrunners accumulate delegates.

Second is tracking the various metrics against the percentage of delegates allocated so far, rather than against the date. This gives a better understanding of just how far along the process is than the date does, since the distribution of delegate allocation “events” is very clumpy, not evenly distributed through the primary season. This is also influenced by when delegates not bound by primary and caucus results make their decisions, which doesn’t happen on any particular pre-set timetable.

Third, rather than concentrating completely on the current delegate counts, I also show the progression of these metrics over time, giving a sense of how things are developing. Now, the past is not necessarily directly predictive of the future; there is always the possibility that campaign events occur that dramatically change the direction of the race. But looking at how things have been going so far, more often than not, gives you an idea of the most probable way things are going to continue.
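The first metric above is just arithmetic: of the delegates not yet allocated, what share must a candidate win to reach a majority? Here is a small sketch; the delegate totals in the usage example are made-up round numbers, not figures from any real contest.

```python
def pct_remaining_needed(candidate_delegates, allocated_so_far, total_delegates):
    """Percentage of still-unallocated delegates a candidate must win.

    Over 100% means the candidate can no longer reach a majority on
    bound delegates alone; 0% or below means they have clinched.
    """
    # Simple majority of all delegates is needed to win the nomination.
    needed_to_win = total_delegates // 2 + 1
    remaining = total_delegates - allocated_so_far
    return 100.0 * (needed_to_win - candidate_delegates) / remaining

# Hypothetical race: 4,000 total delegates, 2,400 allocated so far,
# and our candidate holds 1,200 of them.
print(pct_remaining_needed(1200, 2400, 4000))  # → 50.0625
```

Note how the number moves: a candidate at exactly half of the allocated delegates needs slightly more than half of what remains, and that pressure grows every time a rival picks up delegates.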

Why are your delegate counts different than those at my favorite media outlet?

Until delegates actually do the roll call vote at the convention, delegate counting has a lot of uncertainty involved.

While some delegates are directly bound and known based on primary and caucus night results, many others are determined in a variety of ways that make them hard to count. In some cases there is a multi-stage delegate selection process. Some outlets choose not to count delegates in these multi-stage processes until the final delegates are known at the last stage of that process. Others, like this site, attempt to estimate what the final outcome of the process will be based on the results of the earlier stages. These are estimates, and will almost certainly change and shift as the process continues.

In addition, other delegates are actually complete free agents and can vote for whoever they feel like, and can change their minds repeatedly until they cast their actual vote at the convention. Often delegates for candidates that drop out end up in this category as well if they are “released”. Where these delegates publicly express their support for a candidate, this can be used to predict how they will vote, but even in these cases the delegate can still change their mind.

All of the above means that different outlets will make different choices of how to estimate the delegate count, and will have slightly different numbers. The general trends should be similar though.

So are you some kind of expert in the delegate selection process to make these estimates?

No. Not at all. Many years ago I was a physics major. These days I work at a Seattle tech company. I have no particular expertise in this other than being an interested amateur. My estimates though are for the most part not made by me trying to directly estimate numbers from my own reporting. I consult a number of different sources of information on delegate outcomes and try to come up with numbers that use the best available information from those sources.

OK, so what are those sources?

My primary source in 2020 was Green Papers and the sites they link as references. I basically always mirrored their results unless there was a very specific reason to differ. I also carefully watched a Twitter account that presents delegate information along with a mountain of snark.

Doesn’t the fact some delegates can change their minds invalidate the “% of remaining delegates” metric?

This was a much bigger deal in 2016 when the Democratic superdelegates could vote on the first ballot. In 2020 the only place this really mattered was in reallocation of delegates for candidates that dropped out. Either way, it does mean that it is really “% of remaining delegates if none of the already allocated delegates change or were estimated incorrectly”. In practice in recent contests, this hasn’t made a huge difference.

But yes, it is always the case that there can indeed be changes, and you can imagine situations where this would end up being quite significant. For instance, if the frontrunner were to have an unexpected health issue, dropping out of the race and releasing all of their delegates halfway through the primary season, then of course everything gets scrambled. But even events like this would be nicely represented in the graphs, with the % of delegates allocated moving backwards, and the “% of remaining needed to win” changing appropriately. Absent a dramatic event of that sort though, delegates changing is a minor secondary effect, and the estimates in multi-stage processes are often “close enough”, so these factors are unlikely to have a huge effect on the charts.

How often is the data updated?

During primary season, I try to update the numbers at least daily. On big primary nights I will often update hourly as results come in.