I’m having a few problems with using the Intrade data for US election analysis and I’d love your help to figure a few things out. This might not be for everyone as it gets a little nerdy – but for those without stats backgrounds, I’ll use some charts and stuff to explain the maths in ways that will hopefully make it a bit more digestible.
I’m more interested in using the State-by-State betting markets on Intrade – the “Democrat/Republican to win State X” markets – as a focus, because I think those markets actually contain superior information to any of the headline markets such as “Democrat/Republican to win the Presidency”.
My thinking on this comes from seeing the US Presidential election not as a single election, but as the collective result of some 51 individual electoral contests (the States and DC) that all just happen to occur on the same day.
I reckon that the amount of information a participant in a State betting market has about the true political situation on the ground, as a proportion of the total political information available about that State, is, on average, higher than the amount of information a participant in the headline markets has as a proportion of the total information available about all 51 electoral contests in the US.
As a result, I see the State markets as containing, both individually and collectively, far superior information about the true “current state of play” in the US political system.
The problem comes in deciding which way to aggregate that info.
Over at the US election page on the site, I’m using Monte Carlo simulations to try to aggregate the information contained within those State markets. But I’m not using Monte Carlo simulations in the usual way they are done, because I don’t believe that the usual way is actually a valid methodology for political betting markets.
For those of you reading that just went “WTF is a Monte Carlo simulation”, fear not, it will all become clear in a bit.
The way Monte Carlo simulations are usually used in these markets is, very basically, that you take a State – let’s say Ohio – and look at the probability the Intrade market gives it of being won by a particular party – let’s say the Democrats. At the moment the Intrade probability for the Democrats winning Ohio is 67%, or 0.67.
Then we generate a random number between 0 and 1 (to, say, three decimal places) and compare it against the Intrade probability – if the random number is 0.67 or less, we give the Electoral College votes for Ohio to the Democrats; if it is greater than 0.67, we give those Electoral College votes to the Republicans. But we don’t just do this for one State – we do it for all 51 electoral contests at the same time. After every state has had one random number generated for it, compared to its Intrade probability, and the Electoral College votes distributed accordingly, we add up all of the Electoral College votes each party would win across the country to get one simulation of the election result.
Then we do that exact same thing a million times over to end up with 1 million possible outcomes that look like a bell curve. The mean of that bell curve is the average number of Electoral College votes the Democrats win across the 1 million simulations, and from that distribution we can calculate not only the current implied probability of a political party winning the presidency according to the Intrade State markets, but the probability of them getting any given number of Electoral College votes in total.
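For anyone who wants to see it in code, here’s a minimal sketch of that orthodox approach in Python. The state probabilities and Electoral College vote counts below are just placeholder figures rather than live Intrade data, and I’ve only listed three states to keep it short – the real thing would loop over all 51 contests.

```python
import random

# Placeholder inputs: (Intrade probability of a Democrat win, Electoral
# College votes) for a handful of states -- illustrative figures only.
states = {
    "Ohio":    (0.67, 20),
    "Florida": (0.50, 27),
    "Alabama": (0.05, 9),
}

def simulate_once(states):
    """One simulated election under the orthodox approach: draw a uniform
    random number for each state and give its Electoral College votes to
    the Democrats if the number is at or below the Intrade probability."""
    dem_votes = 0
    for prob, ecv in states.values():
        if random.random() <= prob:
            dem_votes += ecv
    return dem_votes

# Repeat a million times to build the bell curve of possible outcomes.
results = [simulate_once(states) for _ in range(1_000_000)]
print("Mean Democrat Electoral College votes:", sum(results) / len(results))
```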
That’s the way it is usually done, but I think that methodology is completely and utterly invalid to use for Intrade political betting markets. Let’s say that on the day before the election, the Intrade markets give the Democrats a 5% chance of winning Alabama the next day.
Theoretically, using the orthodox Monte Carlo approach, if 100 elections were held the next day, the Democrats would win Alabama in 5 of them. Now that is clearly nonsense. You could hold one hundred thousand elections the next day and Alabama wouldn’t turn blue in any of them! Ordinarily we would expect those strange results to wash out over the million simulations, and those extreme ones do – but there is also a problem with the less extreme probabilities, which we’ll get to in a bit. This is just a simple example of the sorts of problems we face.
So rather than deal with these funny little “not in a million years but regularly on Intrade” results that occur in the simulations, I’m doing it differently.
I create for each State a normal probability distribution with a mean equal to its current Intrade probability and a standard deviation of, currently, 0.2. I then generate a random number from that probability distribution, and if that random number is greater than 50% I give the State to the Democrats; if it’s less than 50% I give it to the Republicans. I do that once for every state and add up the Electoral College votes, then repeat the process a million times to end up with a bell curve of the Electoral College results that gives us our implied State market probability of the Democrats winning the presidency.
For the non-stats types, the way to visualise this is to take an imaginary State that has a current Intrade probability of exactly 50% for the Democrats winning it. The distribution would then look like a bell curve whose highest point is exactly 0.5. With a standard deviation of 0.2, around 68% of the random numbers I pull out of that distribution will be between about 0.3 and 0.7 (the mean of 0.5 plus or minus one standard deviation of 0.2), while about 13.5% of the random numbers will be between 0.1 and 0.3 and another 13.5% will be between 0.7 and 0.9 (those figures aren’t exact to that many decimal places, but they’re approximately correct). This means that the random numbers pulled out of that distribution will be more likely to be close to the mean of 0.5 than far away from it.
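And for comparison, here’s a minimal sketch in code of the approach described above, again with the same placeholder figures – the only real change from the orthodox version is that the random draw now comes from a normal distribution centred on the Intrade probability and is compared to 0.5.

```python
import random

# Same placeholder figures as before: (Intrade probability of a Democrat win,
# Electoral College votes).
states = {
    "Ohio":    (0.67, 20),
    "Florida": (0.50, 27),
    "Alabama": (0.05, 9),
}

STD_DEV = 0.2  # current uncertainty assumption, identical for every state

def simulate_once(states, sd=STD_DEV):
    """One simulated election: draw from a normal distribution centred on
    the state's Intrade probability and give its Electoral College votes to
    the Democrats if the draw exceeds 0.5."""
    dem_votes = 0
    for prob, ecv in states.values():
        if random.gauss(prob, sd) > 0.5:
            dem_votes += ecv
    return dem_votes

results = [simulate_once(states) for _ in range(1_000_000)]
# With all 51 contests included, the share of simulations reaching 270
# Electoral College votes would give the implied probability of winning the
# presidency; with these three placeholder states we just look at the mean.
print("Mean Democrat Electoral College votes:", sum(results) / len(results))
```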
However – is that standard deviation right? Should the same standard deviation apply to all States, and should it stay at a value of 0.2 all the way through to the election?
For the non-stats types, the following graphs will be handy. The larger the standard deviation, the wider the shape of the bell curve – the charts below show how it works. They are two distributions I calculated using 1 million simulations: the first is a normal distribution with a mean of 0.5 and a standard deviation of 0.2, the second is the same except it has a standard deviation of 0.1. Notice how the smaller the standard deviation, the tighter the range of random numbers that can be generated from it (the random numbers we generate for the US States come from the area under the curve). The smaller the standard deviation, the closer the random number we pull from the distribution will be, on average, to the mean.

On the question of whether the standard deviation should remain the same through to the election, I’m of the mind that it shouldn’t – but I’d love to hear your thoughts on it.
I think that the standard deviation we give to the individual State market distributions here should be a function of uncertainty – as in, how sure are the punters that the probability the market gives for a given State is true? A lot of that uncertainty is reduced by information about the state of play on the ground in a given State – information like polls, for instance.
As we approach the election, the uncertainty of each state market should reduce as more information like polling gets released. To accommodate this we should probably reduce our standard deviations for the State markets over time as well.
But the big question is whether the uncertainty reduces linearly or non-linearly as we approach the election. For instance, if we chart how the reduction of uncertainty would look over time as we approach the election, both as a linear function and a non-linear function we get:

I’m of the mind here that uncertainty will reduce in a non-linear fashion, simply as a function of the number of polls and the timing of their release. We can all remember our own election here last year, when the number of polls released gradually increased in the lead-up to the election before increasing dramatically over the campaign period. Because such increasingly vast quantities of polls will be released in the US as Election Day looms, I’m thinking that people will become increasingly certain of their bets in the State markets as Election Day approaches.
Any thoughts?
We’ve got our current uncertainty represented by a standard deviation of 0.2 which at the moment is fairly wide, but we also need our final uncertainty to use on election eve in order to generate all of the standard deviations between now and then that we will use.
As for what the uncertainty should be on the day before the election, unfortunately we can’t really model it to get a number because we just don’t have enough data – so we’ll have to use our heads and make an assumption.
For instance, what are your thoughts on having an uncertainty level on the day before the election of a size that is represented by a standard deviation of 0.025, or 2.5%?
That would mean, essentially, that were a hypothetical State on election eve to have a 50% probability of going to the Democrats, then the uncertainty around that final result would be such that there would be roughly a 68% chance of the true probability of the Democrats winning the State being between 47.5% and 52.5%, and an approximate 95% chance of the true probability being between 45% and 55%.
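To make that concrete, here’s one purely illustrative way of generating a schedule of standard deviations between now and election eve. The shape of the curve (a cubic decay, so that most of the reduction happens late, when the polling floodgates open) and the number of days remaining are assumptions for the sake of the sketch, not settled choices.

```python
SD_NOW = 0.2             # current standard deviation
SD_ELECTION_EVE = 0.025  # assumed election-eve standard deviation
TOTAL_DAYS = 140         # placeholder: days between now and election eve

def sd_on_day(days_elapsed, total_days=TOTAL_DAYS,
              start=SD_NOW, end=SD_ELECTION_EVE, power=3):
    """Non-linear schedule: uncertainty falls slowly at first, then drops
    away quickly as the late flood of polling arrives. 'power' controls how
    sharply the reduction accelerates (power=1 would give a linear fall)."""
    t = days_elapsed / total_days   # fraction of the campaign elapsed so far
    return start - (start - end) * t ** power

# How the standard deviation would shrink under this assumption:
for day in (0, 35, 70, 105, 133, 140):
    print("day", day, "sd =", round(sd_on_day(day), 3))
```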
Does that sound like a reasonable standard deviation to represent election eve uncertainty?
This also gets us back to why I think we should use this type of Monte Carlo simulation methodology rather than doing things the way they’re usually done.
If we believe that markets actually contain good information, then using a standard monte carlo approach is inconsistent with that belief. On the one hand we’d be saying that markets know best, but on the other treating them as if they don’t by drawing random probabilities to judge them against.
For instance, if Florida was given a 30% chance of falling to the Dems on election eve – does that really mean that if 100 elections were held the next day, the Democrats would be expected to actually win Florida 30 times? Or would that probability be substantially less, on the basis that the market has probably got it right in outcome if not in probability?
That matters especially since Intrade predicted every State result correctly last election, often by small margins of only a few percent. It’s highly improbable that Intrade would have predicted every race were the chances of each party winning a given State truly represented, literally, by the Intrade odds. On Nov 2nd 2004, Intrade had the Republicans in front by less than 7% probability in Florida, New Mexico, Ohio and Iowa – 59 Electoral College votes all up. Intrade predicted every winner, but by margins so small that it suggests we should treat the results with more respect for the predicted outcomes than standard probability theory tells us we ought to.
Hence, I think that we should measure uncertainty by drawing random numbers from within a probability distribution for each State and comparing them to the 50% probability mark to distribute Electoral College votes, rather than randomly drawing numbers, comparing them to the implied probability in each State and distributing Electoral College votes accordingly.
If I use this methodology (with standard deviations reducing non-linearly) with Intrade data for the 2004 presidential election, on election eve the mode of the simulation (the number of Electoral College votes for the Democrats that gets projected most often) is 252, which was exactly the result. The final probability for the Dems on election eve was a 36.9% chance of victory. On June 21st 2004, the Intrade State markets gave the Dems a 29% chance of victory with a mode of 255 Electoral College votes. So the methodology plays out pretty well using the 2004 Intrade data that I’m slowly gathering.
The other two big questions that I haven’t quite got my head around are:
1. Should each State have the same type of distribution, as in a normal distribution, or would there be other types of distributions that might represent them better on the basis of circumstances happening in each State – and if so, what sort of distributions, and on what basis should we select them? (I can effectively use any type of probability distribution known to man here.)
2. Should the depth of each State market – as in, the volume of contracts traded for a State market – have a say in the size of the standard deviation we give to each State’s normal distribution, and if so, has anyone got any ideas on how to make that so?
High volumes of contracts traded should theoretically represent greater certainty because more people believe a given probability. So should we include market volume when it comes to determining the standard deviations of the States, and how?
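I don’t have a settled answer here, but as a straw man for discussion, one very simple rule would be to shrink a State’s standard deviation as its traded volume grows. The scaling rule, the reference volume and the floor below are all just illustrative assumptions.

```python
# Illustrative idea only: shrink a State's standard deviation as its traded
# volume grows, so deeper markets are treated as more certain.
BASE_SD = 0.2            # standard deviation for a thinly traded market
FLOOR_SD = 0.05          # floor, so even the deepest market keeps some uncertainty
REFERENCE_VOLUME = 5000  # placeholder: volume at which the shrinkage levels off

def volume_adjusted_sd(contracts_traded, base=BASE_SD,
                       floor=FLOOR_SD, ref=REFERENCE_VOLUME):
    """Interpolate between the base and floor standard deviations according
    to how heavily traded the State's market is."""
    weight = min(contracts_traded / ref, 1.0)   # 0 = no volume, 1 = deep market
    return base - (base - floor) * weight

print(volume_adjusted_sd(200))    # thin market  -> close to 0.2
print(volume_adjusted_sd(8000))   # deep market  -> 0.05
```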
And if I haven’t explained anything adequately here, please let me know or ask me, because I’d really like to construct simulations between now and US Election Day that try to extract the best information we can get out of those knowledge-filled State markets.
All suggestions would be really appreciated.
On something else US election related, here is the Obama campaign strategy that’s been flying around the intertubes very recently (it’s a small pdf version of a powerpoint presentation). Thanks to LL for sending me that. Some of you may not have seen it, and it’s pretty interesting.
On something more local – that slayer of psephological piffle and all round electoral legend Antony Green has a spiffy blog. He tags Newspoll for being gooses in preference distributions when OPV is running.