Possums Pollytics

Politics, elections and piffle plinking

Archive for June, 2008

The Narrowing, the Narrowing! Run for the Hills!

Posted by Possum Comitatus on June 30, 2008

Newspoll Tuesday, or for those insomniacs – Newspoll Monday Night – has come around yet again. It delivers us polling goodness of primary votes running 44/39 to the ALP flowing through to a TPP of 55/45 the ALP’s way.

The Narrowing Beast is upon us! Dennis shall regale us with tales of honeymoons concluded and political danger, Paul Kelly will tell us that Rudd needs to become a “conviction politician” to overcome this most dire of polling slumps, while Glenn Milne will, oh, probably talk about politicians’ social lives or something.

Yet all shall speak in dulcet tones of “The Narrowing”.

We’ll get into the nitty gritty channelling of the bellybutton lint over the numbers on Tuesday, but in the meantime we can update our Pollytrack series (the most accurate tracking poll in the country) and its webbed-feet cousin Loess Allpolls to give us something to chew over.


Something you might be interested in; if we take every Newspoll from after the 1996 election through to today and look at the Government two party preferred result, turn that result into a probability distribution and simulate it a million times – we end up with a nice little curve that tells us the historical probability of a government getting between any two values of Newspoll.

Here it shows that the chances of the government getting 55% or higher in TPP terms is about 9%. Put another way, 91% of all polls for the government have historically been below 55%.
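For the curious, the exercise is easy to reproduce. Here’s a minimal sketch in Python – the short poll series below is a made-up stand-in for the real 1996–2008 Newspoll data, so the probability it spits out is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical historical government TPP results from Newspoll, 1996-2008
# (illustrative numbers only -- the real series has far more observations).
historical_tpp = np.array([51.0, 49.5, 53.0, 47.0, 50.5, 52.0, 48.5, 55.5,
                           50.0, 46.5, 49.0, 51.5, 54.0, 48.0, 52.5, 50.5])

# Treat the history as a normal distribution and simulate a million polls.
mu, sigma = historical_tpp.mean(), historical_tpp.std(ddof=1)
sims = rng.normal(mu, sigma, 1_000_000)

# Historical probability of a government polling 55% TPP or higher.
p_55_or_more = (sims >= 55.0).mean()
print(f"P(TPP >= 55) ~ {p_55_or_more:.1%}")
```

With the real Newspoll series plugged in, the same few lines give the ~9% figure quoted above.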


Posted in Uncategorized | 12 Comments »

Intrade, EMH and more Monte Carlos.

Posted by Possum Comitatus on June 30, 2008

All you folks that read the headline and wondered what the possible relationship between holographic Star Trek characters, Arnotts biscuits and betting markets could be, fear not. :mrgreen:

In comments in the last post, Andrew Leigh raised the Efficient Market Hypothesis (EMH) as a good place to start for dealing with Intrade markets. For those that don’t know just what that is, it’s probably worth having a squiz at here.

For the sake of simplicity, we’ll just look at the three general forms of the EMH – strong form efficiency, semi-strong form efficiency and weak form efficiency – since knowing which one applies to Intrade behaviour helps us enormously in trying to analyse the results.

So let’s do this via a process of elimination.

First up is strong form efficiency, which states (from the above link) that “Share prices reflect all information, public and private, and no one can earn excess returns.”

The key here is the first bit, which as far as our Intrade analysis is concerned means that Intrade probabilities at any given time “reflect all information, public and private”.

The reason I used the 2004 Intrade results from election eve, rather than election day itself, for calibrating our Monte Carlo sim is the massive volatility that occurred on election day in the Intrade markets – newstraders went haywire, making bets on every piece of dodgy exit polling that was released. Ironically, the prices on election eve had it right, and the prices on election day had it wrong for a good chunk of the time.

We know that Intrade prices don’t necessarily reflect “all information, public and private” at any given time – we only have to look at the Intrade data on election day to see this. After the polls had closed in both Ohio and Florida, Kerry was actually ahead in the probabilities on Intrade. We can see this by borrowing a neat little chart of Intrade probabilities on election day from Justin Wolfers and Eric Zitzewitz in their paper “Partisan Impacts on the Economy: Evidence from Prediction Markets and Close Elections” (2006).

By 9.00pm the polls had well and truly closed in the two key States of Ohio and Florida, and by then private information was available to both Democrats and Republicans in those States telling them who had won the election – yet the Intrade market did not reflect that private information. It simply continued to reflect the public information: the widespread broadcast of exit poll results that turned out to be erroneous (or, more precisely, results that were within the margin of error of those exit polls). It wasn’t until after 10.00pm, when political information started to leak to the media about the true state of affairs, that the Intrade market reacted and pushed the Bush probability up beyond 50%, and did so rapidly. The same thing occurred in the Iowa Electronic Markets.

So we know that the strong form efficiency doesn’t always apply to Intrade – all private and public information is not reflected into the Intrade price at any given time, so we can reject the strong form efficiency.

We also know that weak form efficiency holds, since weak form efficiency simply tells us that the current Intrade price has all previous Intrade prices and trading volumes built into it. We would expect Intrade prices to be serially correlated if they were weak form efficient, and a quick peek at any Intrade political market or State shows that to be the case – for instance, here’s the Democrat headline market and its correlogram:

It’s serially correlated up the wazoo – but interestingly, it’s not, statistically speaking, exactly a basic random walk either (for the nerdy types that care for such things).
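For the nerdy types who want to check serial correlation on their own series, here’s a minimal sketch – the price series below is a synthetic random walk standing in for the real Intrade data:

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Sample autocorrelation of series x at lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return [np.dot(x[:len(x) - k], x[k:]) / denom for k in range(max_lag + 1)]

rng = np.random.default_rng(0)
# Synthetic stand-in for a daily Intrade price series: a bounded random walk.
prices = np.clip(50 + np.cumsum(rng.normal(0, 1.5, 200)), 1, 99)

acf = autocorrelation(prices, max_lag=10)
# A serially correlated series keeps high autocorrelations at short lags.
print([round(r, 2) for r in acf])
```

A correlogram is just these lag values plotted as bars; for a genuinely uncorrelated series they would all sit near zero past lag 0.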

So the big question becomes, does the semi-strong form of efficiency hold?

Using a neat and simple Wiki definition, “Semi-strong-form efficiency implies that share prices adjust to publicly available new information very rapidly and in an unbiased fashion, such that no excess returns can be earned by trading on that information.”

We can certainly see that public information drove rapid adjustments to Intrade prices on election day in 2004, and we also saw that private information didn’t impact upon price until it became public – so some degree of semi-strong form efficiency would seem to apply.

This brings us to something that might be worth watching over the next few months in terms of the data. If we assume that there is a semi-strong form of efficiency operating in the Intrade markets, is the State by State market a stronger, weaker or similar level of that semi strong form compared to the headline markets?

Thanks to Caf, our Monte Carlo simulation at least partially adjusts for State non-independence – if we run those weekly simulations going back to April, that gives us our implied State by State market probability, and comparing that to the headline “Democrat as President” market probability we get something interesting:

We don’t have enough data yet to run any tests here (although we will eventually), but it looks as if the headline market could well be chasing the State market results – though, that said, it could just be arbitrary at the moment; time will tell.

If our State market results can be demonstrated to be a leading indicator of the headline market, it will be evidence for the State markets containing a higher, more accurate level of information which takes time to flow through and be aggregated into the headline market – which, should that be the case, would be another piece of evidence backing the semi-strong form of efficiency argument.

Moving back to where our Monte Carlo sims stand at the moment, so far we are:

1. Turning our Intrade probabilities for each State into probability distributions with a mean of the implied probability for that State and a standard deviation now of 0.1. That lets us treat the probabilities not as exactly true, but as approximately true. As we approach the election, the standard deviation for those distributions will reduce, tightening up the Intrade probabilities to reflect greater certainty in the market that the implied probability is the true probability.

2. Thanks to Caf, for each iteration of the Monte Carlo sim, we generate a mean outcome M, uniformly distributed between 0.1 and 0.9. We use 0.1 and 0.9 to effectively cut the tails off the Intrade probabilities which we know aren’t accurate at the fringes. For each state election within a given iteration, we generate a value with mean M and a standard deviation of 0.01 (which we determined by using the 2004 election results as a calibration) to give us a call number. This call number for each state is compared to the randomly selected probability that we pull from our State probability distributions, and if our call number is greater than our State probability number we give the EV of a State to the Republicans, if it’s less than our State probability number we give it to the Democrats.

3. Each time we have an iteration, this process happens for all 51 electoral contests and we sum the simulated number of Democrat Electoral College votes. We repeat this process 100,000 times (the results are now really stable at 100,000 iterations with this new method) to get a probability distribution of the Electoral College votes that the Democrats win in the simulation, which then allows us to calculate the probability of the State markets giving the Dems 270 ECVs or greater.
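The three steps above can be sketched in a few lines. The State probabilities, EV counts and the toy 72-EV threshold below are made-up stand-ins for the real 51-contest, 270-EV setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Intrade-implied Democrat win probabilities and Electoral
# College votes for a cut-down six-State map (illustration only).
state_probs = np.array([0.95, 0.67, 0.55, 0.48, 0.30, 0.05])
state_evs   = np.array([55, 20, 21, 27, 11, 9])
TO_WIN = 72   # stand-in threshold: a majority of the 143 EVs on this toy map

N = 100_000
# Step 2: one shared mean M per iteration, uniform on [0.1, 0.9], which
# partially correlates the State outcomes within an iteration.
m = rng.uniform(0.1, 0.9, size=(N, 1))
calls = rng.normal(m, 0.01, size=(N, len(state_probs)))
# Step 1: treat each Intrade probability as only approximately true,
# drawing it from a normal with standard deviation 0.1.
drawn = rng.normal(state_probs, 0.1, size=(N, len(state_probs)))
# Step 3: call number at or below the drawn probability -> Democrat win;
# summing the EVs per row gives one simulated election per iteration.
dem_evs = (calls <= drawn) @ state_evs

p_dem = (dem_evs >= TO_WIN).mean()
print(f"P(Dems reach {TO_WIN} EVs) ~ {p_dem:.1%}")
```

Swap in the live Intrade probabilities, the 51 real contests and a 270-EV threshold and you have the weekly simulation described above.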

So the thing left to do is determine a process to reduce our standard deviation for the individual State probability distributions between now and the election, in such a way that it reflects the reduction in uncertainty in the markets that will occur during that period. I’m hunting through all sorts of data at the moment to see if I can find a good empirical basis for the shape of this reduction – but am always really open to any suggestions.


The US Election page on the Intrade data has had its weekly update, this time applying the new simulation methodology.

Posted in Intrade, US Elections | 2 Comments »

Request for mental assistance on US Election Intrade stuff.

Posted by Possum Comitatus on June 26, 2008

I’m having a few problems with using the Intrade data for US election analysis, and I’d love your help to figure a few things out. This might not be for everyone as it gets a little nerdy – but for those without stats backgrounds, I’ll use some charts and stuff to explain the maths in ways that will hopefully make it a bit more digestible.

I’m more interested in using the State by State betting markets on Intrade – the “Democrat/Republican to win State X” markets – as a focus, because I think those markets actually contain superior information to any of the headline markets such as “Democrat/Republican to win the Presidency”.

My thinking on this comes from seeing the US Presidential election not as a single election, but a collective result of some 51 individual electoral contests (the States and DC) that just all happen to occur on the same day.

I reckon that the average amount of information a participant in a State betting market has about the true political situation on the ground, as a proportion of the total political information available about that State, is higher than the equivalent proportion for a participant in the headline markets, who faces the total information available about all 51 electoral contests in the US.

As a result, I see the State markets as containing, both individually and collectively, far superior information about the true “current state of play” in the US political system.

The problem comes in deciding which way to aggregate that info.

Over at the US election page on the site, I’m using monte carlo simulations to try and aggregate the information contained within those State markets. But I’m not using monte carlo simulations in the usual way that it is done, because I don’t believe that the usual way is actually a valid methodology for political betting markets.

For those of you reading that just went “WTF is a monte carlo simulation”, fear not, it will all become clear in a bit.

The way monte carlo simulations are usually used in these markets is, very basically, that you take a State, let’s say Ohio, and look at the probability the Intrade market gives it to be won by a particular party – let’s say the Democrats. At the moment the Intrade probability for the Democrats winning Ohio is 67%, or 0.67.

Then we generate a random number between 0 and 1 (to, say, three decimal places) and compare it against the Intrade probability – if the random number is 0.67 or less, we give the Electoral College votes for Ohio to the Democrats; if it is greater than 0.67, we give them to the Republicans. But we don’t just do this for one State – we do it for all 51 electoral contests at the same time. Once every State has had one random number generated for it, compared to its Intrade probability, and had its college votes distributed accordingly, we add up all of the college votes each party would win across the country to get one simulation of the election result.

Then we do that exact same thing a million more times to end up with 1 million possible outcomes that look like a bell curve. The mean of that bell curve is the mean number of electoral college votes the Democrats will win after 1 million simulations, and from that we can calculate the current implied probability of a political party not only winning the presidency according to the Intrade State markets, but the probability of them getting any number of electoral college votes in total.
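As a sketch, the orthodox approach looks like this – the five State probabilities and EV counts are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical Intrade probabilities of a Democrat win in five States,
# plus each State's Electoral College votes (the real sim covers all
# 51 contests; these numbers are just for illustration).
probs = np.array([0.95, 0.67, 0.48, 0.30, 0.05])
evs   = np.array([55, 20, 27, 11, 9])

n_sims = 1_000_000
# One uniform random number per State per simulated election; a draw at
# or below the Intrade probability hands that State's EVs to the Democrats.
draws = rng.random((n_sims, len(probs)))
dem_totals = (draws <= probs) @ evs

# The million totals form the bell curve described above.
print("mean simulated Democrat EVs:", round(dem_totals.mean(), 1))
```

Note that under this scheme the mean of the bell curve is just the probability-weighted sum of the EVs, which is exactly why treating Intrade probabilities as literal win frequencies matters so much.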

That’s the way it is usually done, but I think that methodology is completely and utterly invalid for Intrade political betting markets. Let’s say that on the day before the election, the Intrade markets give the Democrats a 5% chance of winning Alabama the next day.

Theoretically, using the orthodox monte carlo approach, if 100 elections were held the next day, the Democrats would win Alabama in 5 of them. Now that is clearly nonsense. You could have one hundred thousand elections the next day and Alabama wouldn’t turn blue in any of them! Ordinarily we would expect those strange results to wash out in the million simulations, and those extreme ones do – but there is also a problem with the less extreme probabilities which we’ll get to in a bit. This is just a simple example of the sorts of problems we face.

So rather than deal with these funny little “not in a million years but regularly on Intrade” results that occur in the simulations, I’m doing it differently.

I create for each State a normal probability distribution with a mean of their current Intrade probability and a standard deviation of, currently, 0.2.  I then generate a random number from that probability distribution and if that random number is greater than 50% I give the State to the Democrats, if it’s less than 50% I give it to the Republicans. I then do that once for every state and add up the electoral college votes, then repeat the process a million times to end up with a bell curve of the electoral college results that gives us our implied State market probability of the Democrats winning the presidency.
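A sketch of that method, with made-up State probabilities standing in for the live Intrade numbers:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Intrade probabilities of a Democrat win in five States,
# plus each State's Electoral College votes (illustration only).
probs = np.array([0.95, 0.67, 0.48, 0.30, 0.05])
evs   = np.array([55, 20, 27, 11, 9])
SD = 0.2  # current uncertainty around each State's Intrade probability

n_sims = 1_000_000
# Draw from a normal centred on each State's Intrade probability; a draw
# above 0.5 awards that State's EVs to the Democrats.
draws = rng.normal(probs, SD, size=(n_sims, len(probs)))
dem_totals = (draws > 0.5) @ evs

print("mean simulated Democrat EVs:", round(dem_totals.mean(), 1))
```

The key difference from the orthodox sketch is the comparison point: here each drawn number is judged against the fixed 50% mark rather than against a fresh uniform random number, so a State priced at 0.95 almost never flips.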

For the non stat types, the way to visualise this is if we take an imaginary State that has a current Intrade probability of exactly 50% for the Democrats winning it, then the distribution would look like a bell curve where the highest point on that bell curve is exactly 0.5. By having a standard deviation of 0.2, it means that around 68% of the random numbers I pull out of that distribution will be between about 0.3 and 0.7 (the mean of 0.5 plus or minus the standard deviation of 0.2), and that 13.5% of the random numbers that will be pulled out of that distribution will be between 0.1 and 0.3 and another 13.5% will be between 0.7 and 0.9 (those figures aren’t exact to so many decimal places, they’re approximately correct). This means that the random numbers which are pulled out of that distribution will be more likely to be closer to the mean of 0.5 than further away from it.
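Those approximate figures can be verified straight from the normal cumulative distribution function, using nothing beyond the standard library:

```python
from math import erf, sqrt

def norm_cdf(x, mu=0.5, sd=0.2):
    """Cumulative probability of a normal(mu, sd) distribution at x."""
    return 0.5 * (1 + erf((x - mu) / (sd * sqrt(2))))

# Share of draws within one standard deviation of the mean (0.3 to 0.7).
within_1sd = norm_cdf(0.7) - norm_cdf(0.3)
# Share between one and two standard deviations below the mean (0.1 to 0.3).
between_1_and_2sd = norm_cdf(0.3) - norm_cdf(0.1)

print(f"{within_1sd:.3f}, {between_1_and_2sd:.3f}")  # prints 0.683, 0.136
```

So the quoted 68% and 13.5% are the familiar normal-curve bands, accurate to the rounding stated.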

However – is that standard deviation right? Should the same standard deviation apply to all States, and should it stay at a value of 0.2 all the way through to the election?

For the non-stat types, the following graphs will be handy. The larger the standard deviation, the wider the shape of the bell curve – below shows how it works. These are two distributions I calculated using 1 million simulations: the first is a normal distribution with a mean of 0.5 and a standard deviation of 0.2, the second is the same except with a standard deviation of 0.1. Notice how the smaller the standard deviation, the tighter the range of random numbers that can be generated within it (the random numbers we generate for the US States come from the area under the curve). The smaller the standard deviation, the closer the random number we pull from the distribution will be, on average, to the mean.

On the question of whether the standard deviation should remain the same through to the election, I’m of the mind that it shouldn’t – but I’d love to hear your thoughts about it.

I think that the standard deviation we give to the individual State market distributions here should be a function of uncertainty – as in, how sure are the punters that the probability for a given State is true? A lot of that uncertainty is reduced by information about the state of play on the ground in a given State – information like polls, for instance.

As we approach the election, the uncertainty of each state market should reduce as more information like polling gets released. To accommodate this we should probably reduce our standard deviations for the State markets over time as well.

But the big question is whether the uncertainty reduces linearly or non-linearly as we approach the election. For instance, if we chart how the reduction of uncertainty would look over time as we approach the election, both as a linear function and a non-linear function we get:

I’m of the mind here that uncertainty will reduce in a non-linear fashion, simply as a function of the number of polls and the timing of their release. We can all remember our own election here last year, when the number of polls released gradually increased in the lead-up to the election before increasing dramatically over the campaign period. Because such increasingly vast quantities of polls will be released in the US as Election Day looms, I’m thinking that people will become increasingly certain of their bets in the State markets as Election Day approaches.
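One way to sketch the two candidate schedules, taking the current standard deviation of 0.2 and an assumed election-eve value of 0.025 – the cubic shape of the non-linear version is purely illustrative, not an empirical finding:

```python
import numpy as np

SD_NOW, SD_EVE = 0.2, 0.025   # current sd and an assumed election-eve sd
weeks = np.arange(19)          # roughly 18 weeks from now to election eve
t = weeks / weeks[-1]          # 0 = now, 1 = election eve

# Linear schedule: uncertainty falls by the same amount each week.
sd_linear = SD_NOW + (SD_EVE - SD_NOW) * t

# Non-linear schedule: little change early, then rapid tightening as the
# flood of campaign-period polling arrives (cubic, purely illustrative).
sd_nonlinear = SD_EVE + (SD_NOW - SD_EVE) * (1 - t**3)

print(round(sd_linear[-1], 3), round(sd_nonlinear[-1], 3))  # prints 0.025 0.025
```

Both schedules hit the same endpoints; they only differ in how much uncertainty remains mid-campaign, which is exactly the question at hand.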

Any thoughts?

We’ve got our current uncertainty represented by a standard deviation of 0.2 which at the moment is fairly wide, but we also need our final uncertainty to use on election eve in order to generate all of the standard deviations between now and then that we will use.

As for what the uncertainty should be on the day before the election, unfortunately we can’t really model it to get a number because we just don’t have enough data – so we’ll have to use our heads and make an assumption.

For instance, what are your thoughts on having an uncertainty level on the day before the election of a size that is represented by a standard deviation of 0.025, or 2.5%?

That would mean, essentially, that were a hypothetical State on election eve to have a 50% probability of going to the Democrats, then the uncertainty around that final result would be such that there would be roughly a 68% chance of the true probability being between 47.5% and 52.5%, and an approximate 95% chance of the true probability being between 45% and 55%, for the Democrats winning that State.

Does that sound like a reasonable standard deviation to represent election eve uncertainty?

This also gets us back to why I think we should use this type of monte carlo simulation methodology rather than doing things the way they’re usually done.

If we believe that markets actually contain good information, then using a standard monte carlo approach is inconsistent with that belief. On the one hand we’d be saying that markets know best, but on the other treating them as if they don’t by drawing random probabilities to judge them against.

For instance, if Florida was given a 30% chance of falling to the Dems on election eve – does that really mean that if 100 elections were held the next day, the Democrats would be expected to actually win Florida 30 times? Or would that probability be substantially less, on the basis that the market has probably got it right in outcome if not in probability?

Especially since Intrade predicted every State result correctly last election, but often by small margins of only a few percent. It’s highly improbable that Intrade would have predicted every race were the chances of each party winning a given State truly represented, literally, by the Intrade odds. On Nov 2nd in 2004, Intrade had the Republicans in front by less than 7% probability in Florida, New Mexico, Ohio and Iowa – 59 Electoral College votes all up. Intrade actually predicted every winner, but by margins so small as to suggest we should treat the results with more respect for the predicted outcomes than standard probability theory tells us we ought to.

Hence, I think that we should measure uncertainty by drawing random numbers from within a probability distribution for each State and comparing them to the 50% probability mark to distribute Electoral College votes, rather than randomly drawing numbers, comparing them to the implied probability in each State, and distributing Electoral College votes accordingly.

If I use this methodology (with standard deviations reducing non-linearly) with Intrade data for the 2004 presidential election, on election eve the Mode of the simulation (the number of Electoral College votes for the Democrats that gets projected most often) is 252, which was exactly the result. The final probability of the Dems on election eve was a 36.9% chance of victory. On June 21st in 2004, the Intrade State markets gave the Dems a 29% chance of victory with a Mode of 255 Electoral College votes. So the methodology plays out pretty well using the 2004 Intrade data that I’m slowly gathering.

The other two big questions that I haven’t quite got my head around are:

1. Should each State have the same type of distribution – a normal distribution – or would other types of distributions represent them better on the basis of circumstances in each State? If so, what sort of distributions, and on what basis should we select them? (I can effectively use any type of probability distribution known to man here.)

2. Should the depth of each State market – the volume of contracts traded for that market – have a say in the size of the standard deviation we give to each State’s normal distribution, and if so, has anyone got any ideas on how to make that so?

High volumes of contracts traded should theoretically represent greater certainty because more people believe a given probability. So should we include market volume when it comes to determining the standard deviations of the States, and how?
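Purely as a suggestion, here is one hypothetical scheme: shrink each State’s standard deviation by the square root of its volume relative to some reference volume, clipped to sensible bounds. Nothing here comes from Intrade – the function, the reference volume and the numbers are all invented for illustration:

```python
import numpy as np

def volume_adjusted_sd(base_sd, volume, ref_volume, floor=0.025):
    """Shrink the base standard deviation as traded volume rises.

    A market at the reference volume keeps base_sd; thicker markets get a
    1/sqrt(volume) shrink, thinner ones a widening, clipped between a
    floor and twice the base. One possible scheme only, not Intrade's.
    """
    sd = base_sd * np.sqrt(ref_volume / np.maximum(volume, 1))
    return np.clip(sd, floor, base_sd * 2)

# Hypothetical contract volumes for three State markets: thin, typical, deep.
volumes = np.array([200, 2000, 20000])
print(volume_adjusted_sd(0.2, volumes, ref_volume=2000))
```

The square-root shape mirrors the way sampling error shrinks with sample size, which seems a natural first guess for how confidence should scale with market depth.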

And if I haven’t explained anything adequately here, please let me know or ask me, because I’d really like to construct simulations between now and US election day that try and extract the best information we can get out of those knowledge filled State markets.

All suggestions would be really appreciated.

On something else US election related, here is the Obama campaign strategy that’s been flying around the intertubes very recently (it’s a small pdf version of a powerpoint presentation). Thanks to LL for sending me that. Some of you may not have seen it, and it’s pretty interesting.

On something more local – that slayer of psephological piffle and all-round electoral legend Antony Green has a spiffy blog. He tags Newspoll for being gooses in preference distributions when OPV is running.

Posted in Uncategorized | 51 Comments »

Pollytrack goes permanent.

Posted by Possum Comitatus on June 24, 2008

Pollytrack now has enough data to become a permanent feature on the site, with its own dedicated page which you can see anytime by following the Pollytrack link in the buttons at the top of the site. It will be updated at the end of each week.

I’ll also add a small graphic soon under the Pollytrack results in the sidebar.

The Pollytrack series is a combined three-pollster rolling average, weighted by sample size, of Newspoll, ACNielsen and Morgan phone polls. Each point in the series is based on the most recent poll from each of those three pollsters, and as each new poll is released by a pollster, their old value gets replaced with their new value to construct the most recent Pollytrack data point.

Each observation of the Pollytrack series has a sample size on any given week of between 3100 and 4000, providing for a Margin of Error between 1.5% and 1.75%.
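Those figures follow from the standard 95% margin-of-error formula applied to the pooled sample:

```python
from math import sqrt

def margin_of_error(n, p=0.5):
    """95% margin of error, in percentage points, for a proportion p
    estimated from a simple random sample of size n."""
    return 1.96 * sqrt(p * (1 - p) / n) * 100

# Pooled Pollytrack samples of 4,000 and 3,100 respondents.
print(round(margin_of_error(4000), 2), round(margin_of_error(3100), 2))  # prints 1.55 1.76
```

So a pooled sample between 3,100 and 4,000 gives the quoted margin of error of roughly 1.5% to 1.75%, taking p = 0.5 as the worst case.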

Theoretically the Pollytrack series should be the most accurate tracking poll in the country.

The series started in mid April when ACN resumed their political polling, and will be calculated for each week – although at a slight lag, as we have to wait for all polls produced in a given week to be released before we can crunch the Pollytrack numbers.

Click to expand:

Along with the Pollytrack series, we’ll also produce two charts of all polls conducted by the three big pollsters, but this time including Morgan face to face polls. We will also run a Loess regression through the polling results to give us a line of best fit.

Click to expand:

Also, for those that don’t know, the US election intrade data page has also been updated for the week ending Sunday, and now has even more spiffy charts for all.

And finally, for a bit of a giggle – a lot of us check out the latest Morgan polls on Friday before they hit the front page of their site, by going to the address that the poll will appear on a few hours earlier. Normally the way it works is that a headline gets put up and a few hours later the content of the latest poll appears under it. This was the headline for their recently released qualitative survey on Federal politics before it was changed (it’s a screen capture – click it). :mrgreen:

I think Gary was taking the piss out of us early peekers.


Just a weird one and not everyone’s cup of tea – but something I forgot about.

Excerpts from a marathon reading of 1984 at the 2008 Sydney Writers Festival.

There’s just something quaint about book readings – but I’ve got to say, I’ve never seen a relay reading before. Is it just me or was there a bit of crowd attrition as the chapters passed 🙂


Posted in Uncategorized | 6 Comments »

Brendan Nelson – Perpetrator or Scapegoat?

Posted by Possum Comitatus on June 22, 2008

Let me tell you a secret – Brendan Nelson is unpopular.

Are you shocked? Probably not.

Let me tell you another secret – the Coalition’s poor standing in the polls is all Nelson’s fault.

Are you shocked at that? Probably not either, since we hear it every day. But you ought to be shocked, because it probably isn’t as true as it’s being made out to be.

Undoubtedly the leadership of a party will impact on the polling results, particularly the voting intention results that a party receives – but to blame all of that failure, or even most of it, on the leader is probably a bit of a rough call.

We can see why everything is currently being blamed on Nelson though – the MSM is filled with simple creatures that prefer simple things to talk about, like each other for instance, or each other’s opinions even more so.

But in terms of the really simple things, there is nothing simpler than very small numbers.

And when we talk about very small numbers in politics, the Nightwatchman’s preferred PM ratings are the most obvious thing to grab on to.

In the politics of polling, there are three variables that take up the mindspace – voting intention, satisfaction ratings and preferred PM ratings.

But unfortunately, the historical, long term, consistent relationships between these variables are rarely paid attention to, so it’s probably worth recapping the statistically significant things we discovered about these variables from last years blogging and analysis.

1. Opposition satisfaction ratings are both covariant with (meaning they move together with) the primary and TPP vote, as well as being a leading indicator of the primary and TPP vote. When Opposition satisfaction ratings go up, on average the primary and TPP vote moves up with them, as well as moving up in the next period – not by much, but by a little bit.

2. We also know that Opposition satisfaction ratings are covariant with the Preferred PM rating as well as being an ever so slight leading indicator of the Preferred PM rating.

3. The other thing we know is that the Preferred PM rating is both covariant with, and a lagging indicator of, the primary and TPP vote (meaning that the two measures either move together and/or the Preferred PM rating follows the primary and TPP votes with a slight lag).

The latter being what started the Poll Wars between The Oz and the better informed blogging community.

The Preferred PM rating is essentially a meaningless beauty contest which has no statistical bearing on the vote. It either moves with changes in voting intention and satisfaction ratings, or lags behind them, and the relationship between the vote and the PPM is pretty tight as far as polling relationships go.

To demonstrate this, let’s make a spiffy chart. We’ll chart the Opposition Leader’s Preferred Prime Minister rating against the size of the difference between the government’s TPP vote and that of the Opposition. We’ll do it as a scatter plot with a regression line running through it, we’ll use monthly Newspoll averages over the period of the Howard government, and we’ll also mark on the chart where Nelson currently sits in this broad historical relationship (it’s a thumbnail – click it).

Nelson’s current position is exactly where we would expect it to be in terms of the size of the government’s lead in the TPP vote.

PPM is a function of voting intention, voting intention is covariant with, or a leading indicator of, the Opposition’s PPM rating, and Nelson’s current PPM rating sits smack bang on the regression line of the historical relationship.

So all up, there is no useful information here – zip, zilch, nada. Nelson’s PPM is where we would expect it to be with him leading a party currently experiencing a 19-point TPP vote gap.

The satirical coverage that the MSM gives this number is fair enough – it’s at record lows and good for some humour. But to give this number serious coverage is to completely miss the point of what PPM ratings actually are in practice, as well as what their relationship to voting intention really is.

The only marginally important thing about PPM ratings for the Opposition is their compositional make-up – things like the proportion of Coalition voters that prefer Nelson as PM – and even then, it’s simply a reflection of how well Nelson is resonating in his own party, and anchored well and truly to the Coalition’s vote share.

So moving on from fluff like PPM, let’s look instead at Opposition satisfaction ratings and their relationship to voting intention. If we run a chart the same as above, but this time substitute the Opposition’s PPM rating with their satisfaction rating, we get (it’s a thumbnail – click it):

As we can see, the relationship between Opposition Leader satisfaction ratings and the TPP vote difference over the last 12 years isn’t exactly tight, but it’s still statistically significant. The reason for this is pretty simple – new leaders change the satisfaction dynamics in different ways. We can see how that plays out with the following self-explanatory chart (it’s a thumbnail – click it):

But what is interesting in terms of the relationship between the Opposition Leader’s performance and the Opposition’s vote share is that either Nelson has a higher satisfaction rating than the Coalition vote share would suggest ought to be the case, or alternatively, the Coalition vote share is lower than it should be considering the Opposition Leader’s current satisfaction rating.

The Newspoll satisfaction ratings come from asking the question:


It is specifically a question on the performance of the Leader of the Opposition.

So the satisfaction/voting intention chart is effectively measuring the performance of the Opposition against the performance of the leader of the opposition.

And on this count, Nelson is certainly performing better personally than the Coalition is performing in terms of their vote share, making me question just how much of the Coalition’s poor polling performance is actually Nelson’s fault, compared to how much of it is a result of the undisciplined rabble rousing that the rest of the front bench and the back bench pork chops have been carrying on with.

Also worth considering is what would happen to the satisfaction ratings and voting intention if the leadership changed. When we look at the ALP experience over the period in the above chart, if Turnbull or someone else were to become leader, would the consequences resemble Crean, Latham or Beazley Mk 2 – with none of the three being particularly good?

Crean dragged everything down, Latham peaked and then crashed (and I’ve long had this feeling that Malcolm Turnbull as leader would just be Mark Latham in a Fioravanti suit – but minus the suburban mum bounce), and Beazley had a small boost then a larger decline in the satisfaction ratings, unusually running hand in hand with a small growth in primary vote support – but essentially nowhere near enough.

Yet whatever happens, whilst we can all navel gaze over whether Nelson is holding back the party vote share and by how much, the historical polling stats suggest that regardless of what Nelson is doing, the party itself is doing even worse and needs to shoulder a fair amount of responsibility for their own failure rather than blaming it all on one person.

But then, considering the historical way that the Coalition treats their leaders – that’s probably asking for too much. For a party that prides itself on waxing lyrical about the importance of personal responsibility being shown by the community, they seem to have a distinct aversion to ever taking any themselves.


And one more thing – I am still doing my age attrition model for projecting the vote share hit the Coalition is facing in about ten years – but I stuffed it up and had to start from scratch. It is coming!


Posted in Polling, Voting behaviour | 10 Comments »

Putting the Newspoll in perspective and US election updates.

Posted by Possum Comitatus on June 17, 2008

Mr Mumbles again donned his secret squirrel cape and has the good acorns on today’s Newspoll over here:

Primaries are running ALP 46/33 leading to a TPP of 59/41.

But to throw all these polls in perspective – let’s chart every poll taken so far in 2008 (by Morgan, Newspoll and ACN) and run a Loess regression through it as a line of best fit.

What is really noticeable here is that the drop in the ALP primary vote over time has been greater than the drop in their two party preferred – and larger than the rise for the Coalition primary vote that started around day 110.

The Greens and “others” vote have been the beneficiaries of this falling ALP primary, letting it flow back to the ALP in preferences in TPP terms. As we found last year, we often get a bit of noise in the minor party vote changing the TPP headline number by a few points here and there, but currently we are getting small movements in the minor party vote that are keeping the TPP numbers where they are while changing the underlying primary vote composition of the polls.

To see this, we only have to look at the ALP primary vote in the context of both the Coalition and Greens+Others vote. To do this, we’ll chart the ALP vote as inverted (meaning it decreases as you go up the vertical axis on the left) and chart the Greens+Others and the Coalition primary normally on the right hand side axis. To see where the votes are shifting: if the lines move together, votes are shifting between the ALP and the other party on the chart; if they move in opposite directions, they aren’t.
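That chart-reading rule boils down to something simple: if poll-to-poll changes in the ALP primary are mirrored by equal and opposite changes in another bloc’s primary, votes are moving between the two. A toy sketch with entirely hypothetical poll series:

```python
# Hypothetical primary vote series across five consecutive polls.
alp           = [47, 46, 45, 45, 46]
greens_others = [10, 11, 12, 12, 11]
coalition     = [37, 37, 37, 37, 37]

def diffs(series):
    """Poll-to-poll changes."""
    return [b - a for a, b in zip(series, series[1:])]

# Every ALP fall matched by an equal Greens+Others rise (and vice versa)
# means the votes are shifting between those two blocs...
alp_to_greens = all(-d == g for d, g in zip(diffs(alp), diffs(greens_others)))
# ...while a flat Coalition line means none of it is going their way.
coalition_flat = all(d == 0 for d in diffs(coalition))
```

With real polls the matching is never this clean, but the logic of reading the inverted chart is the same.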

Up until the beginning of May (the 3rd Newspoll), voters were moving between the ALP and the Coalition as well as between the ALP and the minor parties.

For the following two Newspolls votes were only moving between the ALP and the minors, and finally in the latest poll, votes moved from the Coalition to the minor parties.

We can also see this playing out in the satisfaction ratings. If we look at the satisfaction ratings of Rudd vs the Nightwatchman and how they’ve changed over time, Rudd at the moment has his second lowest satisfaction rating recorded this year by Newspoll at 59%, while the Nightwatchman has satisfaction and dissatisfaction ratings that haven’t moved a jot. As voter dissatisfaction has increased for Rudd, votes aren’t moving to the Libs as a result; they’re moving to the Greens, and preferences are flowing back to the ALP in two party preferred terms.

If we chart both the ALP TPP vote and the ALP primary vote against Rudd’s satisfaction level, we get:

We expect satisfaction ratings and vote levels to move together, but the satisfaction rating is having a much larger influence on the ALP primary than on the TPP.

This is because most of the change in the ALP primary vote is moving to the minor parties – these voters might not be impressed with what Rudd is doing, but they are refusing point blank to support the Coalition.

On the Coalition side, they just recorded their lowest primary vote of the year at 33%, driven by some movement from them to the minor parties – which is a bit unusual and probably a sampling artefact rather than any true indication of a large change in Coalition primary vote support. But regardless, I can’t imagine we’ll be hearing any more “Honeymoon is over” stories this week, let alone “Rudd in danger of being a one term wonder”, from the shallow end of the commentariat pool.

In other news, the most accurate aggregation of polling around, our Pollytrack series, currently has the ALP leading 56.9% to 43.1% in TPP terms, off the back of primaries running 45.1% to 37.4% in the ALP’s favour – all with a margin of error of 1.56% and a sample of 3938.
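For the curious, here’s a minimal sketch of how a sample-size-weighted aggregate and its margin of error can be computed. The component polls below are invented for illustration; only the combined sample of 3938 matches the Pollytrack figure quoted above.

```python
import math

def weighted_tpp(polls):
    """Sample-size-weighted two party preferred average.
    polls: list of (tpp_percent, sample_size) pairs."""
    total_n = sum(n for _, n in polls)
    return sum(tpp * n for tpp, n in polls) / total_n, total_n

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for proportion p on sample n."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# Hypothetical component polls (TPP %, sample size):
polls = [(57.0, 1700), (56.5, 1138), (57.5, 1100)]
tpp, n = weighted_tpp(polls)
print(round(margin_of_error(n), 2))  # 1.56 on the pooled sample of 3938
```

The margin of error shrinks with the pooled sample – which is the whole point of aggregating the phone polls rather than reading each one in isolation.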

Over at the Pollytics US Election page, all the Intrade data has been updated and now includes large Monte Carlo simulations to get a better idea of the probability spread on the Electoral College votes by State, as well as cumulative frequency charts of these simulations to show how the probability of the Intrade market has changed over the last month for every electoral college vote number – worth a squiz if you’re nerdy.
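The simulation idea is straightforward enough to sketch: treat each State’s Intrade price as an implied win probability, flip a weighted coin per State per run, and tally the Electoral College votes. The figures below (three swing States plus a “safe” base) are made up purely for illustration – the real version runs over every State market.

```python
import random

# Hypothetical implied win probabilities and EC votes for a few swing States.
SWING_STATES = {
    "Ohio":     (20, 0.55),
    "Virginia": (13, 0.60),
    "Florida":  (27, 0.45),
}
SAFE_BASE = 240  # EC votes assumed locked in (illustrative only)

def simulate(states, base, runs=100_000, seed=42):
    """Monte Carlo over State outcomes; returns one EC total per run."""
    rng = random.Random(seed)
    return [
        base + sum(ev for ev, p in states.values() if rng.random() < p)
        for _ in range(runs)
    ]

totals = simulate(SWING_STATES, SAFE_BASE)
# Cumulative frequency: probability of reaching at least 270 EC votes.
p_win = sum(t >= 270 for t in totals) / len(totals)
```

From `totals` you can build the full probability spread (a histogram over EC vote numbers) or the cumulative frequency charts mentioned above.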

Posted in Polling, Voting behaviour | Tagged: | 7 Comments »

Possum’s Box, google searches and a hint for Pollies

Posted by Possum Comitatus on June 14, 2008

The Possum Box is up and running with a couple of articles, and the RSS feed on the top of the left sidebar here has been added to make it easy to keep up with new articles as they’re posted over at the Box.

One of the consequences of having a rather wordy blog is that all sorts of google searches send people to these very pages – mostly looking for political stuff, but as you would imagine – often not.

For a while now I’ve enjoyed a little ritual of sitting down on a Saturday or Sunday afternoon with a glass of wine and having a bit of a giggle perusing the search engine queries that sent people to Pollytics. Since the readers and commenters really make this site as much as anything I ramble on with, I think it’s only fair that the strangeness and joy of google voyeurism be shared with all.

These are some of the better ones I’ve had over the last little while.

sniffed my cousin’s bra

possum shooting+ result graphs (You need to graph that?)

“he needed a piss”

suck my possum

brendan nelson raccoon (The possibilities of that one)

qantas trip a fawlty towers fever dream

election calculator cash drawer (very appropriate in these boondoggle times)

possum ratsak ( Hmmf)

do possums eat duck heads (only when slow roasted, served on a rosti with a red wine jus)

possum terror (That was Dennis :mrgreen: )

blather “piers ackerman” (Now who hasn’t thought that?)

woman discovers possum living in her vag (WTF!?! Holy Smokes Batman!)

That last one is a bit disturbing.

Oh, and a hint to the pollies reading. When you’ve been busted for something like drink driving, speeding, running a red light, various traffic offences or other bouts of law breaking and you wake up the next morning in a cold sweat, worrying whether it’s all over the media – if you go to google and search “John Citizen MP”+ “drink driving” for instance, people like me can see your search.

It’s probably not a good idea.

I’m losing count of the times that I’ve seen some search on a pollie and their indiscretions, only to see it all over the media a few days or weeks later. You might want to keep that in mind eh?

Posted in The Possum Box | 12 Comments »

Your thoughts about a Possums Podium?

Posted by Possum Comitatus on June 12, 2008

I regularly get sent questions by people that have been stirred up by some political issue or another to the point where they felt compelled to write an article about it – but they wonder where they can publish it on the net. These folks don’t tend to produce enough output to run a blog of their own, and are just after some sort of opinion clearinghouse, I suppose – one with a readership that might be interested in what they have to say.

So, a question for you folks out there.

If I were to run a small adjacent site to Pollytics, a kind of “Guest Speaker on Possums Podium” type thing where I would simply put up any such articles that people send to me and link those articles via an RSS feed on the left sidebar (so everyone could see the titles of the latest articles as they come in) – would anyone be interested in reading them?

And would anyone ever be interested in writing stuff for it?

There would only be one rule for publishing – a cogent piece on a political, psephy or media topic (the political flavour would be irrelevant; variety is the spice of life, after all).

I can remember when I started blogging by accident 12 months ago, it was people like Bryan Palmer and William Bowe linking to my work that gave me my starting boost in traffic, which then made the word get around… so to speak.

As such, I sort of believe that I have an obligation to do the same for others that want to write intelligently about politics. So I’m sort of thinking that maybe it would not only be an opportunity for people that don’t have the time, inclination or output to run a blog themselves to be able to write occasionally and get read by a highly politically literate audience, but also a way for new political bloggers to potentially get noticed as well.

The net might reduce the barriers of entry in the media space, but it does little in and of itself to overcome the obscurity that’s associated with a new blog. So if I can do my bit to help alleviate some of the obscurity that many new writers and bloggers face, that’s probably a good thing.

If I ran it as an adjacent site it wouldn’t clutter up the Pollytics site, yet the RSS feed on the sidebar would let readers here be able to clickthrough to any new articles that catch their eye.

So folks – what are your thoughts?


OK folks, the site’s up but with no articles published yet:


Here are the rules and guidelines for submission and how to submit:


Now I need some articles – so if you’re a new political blogger or an occasional writer and you want to submit an article, read the above link and let me have ’em, so to speak.

The blogroll on the sidebar of the new site will be filled over time by bloggers that have had an article published on The Possum Box.


I have two articles already, and I wouldn’t mind one more to launch – but either way The Possum Box will go live early on Saturday afternoon.

Apparently it’s already the 21st fastest growing blog in the WordPress.com universe – I’m not exactly sure what that means, but it’s looking good for eyeballs to read new work.

Posted in Uncategorized | 42 Comments »

Time to play “Pick the Veep”

Posted by Possum Comitatus on June 10, 2008

This was me earlier today in Crikey.

With the melodrama of the primary season behind us, the next big game in US politics will be “Pick the Veep” — and it will really all be about one Veep in particular: Obama’s.

While the benefits a VP can bring to the ticket are relatively small compared to the damage they can inflict if the choice is a bad one (Thomas Eagleton anyone?), good VP choices attempt to add something to the ticket – a few votes from a particular State or region where the VP originates, some particular demographic bloc like women or working class males, and sometimes even a policy strength that the Presidential candidate may be perceived to have as a weakness (although this carries the risk of the theory working in reverse, where the Veep’s strengths end up highlighting the Presidential candidate’s weaknesses rather than complementing them).

Yet the overriding need for all Veep candidates is that they fit with the general vibe of the campaign. It’s no use running on a meta-theme of ‘change politics’ if you then appoint some Washington old boy who has more frequent flyer points on his K-Street card than Tom DeLay.

The Intrade markets for the Democrat Vice Presidential Nominee have thrown up a few interesting possibilities in this regard, particularly the alternative Intrade market which has a broader selection of candidates.

Clinton and Virginia Senator Jim Webb are riding high as the favourites for the nomination, with both hovering around the high teens to low twenties as an implied probability of clinching the Veep spot. While Clinton’s chances are pretty self-explanatory, Webb’s come from him ticking all the right boxes – he burst on to the scene by knocking out Republican George “Macaca” Allen in the Senate race for Virginia at the last mid-terms, and he combines a strong anti-war position with the military experience of being Secretary of the Navy under Reagan. He’s a popular Senator in a swing State, with strong appeal to working class males, and he brings with him high levels of support and regard from military families – an important demographic for any election campaign with a large focus on Iraq.

After Clinton and Webb, the next bunch of contenders, hovering around the high single figures to low teens as implied probabilities, are a mix of the usual suspects and some out-of-left-field possibilities.

Everyone has read, or no doubt will read in the very near future, about the chances of Joe Biden, Mark Warner, General Wesley Clark, Michael Bloomberg, Claire McCaskill, Ed Rendell and Kathleen Sebelius.

Even Bill Richardson is often invoked, particularly as a way to bring the Hispanic vote on board for the Dems – but that risks having two minorities on the same ticket, which would be a stretch, and there’s the permanent rumour windmill that hangs around Richardson of the type that Glenn Milne would be talking about were he an Australian politician.

But there are three intriguing possibilities at the moment that aren’t the usual suspects.

Firstly, there’s the Republican Senator from Nebraska, Chuck Hagel. This guy has been smacking the Bush administration around over the Iraq war for years, has sided with the recent Democrat anti-war push in the Senate, is retiring this term, and is currently running at 9% implied odds of getting the VP nomination. An Obama/Hagel ticket would send shockwaves through the US political system – but his biggest drawback is the obvious one: he’s a Republican, and many Democrats would go ballistic at the mere thought of a cross-party ticket.

Secondly, one of the most popular Governors in the country, Brian Schweitzer from Montana, has rocketed into 10% odds from nowhere over the last week. Schweitzer is the leading Democrat Prairie Populist in the country and has shown the Democrats that they can win over even the strongest of Republican electorates if they frame their political language right. He’s the master of the one liner – “I believe in gun control; you control your gun and I’ll control mine” — he speaks fluent Arabic, is a soil scientist by trade, is big on energy reform and would add to the ticket in unusual places if Obama is really pursuing a 50 State strategy – particularly the Dakotas and Colorado. His one liners would create mischief for McCain in the news cycle and he’d bring good down ticket support in Senate and Congressional races across the normally Republican leaning areas of the US.

Finally, there’s the Ohio Governor Ted Strickland, who is currently running at a 10% probability in the Intrade markets. What makes Strickland interesting is that nearly all Republican roads to the White House go through Ohio. If the Democrats take Ohio, it would be almost impossible for McCain to become President. Strickland is highly popular (a 61% approval rating currently), has very broad support in the electorate and would bring a conservative flavour to balance out the Obama ticket. He might even drag a few percentage points of the vote in a really critical State along with him. Strickland has repeatedly stated that he would not accept a VP nomination, which throws a spanner in the works – but looking at the Intrade odds, no-one seems to believe him.

Posted in Crikey, US Politics | 34 Comments »

US Election and Pollytrack Updates

Posted by Possum Comitatus on June 10, 2008

To start off with, let’s have a look at how Pollytrack is going.

The ALP primary vote has reduced from 46.6% in mid May down to 46.2%. That’s the only poll movement we’ve seen from the rolling three pollster phone poll average weighted by sample size.

On US politics, Intrade has Virginia moving into the Democrat column, putting the projected Electoral College votes for the Democrats at 306, which you can see over on the US Election page, along with a whole heap of weekly data goodness relating to that particular contest. This includes a new graphic showing how the State by State probabilities have changed over the past week, as well as a better probability map:

For full explanations of how all these bits work, you’ll need to pop over to the US Elections page that is conveniently linked at the top of the site via the obvious button; it’s updated every Monday night/Tuesday morning. Next month US polling will also be added to that page, as well as a thing called the Jaundiced View Intrade/Polling Confluence graphic – which sounds a little mysterious :mrgreen:

Posted in Uncategorized | 7 Comments »

The Parallel Universe of Opinionatas

Posted by Possum Comitatus on June 3, 2008

This was me earlier in Crikey today.

“Petrol prices have ended the Rudd honeymoon” proclaimed Dennis, Rudd’s honeymoon was “well and truly over” declared Gerard McManus, while Clinton Porteous got stuck into the Journo Juice and questioned “whether Kevin Rudd will be a one-term wonder?”.

It was hard to find an article during the week that didn’t have the phrase “political crisis” scrawled in it somewhere – the government was in a crisis over Fuelwatch, over leaks, over threatening the public service; even over Brendan Nelson’s parliamentary performance, of all things.

We had Glenn Milne on Agenda (replacing his Comrade Confidential hat of trawling through the private lives of politicians for some new headgear as a political theatre critic) telling us that Labor needed a Costello or a Keating because they were in danger of not cutting through in Parliament. Yeah, because we all know how 5 second grabs on the nightly news of aggressive boofheads yelling at each other play out in the wider electorate. No wonder Laura Tingle looked like she wanted to slap him. No wonder David Speers looked like he was thoroughly going to enjoy it if she did.

Yet today’s Newspoll has the ALP two party preferred stuck exactly where it was before this manufactured media melodrama began: 57/43, riding off the back of a one point reduction in the ALP primary to 46 and the Coalition primary stuck on 37.

The world of the Opinionatas – a sort of deafening echo chamber of electoral ignorance and lemming-like commentary – has never been more irrelevant to the wider public. Costello was right when he told them that they don’t need politicians around to generate noise; they can just make stuff up among themselves. Which is all too often what happens, and the public can see right through it.

One would think that the Newspoll reality being incompatible with what passed for last fortnight’s fictional narrative of a government in trouble would have invoked a little reassessment amongst the guilty, perhaps even a little humility – at the very least a reappraisal of the authenticity of the narrative itself. You know, when you’re talking shit and it becomes pretty obvious, it might be time to stop?

Alas no – not in the rarefied air of political punditry, where attachment to electoral reality isn’t a KPI. “Petrol has blown up in Kevin Rudd’s face”, says one pundit this morning, in that sort of Japanese soldier on a deserted island refusing to believe the war is over kind of way.

To do something novel here and add a bit of fact to this tawdry spectacle – this is what the areas around Brisbane and Sydney would look like under a uniform swing to Labor of 4.3% given by Newspoll – the pink seats are Coalition seats that would fall to Labor, 23 in all across the country.
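The uniform swing calculation behind that map is simple enough to sketch: a Coalition-held seat falls if its margin is below the swing. The seat names and margins below are invented for illustration – the real exercise runs over the actual electoral pendulum.

```python
SWING = 4.3  # Newspoll-implied uniform swing to Labor, in percentage points

# Hypothetical Coalition-held seats and their margins (percentage points).
coalition_seats = {
    "Seat A": 0.8,
    "Seat B": 3.9,
    "Seat C": 4.5,
    "Seat D": 9.1,
}

falling = [seat for seat, margin in coalition_seats.items() if margin < SWING]
print(falling)  # the seats that would turn pink on the map
```

Run over the full pendulum with the real margins, that comprehension is what produces the 23 seats quoted above.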

The mathematical reality is far removed from the commentary.

There’s a reason the Morgan Polls continually have journalists near the bottom of the list when it comes to the public’s opinion of professions. It’s also not surprising that a large majority of people think the media is biased. When these headline act Opinionatas repeatedly lose touch with how issues play out in the only place that counts – the electorate – and when that electorate sees acres of rubbish that bears little resemblance to their lived experience and their own views being rammed down their throats, it’s no wonder their opinion of journalists everywhere, good and bad, suffers as a result.

And a lot of it comes back to these Piñatas of public opinion, dangling out there on a limb, swaying in a political breeze of their own imagining.

Could someone please hit them with a stick and give us the lollies.

If not for our sake, or for Gawd’s sake, or for the sake of Tarago drivers with a wheelchair and five kids in the back everywhere – then at least for the sake of a credible national media landscape.


Crikey also has some questions on this running in their Media Forum that might be worth a look if you’re a Crikey subscriber or if you wish to join up for one of their trial runs.


And another thing – don’t tell me Dennis turned his comments off again?

It’s a pity – they’re always the best bits of the article.

There hasn’t been a comment since 10:35am and it’s 2:50pm as I write this.

Oh well – there’ll always be another 40 allowed through for the next one before it becomes too embarrassing.

Small Update:

As of 4:17pm there was a small trickle let through.


Posted in Crikey, Polling, spin | 51 Comments »

Newspoll Op-Egg Edition

Posted by Possum Comitatus on June 2, 2008

Oh Dear

Newspoll comes in with a headline 57-43 to the ALP off primaries of 46/37.

Zero change from last time. Some will be surprised, some won’t be.

We were assured by miles – literally miles and miles and miles – of Op Ed piffle this week that “The Honeymoon is officially over”.

Some even ventured as far as to say that Rudd could be a one term wonder – all over the last week’s worth of political events, events that were seen by the Opinionistas through a prism of their own delusion; a gross misunderstanding of the actual political dynamics at work in this country.

Mind you, these dynamics aren’t new – they’ve been going on since December 2006, when Rudd rebooted domestic politics with his leadership. Maybe these opinion columnists are lazy, maybe they’re incompetent, maybe they live in a bubble – but whatever reason or excuse they might have, the proof is in the eating.

They no longer understand how politics in Australia operates with the broader electorate. The 2007 Election demonstrated that, and little, it would seem, has changed since.

So let us all absorb the last week of media machinations, let us all throw it into perspective with the results of this Newspoll, and let us ponder the tenuous grasp of reality that these political stenographers on the Op Ed pages around our country possess.

And let us enjoy their pathetic excuses in tomorrow’s media cycle, and the inevitable focus on a 5% PPM movement against Rudd, down to ONLY 66%. When you’ve fluffed it this badly – anything works as a security blanket.

More later – once we’ve all pondered.

Mr Mumbles went all secret squirrel and has the good acorns over HERE and HERE

Posted in Uncategorized | 22 Comments »