Possums Pollytics

Politics, elections and piffle plinking

Analysing the Poll Bias: Morgan vs. Newspoll Part I

Posted by Possum Comitatus on July 6, 2007

There’s a lot of hoo-ha in the punditry over the differences between Morgan and Newspoll results. So let’s take an actual look at the differences between the two, and at the size of the differences that are evident at different primary vote levels for the ALP and the Coalition.

For this, I’m using Newspoll and Morgan polling data for the primary vote, starting from the first poll published after the election of the Howard government in 1996 and including every published poll of each organisation since, right through to the present day. This gives us 315 Morgan polls to work with and 282 Newspolls.

To start with, let’s look at the Probability Density Functions for the primary vote estimates of the ALP and the Coalition, and compare the two results. For those that don’t know what a PDF is, you could zip over here for a brief explanation. Alternatively, you could just look at them as smoothed histograms with an area under the curve equalling 1. Basically, they just tell us how common various primary vote values are for each party in each organisation’s polls.
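For the curious, a “smoothed histogram” of this kind can be sketched with a simple Gaussian kernel density estimate. The vote figures below are made up purely for illustration; nothing here is the actual Morgan or Newspoll series.

```python
import numpy as np

# Hypothetical ALP primary vote estimates from two pollsters (illustrative only).
morgan = np.array([38.5, 40.0, 41.5, 39.0, 42.0, 43.5, 40.5, 41.0])
newspoll = np.array([37.0, 38.5, 39.0, 40.0, 38.0, 41.0, 39.5, 40.5])

def gaussian_kde_pdf(samples, grid, bandwidth=1.0):
    """Smoothed histogram: average of Gaussian kernels centred on each poll.

    The area under the returned curve integrates to roughly 1 over the grid.
    """
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    return kernels.mean(axis=1) / bandwidth

grid = np.linspace(30, 50, 201)
pdf_morgan = gaussian_kde_pdf(morgan, grid)
pdf_newspoll = gaussian_kde_pdf(newspoll, grid)

# Sanity check: the density integrates to ~1 (trapezoidal rule on the grid).
area = np.trapz(pdf_morgan, grid)
```

In practice a library routine with automatic bandwidth selection would be used, but the idea is the same: each poll result contributes a little bump, and the bumps sum to a smooth density.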

[Figures: PDFs of the ALP primary vote (alppdf1.jpg) and the Coalition primary vote (cpdf1.jpg), Morgan vs. Newspoll]

At this stage we’d better add some basic statistics to go with it.

[Table: summary statistics for each poll series]
The stats are pretty self-explanatory, although the mode for the Newspoll ALP series is strangely low; that could be a bit of an artefact of chance more than anything else.

As you can see, the differences are fairly obvious. What stands out in those PDFs is that the differences between the Newspoll and Morgan estimates of the ALP primary vote are much larger around the centre than the differences for the Coalition. What’s also evident is that Morgan estimates lower scores for both parties more often than Newspoll does.

If we take the Cumulative Distribution Function of those PDFs, whereby we cumulatively add the PDF values for each party according to each polling group, we can pull even more info out in a manner that is much easier to explain.

[Figures: CDFs of the ALP and Coalition primary votes, Morgan (black) vs. Newspoll (red)]
The way to read these CDFs is simple. Take a primary vote value from the bottom axis, trace it vertically to where it intercepts either a Morgan (black) or Newspoll (red) curve, and trace horizontally from that intercept to the left axis to get the probability of that organisation producing an estimate of the primary vote that is less than or equal to the primary vote value you chose.

For instance, let’s use the ALP CDF and take a primary vote value of 40 on the bottom axis. If we trace that vertically, we see that it intercepts the Morgan curve (black line) at about 0.37. That means a Morgan poll will estimate the ALP primary vote as being less than or equal to 40 thirty-seven percent of the time. However (using the same technique), a Newspoll will estimate the ALP primary vote as being less than or equal to 40 fifty percent of the time. Hence either Newspoll underestimates the ALP vote or Morgan overestimates it at that value.
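That read-off is just the empirical CDF evaluated at 40. A minimal sketch, using made-up vote figures rather than the real series:

```python
import numpy as np

# Made-up ALP primary vote estimates, for illustration only.
morgan = np.array([38, 39, 40, 41, 42, 43, 44, 45, 46, 47])
newspoll = np.array([36, 37, 38, 39, 40, 41, 42, 43, 44, 45])

def ecdf_at(samples, value):
    """Empirical CDF: the share of poll estimates that are <= value."""
    return np.mean(samples <= value)

p_morgan = ecdf_at(morgan, 40)      # 3 of the 10 Morgan estimates are <= 40
p_newspoll = ecdf_at(newspoll, 40)  # 5 of the 10 Newspoll estimates are <= 40
```

With these toy numbers Morgan reads at or below 40 less often than Newspoll does, which is exactly the kind of gap the real CDF chart is showing at that point.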

Armed with that, we can see a few more interesting things. Morgan and Newspoll get fairly identical results for the Coalition when the estimated primary vote is above 45%. But below that 45% level the two polls increasingly diverge in their estimates, with Morgan estimating lower levels of support for the Coalition than Newspoll does. The lower the estimated vote, the larger the gap between the two polling results for the Coalition.

From this we can conclude that either Morgan underestimates the Coalition primary vote when that vote is less than 45%, or alternatively that Newspoll overestimates the Coalition’s primary vote when that vote is less than 45%.

If we look at the ALP CDF, we see that when the ALP estimated primary vote is between about 36 and 50, either Morgan overestimates the ALP primary vote or Newspoll underestimates it.

As for how large the difference in estimates between the two organisations is, there is no straight answer. The difference isn’t homogeneous, but generally it’s in the ballpark of 1-2% for the primary votes. However, the difference between Morgan and Newspoll is generally greater when it comes to estimating the ALP primary vote than when it comes to estimating the Coalition primary vote.
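One way to put numbers on a gap that varies across the distribution is to compare the two series quantile by quantile: at each probability level, subtract one pollster’s quantile from the other’s. A sketch with made-up data (here Morgan is constructed to sit exactly 1.5 points above Newspoll, so the gap comes out flat; with the real series it would vary across the levels):

```python
import numpy as np

# Made-up Newspoll ALP primaries; Morgan is built 1.5 points higher for illustration.
newspoll = np.array([36.0, 37.5, 38.0, 39.0, 40.5, 41.0, 42.0, 43.5])
morgan = newspoll + 1.5

probs = np.linspace(0.05, 0.95, 19)  # probability levels to compare at
gap = np.quantile(morgan, probs) - np.quantile(newspoll, probs)
# gap[i] is how much higher Morgan's estimate sits at the i-th probability level
```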

UPDATE: I’ve had a ferret around the size of the differences between the polls and put some numbers to them in Part 2.



11 Responses to “Analysing the Poll Bias: Morgan vs. Newspoll Part I”

  1. michael said

    So in the end, when the polls are widely divergent, e.g. 55-45, no one believes them and says it is all lies.

    However, when they are close, then people select the poll that supports their side as the most legitimate.

    Could explain the comments in the blogocracy!

  2. electa stone said

    Are the differences in the probability distributions temporal artifacts?

  3. […] Analysing the Poll Bias: Morgan vs. Newspoll — There’s a lot of hoo-ha in the punditry over the differences between Morgan and Newspoll results. So let’s take an […]

  4. Possum Comitatus said

    Michael – it not only explains blogsville, it explains vast tracts of journalistic twaddle everywhere. “Pick-a-poll” is the new black.

    Electa, as the series themselves are time series, there are certainly intertemporal issues in play, insofar as the size of the differences between the two poll estimations was larger during some periods than others. But the CDF and PDF plots are based simply on ranking all the primary vote estimations from each organisation by size and comparing where they start to diverge, not when. I get what you’re saying, in that some periods of time probably had greater variations than others, and this is indeed true. I’ve just put up Part 2 of this, so that should answer some of your thinking here.

  5. To All

    Polls and Morgan Poll vs Newspoll

    Over the years some public opinion poll results have been wrong when compared to an election result.

    My father, Roy Morgan, was sent to the US in 1940 and 1948 by Sir Keith Murdoch to work with Dr George Gallup. He worked with Gallup on the 1948 Presidential Election, when they were very wrong – Truman defeated Dewey! At the time, weeks before the election, he pointed out his concerns regarding the Gallup “quota sampling” method having “sample quotas” which covered too large an area (e.g. New York). Today most polling companies know how to survey accurate cross-sections; however, there are other problems.

    Apart from electors changing their minds in the last few days before an election – as they are allowed to do and actually did in Australia in 2002 and 2004 – there are two polling aspects not often discussed, namely:

    1. How do those who refuse to answer a poll intend to vote – hard to measure but I am working on it!

    Dr George Gallup talked about this a lot in 1964 when I was his assistant working with him in Princeton. (See the UK comment on the 1992 polls: “those who intended voting Conservative were more reluctant to be interviewed or to say how they would vote than Labour voters”; http://www.aph.gov.au/Library/pubs/RN/1996-97/97rn48.htm), and

    2. How do those who don’t vote, or vote for a minor party, influence the final outcome? In the US and UK voting is “first past the post” and not compulsory, so a significant number of electors don’t vote. With compulsory voting in Australia, a significant number of electors now vote for minor parties. It is their preferences which decide the election! With “how to vote” cards and marketing (posters) at election booths, “minor party” preferences are almost impossible to measure.

    On Friday we released our face-to-face Morgan Poll (ALP 59%, LNP 41%). http://www.roymorgan.com/news/polls/2007/4184/
    Today we released our telephone Morgan Poll conducted late last week (ALP 59%, LNP 41%). http://www.roymorgan.com/news/polls/2007/4185/

    Both have the ALP winning and well in front.

    There is no doubt the LNP are in trouble. However, a lot can happen between now and the election. Our surveys show most electors think “things in Australia are heading in the right direction” and July Roy Morgan Consumer Confidence is up and very high “126.8 – up 4.5”. http://www.roymorgan.com/news/polls/2007/4185/

    It is indeed a “strange” situation. If my memory is correct a similar situation as John Major was confronted with in the 1992 UK General Election – which he won! In 1992 all major UK polls were very wrong!

    Only in 2001 have we (Roy Morgan) polled just before a UK General Election. http://www.roymorgan.com/news/press-releases/2001/11/. We were then most accurate in predicting the Labour lead.

    The following note on Morgan and Newspoll is well worth reading.
    Possums Pollytics

    The following included comment says it all:

    “I find it interesting that for the only poll in the last 5 years for which there is any ‘real’ figure with which to compare, ie the polls immediately before the 2004 election, Morgan (45.5%) was closer to the actual Coalition Primary (46.7%) than NewsPoll (45%) or Nielsen (49%), and Morgan (38.5%) was also closer to the ALP actual primary (37.6%) than NewsPoll (39%), and only marginally further away than Nielsen (37%). Since we have no idea of how far away the ongoing polls are from ‘reality’ (whatever that means), surely we should just go with what we know, that in the most recent testable case, Morgan was better at forecasting the actual primary vote than NewsPoll. On what possible basis should we decide that the Newspoll or Nielsen primary vote estimate is ‘better’ than Morgan’s?”
    Comment by Alan H — July 8, 2007 @ 4:43 pm


    Gary Morgan

  6. Pollio said

    The (excellent) charts above show Morgan consistently returning a higher Labor vote than Newspoll.

    At the last two elections the final Morgan Poll has shown Labor with a two party preferred vote of 51% (2004) and 54.5% (2001).
    Wrong in both cases; outside margin of error in both cases.

    Today’s Newspoll has Labor on 56%; Morgan has them on 59%. I know which one I find more plausible and I know which one I’ll be paying attention to on election eve.

  7. Stig said

    Thank you Gary Morgan for getting into the web-based poll discussion here on this fine site. I suspect that you’ll find the discussion and analysis on polling here is a lot more serious than what you’d find in the mainstream media, or what you may overhear from two Ackermans walking into a bar (how’s that, Possum?😛 ). You may also find that a lot of journos end up reading this stuff too, it’s just that it seems they often don’t understand it.

    For my $0.02, I don’t think there’s much mileage in advocating which of the polls is “the best”. They all have their different polling methodologies, and may return a more or less accurate result at different times. Surely the best approach is to take the data from all of the reputable polls, and to take the trends they indicate as the best reflection of reality? I reckon Possum is even now working on his “Grand Unified Poll Theory”, combining all the data into a super-model that is beautiful for all psephologists to behold.

  8. Possum Comitatus said

    Thanks for the post Gary, and sorry to stick you in the spam bin for a few hours – anything with more than 6 links ends up there automatically until it’s pulled out.

    When you say “those who intended voting Conservative were more reluctant to be interviewed or to say how they would vote than Labour voters”, have you ever tried to put some numbers on the size of that effect for the conservative primary vote in Australia?

    And do you have any insight into why the results of the Morgan telephone polls seem to have a different variance from the results of the face-to-face surveys?

    Lastly, on the question “Do you think Australia is moving in the right direction” that Morgan asks, and using the results of that to determine “soft” support for the ALP: are you particularly confident in that interpretation, or is it just a handy yardstick to produce a given measure of softness for the purposes of comparing over time?

  9. Possum Comitatus said

    On point 2, you’d be a brave chap to use Two Party Preferred results for anything. If pollsters use preference flows at the last election as a means to estimate the distribution of preferences for a future election, it effectively just adds some unknown quantity of deterministic uncertainty to the results, on top of the margin of error that already exists for the primary vote estimations.

    If the pollsters ask for preference flows, it adds additional uncertainty in a different way, because it is estimating preferences without the survey respondents having how-to-vote cards (which obviously a sizeable chunk of the electorate follow, or they simply wouldn’t be used). That requires an assumption that the people who run political party campaigns are rational agents… and that’s probably a bit courageous an assumption to make sometimes😉

    It’s more complicated to use primary voting intention as a way to measure the snapshot of current politics, but while it’s more complicated, it’s also more accurate. Often in the last week of elections, all the polling organisations get within 1.5% or so of the actual primary vote election results, but they diverge wildly on TPP estimations. I think that in any given election, the polling organisation that gets the “closest” TPP estimate usually does so by nothing more than luck.
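    To make the preference-flow point concrete, here’s how a TPP figure is typically backed out of primaries. Every number below is hypothetical, purely to illustrate the arithmetic; note how any error in the assumed flow feeds straight into the TPP on top of the primary vote’s own margin of error.

```python
# All figures are hypothetical, purely to illustrate the arithmetic.
alp_primary = 45.0
coalition_primary = 41.0
minor_primary = 100.0 - alp_primary - coalition_primary  # 14.0 to minor parties

flow_to_alp = 0.60  # assumed share of minor-party preferences flowing to the ALP

alp_tpp = alp_primary + minor_primary * flow_to_alp
coalition_tpp = coalition_primary + minor_primary * (1.0 - flow_to_alp)

# If the assumed flow is off by 5 points, the TPP moves by this much:
tpp_shift = minor_primary * 0.05
```

    With a 14-point minor vote, a 5-point error in the assumed flow moves the TPP by 0.7 points, which is why two pollsters with near-identical primaries can still publish quite different TPP figures.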

    Also worth mentioning here is that for the Coalition primary vote, AC Nielsen ‘tends’ to exhibit a random walk between Newspoll as its upper limit and Morgan as its lower limit. All three polls track each other beautifully, which leads me to believe that the polls are all pretty close on primaries – and if the polling organisations want some product differentiation (hint hint, Gary!) they may want to look into adding value to their opinion polls.

    Like releasing numerical IDs for the polling participants so we can see the correlations between various voting groups and satisfaction dynamics, as well as asking a question along the lines of “if you have recently decided to change your voting intention, who were you previously going to vote for” type thing.

    Your Ackermanology was just fine😉

    And I reckon you’re spot on about general poll equality and getting value from aggregating them. If polling organisations want to be “better” than their competitors, then they should focus a little more on adding value!

  10. Greg Peters said

    Hi possums
    apologies if this has been asked before…but
    if the pollsters want to establish that they are reaching a truly representative group, I can’t understand why they don’t ask each respondent what party they voted for last time, and publish that data as a control within each poll. [Is there a law against it??]
    One other commenter here implied that, typically, less than 1 in 3 pollster phone calls achieves a usable reply. This alone leaves huge scope for ascertainment bias. Hence some data like the above is sorely needed.

