Possums Pollytics

Politics, elections and piffle plinking

Election Prediction Model 1 Update

Posted by Possum Comitatus on June 21, 2007

[Image: epm1june1.jpg – Election Prediction Model 1 output, June update]

Here’s what it means and how it works, for those who don’t know.

The June Newspoll data rolled in, was fed into the model and not much changed.

The predicted ALP primary vote for the election fell from 45.6 in May to 44.54 in June, and the predicted Coalition primary vote rose from 39.7 in May to 40.04 in June.

The predicted ALP 2 party preferred vote for the election decreased from 53.7 in May to 53.02 in June, while the Coalition 2 party preferred vote increased from 46.3 in May to 46.99 in June.


5 Responses to “Election Prediction Model 1 Update”

  1. Tomasso said

    Hi Possum,

    The variables in your model provide a quick (probably reasonable) coverage of a few attitude/sentiment scores, events and plausible issues. The r^2 looks impressive, but the model seems descriptive, rather than predictive.

    What does the r^2 look like on the delta (GOVPRIMARY – GOVPRIMARY(-1)), so you’re testing the ability to predict or describe the change in primary support? GOVPRIMARY(-1) has a fairly small coefficient, so the other variables are doing a fair bit of work to explain GOVPRIMARY rather than the delta.
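
    Roughly the comparison I have in mind, sketched in Python with statsmodels and made-up poll numbers; the series names are just stand-ins for the Newspoll data, not your actual specification:

    ```python
    # Levels-versus-deltas check on a toy, autocorrelated "primary vote" series.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 120
    df = pd.DataFrame({
        "PMSAT": 50 + 0.1 * rng.normal(0, 5, n).cumsum(),
        "OPPSAT": 45 + 0.1 * rng.normal(0, 5, n).cumsum(),
    })
    # A persistent primary-vote series driven partly by the attitude series
    df["GOVPRIMARY"] = 42 + 0.2 * df["PMSAT"] - 0.1 * df["OPPSAT"] + rng.normal(0, 1, n)

    # Levels model: GOVPRIMARY on its own lag plus the attitude series
    levels = df.assign(GOVPRIMARY_L1=df["GOVPRIMARY"].shift(1)).dropna()
    X_lvl = sm.add_constant(levels[["GOVPRIMARY_L1", "PMSAT", "OPPSAT"]])
    r2_levels = sm.OLS(levels["GOVPRIMARY"], X_lvl).fit().rsquared

    # Delta model: month-to-month change, which is the harder (more honest) test
    deltas = df.diff().dropna()
    X_dlt = sm.add_constant(deltas[["PMSAT", "OPPSAT"]])
    r2_deltas = sm.OLS(deltas["GOVPRIMARY"], X_dlt).fit().rsquared

    print(f"R^2 on levels: {r2_levels:.3f}, R^2 on deltas: {r2_deltas:.3f}")
    ```

    The levels fit will look flattering almost by construction; the deltas fit is the one that tells you whether the explanatory variables actually anticipate movement.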

    I’d be interested in a model that looked at tactical advantages to influence the marginal/undecided voters in marginal electorates. This is Crosby Textor heaven, but I expect the ALP has learnt a few things, and is also less likely to get wedged by its own party. Lennon can’t do to Rudd what he did to Latham this time.

    Recent approaches to “political marketing” (a horrid term) assume that mass communication and mass messages aren’t effective and cost too much, so more effort goes into identifying microsegments and niches, and ways to push their buttons more personally (so to speak). It’s a bit like meet and greet, but knowing first who to meet and when, and which ways to push them.

    I don’t know how to get data for that without paying a lot, or stealing it from someone who did. Kinda subverts democracy…

    Tomasso – I ain’t doin’ no political marketin’.

  2. Possum Comitatus said

    Hiya Tom,
    My plan is to build about 4 different models as we approach the election. This one is more like a naïve judgemental type model that accounts for large chunks of the quite complex electoral dynamics that are in play, but it’s designed more for comparative purposes against the other models I’m building that will come later.
    You’re right, the model isn’t really predictive but generally descriptive. And it contains a lot of problems that I wouldn’t ordinarily put up with. For instance, the GOVPRIMARY model is over-specified, and there’s a fair bit of useful information left in the residuals of the ALP model because it doesn’t really take into account the sensitivity of the ALP primary vote to the quite complicated dynamics of the satisfaction rating variables for each party.
    The primary vote series of each party is very laggy to just about every explanatory variable you throw at it. There seems to be a lot of chaotic inertia involved, where the lagged effects of some variables will superimpose themselves on the lagged effects of other variables – it’s quite difficult to isolate the specific effects of some series on the primary vote series of each party, particularly the satisfaction and dissatisfaction series and their derivatives. Some of them have very strong explanatory power in isolation, but those effects will often get washed out when you combine even a small number of them together in a model. When you combine that with the serious autocorrelation in the primary vote series for each party… it all becomes quite a mental challenge!
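
    To give a sense of the kind of lag hunting involved (a rough sketch only, with invented data and generic series names rather than my actual model): cross-correlate a satisfaction series against GOVPRIMARY at several lags, then fit a small distributed-lag regression with Newey–West errors so the autocorrelation doesn’t make everything look significant.

    ```python
    # Lag hunting on toy data: cross-correlations at several lags, then a
    # distributed-lag OLS with HAC (Newey-West) standard errors.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 150
    satis = pd.Series(50 + rng.normal(0, 3, n)).rolling(3, min_periods=1).mean()
    gov = 40 + 0.3 * satis.shift(2).fillna(satis.mean()) + rng.normal(0, 1, n)
    df = pd.DataFrame({"GOVSAT": satis, "GOVPRIMARY": gov})

    # How strongly does satisfaction k polls ago line up with the primary vote now?
    for k in range(5):
        print(f"lag {k}: corr = {df['GOVPRIMARY'].corr(df['GOVSAT'].shift(k)):.2f}")

    # Distributed-lag regression; HAC errors because GOVPRIMARY is autocorrelated
    lags = pd.concat({f"GOVSAT_L{k}": df["GOVSAT"].shift(k) for k in range(4)}, axis=1)
    data = pd.concat([df["GOVPRIMARY"], lags], axis=1).dropna()
    X = sm.add_constant(data.drop(columns="GOVPRIMARY"))
    fit = sm.OLS(data["GOVPRIMARY"], X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
    print(fit.params.round(2))
    ```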
    It also doesn’t help that a lot of these series have temporary unit root processes that may run for 3 or 4 years at a time before reverting back to stationarity for an equally long period. I was thinking of trying some kind of threshold-switching regime to adapt to that problem, but it all gets complicated very quickly going down that route.
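
    The sort of thing I mean by adapting to those stretches (a toy sketch, not something I’ve settled on): run an ADF test over a rolling window and flag where the unit-root null can’t be rejected, which at least maps out where a regime-switching treatment would need to kick in.

    ```python
    # Rolling ADF test on a toy series that is mean-reverting for its first half
    # and a random walk for its second half.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(2)
    stationary = 45 + rng.normal(0, 1.5, 100)        # mean-reverting stretch
    walk = 45 + rng.normal(0, 1.0, 100).cumsum()     # unit-root stretch
    series = pd.Series(np.concatenate([stationary, walk]))

    window = 48
    for start in range(0, len(series) - window + 1, 12):
        chunk = series.iloc[start:start + window]
        pval = adfuller(chunk, autolag="AIC")[1]
        verdict = "looks stationary" if pval < 0.05 else "can't reject unit root"
        print(f"obs {start}-{start + window}: ADF p = {pval:.3f} -> {verdict}")
    ```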
    You’ve hit the nail on the head with the political marketing (I agree…. Ugggh what a term) and electoral microsegments – but the problem is encapsulating the effects of those campaigns without having access to the Textors and Crosbys of this world.

    One route which I could go down and have been playing around with (and it’s a bit unusual, so please bear with me) is to spec out some probability distributions of the primary vote movements for each party in the few months leading up to previous elections. I’ve pretty much nailed down the form of those distributions using stochastic simulation and Latin Hypercube sampling. I could then fold that distribution into a larger regression type model to give me some approximate measure of the aggregate effect of that microsegment marketing on GOVPRIMARY. If I weighted that probability distribution in the regression with some measure of the longer term inertia behind the government’s primary vote…

    [some weighting representing this type of effect https://possumcomitatus.files.wordpress.com/2007/06/pvsgov.jpg ]

    …I should then end up with an approximate means to account for at least some of the upward volatility of the government’s primary vote caused by the microsegment political marketing in the lead-up to the election, but with the magnitude of that volatility being slightly constrained by the longer term inertia of the primary vote movement of the government. That way (my thinking goes) it could allow for the successful vote chasing of the government’s niche marketing campaign to be somewhat accounted for, but it would also deflate that effect by taking account of the much lower primary vote base it is coming off, as well as the general scepticism of the government that the longer term pattern of the primary vote swing against it represents.
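
    To make that a bit more concrete, here’s a very rough sketch of the mechanics: Latin Hypercube draws from an assumed distribution of pre-election primary vote movements, deflated by an “inertia” weight and added onto the model’s baseline prediction. The distribution parameters and the weight below are invented purely for illustration; they aren’t the ones I’ve actually estimated.

    ```python
    # Latin Hypercube sampling from an assumed (right-skewed) distribution of
    # pre-election swings, shrunk by an inertia weight and added to a baseline.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n_draws = 1000

    # One uniform draw per stratum, shuffled, then pushed through the assumed
    # swing distribution's quantile function: 1-D Latin Hypercube sampling.
    strata = (np.arange(n_draws) + rng.uniform(size=n_draws)) / n_draws
    rng.shuffle(strata)
    swing_draws = stats.skewnorm.ppf(strata, a=3, loc=0.5, scale=2.0)

    baseline = 40.0        # the model's predicted government primary vote (made up)
    inertia_weight = 0.6   # shrinkage toward the longer-term trend (made up)
    adjusted = baseline + inertia_weight * swing_draws

    print(f"median: {np.median(adjusted):.1f}, "
          f"5th-95th percentile: {np.percentile(adjusted, 5):.1f} to {np.percentile(adjusted, 95):.1f}")
    ```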

    I’d be really interested in your thoughts about any of this.

    The alternative of course would be to simply go out and spend 200 000 crackers on some solid national polling myself – but I just can’t see that happening anytime soon 😉

  3. Tomasso said

    Yes, that sounds like a sensible direction. Lags and interactions are the big nasties here, as well as exploring for independent variables that matter. Feasibly some kind of PCA could sort out some of the interactions. Lead indicators would be nice.

    Using microsegments/niches can make this simpler: as long as the microsegments reflect behaviour (responsiveness to various button pushes), you can untangle the drivers of behaviour within them (though this could be a very noisy exercise), then sum up the impact across all microsegments. The “unimportant” microsegments would be those that are high loyalty (to either side), since changing their vote would be expensive. The important microsegments have to be not too small, not committed, able to be influenced, and of course, identifiable. I can do some of this with customer bases. The problem with electorates is being able to join the dots when much (behaviour/commitment/etc) detail is not observable (without spending heaps).
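
    In the customer-base world the arithmetic at the end is as simple as the toy below: ignore the high-loyalty segments, score the soft ones for responsiveness, weight by size and reach, and add it up. Segment names and numbers are invented; the hard part with electorates is that none of these inputs is cheaply observable.

    ```python
    # Summing campaign impact across microsegments (all figures invented).
    segments = [
        # (name, share of electorate, loyalty, responsiveness to contact)
        ("rusted-on govt",    0.35, "high", 0.00),
        ("rusted-on opp",     0.36, "high", 0.00),
        ("mortgage belt",     0.12, "low",  0.25),
        ("young renters",     0.08, "low",  0.15),
        ("regional swingers", 0.09, "low",  0.10),
    ]

    campaign_reach = 0.5  # assumed share of each soft segment actually contacted
    expected_swing = sum(
        share * responsiveness * campaign_reach
        for name, share, loyalty, responsiveness in segments
        if loyalty == "low"
    )
    print(f"expected aggregate primary-vote shift: {expected_swing * 100:.1f} points")
    ```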

    A firm I used to work with was asked to handle Latham’s “marketing”. This was before Latham was leader. Ultimately they decided against it. They might have done better than what happened. The qual side of the firm was quite perceptive, and they knew how to run surveys that had focus (and were drillable by people like me). In the end, ML came unstuck by PR disasters and a serious lack of internal support, whereas JWH’s enforcers demanded and got loyalty. [IMHO, of course]. The tragedy was that almost all of the Govt’s credibility for economic management was due to Keating, but the ALP didn’t spin that effectively. Even now they are losing that race.

    Last bit. About six months ago I was attempting a similar task to yours above. A certain, unnamed bank had a serious interest in customer satisfaction, brand awareness, etc., including understanding drivers, impact on sales and retention, measuring whether their initiatives and advertising had a quantifiable (shortish term) effect, and so on. I’d already built some marketing mix models which were useful (and are in use). We had 3 years of cust sat (and similar) data, 4000 data points per half year (known weekly).

    Trying to relate ANYTHING to weekly cust sat (or coarser aggregates) simply didn’t work. I tried some radical lag kernel approaches to predict cust sat changes, and fed in whatever external and macro factors might make a difference, and it was not worth it.

    Trying to relate initiative spend against cust sat changes had no legs. Trying to relate campaigns to cust sat had no legs. [It did relate to sales]. Main suspicion was that word of mouth (which wasn’t monitored) was a big influence, and that opinion leaders needed some management.

    Similar problem, but too messy to get done. Maybe if the initiatives had been set up with an experimental design, and the surveys had used controlled panels, and the monitoring had been a bit more focussed…

    Cheers, Tomasso.

  4. Possum Comitatus said

    With PCA, the mean-centring family of thought doesn’t deal well with the nature of the influences across most of the dataset, as nearly all of the Newspoll series are bimodal or multi-modal with a good whiff of skewness.

    Because different bands of values in the various series (for instance the satisfaction series and their derivatives) seem to have different strengths of influence on the primary vote series of both parties – conditional upon how far away the previous observation of the primary vote series was from key threshold values (such as 43.5 for the ALP) – the PCA-type prism I’ve fiddled with certainly points out the obvious, but nothing I didn’t already know, and it misses a lot of stuff. That said, PCA isn’t exactly my forte.
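
    A quick illustration of what I mean (toy data only): build a bimodal, two-regime satisfaction series, mean-centre it and take the principal components, and the first component mostly just rediscovers the regime split while saying nothing about the threshold-conditional effects.

    ```python
    # PCA (via SVD after mean-centring) on a toy bimodal "satisfaction" dataset.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 200
    regime = rng.choice([0, 1], size=n)
    satisfaction = np.where(regime == 1, rng.normal(55, 3, n), rng.normal(40, 3, n))
    dissatisfaction = 100 - satisfaction + rng.normal(0, 2, n)
    net = satisfaction - dissatisfaction

    X = np.column_stack([satisfaction, dissatisfaction, net])
    Xc = X - X.mean(axis=0)                      # the mean-centring step in question
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / (s**2).sum()

    print("share of variance by component:", np.round(explained, 3))
    print("first component loadings:", np.round(Vt[0], 2))
    ```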

    Lead indicators, or the discovery/creation/derivation thereof, are the ultimate goal – the interesting thing with some of them, though, is that they often work backwards! Even something as simple as net government satisfaction is actually a trailing indicator nearly half the time for the GOVPRIMARY series, simply covaries with it for about 25% of the time, and is a leading indicator only about 10-15% of the time.

    That suggests that the most likely behaviour is for people to change their voting intention first, then get pissed off with the government after they’ve made that decision. That sort of weird positive self-reinforcement behaviour of the electorate seems to be rampant throughout the Newspoll series.
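
    For illustration, one way that leads/coincident/trails tally could be produced (toy data, arbitrary window and lag range, not necessarily how I actually scored it): within each rolling window, find the lag at which net satisfaction lines up best with GOVPRIMARY and classify by its sign.

    ```python
    # Rolling-window classification: at which lag does net satisfaction line up
    # best with GOVPRIMARY, and does that lag say it leads, is coincident, or trails?
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(5)
    n = 300
    gov = pd.Series(45 + rng.normal(0, 1, n)).rolling(5, min_periods=1).mean()
    netsat = gov.shift(2) + rng.normal(0, 0.5, n)   # built so satisfaction mostly trails

    window, verdicts = 60, []
    for start in range(0, n - window, 10):
        g = gov.iloc[start:start + window]
        s = netsat.iloc[start:start + window]
        # lag > 0: satisfaction from `lag` polls ago matches the vote now (it leads);
        # lag < 0: satisfaction only catches up afterwards (it trails)
        corrs = {lag: g.corr(s.shift(lag)) for lag in range(-4, 5)}
        best = max(corrs, key=lambda k: abs(corrs[k]) if pd.notna(corrs[k]) else -1)
        verdicts.append("leads" if best > 0 else "coincident" if best == 0 else "trails")

    print(pd.Series(verdicts).value_counts(normalize=True).round(2))
    ```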

    One of the big lead indicators for the Opposition primary vote is the interest payments to disposable income ratio (which would be expected considering the combination of important, self-perceived economic well-being effects it represents).

    This thing:
    https://possumcomitatus.wordpress.com/2007/05/24/why-howard-is-rooted-in-one-simple-graph/

    – it’s a powerful long term trend effect with a variable lag, but also soft in that there is a lot of variability created by all the short term political shenanigans around that underlying trend.

    Something I’ve been thinking about would be to create a model based on purely economic variables, but then weight them according to the electorate’s perceptions of those economic effects, as derived from Newspoll and Morgan surveys as well as the commercial consumer-confidence-type surveys that I have access to.
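
    The mechanics would be something like the sketch below, where everything is invented for illustration and the salience weights would really come from the survey data mentioned above: scale each economic regressor by a 0–1 salience weight per period, then regress the primary vote on the weighted versions.

    ```python
    # Economic regressors scaled by per-period "salience" weights before the regression.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n = 100
    econ = pd.DataFrame({
        "interest_to_income": 9 + 0.05 * rng.normal(0, 0.5, n).cumsum(),
        "unemployment": 5 + rng.normal(0, 0.2, n),
        "petrol_price": 1.2 + rng.normal(0, 0.05, n),
    })
    # Hypothetical salience weights, e.g. the share of respondents rating each
    # issue "very important" in a given survey period.
    salience = pd.DataFrame({
        "interest_to_income": np.clip(0.5 + rng.normal(0, 0.1, n), 0, 1),
        "unemployment": np.clip(0.3 + rng.normal(0, 0.1, n), 0, 1),
        "petrol_price": np.clip(0.4 + rng.normal(0, 0.1, n), 0, 1),
    })
    weighted = econ * salience  # element-wise: each regressor scaled by its salience

    opp_primary = 42 + 1.5 * weighted["interest_to_income"] + rng.normal(0, 1, n)
    fit = sm.OLS(opp_primary, sm.add_constant(weighted)).fit()
    print(fit.params.round(2))
    ```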

    The microsegment stuff would be great to use, but the information in the Newspoll series isn’t rich enough to separate out the rusted-on votes with the seat-by-seat spatial distribution that would be ideal. This type of aggregation bias plagues the dataset.

    The best that I could do here is to use the “either side of 40” effect for the Opposition primary vote and the “either side of 43” effect for the Coalition primary, and then model those two sets of numbers. That provides an election threshold effect and I’ve played around with that to varying degrees of success.
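
    The threshold effect itself is easy enough to wire into a regression – something like the toy sketch below, where the 40-point threshold is the one mentioned above and the data and coefficients are invented:

    ```python
    # "Either side of the threshold" regression: an above/below dummy plus separate
    # slopes for the distance from the threshold in each regime.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 150
    opp = pd.Series(40 + rng.normal(0, 3, n)).rolling(3, min_periods=1).mean()

    threshold = 40.0
    above = (opp > threshold).astype(float)
    dist = opp - threshold
    # Toy response: some downstream quantity that reacts differently in each regime
    tpp = 50 + 1.2 * dist * above + 0.4 * dist * (1 - above) + rng.normal(0, 0.5, n)

    X = pd.DataFrame({
        "above_40": above,
        "dist_above": dist * above,        # slope when sitting above the threshold
        "dist_below": dist * (1 - above),  # slope when sitting below it
    })
    fit = sm.OLS(tpp, sm.add_constant(X)).fit()
    print(fit.params.round(2))
    ```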

    Alternatively – I could remove the 35% of the primary vote for the ALP and the 36% of the primary vote for the Coalition that accounts for the rusted-on + random primary vote levels for the two parties, below which their primary votes never fall, then look at the distribution of those modified series to whittle down the swinging voter number even further (maybe using stochastic dominance – novel, but it might work). I could then look at those periods where the Coalition and ALP primary votes were at their lowest and analyse the behaviour of the satisfaction series for the two parties at those points in time, to give me an adjustment mechanism for the satisfaction series, so I could then have a satisfaction series that somewhat accounted for the rusted-on voters. That would then let me model the swinging vote against an approximation of the swinging voters’ satisfaction dynamics (I’m a bit dubious about the integrity of the latter though). That may cut through some of the aggregation bias that plagues the series in terms of its effect on the primary vote.
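
    The first couple of steps of that, sketched on invented series (the 35 and 36 point floors are the ones just mentioned; everything else is made up): subtract the floor from each primary vote series and compare the empirical CDFs of what’s left, which is the comparison a stochastic dominance check would formalise.

    ```python
    # Strip out the "rusted-on" floor and compare the swinging components' ECDFs.
    import numpy as np

    rng = np.random.default_rng(8)
    alp = 35 + np.abs(rng.normal(6, 3, 200))        # toy ALP primary, never below 35
    coalition = 36 + np.abs(rng.normal(5, 3, 200))  # toy Coalition primary, never below 36

    alp_swing = alp - 35.0          # the movable component above the rusted-on base
    coal_swing = coalition - 36.0

    def ecdf(x, grid):
        """Empirical CDF of x evaluated on a common grid."""
        return np.array([(x <= v).mean() for v in grid])

    grid = np.linspace(0, 20, 41)
    alp_cdf, coal_cdf = ecdf(alp_swing, grid), ecdf(coal_swing, grid)

    # First-order stochastic dominance: one CDF sits weakly below the other everywhere.
    if np.all(alp_cdf <= coal_cdf):
        print("ALP swinging component is stochastically larger")
    elif np.all(coal_cdf <= alp_cdf):
        print("Coalition swinging component is stochastically larger")
    else:
        print("neither dominates")
    ```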

    I might get onto that right now!

    Thanks Tom, especially the bit about customer satisfaction at the end – that’s good food for thought.

