Possums Pollytics

Politics, elections and piffle plinking

Beltway Theory or JTSAU?

Posted by Possum Comitatus on May 19, 2007

 

Paul Kelly tells us that there are duelling-banjos realities in the electorate: there are, he tells us, “inside the beltway” theories suggesting that the government must win because the Canberra press gallery says so, sorry, because the economy is so good, and then there are the “outside the beltway” shmucks…. that’d be you and me, who seem to be ignoring the journos’ wisdom and giving the Newspoll folks incorrect answers.

So let’s test the “Beltway Theory”.

Let’s create an index of economic strength so we can test it against voting behaviour as defined by Newspoll.

So to start, let’s create a basic index that represents GDP growth, the interest rate (defined as the standard variable home loan rate) and unemployment. GDP growth is easy: as it increases the index will move up, and as it decreases the index will move down. All we need to do is weight it.

Interest rates and unemployment are different: as they go down, the economic index must go up, so let’s weight the inverse of each of those. Then we add up the three weighted values and we’ll have a measurement that moves up with a good economy and down with a bad one.

To show how this all works, let’s use the first month of the series, December 1985.

GDP growth for this month was 0.141338%. That’s about 1.7% growth for the year.

So the first component of the index is W1*GDP, where W1 is a weight we give the GDP series. Let this weight be W1 = 20, so that W1*GDP = 2.826756.

The second component is the interest rate level which in Dec 1985 was 13.5%.

Taking the inverse of this: 1/13.5 = 0.074074. So for our index we need to weight this number. Let’s give it a weight of 100 so that the weighted value for the interest rate component in the index is 7.407407

Finally, let’s do the same for unemployment. In Dec 1985 the unemployment rate was 7.8%. The inverse of 7.8 is 0.128205. If we give the inverse of unemployment a weight of 100, the unemployment component of the index is 12.82051.

So the total index value for Dec 1985 is 2.827+7.407+12.82= 23.05468
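
For anyone who wants to reproduce the arithmetic, here is a minimal sketch of the index calculation in Python. The weights and the December 1985 figures are the ones used above; the function name is purely for illustration.

```python
# Minimal sketch of the economic index described above.
# Weights: GDP growth gets 20, and the inverses of the interest rate
# and the unemployment rate each get 100.

def econ_index(gdp_growth_pct, interest_rate_pct, unemployment_pct,
               w_gdp=20, w_int=100, w_unemp=100):
    """Weighted economic index for one month."""
    gdp_part = w_gdp * gdp_growth_pct              # rises with growth
    int_part = w_int * (1 / interest_rate_pct)     # rises as rates fall
    unemp_part = w_unemp * (1 / unemployment_pct)  # rises as unemployment falls
    return gdp_part + int_part + unemp_part

# December 1985: 0.141338% monthly GDP growth, 13.5% standard variable
# home loan rate, 7.8% unemployment.
print(econ_index(0.141338, 13.5, 7.8))  # roughly 23.05
```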

Are these weights reflective of reality? Mostly. The highest value attained by each of the weighted components (GDP, interest rates, unemployment) is 13.8, 16.5 and 22.7 respectively, while the minimum values are -6.8, 5.8 and 9.1.

The highest index value was 22.7 which is its value today; the lowest was 9.1 in December 1992.

That’s not perfect, but it will do for the purposes of this test, because the index moves up with good economic times and down with bad economic times. We are more concerned with the movement than with the actual values.

So now that we have our index, let’s regress the government’s primary vote since 1985 on its own lagged value and this economic index (we’ll call it ECONINDEX).

If the “inside the beltway” theory holds, then the coefficient for ECONINDEX will be positive. Positive changes in the index should walk hand in hand with increases in the government’s primary vote as estimated by Newspoll.

If the economic condition of the country seriously influences voter behaviour, then the coefficient for ECONINDEX should be relatively large AND statistically significant, i.e. have a Prob value less than 0.1, preferably less than 0.05.

So let’s test the theory by doing the regression.
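
The estimation output below looks to have come from EViews; for anyone who would rather replicate the idea themselves, here is a rough equivalent sketch in Python. The data file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly dataset holding the Newspoll government primary
# vote and the economic index built above.
df = pd.read_csv("newspoll_econ.csv", parse_dates=["month"], index_col="month")

# One-month lag of the dependent variable.
df["GOVPRIMARY_LAG"] = df["GOVPRIMARY"].shift(1)

# Regress the primary vote on its own lag and the economic index.
model = smf.ols("GOVPRIMARY ~ GOVPRIMARY_LAG + ECONINDEX",
                data=df.dropna()).fit()
print(model.summary())
```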

 

Dependent Variable: GOVPRIMARY
Method: Least Squares
Date: 05/19/07   Time: 14:59
Sample (adjusted): 1986M01 2007M05
Included observations: 257 after adjustments

Variable          Coefficient   Std. Error   t-Statistic   Prob.
C                    8.929002     1.725481      5.174790   0.0000
GOVPRIMARY(-1)       0.758701     0.040235     18.85692    0.0000
ECONINDEX            0.041177     0.020988      1.961928   0.0509

R-squared             0.602035   Mean dependent var       42.437
Adjusted R-squared    0.598901   S.D. dependent var        3.8455
S.E. of regression    2.435473   Akaike info criterion     4.6297
Sum squared resid     1506.608   Schwarz criterion         4.6711
Log likelihood       -591.9246   F-statistic             192.12
Durbin-Watson stat    2.206406   Prob(F-statistic)         0.0000

The variable is significant enough, but its value of 0.04 suggests that the difference between the economy of December 1992 and the economy of today is only worth 0.04*(22.7 - 9.1) = 0.5576% to the government’s primary vote.

Now that’s clearly twaddle by any yardstick. I created a couple of other economic measurements as well, incorporating business expectations and various weighting mechanisms, and they all turned out to be in the same ballpark.

So let’s look at this regression just for the Howard government period:

Dependent Variable: GOVPRIMARY
Method: Least Squares
Date: 05/19/07   Time: 15:17
Sample: 1996M03 2007M05
Included observations: 135

Variable          Coefficient   Std. Error   t-Statistic   Prob.
C                   12.87587      3.829054      3.362677   0.0010
GOVPRIMARY(-1)       0.699339     0.067491     10.36189    0.0000
ECONINDEX            0.001680     0.048245      0.034830   0.9723

R-squared             0.473274   Mean dependent var       43.110
Adjusted R-squared    0.465293   S.D. dependent var        3.3607
S.E. of regression    2.457537   Akaike info criterion     4.6581
Sum squared resid     797.2126   Schwarz criterion         4.7227
Log likelihood       -311.4264   F-statistic              59.302
Durbin-Watson stat    2.044193   Prob(F-statistic)         0.0000

Lo and behold, it’s absolute twaddle. The index is irrelevant to the Howard government primary vote. Its value is so small it’s meaningless, and the thing couldn’t be more statistically insignificant if it tried.

Now let’s do it in a disaggregated way, where we’ll junk the index and look at its components instead, and we’ll throw consumer confidence and the yearly % change in the value of the ASX in there as well. And we’ll do it for the Howard era.
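
In the same hypothetical Python setup as before, that disaggregated specification would look something like this (again, the file and column names are made up for illustration):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Same hypothetical monthly dataset, restricted to the Howard era, with
# the index replaced by its components plus consumer confidence and the
# yearly % change in the ASX.
df = pd.read_csv("newspoll_econ.csv", parse_dates=["month"], index_col="month")
howard = df.loc["1996-03":"2007-03"].copy()
howard["GOVPRIMARY_LAG"] = howard["GOVPRIMARY"].shift(1)

model = smf.ols(
    "GOVPRIMARY ~ GOVPRIMARY_LAG + GDP + INT + UNEMP + CONCONF + ASX",
    data=howard.dropna(),
).fit()
print(model.summary())
```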

Dependent Variable: GOVPRIMARY
Method: Least Squares
Date: 05/19/07   Time: 15:23
Sample (adjusted): 1996M03 2007M03
Included observations: 133 after adjustments

Variable          Coefficient   Std. Error   t-Statistic   Prob.
C                    0.745883     4.243908      0.175754   0.8608
GOVPRIMARY(-1)       0.554957     0.070452      7.877036   0.0000
GDP                  1.291302     1.080009      1.195640   0.2341
INT                  0.677360     0.240362      2.818083   0.0056
UNEMP                0.475960     0.200412      2.374910   0.0191
CONCONF              0.093489     0.031964      2.924811   0.0041
ASX                  0.120735     0.063475      1.902078   0.0594

R-squared             0.536205   Mean dependent var       43.212
Adjusted R-squared    0.514120   S.D. dependent var        3.2799
S.E. of regression    2.286292   Akaike info criterion     4.5429
Sum squared resid     658.6184   Schwarz criterion         4.6950
Log likelihood       -295.1052   F-statistic              24.278
Durbin-Watson stat    1.967456   Prob(F-statistic)         0.0000

So we have GDP that’s not statistically significant to the government’s primary vote; we have interest rates working opposite to the beltway theory, whereby interest rate increases walk hand in hand with increases in the government’s primary vote; and likewise with unemployment – the higher it gets, the higher the government primary vote can be expected to be. Only consumer confidence and the stock market behave the way the theory needs them to, with their increases walking hand in hand with increases in the government’s primary vote – but by very small amounts.

So what do we have in the end?

Well, we have a “beltway theory” that reckons governments live and die by the state of the economy, and we have the reality, where GDP is meaningless to the government’s primary vote and interest rates and unemployment act completely contrary to the beltway theory. Strike 3 for Beltway – it’s just Journos Talking Shit As Usual.

10 Responses to “Beltway Theory or JTSAU?”

  1. slim said

    Love your work!

    Keep cutting through the crap, spin, dogma and mythologising with your razor-sharp regression analysis.

  2. watt said

    Awesome stuff!

    Thank you for posting it!

  3. gusface said

    very entertaining : )

  4. possumcomitatus said

    My pleasure folks, I’m glad you’re all enjoying it.
    Who would have thought that the econometric dismantling of media horseshit could ever find an audience 😉

  5. Appu said

    Can’t say I understand any of it, but if this is empirical evidence that Paul Kelly makes stuff up,
    well, who would have thunk it!

  6. Rocket said

    I have just found your website – as a maths graduate (years ago!) I am intrigued. My analyses are much more primitive than yours but they do point in the same direction!

  7. EconoMan said

    Possum, are you familiar with election forecasting models based on the economy? See Andrew Leigh’s paper on competing approaches to forecasting elections.

    Most use variables such as ‘better off than 12 months ago’ and ‘expect to be better off in 12 months’, as opposed to quarterly GDP growth. Unemployment and other variables can also be ‘change relative to a year ago’ rather than absolute levels.

    Depending on how sophisticated you believe the electorate is, you could also compare key variables to global or OECD averages. That would clearly put Australia’s current performance into perspective — we are only doing OK compared with many countries ATM.

  8. possumcomitatus said

    I’ve always found election forecasting an interesting proposition, but there isn’t much around in the Australian sphere for time-series-based models (which is the type I’m doing). There are only about a dozen examples off the top of my head, and I find most of them have been problematic in some way, often extremely problematic. The chief problem for the time series models in the Australian sphere is that they tend to use only federal election results as their data points. There have only been 41 elections, and that really isn’t enough data points to be able to spec out a model properly, because you run into parsimony issues.

    The smaller the number of data points, the fewer variables you can realistically use to model the series if you want to keep some decent level of precision. To get around this I’m not using just federal elections, I’m using Newspoll, which gives me 258 data points and growing. That’s a large enough sample to practically overcome the parsimony issue (which basically says that simple models are preferable to complex ones, because every extra variable you add to the model increases the uncertainty contained within it). In theory, parsimony is always an issue regardless of the size of your sample. In practice, in the real world, a sufficiently large sample overcomes the parsimony issue if one doesn’t get too carried away overspecifying the model.

    That then allows me to include a larger number of variables without ruining the precision, while simultaneously accommodating the fact that elections tend to be close by accounting for the movements in the polls in the month before the election with a dummy variable (I’m developing a few other ways to achieve this rather than dummy variables as well, but I haven’t got them working quite right yet). For forecasting purposes that effectively gives me “what would be the level of support for each party if an election campaign wasn’t underway”, and then adjusts those levels of support to take into account the compression effects of the campaign.

    The models that others have done for Australian elections, because they were using only election results, were forced to use things like change in GDP over a quarter, or change in unemployment over the electoral cycle, because of their small sample. I don’t have that problem (I have a different set of problems), so I can use monthly data as the smallest unit. That also allows me to measure any impulse responses from events, as well as allowing me to measure the impact of economic variables in time frames larger than their smallest unit – such as change in unemployment over the previous 12 months, or 2 years – should I need to.

    Another problem with the previous modelling is the timeliness aspect. Because elections effectively deal with human decision making, and because the framework and context within which each of us makes those decisions changes over time (for instance, we are bombarded with far more information about the economy today than people were in the 1950s), comparing election results from the 1950s with 2004, for example, breaks down, in my mind, any notion of temporal consistency. They were two completely different periods with completely different social, economic and, more importantly, information environments. I’m reluctant to go back even as far as December 1985 (which is where my dataset starts).

    On using variables like “expect to be better off in 12 months” – I’m hesitant about expectations variables for a number of reasons relating to the inherent uncertainty contained within them. If I want to fold some level of expectations information into a model, I’d rather use consumer confidence, because I know how that operates compared to observable reality.

    What I’m actually attempting to do is model party political support as a function of observable economic reality. And because I can adjust that party political support for the compression effect that happens in election campaigns, and because I’m working with a far larger dataset than others have used, I can let these models contain far more information in terms of explanatory variables than others have, simply because of the methodology. I’m using a different time series methodology because, to put it bluntly, I think it’s a superior approach.

    Another reason why I’m doing it this way – actually a really fundamental reason – is that I plan on building Vector Autoregressive models closer to the election, where I’ll model the Coalition primary vote sequence and the ALP primary vote sequence together as a group of equations, which allows me to model the feedback that occurs between the two series as well as doing Granger causality tests (there’s a rough sketch of that idea after the comments) – well, that’s the plan anyway.

    You raise an interesting point on international comparisons for things like growth. I’ll look into that and see what I can come up with.

  9. EconoMan said

    I’ve no doubt you know much more about the econometrics / stats side of things, but I know enough to understand what you are talking about.

    I’d argue that one of the reasons why those models use annual rather than quarterly data is not just parsimony, but that it reflects what most voters would actually know. But fair point you make on consumer confidence being a reasonable substitute for ‘expect to be better off’.

    One final point. To put it crudely, it seems that your model is predicting Newspoll results, not election results. While that might give you more data points, it is less useful. You then have the uncertainty of mapping from polls to elections. Uncertain not only because of sampling error (which one hopes ‘means’ to zero with 258 data points), but because of the poll’s weak performance in predicting elections (see the same Andrew Leigh paper).

    Interested in your thoughts on the water post…

  10. possumcomitatus said

    Annual changes probably do reflect what voters know better than quarterly or monthly changes do. But the trends in those changes that walk hand in hand with the trends in party political support can be modelled more accurately with smaller time units of data than with longer ones, because smaller units let you model changes over any larger period (3 months, 12 months, 2 years, etc.), whereas yearly changes in GDP or unemployment, for example, don’t let you model any effects that may be due to GDP growth or unemployment over periods smaller than 12 months.

    By using monthly rather than annual changes, I can model the same things that can be modelled using annual changes, and model them in the same way, but it also allows me to model 3, 6 and 12 month lag effects (lag effects to any level), as well as 3, 6 and 12 month moving-average effects (or moving-average effects to any level), as well as how those economic effects crystallise out in voters’ minds during election campaigns (there’s a rough sketch of what I mean after the comments). That can’t be achieved as accurately using just annual data.

    You are spot on, I am modelling Newspoll, not electoral outcomes. But I’m only modelling primary votes with Newspoll, which lets me ignore the preference distribution issue, which is where most of the uncertainty seems to come in. As for the uncertainty involving the differences between the Newspoll primary vote findings and the electoral findings, I think that uncertainty can be reduced by ARCH modelling of the error terms of my eventual forecast model. Until I get that specced out, or something reasonably approaching it, I can’t deal with the ARCH component.

    I plan at this stage to model preference distributions with probability functions later, when there’s some more data around on preference flows and the preference deals for the election are finalised. That’ll then allow me to take my primary vote forecasts and overlay the preference distribution model onto them to get my final TPP forecast. Until then I’m just exploring relationships between primary votes and observable economic reality. The interest payments to disposable income ratio with a 2 year lag is a good example. That’s profound if it’s true, and there is good economic logic backing it.

    I only started doing this one week ago. The blog actually started last Thursday by accident – it was only going to be a place where I could stick some diagrams and stuff and have them automatically archived for debates I have around the net. But then the traffic started and it’s taken on a life of its own. So I’m only in the very initial stages of doing what I plan to do. I’m still laughing at the fact that I’ve become an accidental blogger.

    I am planning on getting back to you on the water issue, hopefully by tonight. You raised some interesting points about property rights that I need to think about more in order to do them justice.
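
A rough sketch of the kind of lagged and moving-average variables described in comment 10, plus the campaign dummy mentioned in comment 8. It uses the same hypothetical monthly dataset as the sketches in the post, and the election dates are placeholders, not a real list:

```python
import pandas as pd

# Hypothetical monthly dataset with unemployment and the Newspoll
# primary vote, as in the sketches in the post.
df = pd.read_csv("newspoll_econ.csv", parse_dates=["month"], index_col="month")

# Lag effects at 3, 6 and 12 months.
for k in (3, 6, 12):
    df[f"UNEMP_LAG{k}"] = df["UNEMP"].shift(k)

# Moving-average effects over 3, 6 and 12 months.
for k in (3, 6, 12):
    df[f"UNEMP_MA{k}"] = df["UNEMP"].rolling(k).mean()

# Change relative to a year ago (the sort of annual change other models use).
df["UNEMP_YOY"] = df["UNEMP"] - df["UNEMP"].shift(12)

# Campaign dummy: 1 in the month before each election, 0 otherwise.
# These dates are placeholders only.
election_months = pd.to_datetime(["1998-10-01", "2001-11-01", "2004-10-01"])
df["CAMPAIGN"] = df.index.isin(election_months - pd.DateOffset(months=1)).astype(int)
```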
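
And a sketch of the VAR and Granger-causality idea mentioned in comment 8, again with hypothetical column names; this only illustrates the technique, not the actual model described there.

```python
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import grangercausalitytests

df = pd.read_csv("newspoll_econ.csv", parse_dates=["month"], index_col="month")

# Model the Coalition and ALP primary vote series jointly, so that the
# feedback between the two series is part of the model.
votes = df[["COALITION_PRIMARY", "ALP_PRIMARY"]].dropna()
var_model = VAR(votes).fit(maxlags=6, ic="aic")
print(var_model.summary())

# Does the ALP series help predict the Coalition series?
grangercausalitytests(votes[["COALITION_PRIMARY", "ALP_PRIMARY"]], maxlag=6)
```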
