The Rudd government has made a mistake.
Not just a mistake on data analysis, nor just a mistake on managing information flows, nor even “just” a mistake of implementing a policy that has no evidential foundation.
They’ve made a mistake that will undermine their entire policy agenda for the next three years unless they rectify the process responsible for creating it.
We all remember the FuelWatch saga – where the ACCC assured us that their modelling was correct (even though they refused to release the relevant data), and where they assured us that FuelWatch in WA reduced the price differential between Perth and the Eastern capitals, even though some of us thought the data – at least the petrol price data we could independently scrounge up at the time – didn’t really show any such thing.
“Evidence based policy” was how FuelWatch was spun. A noble cause in and of itself – I’d imagine we’d all prefer policy to be based on evidence; it sure beats the alternatives. But what if the evidence the policy was based on was so questionable that the line between “evidence based policy” and “policy based evidence” – that style of political management where data is interpreted according to the needs of the policy rather than policy being designed to meet the needs of the evidence – became indistinguishable?
Professor Don Harding, an economist at the Department of Economics and Finance at La Trobe University, was kind enough to give us a peek at an as yet unpublished draft paper he’s currently working on called “Foolwatch – A Case study of econometric analysis and evidenced-based-policy making in the Australian Government”.
He has kindly allowed his draft version of the paper to be downloaded here, and Don encourages anyone with views on its content, particularly the econometrics, to contact him with feedback via the contact details on his page (linked to his name).
Professor Harding went through the painstaking task of pulling the actual data out of the ACCC-supplied chart we’ve talked about here previously, in order to model it. There’s quite an irony here – the ACCC refused to release the data for a whole lot of basically spurious reasons, but made the mistake of releasing a graph of that data, which determined folk like Don could use to extract the very numbers the ACCC were trying to hide.
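For readers curious what that sort of exercise involves, here is a minimal sketch of chart digitisation – mapping pixel coordinates read off a published graph back into data values via axis calibration points. Every pixel coordinate and calibration value below is invented for illustration; none of it comes from the ACCC chart or from Harding’s paper.

```python
def make_axis_mapper(px0, val0, px1, val1):
    """Return a function mapping a pixel coordinate to a data value,
    given two calibration points read off one of the chart's axes."""
    scale = (val1 - val0) / (px1 - px0)
    return lambda px: val0 + (px - px0) * scale

# Hypothetical calibration: suppose pixel 100 on the x-axis marks month 0
# and pixel 900 marks month 72; on the y-axis, pixel 500 marks 0 cents
# per litre and pixel 50 marks 15 cents (screen y runs downward).
to_month = make_axis_mapper(100, 0, 900, 72)
to_cents = make_axis_mapper(500, 0.0, 50, 15.0)

# Each digitised point (pixel_x, pixel_y) becomes (month, margin in cpl).
points_px = [(100, 350), (500, 320), (900, 290)]
series = [(to_month(x), to_cents(y)) for x, y in points_px]
```

Done point by point across a whole chart, this is tedious but entirely mechanical – which is why publishing a graph while withholding the data protects nothing.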
What he found was disturbing.
Far from the ACCC econometric modelling that the Rudd government used to justify its longstanding FuelWatch proposal being robust, it was misspecified and incorrectly tested. Their use of the nominal retail margin as a variable, rather than the real retail margin, is inconsistent with standard econometric approaches to this kind of modelling. Their explanations of what their data actually represented were also seriously lacking in specificity – to the point where it was difficult to work out just what they were actually measuring, how they were measuring it, and how they tested it – and this uncertainty casts serious doubt on every conclusion drawn from the ACCC research.
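The nominal-versus-real distinction matters because a margin quoted in cents per litre drifts upward with inflation even when nothing real has changed. A minimal sketch of the standard fix – deflating a nominal series by a CPI index with a base of 100 – using invented figures:

```python
# Deflating a nominal margin series into real terms with a CPI index
# (base period = 100). All figures here are invented for illustration.
nominal_margin = [10.0, 10.3, 10.6]    # cents per litre, by quarter
cpi            = [100.0, 101.0, 102.5]  # price index, base = 100

# real margin = nominal margin scaled back to base-period prices
real_margin = [m * 100.0 / p for m, p in zip(nominal_margin, cpi)]
```

In this toy series the nominal margin rises every quarter, but once deflated most of that rise is just inflation – exactly the kind of artefact that modelling the nominal margin can mistake for a policy effect.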
Don Harding found that, on the evidence available to the ACCC, the conclusion drawn by the Commission – that FuelWatch did not increase petrol price margins in WA – is in fact unsupported, and that no such conclusion can actually be derived from the data once it is modelled correctly. At most, the average reduction in the real price margin due to FuelWatch is less than one third of one cent per litre, but using an orthodox 95% confidence interval it could statistically lie anywhere between a 1.01 cent per litre reduction and a 0.43 cent per litre increase.
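To see why that interval is fatal to the claimed result, note that it straddles zero. The sketch below backs a point estimate and standard error out of the interval quoted above – these derived figures are my own arithmetic on the published interval, not numbers taken from Harding’s paper – and checks that zero sits inside the 95% band:

```python
# 95% confidence interval around the re-estimated FuelWatch effect.
# Point estimate and standard error are backed out of the quoted interval
# (-1.01 to +0.43 cents per litre): midpoint -0.29, half-width 0.72.
point_estimate = -0.29             # cents per litre (a reduction)
std_error      = 0.72 / 1.96       # implied standard error
z = 1.96                           # two-sided 95% normal critical value

lower = point_estimate - z * std_error
upper = point_estimate + z * std_error
contains_zero = lower < 0 < upper  # interval straddles zero
```

Because the interval contains zero, the data are consistent with FuelWatch having reduced margins, having done nothing, or having increased them – which is precisely why no firm conclusion can be drawn.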
Don also goes to great lengths to point out that it isn’t the fault of the ACCC econometricians here, and this is something that I wholeheartedly agree with. The data monkeys aren’t responsible for the problems here – far far from it.
It is the process that is at fault, and those who manage that process.
The conclusions that Graeme Samuel was feeding to the media when the FuelWatch shitfight was happening were incorrect, because the modelling he was basing them on was incorrect. It cannot be proven that FuelWatch in WA did not increase petrol price margins.
Labor’s “evidence based policy” spiel over FuelWatch was nothing but political spin – but it was probably not deliberate spin. Wayne Swan no doubt believed that the modelling he received was accurate. Yet the problem was that the modelling that became the basis of Labor’s political justification was created in an environment of zero transparency.
And this gets us back to how the Rudd government has made a mistake that will undermine their entire policy agenda for the next three years unless they rectify the process responsible for creating it.
At the moment, the Rudd government is following a very astute and responsible technocratic process for the high volumes of future policy delivery they will engage in over the next three years.
“You cannot manage what you do not know” is the basic currency of good policy development.
So Rudd, correctly, instituted large numbers of reviews and government inquiries to ostensibly gather information and make recommendations so that the government will be in a position later this year to formulate policies armed with information on observable reality.
Despite the bleatings over these reviews from the shallow end of the media pool in this country, with their profound ignorance of the pointy end of politics, the army of inquiries and reviews initiated by the government is, in and of itself, a necessary prerequisite for the type of “evidence based policy” program Rudd has stated he will pursue.
But the big problem here, and one that will (and I say “will” pretty confidently) derail this policy program is the way in which the Federal government, their departments and their agencies treat third party access to the very data whose analysis often becomes the basis of policy recommendations.
We’ve got this enormous communications technology infrastructure that enables the efficient and near instantaneous aggregation of knowledge and expertise, yet it is being effectively sidelined and ignored by political and management practices that are 15 years out of date – and ignored at the government’s peril.
The gatekeeping of information by departments and agencies used to be possible, and having the public treat unseen internal analysis of that data as gospel from which policy was recommended was also generally accepted – but those days are gone.
What has just happened here with FuelWatch, a fairly comprehensive debunking of the analysis that was used to justify a relatively irrelevant piece of policy, will increasingly happen to other areas that are far, far from irrelevant.
The reason it will increasingly happen is simple – there is a greater number of interested policy specialists, analysts and general expertise that is external to government than there is within government. While this has pretty much been the case over the last 15 years or so, what differentiates then from now is that the external expertise can easily be aggregated and organised at virtually zero cost by the online world and the results of their independent analysis can be distributed widely to a very large, highly influential and still rapidly growing audience.
If the Rudd government is actually interested in “evidence based policy”, rather than descending into the world of orchestrating “policy based evidence”, they need to adopt a data accessibility regime where as much as possible of the data that is the basis for policy recommendations is made available to the wider public at the earliest possible time in the policy development cycle. Likewise, departmental and agency analysis must be released publicly for scrutiny.
The FuelWatch saga is the perfect example of the need for such a data treatment regime.
Under an “evidence based policy” approach, the government wouldn’t have stated that FuelWatch was going to be implemented; they would have said: here is one possible proposal – will it work?
Then they would not only have commissioned the ACCC to do research, but made the relevant data publicly accessible at the same time. This way, the external expertise would have had their say, it would have been in the public domain getting refined, praised or smashed under the burden of scrutiny, and the best pieces of research would have been propelled to the top of the pile under the power of their own merit – all before the ACCC research was completed.
If the full ACCC research was then released for public scrutiny once it was finished, the larger pool of external expertise would not only have highlighted the inaccuracies and poor methodology of the ACCC analysis, but would have highlighted independent competing analysis that would have killed off a poor policy initiative before it was ever implemented.
Yet now the government is facing the ultimate embarrassment of not only getting slugged because the ACCC analysis on which they relied was wrong, but also of having to wear analysts, and consequently the media, pouring big buckets on the policy in the near future when the results of the policy are measured and most likely demonstrated to be a failure.
The two benefits of this approach to third party data access are simple. Firstly, it’s a far superior political risk-management approach. The worst thing a government can do is implement a policy that becomes a failure – making the data whose analysis becomes the basis of policy recommendations publicly accessible not only increases the likelihood that poor data analysis and modelling will be fingered before it gets a chance to pollute government policy (and be widely publicised as doing just that), but also provides the government with a zero cost alternative resource from which they can pinch and co-opt the better bits at their leisure.
Secondly, the competitive effects between internal and external analysis will reduce lazy analysis or analysis tweaked to favour certain agendas unrelated to actual policy outcomes, particularly from government departments and agencies which, as a result of basic human nature, often get influenced by random bouts of empire building and inter-agency and inter-departmental pissing contests.
The big excuse that always gets dragged out about now on why such a thing can’t happen – that the government doesn’t have the resources nor the time to review the external analysis – is usually made by people who have close to zero understanding of the way the crowdsourcing of information actually works in practice in today’s technology and communications rich environment.
The agencies and departments won’t have to follow the external debate – the external debate will make itself known quite comprehensively when departments get it wrong, and when better ideas are available. At the end of the day, while there may be large volumes of expertise available external to the government – in reality only a small amount will be deployed on any given piece of data or policy, with larger amounts being deployed critiquing that external analysis which is where the value of distributed and aggregated knowledge comes in. The government simply won’t be flooded with hundreds, let alone thousands, of competing pieces of data analysis – they’ll just be made aware of the best few, which is really all it takes.
If the government believes that evidence based policy is truly desirable, then they need to open up the relevant data to third party access. Policy development in this country will be far better for it, the quality of public debate will be far better for it and over the longer term, the political fortunes of the government will be far better for it.
I encourage everyone to have a squiz at Professor Don Harding’s draft paper (and it is only a draft paper at this stage) – even though some of it is econometrics heavy, most of it isn’t, and it makes for a damn fine read about the key issues that surround “evidence based policy” and its possible pitfalls when not undertaken properly.