It’s all about the ‘killer facts’. If you want to get social science into policy, then – as Alex Stevens’ wonderful covert ethnography of high-level policymaking shows – killer facts are the name of the game. And we try hard on the blog to get these across to you, as often and clearly as we can.
But sometimes it’s necessary to take a step back, and think about whether these killer facts are, well, ‘facts’ at all. These issues come up repeatedly on the blog, not least when debating the effects of inequality on society; indeed, the very first post on the blog was about the Spirit Level debates, and we’ve come back to this since.
In this pair of posts, I want to challenge every researcher (and every user of research) to demand one more safeguard of credibility in fact-creation: full transparency.
This is all prompted by a friend pointing out that the Center for Global Development now has a policy of publicly sharing ALL the data and computer code that underlie the numbers they create in their publications (where they are allowed to). I think everyone should do this – in fact, I would go so far as to say that it is morally dubious not to. In this post I’ll explain why this is necessary, and then next week I’ll show why the opponents of transparency are misguided.
In favour of transparency
My favourite argument in favour of transparency comes from Jeremy Freese, onto which I’ve tacked my own thoughts – his paper is great reading. (See also the whole SMR special issue on this from 2007, including papers by King, Firebaugh and Abbott and a response by Freese; Jeremy’s paper is freely available, and I’m happy to send on any of the other papers to anyone who wants them – transparency cuts in many ways. This Fraser Institute 2009 report looks good too.)
The essential reasons for transparency are to overcome three problems. Firstly – and as anyone who has done any quantitative social science will know – there are a whole load of arbitrary decisions in doing any analysis. Which part of the sample do I use, and do I weight them? How do I turn the raw questions into the variables in my model? Which exact outcome do I look at? Which variables do I include in the model? What sort of statistics do I run? Etc. And sometimes people just make plain mistakes. In fact, the number one rule I was told as a new researcher was, ‘if you get a really exciting result, you’ve probably done something wrong’ – and they were right.
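To make this concrete, here is a minimal sketch – in Python, using entirely made-up data and variable names of my own invention, not drawn from any study mentioned here – of how two perfectly defensible specification choices can give noticeably different answers to the same question.

```python
# A minimal, hypothetical sketch (invented data, not any real study):
# the same dataset, analysed under two defensible specifications, can
# give very different estimates of the 'effect' of income on health.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000
age = rng.integers(18, 80, n)
income = 20 + 0.3 * age + rng.normal(0, 10, n)          # income rises with age
health = 50 + 0.1 * income - 0.2 * age + rng.normal(0, 10, n)

# Specification 1: everyone in the sample, no control for age
spec1 = sm.OLS(health, sm.add_constant(income)).fit()

# Specification 2: working-age respondents only, controlling for age
mask = (age >= 25) & (age <= 64)
X2 = sm.add_constant(np.column_stack([income[mask], age[mask]]))
spec2 = sm.OLS(health[mask], X2).fit()

print("Income coefficient, specification 1:", round(spec1.params[1], 3))
print("Income coefficient, specification 2:", round(spec2.params[1], 3))
```

Neither specification is obviously ‘wrong’, which is exactly the problem: unless readers can see the code, they cannot know which of the many defensible versions produced the published number.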
Secondly, in an ideal world, the errors created by these arbitrary decisions would be random, and – on average – research results would still be the truth. Ah, to live in such an ideal world. In reality, researchers are biased towards a particular answer. This does not mean that they set out to fib; it’s mainly a matter of unconscious biases, and the desire to create a ‘statistically significant’ result that will get you published in a good journal, and thereon to fame and fortune.
Finally, data collection usually happens with public money, or (to a lesser degree) using money from charities. This money is often being spent to create knowledge and promote the public good. If someone has gone to the trouble of spending all this money and bothering people, then it seems best to share this data for other people to use, helping to further scientific understanding at the lowest possible economic and ethical cost.
Does all this matter?
In practice, getting hold of other people’s data (let alone the code that sets out their analyses) is often a challenge. Freese cites Wicherts et al (2006), who found that only 27% of authors in the American Psychological Association’s top journals complied with repeated requests for data for verification purposes. Replicability in economics journals has historically been even worse. Moreover, Wicherts et al’s latest (2011) paper finds that the authors of weaker papers (weaker evidence, more apparent errors) were less likely to agree to share their data – which sounds a lot like covering up to me.
And this matters. There are many examples of researchers trying to replicate published findings that are bandied about in public debate, only to find themselves completely unable to recreate the results (see the examples cited in Freese, and particularly in McCullough et al) – including debates around abortion &amp; crime, school choice, and the impact of policing.
More widely, I love Lehrer’s piece in the New Yorker on the ‘decline effect’ – someone finds a new ‘truth’, there’s a delay before people get the data to replicate it, and then it turns out it wasn’t true to begin with. But only after everyone has got excited about the original finding. Again, this is not deliberate fraud, but rather the result of chance findings and data mining, as described by Ioannidis (2005) in a paper with the title ‘Why most published research findings are false’.
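To illustrate the mechanism in rough terms, here is a small, hypothetical simulation (Python; the numbers are invented for illustration, not taken from Ioannidis or Lehrer): if many studies test effects that do not exist and only the ‘significant’ ones get written up, the published record will contain a steady stream of findings that later fail to replicate.

```python
# A rough, illustrative simulation (invented numbers, not a reanalysis of
# anything cited above): 1,000 studies each test an effect that does not
# exist; only the 'statistically significant' ones get written up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_group = 1000, 30
significant = 0
for _ in range(n_studies):
    treatment = rng.normal(0, 1, n_per_group)   # no true difference between groups
    control = rng.normal(0, 1, n_per_group)
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value < 0.05:
        significant += 1                        # this one gets published

print(f"{significant} of {n_studies} pure-noise studies look 'significant'")
```

Roughly 5% of these pure-noise studies clear the conventional significance threshold – and if those are the ones that get published, they are also the ones that later ‘decline’.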
But…
That said, there are several strong arguments against transparency, which are the subject of much debate between Freese, King, Abbott and Firebaugh (and others). In the second part of this post, I’ll look at these arguments and see if they stand up to scrutiny. And finally I’ll conclude with a message about how all of us interested in inequalities – researchers, policymakers, and people who just want to know the truth – should change the way we do things as a result.
