Issues in Science and Technology Librarianship
We have no contributed Viewpoint to share in this issue, so I have taken the liberty of writing my own. Let this be a reminder to our readers: We want your opinions! Please consider submitting a Viewpoint for an upcoming issue of ISTL.
The conventional wisdom among Open Access advocates and librarians is that articles that are freely available will be read more, downloaded more, and by extension cited more. It seems like a no-brainer: take down the walls and people will come in. Indeed, a number of studies published in recent years have claimed to confirm that assumption. But a new paper by Philip Davis and his colleagues at Cornell University suggests that the citation effect may not be there after all (Davis et al. 2008).
We desperately need objective, quantifiable evidence that OA does what it claims to do, rather than taking these things as a matter of near-religious faith. Only with hard evidence can we refute publisher claims that OA is evil, destructive, and unnecessary, and demonstrate to all stakeholders that OA is worthy of further investment and advocacy. Studies of OA's effects, especially those published in journals that real scientists see (as opposed to the "library literature"), help bring the debate into the arena where it matters most: among scientists themselves.
But what happens when the numbers don't support the claims? Any study that calls the efficacy of OA into question will be eagerly seized upon by its opponents and used to attack efforts and policies that are now, after many years, beginning to bear fruit. We have already seen some publishers set up little-used author-pays OA options as a straw man to "prove" that authors don't care about OA and don't want to pay for it. The last thing we need is to give the naysayers more ammunition. Nevertheless, it is probably accurate to say that most authors don't care about OA: not because they think it's a bad thing, but because they remain wholly ignorant of it as a matter of debate and because, for most of them, it's irrelevant since they already have subscription access paid for by their institutions.
Studying the effect of OA in the scholarly communication environment is devilishly tricky: there are many variables and unknowns that cannot be quantified or controlled. Davis et al. have moved a step forward by randomizing the selection of articles made freely accessible and comparing this group to a larger group of subscription-restricted articles in the same set of journals. This design should cancel out the "self-selection" bias, the hypothesis that articles chosen for OA tend to be the higher-quality papers that would garner more citations regardless. Indeed, once this bias was controlled for, the citation advantage disappeared.
Critics have been quick to point out that the time frame of Davis's study was short and that the articles have not been out long enough for their full citation impact to become apparent. Davis has indicated that his team will continue to track the articles for several more years, but even in the interval since the study ended, the initial conclusion has held (Davis 2008). In truth, this kind of citation study takes years. Ideally one would track articles for at least the "cited half-life" of a given discipline, which is measured in years, before concluding with confidence that OA does or does not affect the citation rate. The trouble is that people don't want to wait years for an answer; policy decisions are being made in the here and now.
What researchers are really trying to measure is the added audience of readers who would not otherwise see these articles. The core audience for the articles, as Davis notes, is "highly associated and concentrated among the elite research institutions around the world." These people already have access to the articles, so the OA variable is meaningless to them. Those outside this core audience of readers and authors are an unknown quantity, and they may not be publishing much of their own work, so they do not show up as additional citations in ISI-covered journals. Their presence is, however, clearly evident in significantly higher browsing and download statistics. And to many OA advocates, that is precisely the point: to widen the audience for scholarly literature beyond the traditional confines of top universities and institutes to include laypeople, patients, taxpayers, and students all over the world. Counting citations is not the best way to measure this latter impact of open access.
Davis and his colleagues are to be commended for applying statistical rigor to these arguments, and letting the chips fall where they may. We may not always like the answers, but it is vital that the questions be asked.
Davis, Philip. 2008. Comment in response to Kent Anderson, "Open access doesn't drive citations," The Scholarly Kitchen, August 6, 2008. [Online]. Available: http://scholarlykitchen.sspnet.org/2008/07/31/open-access-doesnt-drive-citations/ [Accessed August 7, 2008].