The problem with peer review today is that there is so much research being produced that there are not enough experts with enough time to peer-review it all. As we look to address this problem, issues of standards and hierarchy remain unsolved. Stevan Harnad wonders whether crowd-sourced peer review could match, exceed, or come close to the benchmark of the current system. He predicts crowdsourcing will indeed be able to provide a supplement to the classical system, hopefully improving efficiency and accuracy, but not a substitute for it.

If, as rumoured, Google builds a platform for depositing un-refereed research papers for “peer-reviewing” via crowd-sourcing, can this create a substitute for classical peer review, or will it merely supplement it with crowd-sourcing? In classical peer review, an expert (presumably qualified, and definitely answerable), an “action editor,” chooses experts (presumably qualified and definitely answerable), “referees,” to evaluate a submitted research paper in terms of correctness, quality, reliability, validity, originality, importance and relevance, in order to determine whether it meets the standards of a journal with an established track-record on those criteria in a certain field.

In each field there is usually a well-known hierarchy of journals, hence a hierarchy of peer-review standards, from the most rigorous and selective journals at the top all the way down to what is sometimes close to a vanity press at the bottom. Researchers use the journals’ public track-records for quality standards as a hierarchical filter for deciding which papers are worth investing their limited reading time in, and which findings are worth risking their even more limited and precious research time to try to use and build upon.

Sifting for quality. Image credit: Precious stones (Pixabay, Public Domain)

Authors are (privately) answerable to the peer-reviewers for their papers, the peer-reviewers are (privately) answerable to the editor, and the editor is publicly answerable to users and authors via the journal’s name and track-record.

Both private and public answerability are fundamental to classical peer review. So is their timing. For the sake of their reputations, many (though not all) authors don’t want to make their papers public before they have been vetted and certified for quality by qualified experts. And many (though not all) users do not have the time to read unvetted, uncertified papers, let alone to risk trying to build on unvalidated findings. Nor are researchers eager to appoint themselves to peer-review arbitrary papers in their fields, especially when the author is not answerable to anyone for following the freely given crowd-sourced advice. And there is no more assurance that such advice is expert rather than idle or ignorant than there is that an unvetted paper is worth the time to read and review in the first place.

The problem with classical peer review today is that there is so much research being produced that there are not enough experts with enough time to peer-review it all. So there are huge publication lags because of delays in finding qualified, willing referees, and getting them to submit their reviews in time. And the quality of peer-review is uneven at the top of the journal hierarchy and minimal lower down, because referees do not take the time to review rigorously.

The solution would be obvious if each un-refereed, submitted paper had a reliable tag marking its quality level: Then the scarce expertise and time for rigorous peer review could be reserved for, say, the top 10% or 30% and the rest of the vetting could be left to crowd-sourcing. But the trouble is that papers do not come with a priori quality tags: Peer review determines the tag.

The benchmark today is hence the quality hierarchy of the current, classically peer-reviewed research literature. And the question is whether crowd-sourced peer review could match, exceed, or even come close enough to this benchmark to continue to guide researchers on what is worth reading and safe to trust and use at least as well as they are being guided by classical peer review today.

Image credit: Hierarchy of Journals, SUNY Geneseo

And of course no one knows whether crowd-sourced peer review, even if it could work, would be scalable or sustainable.

The key questions are hence:

  1. Would all (most? many?) authors be willing to post their un-refereed papers publicly (and in place of submitting them to journals!)?
  2. Would all (most? many?) of the posted papers attract referees? Competent experts?
  3. Who or what decides whether the refereeing is competent, and whether the author has adequately complied? (Relying on a Wikipedia-style cadre of 2nd-order crowd-sourcers who gain authority recursively in proportion to how much 1st-order crowd-sourcing they have done, rather than on the basis of expertise, sounds like a way to generate Wikipedia quality, but not peer-reviewed quality; a minimal sketch of such a weighting scheme follows this list.)
  4. If any of this actually happens on any scale, will it be sustainable?
  5. Would this make the landscape (un-refereed preprints, referee comments, revised postprints) as navigable and useful as classical peer review, or not?
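To make point 3 concrete, here is a minimal, purely hypothetical sketch (in Python) of the kind of recursive-authority weighting described there. The function name, data structures and parameter values are invented for illustration and do not describe any existing or proposed platform.

```python
# Purely illustrative sketch (not an existing system) of the "recursive authority"
# weighting described in point 3: a crowd reviewer's standing grows with the volume
# of 1st-order reviewing they have done and with endorsements from other reviewers,
# weighted by those endorsers' own standing -- never with demonstrated expertise.
# All names, data structures and parameter values are hypothetical.

def authority(review_counts, endorsements, damping=0.85, iterations=50):
    """review_counts: {reviewer: number of 1st-order reviews written}
    endorsements:  {reviewer: [other reviewers who endorsed their reviews]}"""
    reviewers = list(review_counts)
    total = sum(review_counts.values()) or 1
    # Base score: each reviewer's share of all crowd-reviewing activity (pure volume).
    base = {r: review_counts[r] / total for r in reviewers}
    score = dict(base)
    for _ in range(iterations):
        new = {}
        for r in reviewers:
            # The recursive step: endorsements count for more when they come
            # from reviewers who already carry high authority themselves.
            endorsed = sum(score[e] for e in endorsements.get(r, []) if e in score)
            new[r] = (1 - damping) * base[r] + damping * endorsed
        norm = sum(new.values()) or 1
        score = {r: s / norm for r, s in new.items()}
    return score

# Toy example: alice, the most prolific and most endorsed reviewer, ends up
# weighted highest -- regardless of whether she actually knows the field.
weights = authority(
    review_counts={"alice": 40, "bob": 3, "carol": 12},
    endorsements={"alice": ["bob", "carol"], "bob": [], "carol": ["alice"]},
)
print(weights)
```

Whatever the details, a scheme of this shape rewards reviewing volume and mutual endorsement rather than demonstrated expertise, which is precisely the worry raised in point 3.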

My own prediction (based on nearly a quarter century of umpiring both classical peer review and open peer commentary) is that crowd-sourcing will provide an excellent supplement to classical peer review, but not a substitute for it. Radical implementations will simply end up re-inventing classical peer review, but on a much faster and more efficient PostGutenberg platform. We will not realize this, however, until all of the peer-reviewed literature has first been made open access. And for that it is not sufficient for Google merely to provide a platform on which authors can post their un-refereed papers, because most authors don’t even deposit their refereed papers in their institutional repositories until doing so is mandated by their institutions and funders.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Author

Stevan Harnad currently holds a Canada Research Chair in cognitive science at Université du Québec à Montréal (UQAM) and is professor of cognitive science at the University of Southampton. In 1978, Stevan founded Behavioral and Brain Sciences, of which he remained editor-in-chief until 2002. He also founded CogPrints (an electronic eprint archive in the cognitive sciences hosted by the University of Southampton) and, in 1998, the American Scientist Open Access Forum. Stevan is an active promoter of open access.

