Scientific Publishing Is Killing Science

Here’s how to fix it.

Even the National Institutes of Health acknowledges that biomedical science has a growing credibility problem. Last month, Director Francis Collins and Deputy Director Lawrence Tabak wrote that “the recent evidence showing the irreproducibility of significant numbers of biomedical-research publications demands immediate and substantive action.” The evidence they cite includes a startling 2011 report by researchers at the pharmaceutical company Bayer, who were unable to reproduce the results of nearly two-thirds of a set of peer-reviewed, pre-clinical drug studies. Collins and Tabak cite several reasons why researchers produce so much bad science these days, but among scientists in both academia and industry there is a growing feeling that how we publish science is a big part of the problem. Scientific publishing is failing, and it’s taking science down with it.

Scientific publishing is supposed to facilitate research by carrying out two major functions: quality control and dissemination. Quality control is, in principle, achieved by peer review: a manuscript is checked by two or three expert reviewers, who scrutinize its methods, data, and reasoning. After peer review, a journal makes the research useful to others by disseminating it as printed articles that end up in university libraries, and, more recently, as electronic articles on journal websites.

But researchers worry that academic journals are failing at both of these functions: Peer review is failing to ensure the quality of published research, and new research fails to get into the hands of those who need it, ending up behind journal paywalls after a review process that can take more than a year. To fix these problems, we need to recreate scientific publishing for the Internet, argues Richard Price, the CEO of Academia.edu, a site that offers a suite of social media tools aimed at helping scientists and other academics share their research. Price, who holds a Ph.D. in philosophy from Oxford University, told me that what gets him going in the morning is the possibility of making the world’s research endeavors more efficient and more rigorous by changing how we publish academic work. He believes that an improved publishing process should have three features: 1) speed: Researchers should be able to upload their manuscripts as soon as they’re written; 2) community peer review: Manuscripts should be evaluated by the whole community, not just two or three reviewers; and 3) open access: Papers should be accessible to all who want to read them.

Price and his team are building software tools to do this. Their site allows researchers to share and discuss manuscripts, helps users follow particular topics, and provides ways to measure the impact of individual articles and researchers. It sounds useful, but the critical question is this: Will busy academics, with strong career incentives to stick with traditional publishing, bother to participate? The answer appears to be yes; Academia.edu keeps a running tally of its users (7,689,443 when I last checked). Even more promising, users are not just early-career scientists; Price told me that senior researchers, with established reputations, are also joining in large numbers.

Price’s project is one of several attempts to rethink how science is reviewed and disseminated. Michael Eisen, a biologist at the University of California-Berkeley, has frequently argued that traditional journals, particularly high-profile ones, are hindering science: “Peer review as practiced in 21st century biomedical research poisons science … the mythical veneer of peer review has created the perception that a handful of journals stand as gatekeepers of success in science, ceding undue power to them, and thereby stifling innovation in scientific communication.”

Like Price, Eisen believes that open-access publishing and community, post-publication peer review will play an essential role in fixing science. Eisen co-founded the Public Library of Science (PLOS), a publisher of a series of open-access scientific journals. While most of these journals work like open-access versions of traditional journals, one of them, PLOS One, follows a more radical publication model. With the goal of leaving the evaluation of a paper’s significance to the scientific community, PLOS One promises to “publish all papers that are judged to be technically sound.” Post-publication review can then happen with a set of software tools on the journal’s website.

Other organizations are also experimenting with ways to make post-publication peer review effective. The National Institutes of Health recently created a discussion forum called PubMed Commons, which is conveniently integrated with the widely used PubMed citation database. PubMed Commons requires users to register under their real names, while PubPeer, an independent forum, allows anonymous comments. eLife is both a journal and a communication platform. Backed by major research foundations in the U.S., U.K., and Germany, eLife is experimenting with new approaches to pre-publication peer review, while also developing tools for tracking post-publication peer review.

“The whole model will change,” Price says. Academic journals today, even ones that only exist online, are still based on the centuries-old model of printed and bound matter that gets shipped to subscribers. Manuscripts are still written in paper-mimicking applications like Microsoft Word, and distributed as non-machine-readable PDF files that are laboriously formatted to look like printed articles. As Price envisions it, the future of scientific publishing won’t involve recognizable journals at all. Scientific publishing will look more like GitHub, a successful site that provides tools for programmers to share, review, and find code. Researchers will use a set of integrated software tools to write, format, share, and track their work openly, building their reputations and their careers with community peer review.

The biggest challenge will be to get the scientific community behind these new and unfamiliar ways of evaluating and disseminating research. Very few of my colleagues are happy with the current state of science publishing, but they are not willing to risk their careers by abandoning traditional and widely recognized measures of scientific accomplishment. Given the fierce competition for funding and jobs, it’s difficult to see how informal post-publication peer review and social media cachet could replace the terribly flawed but easily recognized success of publishing in a high-impact journal. But this replacement doesn’t have to happen all at once. If these new experiments in publishing prove that they can deliver better research more efficiently, the scientific community will adopt them.
