In a clever sting operation, Science magazine revealed recently that many of the open access science journals that have proliferated in the last few years are not what they ought to be.
As you can read in much more detail here, the magazine worked with researchers to create a sham article, convincing in many ways but fundamentally flawed both ethically and in experimental design. Any competent peer reviewer should have caught the problems and rejected the article.
Instead, over half of the 304 journals that received the paper accepted it.
Some open access journals passed the test, such as PLOS ONE, which Science says was the only journal to flag the paper's potential ethical problems and which ultimately rejected it for poor scientific quality. Still, plenty of folks who should know better were taken in (ahem: Science didn’t send the paper to Open Medicine).
The list of journals that received the article was very broad, including titles published by big players like Sage and Elsevier as well as by institutions such as Kobe University. The journals were selected from the respected Directory of Open Access Journals and from a list of ‘predatory’ journals compiled by University of Colorado librarian Jeffrey Beall. Beall’s markers for a ‘predatory’ journal are idiosyncratic: for example, he apparently considers poor English a sign of an untrustworthy journal, thus putting many majority world publications on his list, perhaps more than deserve to be there (other criteria include a poorly defined editorial hierarchy and undisclosed author charges). Still, his concerns do provide some clues to the eventual problems in many of Science’s chosen journals.
A peer review process is fundamental at Open Medicine, as at any respectable journal, yet it was apparently lacking in some (though not all) of the journals put to the test. Clearly, it must also be a competent peer review process.
Other issues: we have frequently pointed out the problems with subscription-only, impact-factor-focused journals. For example, they are highly susceptible to commercial influence through their reliance on medical device and pharmaceutical advertising. Many open access journals (including Open Medicine) instead rely on payments from authors whose articles are accepted to recoup some costs of publication—indeed, this is the standard model for open access publishers.
As Science’s investigation shows, open access journals are not immune to many of the same flaws as traditional journals. Scientific rigour and impeccable ethics are critical, whether a journal is open access or not.
It’s worth taking a look at this article in the Guardian in which the author suggests that the problem identified by the Science sting is not that open access is a flawed concept, but that peer review processes, in general, are just not very good. He promotes what’s called (of course) Open Evaluation, an emerging consensus about ways to improve quality control through an ongoing, post-publication process.
Food for thought.