In the past few weeks, the highly respected magazines Science and the Economist published long investigative reports on important aspects of the current scientific enterprise. Taken together, the picture they drew was rather disturbing.
The Economist, titling its piece ‘Unreliable research’, examined the growing evidence that a substantial fraction of what is published in scientific journals today is wrong. The extent varies greatly from one field to another, but in many areas not only are erroneous results being published but, more dangerously, they do not get checked and corrected afterwards. I hasten to add that no one is accusing scientists of deliberately publishing wrong results. The general complaint, rather, is that science seems to have lost the self-correcting mechanism that made it perhaps the most robust human enterprise in history.
Science, the renowned and influential magazine, addressed the growing trend of “open access” publishing, in which scientists turn to online journals that charge a fee (usually several hundred dollars) to publish a paper. While still implementing a review system (and rejecting a fraction of submissions), these journals make it much easier to publish a paper and make access to it “open” to everyone. This contrasts with the dominant academic publishing paradigm, which relies on very high subscription fees paid by libraries so that their institutions’ researchers can access the papers. Many scientists see in this dominant system a monopoly and a hindrance to the wide dissemination of research results, and so “open access” journals have mushroomed everywhere and in all fields.
The magazine conducted a simple but shocking experiment: A (bogus) paper was concocted with ideas and results bearing no relation to reality and was submitted to 304 open-access journals — 157 of them accepted it! (Critics later insisted that most of those 304 journals were known to be low-quality ones and not all open-access journals are so bad.)
More importantly, the Economist has decried the high rate of wrong results that get published (in both publishing systems) and never get corrected.
Why is this happening? For several reasons. First and foremost, because the “replication” of research, where another team redoes an experiment and either confirms its results or fails to, is rarely done. Because journals are more interested in “positive findings” than in “repeats” that may call into question previous work. And because it is often very difficult to redo another team’s research, as full details are rarely given. Furthermore, because academia has established a “publish or perish” culture, researchers are pushed to publish often, even when they themselves are far from convinced of the validity of their “findings”.
The Economist tells us that in the few cases where systematic attempts were made to replicate a set of results on a specific topic, the rate of confirmation of the earlier “findings” ranged between 10 and 25 per cent!
That is quite shocking, especially when one recalls that most, if not all, journals conduct (often exhaustive) expert reviews of papers before accepting or rejecting them. And indeed, almost all of the results that were shown to be wrong or impossible to reproduce had been published after such expert reviewing. In fact, reviews of the reviewing process itself have pointed to serious flaws. To be sure, many journals are thorough and highly reputable, but even the highest-rated ones have in recent years published totally erroneous papers.
About 15 years ago, a world-renowned researcher I had been collaborating with went into quite a depression when the data that another team had published and presented at top conferences was recognised as totally wrong. Unfortunately, he had spent several years publishing papers analysing the data and drawing conclusions, all of which were now also wrong.
And I remember when a stunning astronomical observation was announced, with superlative statistical “significance”, only to be shown later to be entirely spurious.
Which brings me to another reason for the current turmoil in the scientific enterprise: most scientists are poorly trained in statistics, if trained at all. Indeed, datasets are now so huge that a deep and sophisticated mastery of statistical tools is necessary to avoid drawing wrong but glittery conclusions. Moreover, scientific models have become so complex (in most fields) that it is easy to get a “fit” to the data, one that seems to make sense and imply some physical mechanism when none is at work.
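To make that last point concrete, here is a minimal, illustrative sketch of my own (not taken from either magazine), written in Python with NumPy: it generates pure random noise, fits an over-flexible polynomial model to it, and still obtains a respectable-looking in-sample fit. The sample size and the polynomial degree are arbitrary choices made purely for illustration.

```python
# Illustrative sketch only: an over-flexible model "fitting" pure noise.
# The sample size and polynomial degree below are arbitrary choices.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(42)

n = 25
x = np.linspace(0.0, 1.0, n)
y = rng.normal(size=n)              # pure noise: no real relationship to x

# Fit a 20th-degree polynomial, far more flexible than the data warrant.
model = Polynomial.fit(x, y, deg=20)
y_hat = model(x)

# The in-sample R^2 typically comes out high, even though there is
# nothing real to explain.
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(f"In-sample R^2 on pure noise: {r2:.2f}")

# Fresh noise from the same process exposes the illusion: the "fit"
# has no predictive power, so this R^2 is usually near zero or negative.
y_new = rng.normal(size=n)
r2_new = 1 - np.sum((y_new - y_hat) ** 2) / np.sum((y_new - np.mean(y_new)) ** 2)
print(f"R^2 against new data: {r2_new:.2f}")
```

The exact numbers vary with the random seed, but the pattern is the trap itself: a seemingly good fit in hand, and no predictive power on fresh data.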
So what should be done about this? First, academics, and even more so their superiors, need to ease the pressure to publish. Indeed, it is much more valuable for everyone to publish fewer but more solid papers than to produce large quantities that only add to the confusing mass of junk already out there. Second, the digital world has changed everything, and the scientific enterprise needs to evolve with it, with more sharing, mutual checking and support in research.
The Economist and Science have done the academic community a great service by raising awareness of the problem. Most importantly, we need to educate the next generation of researchers about the perils of publishing too quickly. The whole scientific enterprise is at stake.
Nidhal Guessoum is a professor and associate dean at the American University of Sharjah. You can follow him on Twitter at: www.twitter.com/@NidhalGuessoum