A Crisis in Communication
DOI: https://doi.org/10.4322/cto.2014.088
In 1963, at the end of his 20-year tenure as Editor of The Lancet, Theodore “Robbie” Fox gave a series of three lectures at the London School of Hygiene and Tropical Medicine. Two years later, the transcripts of those lectures were published as a book entitled Crisis in Communication: The Functions and Future of Medical Journals. In the book, Fox makes a stark prediction: “A day will come when journals will be superseded as a means of publishing new research.” (FOX, 1965). As the fiftieth anniversary of the book’s publication draws near, it is worth reassessing Fox’s prediction. Will journals be superseded 50 years from now?
At first glance that seems unlikely. The first scientific journal, Philosophical Transactions, was published in 1665. Three centuries later, when Fox wrote Crisis in Communication, there were 6,000 medical journals in existence. Now, Scopus, one of the largest indexes of scholarly output, includes around 21,000 journals in its database: it has indexed 21 million articles published between 1823 and 1996, and a further 33 million records published within the last two decades. Indeed, according to a recent analysis, scientific output continues to increase by around 8-9% per year (BORNMANN; MUTZ, 2014).
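To put that growth rate in perspective, a simple back-of-the-envelope calculation (an illustration based on compound growth, not a figure taken from the cited study) shows that output increasing at a constant 8-9% per year doubles roughly every eight to nine years:

$$t_{\text{double}} = \frac{\ln 2}{\ln(1 + r)} \approx \frac{0.69}{\ln(1.085)} \approx 8.5 \ \text{years} \qquad (r = 8.5\%)$$

If that rate were sustained, the volume of literature confronting readers and peer reviewers would roughly double every decade or less.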
This increase in the number of articles often comes at the expense of quality, with many papers never being cited. According to one estimate, as many as 12% of papers in the clinical sciences, and 27% of papers in the life sciences, are not cited within 5 years of publication (REMLER, 2014). The drivers for this are clear: academics feel pressured to publish as many papers as possible in order to gain tenure and to secure new research funding. Furthermore, the huge volume of papers being published puts pressure on the academic community in other ways: readers struggle to keep up with the literature, and many journals find it increasingly hard to secure the services of qualified peer reviewers to judge the suitability of papers for publication.
The process of doing science is also changing. Not so long ago, single-author papers were considered essential for career advancement, as they demonstrated the ability to do research independently. Today, large-scale collaborations are the order of the day, with hundreds, sometimes thousands, of researchers working together on expensive, international scientific studies. Jim Gray, a computer scientist, identified this shift in the scientific method, calling it ‘the fourth paradigm’. In his words (GRAY, 2007, p. xix):
The world of science has changed, and there is no question about this. The new model is for the data to be captured by instruments or generated by simulations before being processed by software and for the resulting information or knowledge to be stored in computers. Scientists only get to look at their data fairly late in this pipeline. The techniques and technologies for such data-intensive science are so different that it is worth distinguishing data-intensive science from computational science as a new, fourth paradigm for scientific exploration.
Even small-scale studies often generate large amounts of data, and making these data discoverable and reusable is becoming an increasingly important priority for the scientific community. The rise of the open science movement, in which both the raw data and final publication are made available for easy access and reuse, has opened up opportunities for scientific-data-hosting companies like Figshare and Dryad, as well as for open-access ‘mega-journals’ like PLoS One and Scientific Reports. However, open-access journals, which generate revenue by charging authors an ‘article processing charge’ (APC) to publish their paper, have the potential to make the problem worse, not better, by making it easier for authors to publish their work.
Fox considered journal publishing to be in crisis 50 years ago, but the crisis in communication is arguably far worse now than it was then (FOX, 1965). Funders and academic institutions need to address the current crisis in two ways. First, they need to reward scientists for the quality, not the quantity, of the work that they publish; surrogate markers of quality, such as journal impact factors, are widely perceived to be imperfect ways of judging an individual scientist’s research output, and new metrics are urgently needed. Second, funders and institutions should encourage their researchers to publish much of their work as standardized reports in repositories. These reports would not necessarily be read by human beings; rather, they would hold data and data descriptors in a machine-readable form that allows accurate indexing and meta-analysis by computers. There are early signs that such an approach would be feasible, but it will take time to change 350 years of academic culture. How long, exactly, remains to be seen. After all, as Niels Bohr so eloquently put it: “it is very difficult to predict — especially the future” (MENCHER, 1971, p. 37).
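As a purely illustrative sketch of what such a machine-readable record might contain (the field names, values, and the summarise helper below are hypothetical and do not follow any existing descriptor standard), a minimal data descriptor could be expressed and parsed like this:

```python
import json

# Hypothetical, minimal machine-readable data descriptor. The field names
# are illustrative only and do not follow any particular published schema.
descriptor = {
    "title": "Example clinical measurement dataset",
    "authors": ["A. Researcher", "B. Collaborator"],
    "dataset_doi": "10.1234/example",  # placeholder identifier
    "variables": [
        {"name": "systolic_bp", "unit": "mmHg", "type": "float"},
        {"name": "age", "unit": "years", "type": "int"},
    ],
    "methods": "Automated sphygmomanometer readings, three per subject.",
    "license": "CC0",
}


def summarise(desc):
    """Return a one-line, human-readable summary of a descriptor.

    An indexing or meta-analysis pipeline would read the JSON directly;
    this helper simply shows that the same record can also serve human
    readers when needed.
    """
    variables = ", ".join(v["name"] for v in desc["variables"])
    return "{} ({}): variables = {}".format(
        desc["title"], desc["dataset_doi"], variables
    )


if __name__ == "__main__":
    # Serialise to JSON so the record can be deposited in a repository and
    # re-parsed by software without human intervention.
    print(json.dumps(descriptor, indent=2))
    print(summarise(descriptor))
```

The point of such a record is not its particular fields but that every element is structured and typed, so that computers could aggregate thousands of such deposits without a human reading each one.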
- Figshare and Scientific Reports, which are mentioned in this article, are part of Macmillan Science and Education, the author’s employer. The views expressed here are personal and do not necessarily reflect the opinions of Macmillan Science and Education.