Academic publishing has already reached a point where too much material of too little substance is being published, and this trend is continuing.1 The ostensible reason for academic publishing is to communicate useful information to academic peers. But of all papers published in the top scientific journals (i.e., those listed in the citation index ISI Web of Knowledge)—7,279 science and social science journals from 2002 through 2006—only 40.6 percent were cited at least once in the five years following publication.2 More recent compilations with large databases indicate much the same proportions.3 Moreover, evidence suggests that social science papers are cited at a significantly lower rate. And it should be noted that this figure includes self-citations: deleting these might lower it considerably. In 1981, the total number of journals was 74,000, and by 1990 that number had risen to 108,500.4 By 2003, the total reached 172,000.5 And we regularly see new journals created in our fields every year. While we have no citation statistics for the large group of lesser journals, it is a reasonable assumption that they are cited at a much lower rate than the 40.6 percent indicated above. We believe it is reasonable to ask whether some of these journals should continue, or at least whether libraries should continue to purchase them. This is one area where cost/benefit analysis seems to play no role. One academic at MIT opined that "[i]f the bottom 80% of the literature 'just vanished,' I doubt the scientific enterprise would suffer."6
The huge expansion of academic publishing has seen a commensurate increase in the number of journals, issues, and pages produced, but with exponential increases in costs. For example, at the UCLA libraries the number of serials increased by 40 percent during the period 1980 to 2000, while annual subscription costs increased by 1,200 percent to a staggering $5.8 million (see figure 1). Many university libraries have been forced to cancel a great number of subscriptions or otherwise reduce access to journals. One proposed solution has been to expand online journal production. We have grave misgivings about expanding modes of scholarly output, however, whether print or electronic. Too much literature already exists, much of it weak and redundant, and much of that weak and redundant work, we believe, is encouraged by the widespread de facto policy of academic reward based on bulk rather than on quality. This policy is unethical, is the root of many problems, and should be changed to one that encourages quality and discourages quantity. While our experience is more with science, engineering, medicine, social science, and management, it appears that the humanities have much the same problem. A recent study by Emory English professor Mark Bauerlein, for example, suggests that many articles and monographs in English literature are hardly used.7
We view this glut of unutilized and even inconsequential literature as mostly a function of reward systems in universities, research institutes, and funding agencies. Indeed, scholarly publishing may be more about promoting scholars than promoting scholarship.8 There has long been the notion that “deans can’t read but they can count.” Once confined to research universities, this culture is now prevalent in most institutions requiring publication. The problem is passed to future generations as the urge to publish filters down to trainees such as graduate students and postdoctoral fellows. Just as faculty publications are counted as criteria for promotion, students and fellows need to publish to compete in securing their next positions. In some departments at one of our institutions (UCLA), it is actually mandated that a graduate student have two or even three first-author papers published or in press before the dissertation can be approved. The faculty member responsible for these students thus feels a double pressure to proliferate papers from his research group. And as any economist knows, production follows subsidy, the subsidy in this case being academic rewards. This issue has been addressed by several academics, including one of us.9
Although the word “crisis” is much overused, we see this glut as already a crisis. Right now there is only a relatively moderate surge, mostly from Europe and North America, but a tsunami of academic papers from China and India written in English is approaching.10 In 2007, the number of articles from mainland China was 203,016, an increase of 340 percent over the number published in 2000, and this figure is expected to grow as China and India establish world-class universities.11
The downsides to the literature glut are manifest:
- The massive refereeing load that all qualified academics carry. Because there is often so much to review, we suspect that the quality of refereeing is adversely affected. And because of the load, refereeing is now often passed on to less qualified people, even to students.
- The mountain of redundant and often useless reading one must do to research any topic.
- The number of dubious/disputable ideas that are published. Because there is far too much to read and carefully consider in contemporary literature, questionable ideas too frequently find their way past referees into print and go unchallenged by immediate commentary in subsequent issues of the journal. We see this dialectic as critical to the search for truth, but some journals do not encourage or even allow commentary. Once an erroneous concept gets into the literature and garners many citations, it becomes much more difficult to correct. And that creates even more glut. For example, an extremely high estimate of the soil erosion rate in Europe appeared in Science. Before it could be corrected, several investigators unknowingly utilized this highly erroneous rate in their own science projects. Close examination showed that the rate was derived recklessly and should have been caught by qualified referees.12
- The financial load on libraries to make all this “information” available to scholars. Continuing to use the present system will mean that libraries will have decreasing resources and some really significant publications may no longer be available (see figure 1).
- The neglect of the longstanding qualified literature. Because of the headlong rush to publish new material, older literature is not being properly perused. Thus the wheel is continually being reinvented, or perhaps better stated, old wine is being put in new bottles. This lack of attribution to earlier work comes not so much from foolery or knavery as simply from less available time to sift through the extant piles of literature, especially material not yet available electronically.
- The present system consumes vast amounts of paper. Moreover, the cost of transporting, handling, and properly storing this mass is considerable. The present system is environmentally irresponsible.
- Most important, the present culture breeds an entrepreneurial careerism, and thence a cynicism that is inimical to the true academic enterprise. Indeed, we find the present culture to be pseudointellectual because it systematically diverts intellectual activity into more visible but less productive channels.
In short, we find the present system to be inefficient, irrational, unfair, outrageously expensive, and environmentally irresponsible—in two words, unscholarly and unethical.
The entrepreneurial enterprise mentioned above deserves more discussion. It is interesting that some academics who so deplore the entrepreneurial spirit in capitalistic economies can be so entrepreneurial in their own endeavors. And it’s not just for the love of knowledge; it’s often for private gain. As formulated now, the goal of getting many articles into print often becomes more important than making a real contribution. It is thus understandable that new assistant professors tend to ask “How can I get eight (or nine or twelve) publications in the next five years in order to get promotions, tenure and pay increases?” rather than “How can I contribute to scholarship in my field?”
A real contribution to a field often questions conventional wisdom, arguing that some well-established scholars may be in error. Such articles can be difficult to get past referees and editors. Juan Miguel Campanario has documented many Nobel Prize laureates whose papers were rejected several times before being accepted.13 These examples range from Svante Arrhenius (1903) to Günter Blobel (1999), and the problem often obtains even for lesser mortals who submit groundbreaking papers.
Methods of producing many papers are well known. The most common is to include in a single paper just enough contribution to be recognized as such—just enough to be reluctantly accepted. These articles are sometimes referred to as “Least Publishable Units” (LPUs); multiple papers are thus required to deliver the full message. Rather than being challenged by the work, leaders in the field may in fact be patronized with citations and fawning comments to obtain their favor, and the paper is then fleshed out to reach a respectable length. Sequential papers often contain much overlap, a device known as “shingling,” and are usually sent to different journals, sometimes simultaneously. The route to having many papers published can be a low road.14
The increasing tendency toward group authorship of papers presents particular problems. While we appreciate the need for teamwork in many instances and applaud the synergy that may accompany it, we recognize that authors are often added as a “courtesy” where an acknowledgment might be more appropriate. No matter how many authors are listed, however, we must insist that no paper is worth more than 100 percent. That is, the fractional parts attributed to each author of a paper can add up to no more than one. We sometimes hear empty praise on the order of “X has two hundred publications,” whereas X actually may have only two hundred fractional parts of papers, the sum total being far less than two hundred—and conceivably a very small fraction of that in some instances. However, the cachet of the larger number sometimes overawes academic promotion committees and leaves scholars who write papers on their own at a serious disadvantage.
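To make the arithmetic concrete, here is a minimal sketch of such a fractional tally. The equal split among co-authors and the publication counts are our illustrative assumptions, not figures from any actual case.

```python
# Fractional-credit tally: each paper is worth at most 1.0 in total, so a
# co-author's share of an n-author paper is counted here as 1/n.
# (Equal splitting is an illustrative assumption; a committee could weight otherwise.)

def fractional_credit(author_counts):
    """author_counts: number of authors on each paper the scholar co-wrote."""
    return sum(1.0 / n for n in author_counts)

# Hypothetical scholar "X" with two hundred nominal publications, mostly team-written:
papers_by_x = [1] * 10 + [4] * 90 + [10] * 100
print(len(papers_by_x), "nominal papers")                              # 200
print(round(fractional_credit(papers_by_x), 1), "fractional papers")   # 42.5
```

On such a tally, the much-advertised two hundred publications amount to only a few dozen paper-equivalents.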
Much is made, sometimes vacuously, of refereed papers in academia. The fact that a paper is refereed does not necessarily make it good: indeed, there is strong scientific evidence to the contrary.15 While we hold Science and Nature as role models, poor papers can appear in those publications as well, and classic papers sometimes appear in grey literature. An example in physical geography is a groundbreaking study, published in a relatively obscure government document, that changed the way we look at stream processes.16 Most journals require only two referees, and there is always a good chance that they may lack sufficient knowledge in the required area, be too busy to give the paper their full consideration (as mentioned above), or be biased. The latter point is underlined by the recent disclosures of “Climategate.”17 When the majority of academics in a discipline or field hold a particular view, they may conspire to exclude contrary findings. Indeed, where groupthink and political correctness dominate, a double standard for accepting academic papers for publication may exist.18 The true worth of a paper is the contribution it makes over a longer period of time, and that’s where citations prove so valuable as a measure of significance for more senior academicians.
Because most of the academic enterprise is supported by tax dollars, student fees, and private donations by the public, we believe that as academics we have an obligation, a social contract even, to deliver new knowledge in a rational, reliable, and cost-effective manner. All of this takes on more gravity in the current fiscal crisis in academe. Far beyond the costs of buying marginal journals, universities invest huge amounts to subsidize research (release time, travel, etc.). Perhaps it would be better for those not producing viable research to teach more. And perhaps some institutions should emphasize teaching rather than research.
It is now fashionable to blame commercial publishers such as Elsevier and Springer (the publisher of this journal) for the glut and the expense, but we see this as a symptom of the problem rather than the problem itself, which is an artificial market for often substandard products.19 Commercial publishers are providing outlets for this flood and doing it well. The limited clientele for many of these highly specialized journals means that the unit cost must be high, another economic reality. Journals like Academic Questions, which come from dues-subsidized and nonprofit professional associations, are often thought to be cheaper, but they, too, are becoming expensive. For example, a subscription to the Journal of Geophysical Research, published by the American Geophysical Union, costs about $6,000 per year. However, we do not give commercial presses a clean bill of health. In conjunction with obtaining the copyright for papers, they often gouge libraries with their monopolistic practices.
The good news in this otherwise dismal picture is that at least two basic quantitative measures of the citation visibility of established journals (as well as of the researchers themselves) now exist. The most important is the Citation Impact Factor from the Institute for Scientific Information (ISI). By this index—essentially an average of how many times an article is cited—Nature and Science score about 30, while most major disciplinary journals score perhaps 1 to 2. But the vast majority of journals score below 1.0, and some are hardly visible by this index. ISI also has another journal index, a “citation half-life” score that indicates the duration of viability of articles—are they classic or ephemeral? Other indices are available, but the ISI measures are the best known, so we will use them for this discussion.
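For readers unfamiliar with the index, the sketch below illustrates the standard two-year impact-factor arithmetic: citations received in a given year to the items a journal published in the two preceding years, divided by the number of those items. The journal names and counts are hypothetical, chosen only to reproduce the rough scores quoted above.

```python
# Two-year journal impact factor: citations in year Y to articles published in
# years Y-1 and Y-2, divided by the number of articles published in those years.
# All journals and figures below are hypothetical, for illustration only.

def impact_factor(citations_to_prev_two_years, articles_in_prev_two_years):
    return citations_to_prev_two_years / articles_in_prev_two_years

journals = {
    "Flagship weekly (Science/Nature-like)": (25_500, 850),  # ~30
    "Major disciplinary journal":            (600, 400),     # ~1.5
    "Narrow specialty journal":              (45, 120),      # ~0.4
}

for name, (citations, articles) in journals.items():
    print(f"{name}: impact factor {impact_factor(citations, articles):.1f}")
```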
As with any quantified measure of human ability and accomplishment, there are strong objections to the use of ISI ratings.20 We quite understand that these measures of journal quality are imperfect (e.g., self-citations, bias by academic specialties, bias for trendy topics such as “critical studies”), but refinements and improvements continue.21 However imperfect these numerical measures may be, they are far superior to mere opinions that have too often been colored by self-interest and petty biases. In the departmental setting, such dubious criteria were usually those of senior colleagues who made the tenure and promotion decisions concerning younger faculty and who advocated the often obscure journals in which the seniors themselves had published.
After ISI ratings became available, it did not escape our notice that there was often little correlation between what ISI indicated to be prestigious and what had theretofore been perceived as prestigious in our departments and universities. Indeed, younger faculty have occasionally been heavily penalized for not publishing in the “right” journals or for choosing the more ethical path of a few meaningful publications in the truly best journals. And conversely, we could point to the thick curricula vitae of senior academics who accumulated publications that propelled them to high rank but contributed very little to the advancement of the academic enterprise.
While we do not endorse discontinuing the publication of journals based solely on ISI ratings, such ratings should be a strong influence. Utility is a commonly accepted criterion and we fail to see much utility in journals that are read, or at least cited, by few people.
It has been widely suggested that new online open access journals will alleviate the situation by taking papers away from the commercial publishers. We doubt it. Again, consider the proliferation of new print journals over the past two decades, which does not seem to have caused a dearth of submissions to already existing journals. We believe that academics can and will supply papers for a seemingly infinite number of journals, but at a cost to the quality of the articles published. More ominous are the liabilities peculiar to e-journals that seem difficult to overcome. One is the substantial easing of length restrictions due to the elimination of print and paper costs. Science and Nature get much of their clout from the brevity and tightness of article organization and writing, qualities that we strongly suggest other journals emulate. We also wonder if such e-journals will be perceived as “phantom” and ephemeral (indeed, a recent one is actually titled Ephemera). How much time will referees be willing to spend adjudicating such journals, especially if, in the absence of space constraints, their articles become as bloated as we fear they will?
There will then be the problem of rating e-journals, which might take a somewhat different form than rating print journals. Until then, what scholar would want to invest an outstanding paper in such a risky venture? We also note that e-journals are not necessarily cheap. Authors in the online journal PLoS Biology were charged $1,500 for each article in 2003,22 and the current price has increased to $2,850 per article.23 Such “author pays” schemes have severe long-term financial liabilities, including the fact that a sustainable per-article charge will soon rise to $8,000 to $25,000, making the whole situation much worse than at present.24 Finally, there is the problem of archival permanence, whether of e-journals or of regular print journals stored by libraries only in electronic form.
What to do? A new academic publishing culture must arise in which quality looms foremost. On the demand side, one way of dealing with this is to consider ISI journal rankings at promotion time, especially at tenure. Another method (used by Harvard Medical School) of dealing with tenure and major promotions—and one we greatly favor—is to evaluate the candidate’s best, say, five publications, or perhaps a maximum of one hundred pages of writing, for the period considered. This would thus encourage candidates to take what in biological reproduction is termed a “K” strategy: devoting attention to a few publications and ensuring that they are of the highest quality possible rather than producing many lesser pieces. For more senior candidates, citation counts, both journal and personal, could be added to the other evidence considered.
A supplementary route might be closer to the supply end of the problem: using the buying power of a large university library system as a lever. The monopolistically controlled costs of electronic subscriptions are rising steeply. Universities and libraries are experiencing their worst fiscal crisis in decades. The financial glory days of the past decade may never return. Using the massive University of California (UC) library system as an example, we present a scenario that might bring pressure to bear on the publishers of academic journals. A first step would be for UC librarians to meet with their chancellors with a plan to have the UC system lead the charge toward a new model of journal publication. The Science/Nature model could serve as the starting point—as already noted, these journals publish very short, dense articles (two to six pages) and have extremely high ISI ratings. In sum, they are very significant. A journal that prints shorter articles but carries a vastly higher impact per page would therefore be the driving concept behind this model, given the reality of rising production costs during a time of financial crisis. Then, the UC system would:
- Focus on the natural sciences first;
- Rank all journals by a “page impact score” (PI score): [20 – (normal article page length)] × ISI score; e.g., for Science: (20 – 6) × 30 = 420 (a sketch of this calculation follows the list);
- Cancel subscriptions to the bottom 25 percent immediately, giving “advice” that the best way for them to improve their PI score is to (1) cut article length and adopt the Science/Nature model, and (2) move cut material into required appendices available online at the journals’ websites;
- Start working with various library and professional associations to adopt the UC approach;
- See the minimum score for UC subscriptions rise as more and more journals work to improve their PI score;
- Revise, in conjunction with academic senate representatives and administration, promotion criteria to emphasize publications in high PI-score journals;
- See, with possible variations, the Science/Nature model and PI-score approach embraced by the social sciences and even the humanities.
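A minimal sketch of the proposed ranking follows. The Science figures (six-page articles, an ISI score of about 30) come from the text above; the other journals and numbers are invented for illustration, and the 25 percent cutoff is the one proposed in the list.

```python
# "Page impact" (PI) score proposed above: [20 - normal article page length] x ISI score.
# Science's figures (6 pages, ISI ~30) are taken from the text; the others are hypothetical.

def pi_score(normal_page_length, isi_score):
    return (20 - normal_page_length) * isi_score

journals = {
    "Science":                    (6, 30.0),   # from the text: (20 - 6) x 30 = 420
    "Major disciplinary journal": (15, 1.5),   # hypothetical
    "Narrow specialty journal":   (18, 0.4),   # hypothetical: long articles, low ISI score
}

ranked = sorted(journals, key=lambda name: pi_score(*journals[name]), reverse=True)
for name in ranked:
    pages, isi = journals[name]
    print(f"{name}: PI = {pi_score(pages, isi):.0f}")

# Under the proposal, a library system would cancel the bottom quartile by PI score and
# advise those journals to shorten articles and move cut material to online appendices.
```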
A coalition of university systems across several states might produce greater and faster results.
To those who might say that few, if any, institutions or agencies are taking this high road, we ask, why not begin here? We believe this culture would spread and, in the longer term, reduce the number of publications and therefore journals produced and acquired by libraries. But the great advantage of a new culture would be the greatly enhanced quality of scholarly articles and the ethical atmosphere in which we work. We believe our views are widely shared. The authors of this piece come from four very different schools: letters and science, medicine, management, and engineering. The present malaise afflicts us all.
We thank Sharon Farb and Cynthia Shelton of the UCLA Library and also the UCLA Library Committee on Scholarly Communication for critical readings and commentary.
Figure 1. Number of serials and total cost of serials, UCLA libraries, 1975–2001.
From 1988 to 2001, the number of serials held steady because of subscription curtailments but the cost approximately doubled and continued to climb. (Statistics accessed January 15, 2007, Association of Research Libraries, University of Virginia, http://fisher.lib.virginia.edu/cgi-local/arlbin/arl.cgi?task=setupgraph.)
Stanley W. Trimble is a professor of geography at the University of California at Los Angeles, Los Angeles, CA 90095-1524; [email protected]. He is the co-author with Andy D. Ward of Environmental Hydrology, 2nd ed. (CRC Press, 2004) and editor of The Dekker Encyclopedia of Water Science (CRC Press, 2008). From 1996 to 2006 he was joint editor-in-chief of the Elsevier journal Catena. He is presently writing an environmental history of the upper Midwest.
Wayne W. Grody, M.D., Ph.D., is Professor of Pathology & Laboratory Medicine, Pediatrics, and Human Genetics at the David Geffen School of Medicine at UCLA, Los Angeles, CA 90095-7035; [email protected]. He is the director of the Diagnostic Molecular Pathology Laboratory within the UCLA Medical Center, one of the country’s first facilities to offer DNA-based tests for diagnosis of a variety of genetic, infectious, and neoplastic diseases. He is an attending physician in the Department of Pediatrics specializing in the care of patients with or at risk for genetic disorders. He has been one of the primary developers of quality assurance and ethical guidelines for DNA-based genetic testing for a number of governmental and professional agencies.
Bill McKelvey is Professor of Strategic Organizing and Complexity Science at the UCLA Anderson School of Management, Los Angeles, CA 90095-1481; [email protected]. His Organizational Systematics (University of California Press, 1982) remains the definitive treatment of organizational taxonomy and evolution. He is co-editor, with Peter Allen and Steve Maguire, of the Sage Handbook of Complexity and Management (Sage, 2010).
Mohamed Gad-el-Hak has taught and conducted research at the University of Southern California, the University of Virginia, the University of Notre Dame, the Institut National Polytechnique de Grenoble, the Université de Poitiers, the Friedrich-Alexander-Universität Erlangen-Nürnberg, the Technische Universität München, and the Technische Universität Berlin. He has lectured extensively here and overseas. He is currently the Inez Caudill Eminent Professor of Mechanical Engineering at Virginia Commonwealth University, Richmond, VA 23284; [email protected]. He was named the Fourteenth ASME Freeman Scholar in 1998. In 1999 he was awarded the Alexander von Humboldt Prize and in 2002 was named ASME Distinguished Lecturer and inducted into the Johns Hopkins University Society of Scholars.
A short version of this paper by Mark Bauerlein, Mohamed Gad-el-Hak, Wayne W. Grody, Bill McKelvey, and Stanley W. Trimble appeared under the title “We Must Stop the Avalanche of Low-Quality Research” in the June 13, 2010, Chronicle of Higher Education, http://chronicle.com/article/We-Must-Stop-the-Avalanche-/65890/.
[1] Peter Jacsó, “Five-Year Impact Factor Data in the Journal Citation Reports,” Online Information Review 33, no. 3 (2009): 603–14.
[2] Philippe Baveye, “Sticker Shock and Looming Tsunami: The High Cost of Academic Serials in Perspective,” Journal of Scholarly Publishing 41, no. 2 (2010): 190–214.
[3] David P. Hamilton, “Publishing by—and for?—the Numbers,” Science 250, no. 4986 (December 7, 1990): 1332. Interestingly, for the period 1981 to 1985, Hamilton identified 4,500 top journals and found that only 45 percent of the articles were cited in the five years after they were published (as compared to 40.6 percent found by Jacsó for 2002 to 2006). Thus, different authors (Jacsó, Baveye, and Hamilton) at different times have found similar citation patterns in what they consider to be the best journals. The significance of the decrease in articles cited between the two periods from Hamilton and Jacsó is uncertain but certainly suggests that matters are not improving.
[4] Ulrich’s International Periodicals Directory (New Providence, NJ: R.R. Bowker, 2003).
[5] Hamilton, “Publishing by the Numbers,” 1332.
[6] Mark Bauerlein, “Professors on the Production Line: Students on Their Own” (Working Paper 2009–01, Future of American Education Project, American Enterprise Institute, Washington, DC, 2009).
[7] Ursula M. Franklin, “Does Scholarly Publishing Promote Scholarship or Scholars?” Scholarly Publishing 24 (July 1993): 248–52.
[8] Mohamed Gad-el-Hak, “Publish or Perish—An Ailing Enterprise?” Physics Today 57 (March 2004): 61–62, http://www.people.vcu.edu/~gadelhak/Opinion.pdf.
[9] Baveye, “Sticker Shock,” 203.
[10] Ibid.
[11] The complete story of the error is told in John Boardman, “An Average Erosion Rate for Europe: Myth or Reality,” Journal of Soil and Water Conservation 53, no. 1 (January 1998): 46–50.
[12] Juan Miguel Campanario, “Rejecting Nobel Class Articles and Resisting Nobel Class Discoveries,” http://www2.uah.es/jmc/nobel/nobel.html; Editorial, “Coping with Peer Rejection,” Nature 425 (October 16, 2003): 645–46, http://www.nature.com/nature/journal/v425/n6959/full/425645a.html.
[13] Thomas H. Berquist, From the Editor’s Notebook, “Duplicate Publishing or Journal Publication Ethics 101,” American Journal of Roentgenology 191, no. 2 (2008): 311–12, http://www.ajronline.org/cgi/content/full/191/2/311.
[14] Hans-Dieter Daniel, “Publications as a Measure of Scientific Advancement and of Scientists’ Productivity,” Learned Publishing 18, no. 2 (April 2005): 143–48.
[15] Stanley W. Trimble, “Classics in Physical Geography Revisited: S.C. Happ, G. Rittenhouse, and G. Dobson, ‘Some Principles of Accelerated Stream and Valley Sedimentation,’ U.S. Department of Agriculture, Technical Bulletin 695,” Progress in Physical Geography 32, no. 3 (2008): 337–45.
[16] Stanley W. Trimble, “An Environmental Scientist Parses Climategate: Why Stoop if the Science Is Solid?” Academic Questions 23, no. 1 (Spring 2010): 54–56.
[17] Stanley W. Trimble, “The Double Standard in Environmental Science,” Regulation 30 (Summer 2007): 16–22, http://www.cato.org/pubs/regulation/regv30n2/v30n2-1.pdf.
[18] Stephen P. Boyd and Andrew Herkovic, Crisis in Scholarly Publishing: Executive Summary, report, Subcommittee of the Stanford Academic Council Committee on Libraries, May 18, 1999, http://www.stanford.edu/~boyd/schol_pub_crisis.html.
[19] M.H. MacRoberts and Barbara R. MacRoberts, “Problems of Citation Analysis: A Critical Review,” Journal of the American Society for Information Science 40, no. 5 (1989): 342–48; T. van Leeuwen, H.F. Moed, and J. Reedijk, “Critical Comments on ISI Impact Factors: A Sample of Inorganic Molecular Chemistry Journals,” Journal of Information Science 25, no. 6 (1999): 489–98; David Colquhoun, “Challenging the Tyranny of Impact Factors,” Nature 423 (May 29, 2003): 479; Editorial, “Not So Deep Impact,” Nature 435 (June 23, 2005): 1003–1004, http://www.nature.com/nature/journal/v435/n7045/full/4351003b.html.
[20] Aline Solari and Marie-Helene Magri, “A New Approach to the SCI Journal Citation Reports. A System for Evaluating Scientific Journals,” Scientometrics 47, no. 3 (2000): 605–25; Wolfgang Glaenzel and Henk F. Moed, “Journal Impact Measures in Bibliographic Research,” Scientometrics 53, no. 2 (2002):171–93.
[21] David Malakoff, “Opening the Books on Open Access,” Science 302 (October 24, 2003):
[22] Baveye, “Sticker Shock,” 197.
[23] Ibid.