At the conference on “The Role of the University in our Time” that I attended last week at Monte Verità in Switzerland, I heard a fair amount about the “Shanghai rankings.” The term was usually uttered in a tone of disdain. When a speaker said, “the Shanghai rankings,” what he usually meant was something like, “the miserable, unjust, superficial, seductive-to-small-minds-but-nonetheless-irresistible Shanghai rankings.” But the outraged adjectives could be left unspoken, for the same reason that it is unnecessary to say “insipid American chocolate.”
I confess that I did not know what the Shanghai rankings are. They sounded vaguely like a system for evaluating the quality of tea. But no. The Shanghai Rankings are a list of universities around the world ranked according to their “academic or research performance.” As it happens, universities in the United States do exceptionally well in this system of ranking. The U.S., with 4.6 percent of the world’s population, has 31.6 percent of the top 500 universities and 54 percent of the top 100 universities. Europe as a whole has 41.5 percent of the top 500 and 33 percent of the top 100.
The Shanghai Rankings are the work of the Center for World-Class Universities at Shanghai Jiao Tong University, and they employ a simple and fairly transparent system of criteria:
Criteria | Indicator | Code | Weight
Quality of Education | Alumni of an institution winning Nobel Prizes and Fields Medals | Alumni | 10%
Quality of Faculty | Staff of an institution winning Nobel Prizes and Fields Medals | Award | 20%
Quality of Faculty | Highly cited researchers in 21 broad subject categories | HiCi | 20%
Research Output | Articles published in Nature and Science | N&S | 20%
Research Output | Articles indexed in Science Citation Index-Expanded and Social Science Citation Index | PUB | 20%
Per Capita | Per capita academic performance of an institution | PCP | 10%
Total | | | 100%
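The weighting scheme in the table amounts to a simple weighted sum. As a rough sketch (not the Center’s actual code), assuming each indicator has already been normalized to a 0–100 scale with the top-scoring institution at 100, a composite score could be computed like this; the example institution and its scores are entirely hypothetical:

```python
# ARWU indicator weights, as given in the table above.
ARWU_WEIGHTS = {
    "Alumni": 0.10,  # alumni winning Nobel Prizes and Fields Medals
    "Award": 0.20,   # staff winning Nobel Prizes and Fields Medals
    "HiCi": 0.20,    # highly cited researchers in 21 subject categories
    "N&S": 0.20,     # articles published in Nature and Science
    "PUB": 0.20,     # articles indexed in SCIE and SSCI
    "PCP": 0.10,     # per capita academic performance
}

def composite_score(indicators: dict) -> float:
    """Weighted sum of normalized indicator scores (0-100 each).

    Missing indicators count as zero.
    """
    return sum(ARWU_WEIGHTS[code] * indicators.get(code, 0.0)
               for code in ARWU_WEIGHTS)

# Hypothetical institution with made-up indicator scores:
example = {"Alumni": 70, "Award": 80, "HiCi": 60, "N&S": 65, "PUB": 90, "PCP": 55}
print(composite_score(example))  # weighted sum of the six indicators
```

The point of the sketch is only that the final rank is a mechanical aggregation of six observable counts, which is what makes the system hard to game directly.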
One can see at a glance that the Shanghai Rankings are a system that doesn’t depend on subjective assessments, reputational attributes, or factors that can be easily gamed. On the other hand, it is a system that gives enormous weight to contributions in the hard sciences: 20 percent of the rank derives from the number of articles that a university’s faculty published in the journals Nature and Science.
While none of the factors can be easily gamed, there is room on the margins for maneuvering. Another 20 percent of the rank derives from the number of citations to articles written by faculty members, and institutions and scholars do sometimes try to inflate these figures. The weight given to faculty members and alumni who win Nobel Prizes and Fields Medals might also be considered a hidden subjectivity: the prizes represent human choices about the relative significance of scientific and mathematical contributions, and they frequently go to individuals whose main work was performed decades earlier and may not reflect much on the current quality of the university.
And as more than one person at the Swiss conference observed, research is carried out by researchers, not by universities. A system of ranking that extrapolates from the work of individuals to the quality of the institution may miss some important considerations—both good and bad.
Once it was clear to me what the Shanghai system was, I recalled that I had indeed run across it from time to time, but I guess it never struck me as all that important or interesting. I don’t think it is much of a focus for American pride or American academic planning. The fretfulness of the European academics at the conference, however, invites further thought, or at least questions. The rankings suggest an arena for international competition, and the European academics felt that their nations, and Europe as a whole, suffer a genuine disadvantage from the preponderance of U.S. universities in the upper regions of the list.
The disparity prompts two kinds of questions: Why does it exist? And what are its consequences? It exists because of the convergence of a variety of historical factors. High on my list is the enormous investment in university-based science research that America made beginning with the Manhattan Project and continuing through the Cold War to the present. Next I’d add the huge federally funded science projects such as the Space Race and the War on Cancer. And I would also add the relative ease with which university researchers in the United States can move their discoveries outside the lab to develop them as commercial products. Our system of intellectual property and market capitalism offers powerful incentives for university scientists.
I was struck at the conference by the near invisibility of these factors to the European academics. They emphasized other considerations, which are indeed real, but in my view are much further down the scale. One such factor is that American researchers have to compete with one another for federal funding, and many proposals don’t get funded at all. Apparently, European scientists enjoy more stable funding from the state—but, free from the anxiety of grant-competition, are less creative in what they propose. No doubt I am blurring details, but that’s the basic picture. The Europeans at the conference (and some Americans too) were vexed at the large amount of support for research at American universities that comes from business and industry. They worried that this corrupts the spirit of science, impedes basic research, conduces to an excess of applied work, and hampers the open communication on which science is based.
And it is all the more aggravating that a system open to such temptations should continue to out-perform systems that shun the temptation. Eight of the top ten universities in the world, as ranked by the Shanghai system, are in the U.S.
Another factor that wove in and out of the discussion at Monte Verità was the epistemological foundation for modern science. The basic division of scholarly opinion on this is between those who view science as essentially autonomous and those who emphasize its cultural fluidity. Those who see science as an intellectually autonomous enterprise emphasize that it accumulates knowledge and is driven by the method of “conjectures and refutations.” It strives for objectivity and offers the individual scientist the ideal of setting aside his biases and extramural commitments in favor of the freedom of inquiry itself. Science in this view can only be marred by the intrusion of political ideology.
Those who emphasize the cultural fluidity of science, by contrast, make much of the choices and directions that scientists happen to pursue at particular historical moments in particular cultural and political circumstances. Science may delude itself into thinking it is above ideology, but to the contrary, the winds of ideology are always in its sails. That being the case, we should welcome what ideology can offer.
Towards the end of the conference, for example, a speaker extolled as the two greatest epistemological advances of our time the collapse of the distinction between facts and theories, and the rise of feminism.
I don’t know to what extent this view prevails in European universities, but I suspect that if it is widespread, we would have to rank it pretty far up among the reasons why science in European universities is less fertile than science in American universities. Not that American universities are immune to such epistemo-pathologies. We too have exponents of the view that science is socially constructed and therefore ought to be even more open to political promptings than it already is. But these American exponents are seldom themselves scientists.
The other question raised by the disparity in the Shanghai rankings between American and European universities is whether in the broad sense it makes any difference. Does it have consequences? The Europeans quite plainly thought so. University research, in their view, drives economic prosperity. Of course, American university presidents and proponents of big science say much the same thing. It isn’t obvious to me that it is true, but I’ll leave that for another time.