Global university rankings offer opportunities for reflection on academic policy, but they struggle to capture local specificities

Rankings are an inescapable reality and, despite their conceptual biases and the many criticisms they attract, they are tools that contribute to the understanding of the state of education and research around the world. This view is sometimes reflected in the academic governance of higher education institutions, some of which have begun to consider aspects of these classifications when designing their own policies.

Almost ten years ago, the mathematician Philippe Vincke, professor at the University of Brussels, anticipated that rankings would become a benchmark for institutional policy-making by universities. On the international scene, some of these classifications have been consolidated and now occupy a prominent place – the Academic Ranking of World Universities (ARWU), published by Shanghai Jiao Tong University; the QS World University Rankings, by Quacquarelli Symonds (QS); the British Times Higher Education (THE) ranking; and, in Brazil, the Ranking Universitário Folha (RUF), which reached its seventh edition this year.

According to Sabine Righetti, in an article published in Repensar a universidade – desempenho acadêmico e comparações internacionais (in English: Rethinking the University – Academic Performance and International Comparisons), a recent book organized by Jacques Marcovitch, professor and former rector of the University of São Paulo, the scientific literature identifies about ten global rankings that assess universities worldwide and some 60 national ones that analyze the performance of institutions within individual countries. The phenomenon of university rankings has also become the subject of growing academic discussion.

Critical approach

The rector of UFMG, Professor Sandra Goulart Almeida, affirms that rankings provide recognition and visibility, but she also stresses that UFMG’s mission is not defined by such instruments. “It is imperative to approach every ranking from a critical perspective, evaluating, in each case, its choice of methodology, its indicators and their impact on the outcomes,” she says. “In addition, we have other parameters, common to public, tuition-free institutions and tied to the country and to Brazilian society, such as the social impact of our activities,” she adds.

In this sense, the Deputy Dean for International Affairs at UFMG, Professor Dawisson Lopes, advocates incorporating extra-mural activities into the criteria employed by rankings, as a way to compose a clearer picture of the social mission of public universities in Latin America. “These are institutions guided by a country-level view, whereas on other continents there are many institutions that are extraordinarily successful in scientific production and technology development but do not show this strong commitment to nation-building. That is simply not taken into account,” he says.

With variations in methodology, each ranking seeks to capture the quality of institutions through different categories, such as research, teaching, employability, institutional relationships with companies, internationalization, reputation and innovation. According to Professor Carlos Basílio Pinheiro, Director of Scientific Production at UFMG’s Research Office, when a university gets a better or worse position than another in a ranking, this implies, for some, “defining what is good and bad, right and wrong, by virtue of the criteria chosen for evaluation”.

“University rankings – with all their conceptual and methodological flaws – end up establishing both the concept of ‘quality university’ and the expected profile of a professor,” argues Basílio. “Like the rankings or not, it will be increasingly difficult for any higher education institution to ignore them, because they are here to stay.”

Faced with this reality, it is the responsibility of higher education institutions to adopt “a moderate, critical and objective look at these indicators, which can be a useful tool for learning about our own universities,” says Professor Dawisson Lopes. “After having evaluated over twenty different rankings, breaking down their factors and understanding their biases and imperfections, I believe they can help us understand how UFMG is seen through others’ eyes,” says the professor of the Political Science Department.

According to Professor Jacques Marcovitch, rector of USP from 1997 to 2001, “international comparisons have become inescapable facts of the 21st century.” In Repensar a universidade, he outlines the competencies of an intelligence unit within the universities of the Brazilian state of São Paulo, which would be responsible for curating indicators, both for monitoring institutional performance and for making international comparisons. This novel bureaucracy, in Marcovitch’s conception, would differ from a mere statistics and academic information office: while the latter is engaged in data collection, calculations and the like, the intelligence unit “tracks, checks and makes available to every member of the academic community a set of institutional performance metrics”.

Recently, UFMG was ranked among the three best Brazilian universities in two different rankings: one international – THE – and one national – RUF. In the RUF, UFMG’s teaching was ranked the best in the country for the fifth consecutive year, among both public and private universities.

Even without taking international rankings as a compass for its operations, UFMG recognizes their importance in the international arena. In 2011, the Federal University of Minas Gerais decided to formally join QS and started responding institutionally to the demands of this agency, which at the time requested data such as citations, number of publications delivered by faculty members, registration of national and international patents, number of computers available on campus (including those in laboratories and for student use), and the amount invested in libraries and extra-mural projects. Currently, most of these classifications rely on external sources, such as the research databases Scopus (Elsevier) and Web of Science (Clarivate Analytics), as well as Lattes/CNPq and the Ministry of Education itself.

According to Professor Carlos Basílio, since 2017 UFMG’s Office of Scientific Production has built a system for collecting information from CNPq, Capes, Fapemig and other research foundations that guarantees a more accurate mapping of the resources invested in research at UFMG. “Our team also monitors databases to flag errors in the attribution of publications and citations to the work of UFMG faculty members. This work has helped refine the data reported to rankings, thereby positively influencing the performance of our university in them,” he explains.

Bias

What a country understands as the mission of a university may be very different from what another nation understands as such. “This depends on aspects such as the historical formation of the higher education system in each country, the way it is financed and how it connects to society,” wrote researcher Sabine Righetti in her doctoral thesis, defended in 2016 at the University of Campinas (Unicamp). Righetti is a researcher at the Laboratory of Advanced Studies in Journalism (Labjor) and the Laboratory of Studies on Higher Education (LEES), both at Unicamp, and currently serves as academic coordinator of the RUF.

“In addition to biases related to ethnocentrism, these classifications also have market-driven and methodological aspects to which one must be attentive,” says Professor Dawisson Lopes. In an article in Repensar a universidade, Professor Luiz Nunes de Oliveira, from the São Carlos Institute of Physics (USP), comments that the methodological diversity on which these external classifications rest stems from “the lack of commitment [of the rankings] to the planning of universities”. He argues that evaluation is a tool embedded in the planning process and therefore should not be seen as an end in itself, and emphasizes that international rankings “produce imperfect portraits of the universities listed therein because they inevitably disregard this principle”.

Professor Ricardo Takahashi, from the Department of Mathematics at UFMG and special advisor to UFMG’s Presidency, also points to methodological weaknesses in the rankings. One of them is the ‘reputation’ criterion, measured through surveys of employers and academics. According to him, this indicator, which carries significant weight, is susceptible to marketing campaigns mounted by universities and can create an artificial movement. “There are institutions structured to carry out marketing strategies in order to raise their reputational indicators,” warns the researcher, who reports receiving correspondence from universities soliciting favorable references.

Professor Carlos Basílio Pinheiro, in turn, sees another problematic aspect in the same indicator: the lead researcher does not always indicate the institutions with which his or her collaborators are affiliated. He also recalls a statistical problem that he considers relevant: the size of an institution influences certain results. The larger a university is, the more likely it is to collect major international awards.
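As a stylized illustration of this size effect (a hypothetical back-of-the-envelope model, not a methodology used by any ranking): if every faculty member is assumed to have the same small, independent chance of winning a major award, the probability that an institution wins at least one grows rapidly with headcount.

```python
# Stylized illustration of the size effect on major awards.
# Assumption (hypothetical): each faculty member independently has the
# same small probability p of winning a major international award.

def prob_at_least_one_award(faculty_size: int, p: float = 1e-4) -> float:
    """Probability that at least one of `faculty_size` members wins."""
    return 1 - (1 - p) ** faculty_size

for size in (500, 2000, 5000):
    print(f"{size:>5} faculty: {prob_at_least_one_award(size):.1%}")
```

Under this toy assumption, an institution several times larger is several times more likely to appear in award-based indicators, even with identical per-researcher quality.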

Another factor whose effects are still difficult to estimate is the attempt of the main rankings to adjust the weight of citation and publication indicators in order to capture the diversity that exists across different areas of knowledge. “There are several methodologies they adopt to reduce the dominance of areas such as physics and biology, which typically have higher raw output, and to give greater weight to the humanities and the arts,” explains Takahashi.
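One common bibliometric device along these lines (sketched here in general terms; the baselines below are invented for illustration and are not any ranking’s actual figures) is field normalization: each paper’s citation count is divided by the average for its field, so that outputs are compared against their own disciplinary baseline.

```python
# Minimal sketch of field-normalized citation impact. Illustrative only:
# the field baselines below are invented, not taken from any real ranking.

FIELD_BASELINE = {  # hypothetical mean citations per paper, by field
    "physics": 20.0,
    "biology": 15.0,
    "history": 2.0,
}

def normalized_impact(citations: int, field: str) -> float:
    """Citations relative to the field average; 1.0 means 'at the average'."""
    return citations / FIELD_BASELINE[field]

# A history paper with 4 citations beats its field baseline by more than
# a physics paper with 30 citations beats its own.
print(normalized_impact(30, "physics"))  # 1.5
print(normalized_impact(4, "history"))   # 2.0
```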

In his opinion, this strategy probably interferes heavily with the ranking of Brazilian and Latin American universities, since output in the humanities is less often published in international journals and indexed in databases, either because it is written in Portuguese or Spanish or because of the field’s tradition of book publication.

As for the hierarchy promoted by rankings, another aspect seems to go unnoticed by the general public: the scoring differences between higher education institutions are usually very small – to the point that they bear no statistical significance. Professor Carlos Basílio illustrates the problem with data from RUF 2018, where there are no meaningful differences in the final grades: USP, in 1st place, scored 97.42, while UFRGS, in 5th, scored 95.86.
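To make the point concrete, using only the two scores cited above, the gap works out to about 1.6% of the scale:

```python
# Relative gap between the RUF 2018 scores cited above (0-100 scale).
usp, ufrgs = 97.42, 95.86  # 1st and 5th places, respectively
gap = usp - ufrgs          # absolute difference in points

print(f"absolute gap: {gap:.2f} points")                # 1.56
print(f"relative gap: {gap / usp:.1%} of USP's score")  # 1.6%
```

Five positions in the table thus correspond to a gap of just over a point and a half – small enough, as Basílio notes, to lack statistical significance.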