Enhancing sustainable development goals or promoting universities? An analysis of the Times Higher Education Impact Rankings

Núria Bautista-Puig (Faculty of Engineering and Sustainable Development, University of Gävle, Gävle, Sweden)
Enrique Orduña-Malea (Department of Audiovisual Communication, Documentation and History of Art, Universitat Politècnica de València, Valencia, Spain)
Carmen Perez-Esparrells (Faculty of Economic Sciences and Business, Autonomous University of Madrid, Madrid, Spain)

International Journal of Sustainability in Higher Education

ISSN: 1467-6370

Article publication date: 16 June 2022

Issue publication date: 19 December 2022


Abstract

Purpose

This study aims to analyse and evaluate the methodology followed by the Times Higher Education Impact Rankings (THE-IR), as well as the coverage obtained and the data offered by this ranking, to determine whether its methodology reflects the degree of sustainability of universities, and whether its results are accurate enough to be used as a data source for research and strategic decision-making.

Design/methodology/approach

A summative content analysis of the THE-IR methodology was conducted, paying special attention to the macro-structure (university score) and micro-structure (sustainable development goal [SDG] score) levels of the research-related metrics. Then, the data published by THE-IR in the 2019, 2020 and 2021 editions were collected via web scraping. All data were then statistically analysed to obtain performance rates, SDG success rates and geographic distributions. Finally, a pairwise comparison of the THE-IR against the Times Higher Education World University Rankings (THE-WUR) was conducted to calculate overlap measures.

Findings

Severe inconsistencies were found in the THE-IR methodology, which offers a distorted view of sustainability in higher education institutions and enables different participation strategies (interested, strategic, committed and outperformer universities). The observed growing number of universities from developing countries and the absence of world-class universities reflect an opportunity for less-esteemed institutions, which might gain reputation based on their efforts towards sustainability, but through a flawed ranking that should be avoided for decision-making.

Practical implications

University managers can take the THE-IR's validity into account when making informed decisions. University ranking researchers and practitioners can access a detailed analysis of the THE-IR to determine its properties as a ranking and use raw data from THE-IR in other studies or reports. Policy makers can use the main findings of this work to avoid misinterpretations when developing public policies related to the evaluation of the contribution of universities to the SDGs. In addition, these results can help the ranking publisher improve some of the inconsistencies found in this study.

Social implications

Given the global audience of the THE-IR, this work contributes to minimising the distorted vision that the THE-IR projects about sustainability in higher education institutions, and alerts governments, higher education bodies and policy makers to take precautions when making decisions based on this ranking.

Originality/value

To the best of the authors' knowledge, this contribution is the first to provide an analysis of the THE-IR's methodology. The faults in the methodology, the coverage at the country level and the overlap between THE-IR and THE-WUR unveil the existence of specific strategies in universities' participation, of interest both for experts in university rankings and in the SDGs.

Citation

Bautista-Puig, N., Orduña-Malea, E. and Perez-Esparrells, C. (2022), "Enhancing sustainable development goals or promoting universities? An analysis of the Times Higher Education Impact Rankings", International Journal of Sustainability in Higher Education, Vol. 23 No. 8, pp. 211-231. https://doi.org/10.1108/IJSHE-07-2021-0309

Publisher: Emerald Publishing Limited

Copyright © 2022, Núria Bautista-Puig, Enrique Orduña-Malea and Carmen Perez-Esparrells.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

The sustainable development goals (SDGs) are a collection of 17 interlinked goals that emerged as a global strategy to solve critical problems (e.g. climate change, poverty) and to emphasise the importance of evidence-based decision-making. The SDGs establish a defined horizon for the millennium development goals (MDGs) by enlarging their scope, reach and engagement in their creation and implementation (Fisher and Fukuda-Parr, 2019). The 2030 Agenda, adopted by the United Nations in 2015 and building on the process launched at the Rio+20 conference in 2012, is a 15-year plan that must be embedded in the decision-making of governments, businesses, universities and society at large. The plan encompasses a roadmap of targets and indicators for aligning both developing and developed countries on the path of sustainable development. The SDGs address policy demands across a wide variety of topics (e.g. health, gender equality), which makes it challenging to develop specific indicators or tools that can value the goals' contributions or impacts (Rafols et al., 2021; Siegel and Bastos Lima, 2020).

Higher education institutions (HEIs) are drivers for the achievement of the full set of goals, through their role in human formation, knowledge production and innovation (Chankseliani and McCowan, 2021). As such, universities stand out as models of sustainability (Wright, 2004) due to their societal responsibility in training future professionals and leaders and in stimulating public awareness of sustainability (Collins, 2017). By assuming the role of “agents of change”, HEIs are therefore pivotal to the search for solutions to current environmental problems (Hesselbarth and Schaltegger, 2014). In this regard, considering the three core university missions (research, teaching and knowledge transfer), these institutions can address sustainable development from different perspectives or dimensions: university governance (e.g. vision, missions, strategic plans), campus operations (e.g. energy use, waste), community outreach (e.g. exchange programmes) and assessment and reporting (e.g. dissemination of sustainability results achieved) (Cortese, 2003; Lozano et al., 2015). In addition, developing general educative actions (e.g. programmes for consciousness-raising regarding climate change, hunger, gender) and formulating proper research questions (Ligozat et al., 2020) are also advisable actions.

As proof of this commitment, the number of HEIs that are signing declarations and agreements (e.g. the Talloires Declaration, the Halifax Declaration, the SDG Accord and the Abuja Declaration) designed to foster sustainable development (SD) and education for sustainable development has increased over time (Lozano et al., 2015). Linked to the sustainability actions, the publication of reports is another instrument for institutions to disclose and publicise their commitment to sustainability and, by extension, the SDGs (Caputo et al., 2021). However, universities have been slow to adopt sustainability practices (Lozano, 2006; Velazquez et al., 2006) due to a resistance to change and a lack of awareness, interest and funding. For HEIs to fully engage, achieve and promote the SDGs, they must overcome the lack of long-term policies, insufficient resources (De La Poza et al., 2021) and the absence of specific targets for countries in which higher education requires reform and rebuilding (Heleta and Bagus, 2021).

In parallel with this situation, the literature related to sustainability and higher education has been continuously evolving, building different areas of interest. Such areas include the relationship between universities and specific sustainability topics, such as urban planning, energy efficiency or climate change (Figure 1, cluster in red); the relationship between sustainability and education (Figure 1, cluster in blue); training and students (Figure 1, cluster in green); and organisation, management and public relations (Figure 1, cluster in yellow) [1]. All the sustainability dimensions are therefore addressed. Regardless of the field, a considerable number of publications including SDG-related keywords [2] can be observed in Scopus data (from 109 publications in 2015 to 2,904 publications in 2020), which could reconfigure these clusters in the short term.

Given that HEIs are key actors within the context of global sustainability, understanding their contribution to the SDGs remains crucial. However, while an increasing number of universities are aligning their activities with the SDGs, the wide variety of sustainability-related activities undertaken by universities still needs to be documented and evidenced (Chankseliani and McCowan, 2021). In the same vein, appropriate tools to measure this contribution have not yet been developed. It is therefore important to design alternative initiatives or rankings that measure universities' contributions to sustainability goals, solve methodological problems and provide data collection at the global level.

Despite the notable interest in global university rankings that measure university dimensions (e.g. THE-WUR, QS-WUR, ARWU, etc.) (Hazelkorn, 2015; Marginson and van der Wende, 2007), these rankings are subject to criticisms concerning the vague concepts they measure and the creation of biases towards:

  • specific data sources (i.e. biased towards the Global North);

  • the research dimension (Safón, 2019); and

  • the arbitrary weighting used to rank institutions (Gadd, 2021).

In addition, those global rankings do not take societal aspects into consideration (e.g. open science initiatives, sustainability or diversity), thus limiting the use of rankings in determining whether HEIs contribute to the SDGs. This leads the scientometric community to advocate for fair and responsible rankings (Gadd, 2020). The translation of the policy language goals into performance indicators has been indicated to be a primary concern (Ràfols, 2020) because quantitative metrics might miss many of the values of the targets (Unterhalter, 2019; Torabian, 2019).

Despite the wide variety of university rankings (Orduna-Malea and Perez-Esparrells, 2021), those valuing sustainability in its broader sense (green rankings) are scarce and represent a change in ranking designs (De La Poza et al., 2021). The most popular is the GreenMetric World University Ranking, a global sustainability ranking developed by the Universitas Indonesia (UI) in 2010 that values the following dimensions: setting and infrastructure (15%), energy and climate change (21%), waste (18%), water (10%), transportation (18%) and education (18%). However, its simplicity has been considered a crucial limitation in terms of its “categories and indicators in comparison with other systems and the demands of the data types required are generally low for participants and less empirical than those used in other systems” (Lauder et al., 2015). Galleli et al. (2021) compared this green ranking with the Times Higher Education World University Rankings (THE-WUR), finding a significant gap in the specificity of the two rankings. Other ranking and evaluation tools related to the commitment of universities to sustainability have emerged at national or regional levels. Nonetheless, these initiatives are geographically centred in Anglo-Saxon regions (e.g. AASHE's STARS in the USA and the People and Planet Green League in the UK).

First introduced in 2015, the SDG Index was the first global ranking system translating the SDG narrative into measurement at the country level, using a wide range of appropriate indicators for each goal (Lafortune et al., 2018). However, data collection limitations, unmeasurable metrics and the low accuracy of the composite index jeopardised its utility (Diaz-Sarachaga et al., 2018). The Times Higher Education Impact Rankings (THE-IR) was launched in 2019 as the first global ranking at the institution level aimed at measuring the contribution of HEIs to each of the 17 SDGs.

Despite the youth of this ranking (the third edition appeared in 2021), the THE-IR has already aroused interest in the higher education community, with both detractors and followers. Torabian (2019) considers this ranking a positive move that demonstrates that HEIs (and the broader public) want to address SD, although the pursuit of certain indicators or rankings (e.g. in terms of research output) might encourage practices that conflict with the goals themselves. Cardozo et al. (2021) analysed the 2020 edition and found that 16.67% of Ibero-American universities were indexed in the overall ranking; SDG8, “Decent work and economic growth”, was the goal most frequently addressed (9.81%), followed by SDG3, “Good health and well-being” (9.68%) and SDG4, “Quality education” (7.69%). Iskandaryan (2020) described how one Russian university is implementing SDG4, while De La Poza et al. (2021) assessed the level of reporting and the alignment of SDG achievements with the overall THE-WUR ranking score, finding that the highest-ranked universities are more committed to SDG9, “Industry, innovation and infrastructure”, and SDG16, “Peace, justice and strong institutions”. Finally, Calderon (2021) offered a critical overview of this ranking, suggesting that, given the worldwide scope of the SDGs, it needs to be contextualised on a regional or national basis that allows like-for-like comparison between countries.

As with any other ranking, the THE-IR should be analysed to ascertain whether its methodology reflects the degree of sustainability of universities and whether its results are accurate enough to be used as a data source for research and strategic decision-making. While the THE-IR has been analysed in the literature, these studies do not include a comprehensive analysis of its methodology. De la Poza et al. (2021) compared the THE-IR and THE-WUR to discover whether performance on one SDG in the THE-IR is related to the scores obtained in the THE-WUR, considering universities' regions and disciplines. However, they did not cover the raw overlap between the THE-IR and THE-WUR (which might reveal not only a potential influence of the latter on the former but also the existence of specific geopolitical strategies to obtain visibility in this ranking). Finally, previous literature has not included all available editions. Given the high variability between the first and second editions (the second edition incorporated all 17 SDGs instead of the 11 considered in the first), the inclusion of the third edition seems necessary to offer a wider and more accurate picture of this ranking and its methodology.

Therefore, this study aims to analyse and evaluate the THE-IR methodology, coverage and scores offered.

2. Times Higher Education Impact Rankings: a methodological approach

THE-IR follows a complex methodology that includes numerous metrics and composite indicators, which should be briefly detailed to comprehend and appropriately contextualise the main results offered in this study. The ranking intends to measure the contribution of universities to each of the 17 goals. It includes all universities teaching at either undergraduate or postgraduate level; there are no minimum research requirements. The methodology consists of a micro level (scores for each SDG) and a macro level (creation of an overall score).

At the micro level, each SDG aims to capture the contribution of universities across four broad areas: research, teaching, stewardship (i.e. universities as custodians of significant resources, including human as well as physical resources) and outreach (i.e. the work universities undertake with their local, regional, national and international communities). To accomplish this, a set of metrics is assigned to each SDG. These metrics are grouped into three categories (research metrics, continuous metrics, evidence metrics) based on the nature and purpose of each metric (Table 1). While research metrics are derived from an external product (Scopus), the continuous and evidence metrics derive from institutional data provided privately and directly by the HEIs to the ranking publisher.

Following this design, each SDG is organised into different groups of metrics. The first group always comprises the research metrics (27% of the total SDG score). The remaining groups (composed of continuous and evidence metrics) depend on each SDG. Finally, a score (from 0 to 100) is obtained for each SDG.

At the macro level, only four SDGs are weighted to build the overall score. SDG17 is measured for all universities (22% of the overall score), while the remaining three SDGs are those achieving the top three highest scores for each university (26% each), excluding SDG17.
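For illustration, this macro-level aggregation can be expressed as a short computation. The following Python sketch is our own minimal reading of the published weighting scheme, applied to invented scores; it shows how a university submitting only four goals obtains its overall result:

```python
def overall_score(sdg_scores):
    """THE-IR macro-level aggregation as described above: SDG17 weighs 22%,
    and the three best-scoring remaining SDGs weigh 26% each."""
    sdg17 = sdg_scores.get(17, 0.0)
    top3 = sorted((s for g, s in sdg_scores.items() if g != 17), reverse=True)[:3]
    return 0.22 * sdg17 + 0.26 * sum(top3)

# Hypothetical university submitting only four SDGs (scores are invented):
print(overall_score({17: 90.0, 3: 85.0, 4: 80.0, 5: 78.0}))  # 82.98
```

Note that any SDG outside the selected four plays no role in the final figure.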

3. Methods

A summative content analysis (Harwood and Garry, 2003) and a descriptive statistical analysis have been used to accomplish the objectives of this work.

The summative content analysis analyses documents and texts, seeking to quantify their content in terms of predetermined categories in a systematic and replicable manner. In this particular case, the Impact Rankings Methodology 2021 v.1.3 (THE Impact Ranking, 2021) has been inspected to determine the use of research-related metrics; specifically, the types of indicators used per SDG, their weight in each SDG and their breadth of use throughout the 17 SDGs. The analysis was limited to the research-related metrics for the following reasons. First, these metrics are bibliometric indicators provided by a commercial database and extensively analysed in the bibliometric literature, which provides clear guidance for evaluating their appropriateness. Second, these metrics are transversal in the THE-IR methodology (research metrics are a fixed category used to measure each SDG). Third, these metrics represent a fixed weight of 27% of the score for each SDG, rendering their influence on the final score noteworthy. Continuous and evidence metrics have not been directly analysed because they are provided by HEIs privately.

The descriptive statistical analysis is used to describe the characteristics of a sample or data set. In this particular case, data from all available editions of the THE-IR (2019, 2020 and 2021) were considered. All data were collected in May 2021 directly from the official website through web scraping. The same procedure was followed to collect the data related to the THE-WUR for the same ranking editions. For each university, the university name, region (country) and all performance scores were extracted and analysed through descriptive statistics. Pearson correlations were used to assess the stability of the distribution of universities providing specific SDG data across ranking editions. A pairwise comparison method was then used to compare both rankings. Country information was matched with geographical regions according to a list provided by the United Nations [3]. In addition, information regarding the designation of “developing” and “developed” areas was also included. ArcGIS [4] software was used to plot the coverage and overlap data maps, and Venn Diagram Plotter [5] software was used to create the Venn diagram.
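As an indication of how these statistics can be reproduced, the sketch below recomputes one of the reported correlations from the published counts in Table 4 and illustrates the pairwise overlap measure on toy data (the university names are invented; numpy and scipy are assumed to be available):

```python
import numpy as np
from scipy.stats import pearsonr

# Universities providing data on each SDG (1-17), taken from Table 4.
providing_2020 = np.array([373, 291, 621, 677, 549, 331, 362, 481, 494,
                           459, 470, 360, 377, 242, 269, 454, 808])
providing_2021 = np.array([591, 135, 872, 966, 776, 520, 560, 685, 680,
                           669, 656, 503, 566, 379, 402, 653, 1155])
r, p = pearsonr(providing_2020, providing_2021)
print(f"r = {r:.2f}")  # ≈ 0.95, matching the value reported in Section 4.2

# Pairwise overlap between two rankings, given the sets of ranked names.
the_ir = {"University A", "University B", "University C"}   # invented names
the_wur = {"University B", "University C", "University D"}
not_in_wur = the_ir - the_wur
print(len(the_ir & the_wur), len(not_in_wur) / len(the_ir))
```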

4. Results

4.1 Analysis of the impact rankings methodology

The THE-IR exhibits a ranking design that raises several concerns at both the macro and micro levels. At the macro-structure level, the consideration of only four out of 17 SDGs is debatable, as it allows a university to achieve the top position with zero values for 13 SDGs. The major concern, however, is that only one SDG (SDG17: “Partnership for the goals”) is common to all universities, while the remaining scores are related to the three SDGs with the best score in each university. This means that universities are compared against different SDGs and mixed into the same classification. While not explicitly addressed in the Berlin Principles [6], this procedure breaks with effective practice in building university rankings.

Another consequence of the method followed by this ranking is that longitudinal analyses are not possible, as universities might be scored in relation to different SDGs across editions. For example, Trinity College Dublin (Ireland) was evaluated in relation to SDG5, SDG9 and SDG11 in 2020 but to SDG3, SDG9 and SDG16 in 2021. The University of Bahrain constitutes an even more extreme case, as all the SDGs evaluated were different (SDG2, SDG4 and SDG11 in 2020; SDG5, SDG8 and SDG16 in 2021). This is not an isolated phenomenon: only 23% of the universities ranked in both the 2020 and 2021 THE-IR editions (695 institutions) were evaluated through the same four SDGs.
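This instability can be checked mechanically. The sketch below encodes the two cases mentioned above (each university's scored set is SDG17 plus its top three goals) and counts how many institutions kept the same four SDGs across editions:

```python
# SDGs used in each university's overall score (SDG17 plus its top three),
# taken from the two examples discussed above.
scored_sdgs = {
    "Trinity College Dublin": {2020: {17, 5, 9, 11}, 2021: {17, 3, 9, 16}},
    "University of Bahrain":  {2020: {17, 2, 4, 11}, 2021: {17, 5, 8, 16}},
}

stable = [u for u, ed in scored_sdgs.items() if ed[2020] == ed[2021]]
print(f"{len(stable)} of {len(scored_sdgs)} universities kept the same SDG set")
# Applied to the full 695 institutions ranked in both editions, the share
# of stable SDG sets is only 23%, as reported above.
```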

As a consequence, the overall score for one university might be generated from data completely different to that used in the previous ranking edition. This could partly explain the abrupt position changes several universities have exhibited in successive editions. Such examples, all of which reduce the reliability of the ranking, include France's Aix-Marseille University (from 20th in 2020 to 201–300th in 2021), Denmark's Aalborg University (from 47th in 2020 to 201–300th in 2021), Saudi Arabia's Princess Nourah bint Abdulrahman University (from 401–600th in 2020 to 27th in 2021) and Indonesia's Institut Teknologi Sepuluh Nopember (from 401–600th in 2020 to 64th in 2021).

Moreover, the weights assigned to each of the four SDGs are subjective and not explicitly justified. Because SDG17 is a more generic and controversial goal, the weight it carries in the final score is unjustified. These arbitrary weights break Principle 9 (“Make the weights assigned to different indicators (if used) prominent and limit changes to them”) of the Berlin Principles on Ranking Higher Education Institutions [7]. While some voices have criticised these principles as being decoupled from actual ranking practices (Barron, 2017), an update of the principles would likely uncover an even greater number of inconsistencies.

The final scores include data ranges for some universities. While the use of ranges is generally useful to avoid overly rigid scores and to tolerate margins of error in the data, excessively wide ranges produce side effects. For example, the THE-IR's 2021 edition includes 103 universities with an overall score of 9.2–36.4, a range so wide that the indicated value is useless.

At the micro level, the scores obtained for each SDG can also take data ranges, with the same limitation as mentioned above. For example, 103 universities obtained a score of 0.3–29.0 for SDG12 in the THE-IR's 2021 edition. Even more extreme behaviour was found for the University of Alkafeel (Iraq), which obtained a score of 0.1–38.4 for SDG7. It is unclear how the overall score is calculated when the four SDGs considered have scores with such wide ranges.

The mixture of metrics used to measure each SDG introduces another discussion point. The score for each SDG mixes indicators related to “research on topics related to the sustainable development objectives” with metrics related to “the sustainability of the university as a whole institution”. Furthermore, the fact that a university conducts research on topics related to these goals does not necessarily imply that its researchers are contributing to the development of the SDGs or working sustainably.

The number of indicators used by each SDG and their distribution by type (research, continuous and evidence metrics) are otherwise not homogeneous, which allows metrics to carry extremely different weights in the final SDG score. For example, SDG9 (“industry, innovation and infrastructure”) is built upon only four indicators, while SDG11 (“sustainable cities and communities”) is built upon 19 indicators (Table 2).

The raw methodology includes 231 different indicators. All 48 research indicators used for each of the 17 SDGs are displayed in Table 3, accompanied by their specific weight in each of the SDGs measured (definitions of these metrics are available in the supplementary material at https://doi.org/10.21950/HSFRDH).

These results raise several concerns. A co-authorship indicator is introduced in the measurement of three SDGs: cooperation with developing countries (measured twice, in SDG1, “poverty”, and SDG17, “partnership for the goals”) and gender (SDG5, “gender equality”). However, co-authorship is not considered in the remaining SDGs.

Each SDG (from 1 to 16) includes a productivity indicator (referred to as Publications in Table 1), which is tailored to each SDG by using topic-based queries in Scopus (Iskandaryan, 2020). While this approach is topic-sensitive and denotes a substantial effort, the use of Scopus queries could be controversial, as different query approaches can alter the resulting country rankings (Armitage et al., 2020). SDG17 includes this same indicator but counts all publications regardless of topic; therefore, the same publication can be measured in at least two SDGs by default. Moreover, this indicator exhibits different weights: 13.55% (SDG17, “partnership for the goals”), 13% (SDG8, “decent work”), 11.60% (SDG9, “industry”), 10% (SDG1, “poverty”) and 7% in the remaining goals. This variability is not justified. In addition, the indicator is not normalised according to the size of the university, so several principles (e.g. Principles 2, 4, 6 and 9) of the Leiden Manifesto for research metrics are somewhat compromised (Hicks et al., 2015).
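To make the size-normalisation point concrete, the toy example below (all figures invented) shows two institutions devoting an identical share of their output to an SDG topic; a raw publication count, as used by the THE-IR, favours the larger one by an order of magnitude:

```python
# Invented figures for two hypothetical institutions.
universities = {
    "Large University": {"sdg_pubs": 500, "total_pubs": 10_000},
    "Small University": {"sdg_pubs": 50, "total_pubs": 1_000},
}

for name, u in universities.items():
    share = u["sdg_pubs"] / u["total_pubs"]  # size-independent measure: 5% for both
    print(f"{name}: raw count = {u['sdg_pubs']}, share of output = {share:.1%}")
# The raw count ranks Large University ten times higher even though both
# devote the same proportion of their research to the goal.
```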

Concerning the measurement of citation-based impact, a normalised indicator (Field Weighted Citation Impact) is used. However, it is not used in a number of SDGs (3, 4, 5, 8, 9 and 17). The absence of the indicator in these SDGs does not seem clearly justified.

A journal-level metric (Citescore) is used in most SDGs. However, it is excluded from four SDGs (1, 3, 9 and 17). While the generic nature of SDG17 can explain its absence there, it remains unclear why Citescore is not used in the remaining three SDGs. Moreover, the weight of this indicator is 10% for all SDGs except SDG8 (“decent work and economic growth”), where it is 14%. The methodology details that this indicator actually measures the proportion of a university's publications that, according to the Citescore metric, appear in the top 10% of journals. The arbitrariness of this threshold (10%), the absence of discipline filtering (Citescore is a metric without field normalisation) and the nature of multidisciplinary journals render this count of publications debatable.
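The “top 10% of journals” indicator can be emulated as follows. The sketch uses synthetic Citescore values and, exactly as criticised above, applies no field normalisation; it computes the proportion of a hypothetical university's papers appearing in journals above a global top-decile threshold:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic Citescore values: one per journal in a global journal population,
# and one per paper of a hypothetical university (all values are invented).
journal_population = rng.lognormal(mean=1.0, sigma=0.8, size=50_000)
paper_citescores = rng.lognormal(mean=1.0, sigma=0.8, size=200)

top10_threshold = np.percentile(journal_population, 90)  # arbitrary 10% cut-off
proportion = np.mean(paper_citescores >= top10_threshold)
print(f"{proportion:.1%} of papers appear in 'top 10%' journals")
# Without filtering by discipline, fields whose journals have high Citescores
# (e.g. biomedicine) inflate this proportion regardless of actual quality.
```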

An altmetric measure (paper views) is included in SDG3 (“health”) and SDG4 (“education”) but does not appear in any other SDG. The use of Scopus as a data source might indicate that PlumX is being used as the altmetric data source, but no explicit information has been found. The inclusion of this indicator, with a weight of 10% in the corresponding SDG score, is therefore not justified.

Finally, an extra indicator measuring publications cited by patents (referred to as citing patents) is used in SDG9 (“industry, innovation and infrastructure”), but it sits outside the research indicators group. This decision is also debatable, as the parameter measures citations received by publications from patents. A similar indicator (clinical citations) used in SDG3 (“health”) is, by contrast, included in the research indicators group. The citing patents indicator, as a citation-based indicator, is transversal in nature, which would make it appropriate to measure in all SDGs, like the other bibliometric indicators used.

4.2 Institutional participation

The participation of universities in the THE-IR has increased over the editions. In 2020, 768 universities with data on at least one SDG were ranked (out of 859 submitting data). In the 2021 edition, up to 1,117 universities with data on at least one SDG were ranked (out of 1,240). The number of SDGs for which universities submitted data varied significantly between institutions.

Considering the 2021 edition, we can find four different types of universities. First, the interested universities (9.5% of all universities submitting information): these submitted data for fewer than four SDGs and are consequently not ranked in the overall score. Second, the strategic universities (36.9% of universities): these institutions submit data for a small number of SDGs, around the minimum required (between four and six). Third, the committed universities: these institutions provide data for a large number of SDGs (between 7 and 15). Finally, the outperformer universities: these institutions submit data for almost all available SDGs (Figure 2).
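These four profiles reduce to a simple decision rule. In the sketch below, the boundary between “committed” and “outperformer” (15/16 SDGs) is our reading of Figure 2, not an official THE definition:

```python
def participation_type(n_sdgs):
    """Classify a university by the number of SDGs it submitted data for."""
    if n_sdgs < 4:
        return "interested"    # below the minimum of four; excluded from the overall rank
    if n_sdgs <= 6:
        return "strategic"     # around the minimum required (four to six)
    if n_sdgs <= 15:
        return "committed"     # a large number of SDGs
    return "outperformer"      # almost all 17 SDGs

print([participation_type(n) for n in (2, 4, 9, 17)])
# ['interested', 'strategic', 'committed', 'outperformer']
```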

The distribution of universities by number of SDGs provided changed somewhat from 2020 to 2021 (R = 0.62; alpha value = 0.01). Comparing the 2020 and 2021 editions, we can see an increase in the raw number of universities providing only the four required SDGs (200 universities in 2021), a particularly frequent action, followed by those providing data on 16 SDGs (184 in 2021). There was a marked decrease in institutions providing data for all 17 SDGs, from 19.1% of all submitting universities in 2020 to 7.4% in 2021.

The number of universities providing data on each SDG (referred to as “providing universities” in Table 4) remains constant across the 2020 and 2021 editions (R = 0.95), as does the number of universities for which the corresponding SDG is used in their overall score (referred to as “measured universities” in Table 4) (R = 0.99; alpha value = 0.01). SDG4 (“quality education”), SDG5 (“gender equality”) and SDG10 (“reduced inequalities”) are the SDGs most frequently submitted by universities. By contrast, SDG2 (“hunger”) and SDG14 (“life below water”) are the SDGs least frequently provided. An anomalous behaviour is detected for SDG2, with a significant drop in the number of universities providing data on this SDG from 2020 (33.9% of all universities) to 2021 (10.9%). However, SDG2 is used in the overall score for 83% of the universities providing data on it (112 of 135 in 2021), thus exhibiting an elevated success rate (Table 4).

4.3 Geopolitical analysis

Results confirm an increase in the participation of countries in the THE-IR, from 76 in 2019 to 94 in 2021, exceeding the number of countries in the THE-WUR and the UI GreenMetric ranking for the first time (Table 5). Complete data on the evolution of the number of universities by country in the three editions of THE-IR and THE-WUR is available in the supplementary material.

Japan and Russia are the most represented countries in the THE-IR’s 2021 edition (both with 75 universities). These countries are followed by the UK (50) and Turkey (49) [8]. The increasing presence of several countries in the different editions of the THE-IR is remarkable. For example, Russia included 45 new universities in 2021 that were not included in the 2019 edition, India had 36 new universities, Iraq 34 and Brazil 23.

The overlap between the THE-IR and THE-WUR is shown in Figure 3. The raw number of universities included in both rankings has grown over the years (from 328 to 687). However, this value is misleading, as the coverage of both rankings has also increased. In the 2020 edition, 68.36% of the universities indexed in the THE-IR were not indexed in the THE-WUR; this value drops to 61.5% in 2021.

Several countries participating in the THE-IR are not indexed in the THE-WUR (it is noteworthy that institutions do not choose to participate in the THE-WUR; rather, they are selected based on their performance). These countries predominantly come from Latin America and the Caribbean (e.g. Uruguay, Paraguay, Ecuador, Dominican Republic, Curaçao) or West Asia (Palestine, Bahrain, Azerbaijan and Armenia). The notable presence of Uzbekistan (12 universities indexed), Ecuador (10) and Azerbaijan (6) should be emphasised (Table 6).

Other countries exhibit extremely unbalanced behaviour regarding their presence in the THE-IR, particularly if we consider the 2021 edition. For example, China includes 91 universities in the THE-WUR (4th by country rank), but only 13 are participating in the THE-IR (26th by country rank). Germany includes 48 universities in the THE-WUR (9th by country rank), while only six are participating in the THE-IR (44th by country rank).

If we limit the analysis to the elite positions (top 200), we observe a greater variety of countries in the THE-IR (37 countries) than in the THE-WUR (27 countries). However, despite this greater variety, most top universities are located in developed countries (the UK with 38, Australia with 24 and Canada with 20 universities). Moreover, only 52 of the top 200 universities ranked in the THE-WUR participated in the THE-IR in 2021 (42 in 2020). These results provide evidence that the top universities indexed in the THE-WUR are not yet participating in the THE-IR, although this behaviour may change in coming years if the THE-IR coverage continues to grow.

The distribution of universities throughout the three editions is presented in Figure 4. On the one hand, East Asia (211 universities in 2019; 231 in 2020; and 250 in 2021), North America (199 in 2019; 202 in 2020; and 211 in 2021) and Northern Europe (147 in 2019; 154 in 2020; and 155 in 2021) are the best-represented regions in the THE-WUR. On the other hand, East Asia (55 universities in 2019; 92 in 2020), Latin America and the Caribbean (55 in 2019 and 88 in 2020), West Asia (86 in 2020 and 140 in 2021) and Eastern Europe (51 in 2019 and 133 in 2021) are the best-represented regions in the THE-IR. Thus, the number of universities from developing countries has increased significantly (187%) in the THE-IR in comparison with the THE-WUR. This might suggest a different strategy for these countries when participating in this SDG ranking.

The regions presenting the highest overlap between the two rankings are East Asia (from 43 overlapping universities in 2019 to 85 in 2021) and Eastern Europe (from 31 in 2019 to 70 in 2021), whereas the African regions (East, West and Southern Africa) scarcely overlap.

5. Discussion

Despite other initiatives launched to measure sustainable development, the THE-IR is the first worldwide ranking valuing the contribution of HEIs towards the SDGs. This study has attempted to explain the methodology, coverage and geopolitical issues (by area and by country) related to this emergent ranking.

First, the content analysis revealed inconsistencies in the appropriateness of, and weights assigned to, the research metrics employed (Table 3), which might jeopardise the alignment of these metrics with contributions to the SDGs. These results reinforce the general concerns already indicated in the literature on the use of performance metrics to measure sustainability (Torabian, 2019; Calderon, 2021; De la Poza et al., 2021; Rafols et al., 2021). Beyond research metrics, the irregularities uncovered support the work by Gadd (2020, 2021) with new insights, proving that the lack of a solid ranking structure (methodological design) has the potential to cause adverse consequences (unrealistic results, incorrect decision-making and opportunistic behaviours). However, the continuous and evidence metrics have not been analysed. As the review of this institutional evidence is not public, the final evaluation of each SDG cannot be reproduced; consequently, the final scores and ranking positions used in the Results section should be interpreted under this circumstance. Future studies should address these metrics to obtain a complete picture of the THE-IR method. For example, conducting systematic surveys among university staff could provide insights into the accuracy and stability of the THE-IR.

On the other hand, the results related to the THE-IR/THE-WUR coverage and overlap should also be discussed, because the rules regarding participation in these rankings are different. While participation in the THE-WUR is based on performance, participation in the THE-IR operates on a voluntary basis. This fact can explain the growing number of universities from developing countries (many of them new HEIs) participating in the THE-IR as a reputational phenomenon resulting from being excluded from the THE-WUR. The HEIs of developing countries may be becoming aware of sustainability's relevance to promoting their universities and strengthening their academic brands. Thus, concurring with Hazelkorn and Mihut (2021), this ranking represents an opportunity for institutions to accrue intangible assets in the form of reputation and competitiveness within the ranking's sphere of influence by achieving the SDGs and improving their positions in the ranking. In turn, as Hazelkorn (2015) foresaw, the university systems of these countries have noted how their flagship universities are positioned more highly in this ranking system than in the THE-WUR and how newcomers have emerged onto the scene of global rankings, reshaping the landscape of these institutions.

The observed low participation of elite universities might be related to a cost-benefit analysis at the institutional level. The cost of compiling evidence and the work it encompasses do not present a significant advantage in improving their academic image, which their presence in the THE-WUR already ensures (De la Poza et al., 2021). Likewise, given that the engagement of universities with sustainability goals is evaluated, obtaining lower positions in the THE-IR could affect their reputation. Therefore, non-participation is a plausible option for most top research universities.

The voluntary nature of participation and the general lack of incentives and funding to foster the involvement of countries might explain the low participation of other regions (e.g. Eastern Africa). Institutions with fewer resources cannot afford participation, as it requires extra effort, which constitutes a paradox given that this ranking aims to promote sustainability goals. However, the rising participation of other countries from developing regions (e.g. Latin America, the Caribbean and West Asia), which participate exclusively in the THE-IR, suggests a strategic interest (and investment) in promoting “sustainability” branding and increasing academic reputation.

More precisely, the surprising percentage of universities (16.1% in the 2021 edition) providing only the minimum required data to be ranked (four SDGs; Figure 2) reinforces the hypothesis that universities in certain regions are making strategic efforts to participate in this ranking, exploiting the possibilities that the designed methodology allows. This is one of the most important unintended consequences. In addition, providing minimum data – also called “cherry-picking” (Forestier and Kim, 2020) or “SDG washing” (Buhmann, 2018) – should not be an acceptable approach, given the integrative and indivisible nature of the agenda (United Nations, 2015). As Calderon (2021) suggests, increasing the number of mandatory SDGs and basing the overall score on them could provide more stability to forthcoming editions of this ranking.

Finally, this study has used the available online ranking information as a primary source, which relies on the goodwill and evidence of universities (the information on the SDGs is submitted by each university to the THE-IR). The authors are unaware of the quality procedures the ranking publisher has adopted for considering, evaluating and displaying the data gathered. Any remaining errors are our responsibility.

6. Conclusions

THE-IR is the first global university ranking intended to measure HEIs against all the sustainability goals. However, the coverage, methodology and results provided by this ranking raise concerns about their (mis)alignment with the objectives of the United Nations agenda. The key conclusions are presented below in two parts:

6.1 Ranking method

The macro-structure level of the THE-IR breaches acceptable standards for designing university rankings (two universities can be ranked in the same list through different indicators, and one university can be measured with different data in different editions), hindering the comparability of results and limiting their understandability. At the micro-structure level, the use of research metrics compromises several principles of the Leiden Manifesto for research evaluation, while the use of scores with extremely wide ranges prevents an accurate interpretation of the final ranked positions. For these reasons, it is concluded that the results derived from this ranking may not be as accurate as assumed.

6.2 Ranking coverage and overlap

The results reveal an increasing number of countries participating in the THE-IR, with a greater representation of world regions in elite positions (top 200) than in the THE-WUR. The growing number of universities from developing countries and the absence of world-class universities reflect a lack of interest in this ranking among elite universities and an opportunity for less-esteemed institutions. For this reason, it can be concluded that this emergent ranking is primarily being used by institutions that do not achieve leading positions in the THE-WUR. Such institutions can build strategies to achieve higher positions, strategies favoured by the serious limitations found in the ranking methodology.

Given the results obtained, there is no solid alignment between the THE-IR results and universities' development of the SDGs, generating a distorted view of the degree of sustainability achieved by higher education institutions. Despite these results, the effort involved in preparing the THE-IR is appreciable given the complexity involved. In this sense, a redefinition of the ranking methodology is recommended to solve some of its limitations. While the design of new proposals to measure the achievement of SDGs at universities is beyond the objectives of this study, the following basic recommendations are drawn:

  • Avoid global scores and stick to the specific un-ranged scores of each SDG, where each university is compared through the same battery of indicators.

  • Reward the number of SDGs on which each university submits data, and the achievement of minimum scores (thresholds) to minimise strategic actions.

  • Reduce the weight of research metrics, and use these indicators more equitably across all SDGs, avoiding measuring the same indicator in different SDGs.

  • Offer results disaggregated by dimension for each SDG: research (which denotes interest in and impact on issues related to an SDG, but does not necessarily imply being sustainable), teaching, transfer and institutional management.

  • Value the regional or national orientations of universities as well as other qualitative dimensions (e.g., HEIs’ societal contribution).

The findings have clear implications for different actors in the university rankings arena. Firstly, university managers can take the THE-IR's validity into account when making informed decisions, better preparing their institutions for participating in sustainability-related rankings. Secondly, university ranking researchers and practitioners can access a detailed analysis of the THE-IR to determine its properties as a ranking and use raw THE-IR data in other research studies or reports. Thirdly, these results can help the ranking publisher improve or resolve some of the inconsistencies found in this study. Fourthly, the development of public policies related to evaluating the contribution of universities to the SDGs should avoid relying on the overall results provided by the THE-IR as much as possible, owing to the inconsistencies identified in this work. If used, we strongly recommend highlighting each SDG separately, indicating the number of universities analysed for that SDG and directing readers to the full current methodology. Otherwise, the information provided in public policies might be misinterpreted, leading to inappropriate decisions, which could in turn have economic and reputational consequences for higher education institutions. Given the global audience of the THE-IR, the results obtained contribute to minimising the distorted view of sustainability in higher education institutions that the THE-IR projects to society, and alert governments, higher education bodies and universities to take precautions when making (economic) decisions based on this ranking.

Last, further research should be undertaken on specific case studies that can help to gain insight into the practical implementation of the ranking. The motivation of different types of organisations (comprehensive, technical, public, private or non-profit universities, or institutions located in large cities) to participate in this ranking should also be explored, particularly considering the distinction between developed and developing countries. It might also be interesting to conduct qualitative research (e.g. a Delphi analysis) with university leaders to determine the perceived advantages and disadvantages of participating in the THE-IR and whether doing so meaningfully contributes to assessing the impact of universities in achieving the SDGs.

Figures

Figure 1. Map of keyword co-occurrences of publications on sustainability and higher education. Source: Scopus; powered with VOSviewer (https://www.vosviewer.com)

Figure 2. Type of university participation in the Impact Rankings

Figure 3. Overlap of universities between the three editions

Figure 4. Distribution of universities in the THE-WUR (right) and the THE-IR (left) over the years

Table 1. Overview of metrics analysed in the impact rankings

Metrics | Definition | Source | Timespan | Score
Research | Measure research output and impact (a) | Scopus (b) | Five-year window | 27%
Continuous | Measure values that vary continually across a range | Universities (c) | Closest academic year | Variable
Evidence | Measure the existence of policies or initiatives that require evidence to be validated (d) | Universities (c) | Closest academic year | 3 points

Source: www.timeshighereducation.com/world-university-rankings/impact-rankings-2021-methodology

(a) Publication queries are based on tailored queries, available at: https://data.mendeley.com/datasets/87txkw7khs/1#file-71334e20-2e74-4557-9002-0b8d2b28103e

(b) 2021 edition: between 2015 and 2019.

(c) 2021 edition: unless otherwise stated, the data used refer to the academic year closest to January to December 2019.

(d) For each metric: if statement: 1 point; if also evidence: 1 point; if also evidence is public: 1 point.

Table 2. Distribution of indicators by sustainable development goal and indicator type

SDG  Research  Continuous  Evidence  Total
1 3 1 9 13
2 3 2 9 14
3 3 1 6 10
4 3 2 5 10
5 3 3 12 18
6 3 0 13 16
7 3 1 11 15
8 2 3 8 13
9 2 2 0 4
10 3 4 10 17
11 3 1 15 19
12 3 1 9 13
13 3 1 7 11
14 3 0 15 18
15 3 0 13 16
16 3 1 11 15
17 2 0 7 9
Total 48 23 160 231

Table 3. Research indicators used in the THE-IR (2021 edition)

SDG (research related to each goal)  Indicator  Weight (%)
1. Poverty Cooperation co-authorship 7
Field-weighted citation impact 10
Publications 10
2. Hunger Citescore 10
Field-weighted citation impact 10
Publications 7
3. Health and well-being Paper views 10
Clinical citations 10
Publications 7
4. Quality education Paper views 10
Citescore 10
Publications 7
5. Gender equality Gender co-authorship 10
Citescore 10
Publications 7
6. Water (services) and sanitation Citescore 10
Field-weighted citation impact 10
Publications 7
7. Energy and energy efficiency Citescore 10
Field-weighted citation impact 10
Publications 7
8. Decent work and economic growth Citescore 14
Publications 13
9. Industry, innovation and infrastructure Publications 11.60
Citing patents 15.40
10. Reduced inequalities research Citescore 10
Field-weighted citation impact 10
Publications 7
11. Sustainable cities and communities research Citescore 10
Field-weighted citation impact 10
Publications 7
12. Responsible consumption and production research Citescore 10
Field-weighted citation impact 10
Publications 7
13. Climate action research Citescore 10
Field-weighted citation impact 10
Publications 7
14. Conservation and sustainable use of oceans, seas and marine resources research Citescore 10
Field-weighted citation impact 10
Publications 7
15. Land ecosystems and biodiversity research Citescore 10
Field-weighted citation impact 10
Publications 7
16. Peace and justice Citescore 10
Field-weighted citation impact 10
Publications 7
17. Research related to all SDGs Cooperation co-authorship 13.55
Publications 13.55

Table 4. Number of universities providing data for each SDG in the THE-IR

The first four data columns refer to the 2020 edition and the last four to the 2021 edition: providing universities, use rate (%), measured universities and success rate (%).

SDG  Providing  Use rate  Measured  Success rate  Providing  Use rate  Measured  Success rate
1 373 43.4 103 27.6 591 47.7 196 33.2
2 291 33.9 68 23.4 135 10.9 112 83.0
3 621 72.3 332 53.5 872 70.3 412 47.2
4 677 78.8 246 36.3 966 77.9 406 42.0
5 549 63.9 202 36.8 776 62.6 226 29.1
6 331 38.5 83 25.1 520 41.9 87 16.7
7 362 42.1 136 37.6 560 45.2 223 39.8
8 481 56.0 241 50.1 685 55.2 394 57.5
9 494 57.5 174 35.2 680 54.8 249 36.6
10 459 53.4 101 22.0 669 54.0 186 27.8
11 470 54.7 140 29.8 656 52.9 221 33.7
12 360 41.9 108 30.0 503 40.6 140 27.8
13 377 43.9 76 20.2 566 45.6 112 19.8
14 242 28.2 40 16.5 379 30.6 42 11.1
15 269 31.3 45 16.7 402 32.4 71 17.7
16 454 52.9 209 46.0 653 52.7 274 42.0
17 808 94.1 768 95.0 1155 93.1 1117 96.7
Notes:

Use rate (%): percentage of universities providing data on the corresponding SDG in respect to the total universities providing data on at least one SDG (859 in 2020 edition; 1,240 in 2021 edition).

Success rate (%): percentage of universities where the corresponding SDG is used in their overall score in respect to all universities providing data on this SDG.

Providing universities: number of universities providing data on the corresponding SDG.

Measured universities: number of universities where the corresponding SDG is used in their overall score

Table 5. Number of universities and countries ranked

Ranking  Universities (2019)  Countries (2019)  Universities (2020)  Countries (2020)  Universities (2021)  Countries (2021)
THE-IR 467 76 768 85 1,117 94
THE-WUR 1,258 86 1,400 92 1,526 93
UI GreenMetric 781 86 911 84 956 80

Table 6. Countries participating in the THE-IR but not indexed in the THE-WUR

Country (region) Universities
Afghanistan (SA) 1 in 2020; 2 in 2021
Armenia (WA) 1 in 2021
Azerbaijan (WA) 2 in 2020; 6 in 2021
Bahrain (WA) 3 in 2020; 3 in 2021
Bosnia and Herzegovina (SE) 1 in 2020; 1 in 2021
Cambodia (SEA) 1 in 2021
Curaçao (LAC) 1 in 2021
Dominican Republic (LAC) 2 in 2021
Ecuador (LAC) 5 in 2019; 6 in 2020; 10 in 2021
Kosovo 1 in 2021
Palestine (WA) 1 in 2019; 2 in 2020 and 2021
Paraguay (LAC) 2 in 2021
Sudan (NA) 1 in 2021
Uruguay (LAC) 1 in 2020; 2 in 2021
Uzbekistan (CA) 1 in 2020; 12 in 2021
Notes:

A single value indicates the number of universities in all three editions.

SA: South Asia; WA: West Asia; WE: West Europe; SSA: Sub-Saharan Africa; SEA: South-east Asia; LAC: Latin America and the Caribbean; SE: Southern Europe; NE: North Europe; NA: North Africa; CA: Central Asia

Notes

1. The following query was performed in Scopus: TITLE-ABS-KEY (universit* OR “higher education”) AND TITLE-ABS-KEY (sustainab*). This query is intended to gather publications regarding both sustainability and higher education. From the corpus of publications gathered, all those containing at least the following keywords were filtered out: “Sustainable Development”, “Sustainability”, “Education”, “Higher Education”, “University”, “University Sector”, “Universities”, “Higher Education Institutions”, “Education for Sustainable Development”, “Sustainability Education”, “Sustainable Development Goals”, “University Campus”. The field was limited to the social sciences. A total of 19,809 keywords were obtained.

2. “Sustainable Development Goals”, “Sustainable Development Goal”, “SDG”, “SDGs”, “Sustainable Development Goals (SDGs)”.

4. Available at: www.arcgis.com

8. Japan is the country that includes the most universities in all three published ranking editions.

References

Armitage, C.S., Lorenz, M. and Mikki, S. (2020), “Mapping scholarly publications related to the sustainable development goals: do independent bibliometric approaches get the same results?”, Quantitative Science Studies, Vol. 1 No. 3, pp. 1092-1108, doi: 10.1162/qss_a_00071.

Barron, G.R. (2017), “The Berlin principles on ranking higher education institutions: limitations, legitimacy, and value conflict”, Higher Education, Vol. 73 No. 2, pp. 317-333, doi: 10.1007/s10734-016-0022-z.

Buhmann, K. (2018), “Future perspectives: doing good but avoiding SDG-washing. Creating relevant societal value without causing harm”, in Mulder, H., Scheltema, M., van't Foort, S. and Kwant, C. (Eds), OECD Guidelines for Multinational Enterprises: A Glass Half Full, OECD, pp. 127-134.

Calderon, A. (2021), “Why SDG-focused impact rankings need to be contextualized”, University World News, available at: www.universityworldnews.com/post.php?story=2021042815055074 (accessed 24 December 2021).

Caputo, F., Ligorio, L. and Pizzi, S. (2021), “The contribution of higher education institutions to the SDGs – an evaluation of sustainability reporting practices”, Administrative Sciences, Vol. 11 No. 3, pp. 1-13, doi: 10.3390/ADMSCI11030097.

Cardozo, M.L., Japura, G.A., Vargas-Murillo, A.R., Sancho, G.F.I. and López, H.R.P. (2021), “Characterization of Ibero-American universities by SDG in times higher education impact rankings 2020”, Turkish Journal of Computer and Mathematics Education, Vol. 12 No. 5, pp. 689-700, doi: 10.17762/turcomat.v12i5.1470.

Chankseliani, M. and McCowan, T. (2021), “Higher education and the sustainable development goals”, Higher Education, Vol. 81 No. 1, pp. 1-8, doi: 10.1007/s10734-020-00652-w.

Collins, T.J. (2017), “Review of the twenty-three year evolution of the first university course in green chemistry: teaching future leaders how to create sustainable societies”, Journal of Cleaner Production, Vol. 140, pp. 93-110, doi: 10.1016/j.jclepro.2015.06.136.

Cortese, A.D. (2003), “The critical role of higher education in creating a sustainable future”, Planning for Higher Education, Vol. 31 No. 3, pp. 15-22, available at: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.607.6556&rep=rep1&type=pdf (accessed 24 December 2021).

De La Poza, E., Merello, P., Barberá, A. and Celani, A. (2021), “Universities’ reporting on SDGs: using the impact rankings to model and measure their contribution to sustainability”, Sustainability, Vol. 13 No. 4, pp. 1-30, doi: 10.3390/su13042038.

Diaz-Sarachaga, J.M., Jato-Espino, D. and Castro-Fresno, D. (2018), “Is the sustainable development goals (SDG) index an adequate framework to measure the progress of the 2030 agenda?”, Sustainable Development, Vol. 26 No. 6, pp. 663-671, doi: 10.1002/sd.1735.

Fisher, A. and Fukuda-Parr, S. (2019), “Introduction – data, knowledge, politics and localizing the SDGs”, Journal of Human Development and Capabilities, Vol. 20 No. 4, pp. 375-385, doi: 10.1080/19452829.2019.1669144.

Forestier, O. and Kim, R.E. (2020), “Cherry-picking the sustainable development goals: goal prioritization by national governments and implications for global governance”, Sustainable Development, Vol. 28 No. 5, pp. 1269-1278, doi: 10.1002/sd.2082.

Gadd, E. (2020), “University rankings need a rethink”, Nature, Vol. 587 No. 7835, p. 523, doi: 10.1038/d41586-020-03312-2.

Gadd, E. (2021), “Love DORA, hate rankings”, Impact of Social Sciences [blog], available at: http://eprints.lse.ac.uk/111336 (accessed 24 December 2021).

Galleli, B., Teles, N.E.B., dos Santos, J.A.R., Freitas-Martins, M.S. and Junior, F.H. (2021), “Sustainability university rankings: a comparative analysis of UI GreenMetric and the Times Higher Education World University Rankings”, International Journal of Sustainability in Higher Education, Vol. 23 No. 2, doi: 10.1108/IJSHE-12-2020-0475.

Harwood, T.G. and Garry, T. (2003), “An overview of content analysis”, The Marketing Review, Vol. 3 No. 4, pp. 479-498.

Hazelkorn, E. (2015), Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence, Palgrave Macmillan, London.

Hazelkorn, E. and Mihut, G. (2021), “Introduction: putting rankings in context, looking back, looking forward”, in Hazelkorn, E. and Mihut, G. (Eds), Research Handbook on University Rankings: Theory, Methodology, Influence and Impact, Edward Elgar Publishing, Cheltenham, pp. 1-17, doi: 10.4337/9781788974981.00008.

Heleta, S. and Bagus, T. (2021), “Sustainable development goals and higher education: leaving many behind”, Higher Education, Vol. 81 No. 1, pp. 163-177, doi: 10.1007/s10734-020-00573-8.

Hesselbarth, C. and Schaltegger, S. (2014), “Educating change agents for sustainability – learnings from the first sustainability management master of business administration”, Journal of Cleaner Production, Vol. 62, pp. 24-36, doi: 10.1016/j.jclepro.2013.03.042.

Hicks, D., Wouters, P., Waltman, L., De Rijcke, S. and Rafols, I. (2015), “Bibliometrics: the Leiden Manifesto for research metrics”, Nature, Vol. 520 No. 7548, pp. 429-431, doi: 10.1038/520429a.

Iskandaryan, R. (2020), “Rethinking higher education through sustainable development goals (SDGs): a Russian perspective”, in Kantola, J., Nazir, S., Salminen, V. (Eds), Advances in Intelligent Systems and Computing, Springer, pp. 328-334, doi: 10.1007/978-3-030-50791-6_42.

Lafortune, G., Fuller, G., Moreno, J., Schmidt-Traub, G. and Kroll, C. (2018), Global Responsibilities: Implementing the Goals, Bertelsmann Stiftung, p. 476, available at: https://s3.amazonaws.com/sustainabledevelopment.report/2018/2018_sdg_index_and_dashboards_report.pdf (accessed 24 December 2021).

Lauder, A., Sari, R.F., Suwartha, N. and Tjahjono, G. (2015), “Critical review of a global campus sustainability ranking: GreenMetric”, Journal of Cleaner Production, Vol. 108, pp. 852-863, doi: 10.1016/j.jclepro.2015.02.080.

Ligozat, A.L., Névéol, A., Daly, B. and Frenoux, E. (2020), “Ten simple rules to make your research more sustainable”, PLoS Computational Biology, Vol. 16 No. 9, p. e1008148, doi: 10.1371/journal.pcbi.1008148.

Lozano, R. (2006), “Incorporation and institutionalization of SD into universities: breaking through barriers to change”, Journal of Cleaner Production, Vol. 14 Nos 9/11, pp. 787-796, doi: 10.1016/j.jclepro.2005.12.010.

Lozano, R., Ceulemans, K., Alonso-Almeida, M., Huisingh, D., Lozano, F.J., Waas, T., Lambrechts, W., Lukman, R. and Hugé, J. (2015), “A review of commitment and implementation of sustainable development in higher education: results from a worldwide survey”, Journal of Cleaner Production, Vol. 108, pp. 1-18, doi: 10.1016/j.jclepro.2014.09.048.

Marginson, S. and van der Wende, M. (2007), “To rank or to be ranked: the impact of global rankings in higher education”, Journal of Studies in International Education, Vol. 11 Nos 3/4, pp. 306-329, doi: 10.1177/1028315307303544.

Orduna-Malea, E. and Perez-Esparrells, C. (2021), “Multidimensional taxonomy of university rankings”, in Hazelkorn, E. and Mihut, G. (Eds), Research Handbook on University Rankings: Theory, Methodology, Influence and Impact, Edward Elgar Publishing, Cheltenham, pp. 106-126, doi: 10.4337/9781788974981.00017.

Rafols, I. (2020), “Consensus and dissensus in ‘mappings’ of science for sustainable development goals (SDGs)”, Leiden Madtrics [blog], 10 August, available at: https://leidenmadtrics.nl/articles/consensus-and-dissensus-in-mappings-of-science-for-sustainable-development-goals-sdgs

Rafols, I., Noyons, E., Confraria, H. and Ciarli, T. (2021), “Visualising plural mappings of science for sustainable development goals (SDGs)”, in Glänzel, W., Heeffer, S., Chi, P-S. and Rousseau, R. (Eds), Proceedings of the 18th International Conference on Scientometrics and Informetrics (ISSI2021), ISSI, pp. 949-954, available at: https://kuleuven.box.com/s/kdhn54ndlmwtil3s4aaxmotl9fv9s329 (accessed 24 December 2021).

Safón, V. (2019), “Inter-ranking reputational effects: an analysis of the Academic Ranking of World Universities (ARWU) and the Times Higher Education World University Rankings (THE) reputational relationship”, Scientometrics, Vol. 121 No. 2, pp. 897-915, doi: 10.1007/s11192-019-03214-9.

Siegel, K.M. and Bastos Lima, M.G. (2020), “When international sustainability frameworks encounter domestic politics: the sustainable development goals and agri-food governance in South America”, World Development, Vol. 135, doi: 10.1016/j.worlddev.2020.105053.

Torabian, J. (2019), “Revisiting global university rankings and their indicators in the age of sustainable development”, Sustainability: The Journal of Record, Vol. 12 No. 3, pp. 167-172, doi: 10.1089/sus.2018.0037.

Unterhalter, E. (2019), “The many meanings of quality education: politics of targets and indicators in SDG4”, Global Policy, Vol. 10 No. 1, pp. 39-51, doi: 10.1111/1758-5899.12591.

United Nations (2015), Transforming our World: The 2030 Agenda for Sustainable Development, resolution adopted by the General Assembly on 25 September 2015, United Nations, New York.

Velazquez, L., Munguia, N., Platt, A. and Taddei, J. (2006), “Sustainable university: what can be the matter?”, Journal of Cleaner Production, Vol. 14 Nos 9/11, pp. 810-819, doi: 10.1016/j.jclepro.2005.12.008.

Wright, T. (2004), “The evolution of sustainability declarations in higher education”, in Corcoran, P.B. and Wals, A.E.J. (Eds), Higher Education and the Challenge of Sustainability, Springer, Dordrecht, Netherlands, pp. 7-19, doi: 10.1007/0-306-48515-X_2.

Acknowledgements

This work has been supported by the UNIVERSEO project (Ref. GV/2021/141), funded by the regional government Generalitat Valenciana (Spain).

Corresponding author

Nuria Bautista-Puig can be contacted at: Nuria.bautista.puig@hig.se

About the authors

Nuria Bautista-Puig holds a BSc in Geography (University of Lleida, Spain), an MSc in GIS and Remote Sensing (University of Zaragoza, Spain) and a PhD in Library and Information Science (2020, Universidad Carlos III de Madrid; AASHE Campus Sustainability Research Award, Spanish thesis award). She is currently an affiliated researcher at the University of Gävle (Sweden) and provides R&D technical support at UC3M, sponsored by the EC-funded YUFERING project. Her areas of expertise include information science, organisational sustainability and bibliometrics/scientometrics. https://orcid.org/0000-0003-2404-0683

Dr Orduña-Malea is a Technical Telecommunication Engineer, holds an MA in Library Science and a Master's in Management of Multichannel Contents, and earned a PhD with a dissertation on webmetrics and university rankings. He is currently a tenured Associate Professor in the Department of Audiovisual Communication, Documentation and History of Art at the Universitat Politècnica de València. His research focuses mainly on the application of web-based techniques to science studies. He also collaborates on funded research projects and international advisory activities concerning the web visibility of universities, researchers and other online research objects. https://orcid.org/0000-0002-1989-8477

Carmen Perez-Esparrells holds a BA in Economics and Business Administration, an MA in Economics of Education and Labour Economics, and a PhD in Economics with a dissertation on university funding. She is currently an Associate Professor in the Department of Economics and Public Finance at the Autonomous University of Madrid (UAM). She has held the positions of Vice-Rector for Innovation at UAM and Associate Researcher at the Institute for Fiscal Studies, Spanish Ministry of Finance. She has been a member of the Association of Economics of Education (AEDE) since its inception and served as its President from 2012 to 2014. https://orcid.org/0000-0003-4466-9825
