The rankings race mirage

By Dr Ayesha Razzaque
September 10, 2025

Every year, around the start of the summer season, major rankings agencies release a raft of university rankings for the coming year. The ones that have acquired the most currency in Pakistan in the last decade or so are the QS World University Rankings and the Times Higher Education (THE) World University Rankings. The media and the public have been conditioned to ask the question, ‘How many Pakistani universities made it into the top 500 this year?’

At a time when public funding for higher education has been stagnant for eight years straight, and many public universities are running deficits and are unable to meet their financial obligations, I posit that the question we should be asking ourselves is, ‘Should we even be thinking about running in the rankings race?’

This year was a bit of a shock for people: only two Pakistani universities made it into the top 500 of the QS World University Rankings, with another nine ranked between 501 and 1000. A few Pakistani universities have also made showings in THE’s World University Rankings from time to time. Both of these rankings are relatively new, having started in 2004. The original university ranking was the one by US News & World Report, first published in 1983. Very few Pakistani universities have broken into the top 500 in the US News ranking. Why might that be?

To understand why, we need to look at what goes into each of these rankings. Broadly speaking, most of the input data falls into one of three classes: reputation surveys (of employers and of faculty at peer universities), self-reported metrics, and bibliometric data (quantitative measures of publications). Reputation surveys are a ripe target for manipulation, self-reported metrics are vulnerable to creative interpretation, and bibliometric data can set up perverse research incentives.

While most inputs of all three types can be gamed, the first two, reputation surveys and self-reported metrics, are perhaps the most vulnerable. QS and THE weight reputation surveys at approximately 45 per cent and 33 per cent, respectively, and self-reported metrics at 30 per cent and 40 per cent, respectively. US News weights reputation at only 25 per cent, uses no self-reported data, and draws 65 per cent from bibliometric data, making it harder to game. That is probably why Pakistani universities have never performed very well in the US News rankings, and why those rankings have been largely overlooked in media reporting and by the public.

In addition to all of the above, I am sure readers can see the fallacy, the gross oversimplification, of reducing the multi-dimensional activities of universities to a single ordinal number. To demonstrate: this year’s QS World University Rankings put Quaid-e-Azam University (a recent recipient of a bailout to meet payroll obligations) at number 354. If this ranking is to be believed, that makes it ‘better’ than George Washington University in DC (358), Northeastern University in Boston (384), the University of Florence (404), Universitat Mannheim in Germany (416) and the University of California, Santa Cruz (458). Even if you are only vaguely familiar with any one of these universities, you can tell that such a claim is nonsense. Pick any university in the ranking as a reference point and look at the names ranked just below it, and a few rows down; you will find yourself shaking your head.

Ranking agencies know that some universities game their rankings more aggressively than others, so they take statistical countermeasures and tweak their methodologies to scuttle attempts that produce abnormal blips and sudden leaps in rank.

There is also the fact that very few ranking inputs measure what matters to a university’s largest constituency: students, in particular undergraduates, who make up the largest subgroup at most institutions. Students care about the quality of instruction, the availability of learning resources and support, and the like. In the UK, universities conduct (and compete on the results of) the annual National Student Survey (NSS), which is widely understood to be a good gauge of the student experience at institutions.

It asks 29 questions (plus some optional ones) like: How good are the teaching staff at explaining things? How often does your course challenge you to achieve your best work? How well has your course developed the knowledge and skills that you think you will need for your future? How fair has the marking and assessment been in your course? How well have teaching staff supported your learning? How easy is it to access subject-specific resources when you need them?

I can imagine many of our university administrators squirming at the thought of their institutions being ranked and held accountable for performance on an NSS-like survey with such direct questions. Since our goal should be to distinguish institutions that deliver value from those that do not, it might be better still to create a national rating, rather than a ranking, based on such a survey. This is not a new idea, and here is where it gets interesting.

A few years ago, the Higher Education Commission (HEC) introduced a rating system that graded public and private universities on a scale from A+ to D. Although the methodology behind it was never publicly disclosed, when I reviewed the ratings they largely conformed to my perception of the institutions. However, the rating was never officially released, and many university vice-chancellors remain unaware of its existence.

Word is, many for-profit universities that fared poorly on the rating exerted their influence to ensure it did not see the light of day. But here’s the kicker: I was made aware of its existence by someone who works on admissions at a UK university, has access to it, and routinely consults it when making admissions decisions for applicants who graduated from Pakistani universities. Like a man-made virus escaped from a government lab in a horror film, the rating got out and is in the wild.

While people running and working in universities may reluctantly agree that universities are for students (not paper mills in the service of faculty promotion requirements), most feel it is beneath the dignity of the scholarly profession to accept students as their principal stakeholders, preferring to see their mission in vague terms like ‘enlightenment of society’ and ‘creation of knowledge’ instead.

The community of students attending universities has changed, and with it, their and society’s expectations of academia. The time when a university education was the exclusive preserve of the progeny of royalty, the rich and the elite, unconcerned with returns on their investment, has long passed. For most people attending universities now, one of the goals of a better education is access to better livelihoods. Enlightenment and knowledge creation are all well and good, but they must come in addition to, not in place of, employability; it does not have to be one or the other.

Once upon a time, the cost of a good university education could be covered out of a salary as a running monthly expense; it had not yet risen to the level of a major life expense that families must save and plan for years in advance, and possibly go into debt for. Back then, it might have been acceptable to hold on to the single goal of enlightenment, unperturbed by the harsh practicalities of life, but that is no longer sufficient. Over the last 25 years or so, the price tag of a nice cellphone has gone up from $200 to $1,000, and with it, people’s expectations of what it can do for them; the same is true of a university education.

Universities need to get good before they can start competing against the best in the world, and the bitter truth is that, even if I am being charitable, beyond about two dozen universities I would be hard-pressed to find one I could describe as ‘good’. All major rankings reward research activity, yet all four provinces are littered with public universities unable to meet payroll obligations. Research takes money, and governments have neither the resources nor the will to support such noble endeavours.

Currently, there are institutions classified as universities that lack functioning bathrooms and drinking water on their premises, reminiscent of rural public schools. We need to be honest with ourselves and reclassify universities as research-focused, teaching-focused or vocational/community colleges, with goals to match. In the meantime, the HEC, its former chairmen, the MoFEPT, political representatives and the media need to stop obsessing over university rankings and misdirecting the public. For most (not all) institutions labelled universities today, the message is simple: get better at serving your students’ primary needs before you contemplate competing with the world’s best.


The writer (she/her) has a PhD in Education.