The Higher Education Commission (HEC) recently ranked universities for the fifth time since 2006. The ranking has led to a considerable difference of opinion among various scholars, and has also been a subject of controversy in print and electronic media.
The objective of this article is not to criticise the HEC for its efforts, or the methodology and scores it used that have led to this debate, but to discuss whether the HEC should be ranking universities at all.
When the HEC last ranked universities during my tenure in 2013, even though the ranks awarded were not as controversial as the ones recently announced, we realised immediately that the HEC should not be in the business of ranking universities. We announced that, as a matter of policy and given the very nature of its functions, the HEC would no longer rank universities itself, but would outsource the rankings to an independent group in the future.
The HEC is both a regulatory and a funding agency. While it is perfectly fine for the HEC to critically assess universities in different areas of performance – such as teaching, research and service – and make this data available to the public at large, it is in no position to compare institutions with different focuses or programmes and then assign each a score. Moreover, the HEC funds public universities but does not support the development and operating expenditures of private ones. Some public universities receive development funding in the billions, while others get far less, and the amounts vary from year to year even for the same university.
Every year, the HEC also funds hundreds of scholars to pursue PhDs – some from within the universities, as new programmes are initiated or to meet growth or a shortage of teaching faculty with terminal degrees. It awards PhD scholarships to many more scholars through open competition, and these scholars are appointed by the HEC to various universities upon graduation. Such placements are usually at the HEC's discretion, even though universities request them, and vary from year to year. The HEC also funds research and conference activities at universities.
Thus, how the funding is distributed varies each year, depending on whether PhD scholars are placed in universities or research is supported. All these functions make the HEC an active partner of the universities, strengthening them and improving their teaching performance and research output. For the HEC then to assign scores in the very categories it supports creates uneven competition and a conflict of interest. Parents cannot be expected to buy clothes for their children and then rank them on how well they are dressed.
Then there is the issue of methodology and scoring. This is a complex process; much of the relevant data is not even available in Pakistan, nor was it sought by the HEC. The rankings announced in 2013, during my tenure, closely matched the academic reputations generally perceived by students, parents and employers alike. This was only possible after we had studied various global rankings (QS, THE, US News etc), compared them and used the scores most applicable to our ground realities – such as having no Nobel laureates serving in our universities, or not having the high percentage of international students enrolled at most top universities in the US, UK, Canada and elsewhere.
Universities are also scored in the key areas of specialisation for which they are known. In 2013, for example, we included LUMS in the business category as well; LUMS ranked number one and IBA number two, as perceived and expected. In the current ranking, LUMS is not even listed in the business category; it appears only in the general category, at number six.
The solution is to rank most universities in multiple categories. Universities like NUST and LUMS, for example, should have been included in at least three categories – general, engineering and business – since they outrank many other universities in each of these. A potential student, parent or employer who does not find LUMS in the business or engineering category is misled into assuming that these universities do not excel in such specialisations.
Likewise, in addition to the general category, the Comsats Institute of Information Technology should also have been included in the engineering category and the PMAS Arid Agriculture University, Rawalpindi should also have been categorised in agriculture, according to their main focus and key strengths.
Lastly, peer and employer perceptions of a university are extremely important to its academic reputation, and are normally gathered through a survey. In an earlier op-ed (‘Shooting for the Stars’, The News, October 15, 2015), I wrote that it is important to analyse and understand the ranking criteria in order to strategise, plan, compete and be ranked at the top. Teaching and research reputation as perceived by peers, and the preparedness of graduates as perceived by potential employers, generally constitute a significant portion of the total score (over 30 percent in most global rankings) and therefore matter the most. This data is neither available nor was it sought by the HEC for the rankings, leading to misleading results.
Rankings are important because they create healthy competition among universities, leading to self-assessment and improved performance. Universities should be ranked in multiple categories of specialisation, including their main areas of focus. Peer university and employer perceptions should be sought through a survey and appropriately scored. Above all, university rankings should be conducted by an independent group qualified to carry out such an exercise.
The writer is a former chairman of the Higher Education Commission.