Putting tests to the test

Do admission tests ensure the quality of entrants to higher education institutions?

Over a decade ago, the Higher Education Commission (HEC) began a series of wide-ranging reforms to improve the quality of Pakistan’s universities. These reforms touched nearly every aspect of higher education, and changing the conditions of admission to degree-level programmes was one of their pillars. In place of the earlier merit-based admission system, the HEC introduced additional admission tests to select applicants.

The policy required applicants to Masters, MPhil, and Ph.D. programmes to take tests prepared and administered by the National Testing Service (NTS), a not-for-profit private sector test provider that has been expanding its testing services for both recruitment and university admissions. Established in 2002, the NTS was engaged by the HEC in 2003 to administer tests to applicants seeking admission to these programmes in all HEC-recognised public and private universities. These tests were called Graduate Admission Tests (GAT).

The HEC mandated two kinds of GATs: GAT (General) for Masters and MPhil and GAT (Subject) for Ph.D. According to the admission criteria specified in a document hosted by the HEC’s Quality Assurance (QA) Division, a minimum cumulative score of 50 per cent in GAT (General) was set as a requirement for admission. Readers may note that the GATs were meant to be similar to the Graduate Record Examinations (GRE) offered globally by the Educational Testing Service (ETS), a private test provider in the United States whose tests are required by most American universities for admission to their programmes.

Ostensibly, the HEC’s policy of uniform testing for admission to graduate (Masters, MPhil, and Ph.D.) and professional (Law) programmes was motivated by concern about the low quality of these programmes. It rested on the belief that using tests as admission filters would somehow improve the quality of students joining the programmes, which would, in the HEC’s expectation, result in an overall improvement in the quality of offerings in Pakistan’s higher education institutions.

But not everyone agreed, and in 2011 the policy was challenged in the Lahore High Court through public interest litigation. In its judgment, the court decided against the HEC’s position and directed it not to sponsor any particular testing service. The court emphasised that HEC-recognised universities were under no lawful obligation to conduct tests organised by the NTS or to be bound by the results of such tests. It also barred the HEC from contracting a private sector testing firm for GATs until certain procedural actions were first taken.

The HEC was understandably unhappy with the decision. In a letter written to the universities in May last year, the HEC’s chairperson, Dr Mukhtar Ahmed, suggested that discontinuing admission tests would derail the HEC’s process of quality assurance while negatively affecting the international compatibility of MPhil and Ph.D. awards. Dr Ahmed exhorted the universities to "refrain from giving up the requirement of admission tests till such time that the HEC prescribes a new test/testing body for the purpose in accordance with the order of LHC."

We share the HEC’s concern for assuring the quality of higher education programmes and appreciate its efforts and initiatives. In fact, both the advocates and the detractors of using uniform (or standardised) tests to select candidates would agree on the need to improve the quality of professional programmes in Pakistan. At the same time, constructive critique can only help improve policies and their implementation. It is in that spirit that we raise, below, some questions about using tests to filter candidates.

The HEC seems confident that the quality of graduate and professional programmes is strongly connected to the use of tests as a filtering device. But this position is, at best, a hypothesis. Prospective policies must be examined and adopted on the basis of evidence, not anecdotes or whims. Yet the HEC’s position on the efficacy and predictive value of admission tests does not seem to rest on robust evidence generated in Pakistan’s context. For the HEC’s case for admission tests to hold, it must address the following concerns on the basis of rigorous evidence.

First, how do we know that using tests makes the entrants to higher education programmes significantly different from, and better than, those admitted to similar programmes under the earlier merit-based admission policy? It is entirely possible that it makes a significant difference. But it is also possible that it does not. The HEC could settle this question by commissioning professional researchers from universities to study the difference rigorously and report on the effectiveness, or otherwise, of the test-based admission policy in improving the quality of entrants. But, to the best of our knowledge, the HEC has not undertaken any such study.

Second, given the huge disparity in the quality of basic and secondary education imparted to different socio-economic segments of our society, admission tests can further exclude the disadvantaged from higher education. Do we, as a society, want this to happen? The HEC should commission another study to determine the proportion of children from public schools and low-cost private schools making it into institutions of higher education after the introduction of tests. A count of students from different backgrounds in institutions that require tests for admission may well reveal low representation of students from the disadvantaged segments of our society.

We are not arguing here that higher education institutions should disregard excellence. However, it is well known that tests, no matter how well standardised, carry cultural biases that can give applicants from some socio-economic and cultural backgrounds an unfair advantage over others. For example, the tests may be biased against some segments of the population merely because of their use of English and other cultural cues that are typically more accessible to the urban elite.

During our own education, we have come across students who start with an initial disadvantage but improve tremendously over the course of a programme through their inherent potential and hard work. Using tests as sorting devices for admission can simply exclude such students from higher education, and such exclusion can have disastrous social consequences. We must also note that the HEC is a public sector organisation paid for by taxpayers. As such, its mandate to assure the quality of higher education must be implemented in ways that protect the public interest. Consequently, it must seek ways of assuring both equity and excellence, and uniform admission tests are not always the best instrument for assuring equity.

Third, does every country with high-performing higher education institutions use admission tests? Certainly not! Many European countries, including the UK, give precedence to applicants’ performance in higher secondary school; students are not tested twice in such systems. Even in the United States, where the testing market for college admissions first developed its niche, the debate has not settled conclusively in favour of using tests such as the GRE, SAT or ACT for admissions.

To our knowledge, more than 200 four-year institutions in the United States base their admissions policies on the assumption that test scores do not necessarily equal merit. These institutions no longer use the SAT or ACT to make admissions decisions for incoming students who meet their other requirements. The SAT and ACT are meant for undergraduate admissions, but the logic for introducing them has been similar to the one the HEC has used for graduate-level admissions.

There is also widespread disagreement about the use of the GRE for admission to graduate-level programmes; its ability to predict academic success, known as predictive validity, remains controversial. Moreover, equally high-quality institutions in Europe and the United Kingdom do not require the GRE or similar tests for admission. Wouldn’t it be worthwhile for the HEC to commission a comparative study so that its policies are fully informed by the debates about student admission requirements in other countries?

Additional testing for college applicants carries additional costs, which are typically passed on to the public. As such, there must be a solid, evidence-based rationale for mandating such tests. To the best of our knowledge, the HEC has offered no such rationale. In the absence of rigorous policy analysis and evidence, the HEC’s policy of mandating tests is open to accusations that it will find hard to defend. Furthermore, by making its policies truly evidence-based, the HEC would be seen to apply the same quality assurance standards to its own policymaking that it expects to enforce in higher education institutions.

Irfan Muzaffar works independently as a teacher/researcher and is interested in politics of education reforms. He can be reached at imuzaffar@gmail.com

Ayesha Razzaque is an independent education researcher with a PhD in Education from Michigan State University. She can be reached at arazzaque@gmail.com. 
