Getting rid of the GRE

An investigation by Science has found that, today, just 3% of “PhD programs in eight disciplines at 50 top-ranked US universities” require applicants’ GRE scores, “compared with 84% four years ago”. This is good news about a test whose purpose I could never understand: first as a student who had to take it to apply to journalism programmes, then as a journalist who couldn’t unsee the barriers the test imposed on students from poorer countries with locally tailored learning systems and, yes, not fantastic English. (Before the test’s format was changed in 2011, taking it required takers to memorise long lists of obscure English words, an exercise devoid of purpose because takers would never remember most of those words.) Obviously many institutes still require prospective students to take the GRE, but the fact that many others are alive to questions about the utility of standardised tests and the barriers they impose on students from different socioeconomic backgrounds is heartening. The Science article also briefly explored what proponents of the GRE have to say, and I’m sure you’ll see (below) as I did that the reasons are flimsy – either because that is the actual strength of the arguments on offer or, more likely, because Science hasn’t sampled all the available arguments in favour. This said, the reason offered by a senior member of the company that devises and administers the GRE is instructive.

“I think it’s a mistake to remove GRE altogether,” says Sang Eun Woo, a professor of psychology at Purdue University. Woo is quick to acknowledge the GRE isn’t perfect and doesn’t think test scores should be used to rank and disqualify prospective students – an approach many programs have used in the past. But she and some others think the GRE can be a useful element for holistic reviews, considered alongside qualitative elements such as recommendation letters, personal statements, and CVs. “We’re not saying that the test is the only thing that graduate programs should care about,” she says. “This is more about, why not keep the information in there because more information is better than less information, right?”

Removing test scores from consideration could also hurt students, argues Alberto Acereda, associate vice president of global higher education at the Educational Testing Service, the company that runs the GRE. “Many students from underprivileged backgrounds so often don’t have the advantage of attending prestigious programs or taking on unpaid internships, so using their GRE scores serves [as a] way to supplement their application, making them more competitive compared to their peers.”

Both arguments come across as reasonable – but both are undermined by the result of an exercise that the department of Earth and atmospheric sciences at Cornell University conducted in 2020: a group evaluated prospective students’ applications for MS and PhD programmes while keeping the GRE scores hidden. When the scores were revealed, the evaluations weren’t “materially affected”. Obviously the department’s findings are not generalisable – but they indicate the GRE’s redundancy, with the added benefit that evaluators needn’t weigh the effects of the test’s exorbitant fee on the pool of applicants (around Rs 8,000 in 2014 and $160 internationally, up to $220 today) or the other pitfalls of using the GRE to ‘rank’ students’ suitability for a PhD programme. Some others quoted in the Science article vouched for “rubric-based holistic reviews”. The meaning of “rubric” in context isn’t clear from the article itself, but the term as a whole seems to mean considering students on a variety of fronts, one of which is their performance on the GRE. This also seems reasonable, but it’s not clear what the GRE brings to the table. One 2019 study found that GRE scores couldn’t usefully predict PhD outcomes in the biomedical sciences. In this context, including the GRE – even as an option – in the application process could discourage some students from applying and/or disadvantage them in being admitted, both due to the test’s requirements (including the fee) and – as a counterexample to Acereda’s reasoning – due to their scores not faithfully reflecting their ability to complete a biomedical research degree.
But in another context – of admissions to the Texas MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences (GSBS) – researchers reported in 2019 that the GRE might be useful to “extract meaning from quantitative metrics” when employed as part of a “multitiered holistic” admissions process, but that by itself it could disproportionately triage out Black, Native and Hispanic applicants. Taken together, more information is not necessarily better than less information, especially when there are other barriers to acquiring the ‘more’ bits.

Finally, while evaluators might enjoy the marginal utility of redundancy as a way to ‘confirm’ their decisions, for all test-takers the GRE is an additional and significant source of stress and drain on time. This is in addition to a seemingly inescapable diversity-performance tradeoff, which strikes beyond the limited question of whether one standardised test is a valid predictor of students’ future performance, at the heart of what the purpose of a higher-education course is. That is, should institutes consider diversity at the expense of students’ performance? The answer depends on the way each institute is structured, what its goal is and what it measures to that end. One that is focused on its members publishing papers in ‘high IF’ journals, securing high-value research grants, developing high h-indices and maintaining the institute’s own glamorous reputation is likely to see a ‘downside’ to increasing diversity. An institute focused on engendering curiosity, adherence to critical thinking and research methods, and developing blue-sky ideas is likely not to. But while the latter sounds great (strictly in the interests of science), it may be impractical from the point of view of helping tackle society’s problems and of fostering accountability in the scientific enterprise at large. The ideal institute lies somewhere in between these extremes: its admission process will need to take on a little more work – work that the GRE currently abstracts away into a single score – in exchange for the liberty to decouple from university rankings, impact factors, ‘prestige’ and other such preoccupations.

A Kuhnian gap between research publishing and academic success

There is a gap between research publishing and how it relates to academic success. On the one hand, there are scientists complaining of low funds, short-staffed labs, low-quality or absent equipment, suboptimal employment/tenure terms, bureaucratic incompetence and political interference. On the other, there are scientists who describe their success within academia in terms of being published in XYZ journals (with impact factors of PQR), having high h-indices, having so many papers to their names, etc.

These two scenarios – both very real in India and, I imagine, in most countries – don’t straightforwardly lead from one to the other. They require a bridge, a systemic symptom that makes both of them possible even when they’re incompatible with each other. This bridge is those scientists’ attitudes about what it’s okay to do in order to keep the two façades in harmonious coexistence.

What is it okay to do? For starters, keep the research-publishing machinery running in a way that allows them to evaluate other scientists on matters other than their scientific work. This way, lack of resources for research can be decoupled from scientists’ output in journals. Clever, right?

According to a study published a month ago, manuscripts that include a Nobel laureate’s name among the coauthors are six times more likely to be accepted for publication than those without a laureate’s name on the author list. This finding piles onto other known problems with peer-review, including gender-related ones: women’s papers are accepted less often, and men dominate the community of peer-reviewers. Nature News reported:

Knowledge of the high status of a paper’s author might justifiably influence a reviewer’s opinion: it could boost their willingness to accept a counterintuitive result, for example, on the basis of the author’s track record of rigour. But Palan’s study found that reviewers’ opinions changed across all six of the measures they were asked about, including the subject’s worthiness, the novelty of the information and whether the conclusions were supported. These things should not all be affected by knowledge of authorship, [Palan, one of the paper’s coauthors, said].

Palan also said the solution to this problem is for journals to adopt double-anonymised peer-review: the authors don’t know who the reviewers are and the reviewers don’t know who the authors are. The most common form of peer-review is the single-blind variety, in which the reviewers know who the authors are but not vice versa. FWIW, I prefer double-anonymised peer-review plus the journal publishing the peer-reviewers’ anonymised reports along with the paper.

Then again, modifying peer-review would still be localised to journals that are willing to adopt newer mechanisms, and would thus be a stop-gap solution that doesn’t address the use of faulty peer-review mechanisms both inside journals and in academic settings. For example, given the resource-minimal context in which many Indian research institutes and universities function, hiring and promotion committees often decide whom to hire or promote based on which journals their papers have been published in and/or the number of times those papers have been cited.

Instead, what we need is systemic change that responds to all the problems with peer-review, instead of to one problem at a time in piecemeal fashion, by improving transparency, resources and incentives. Specifically: a) make peer-review more transparent, b) give scientists the resources – including time and freedom – to evaluate each other’s work on factors localised to the context of their research (including the quality of their work and the challenges in their way), and c) incentivise scientists to do so in order to accelerate change and ensure compliance.

The scientometric numbers, originally invented to facilitate the large-scale computational analysis of the scientific literature, have come to subsume the purpose of the scientific enterprise itself: scientists often want to have good numbers instead of wanting to do good science. As a result, there is often an unusual delay – akin to magnetic hysteresis – between the resources for research being cut back and the resulting drop in productivity and quality showing up in researchers’ output. Perhaps more fittingly, it’s a Kuhnian response to paradigm change.

Second draft of India’s OA policy open for comments

The second draft of India’s first Open Access policy is up on the Department of Biotechnology (DBT) website. Until November 17, 2014, DBT Adviser Mr. Madhan Mohan will receive comments on the policy’s form and function, after which a course for implementation will be charted. The Bangalore-based Centre for Internet and Society (CIS), a non-profit research unit, announced the update on its website while also highlighting some instructive differences between the first and second drafts of the policy.

The updated policy makes it clear that it isn’t concerned about tackling the academic community’s prevalent yet questionable reliance on quantitative metrics like impact-factors for evaluating scientists’ performance. Prof. Subbiah Arunachalam, one of the members of the committee that drafted the policy, had already said as much in August this year to this blogger.

The draft also says that it will not “underwrite article-processing charges” that some publishers charge to make articles available Open Access. The Elsevier Publishing group, which publishes 25 journals in India, has asked for a clarification on this.

Adhering to the policy’s mandates means scientists whose papers were made possible by funding from the Departments of Biotechnology and Science & Technology should deposit those papers in an Open Access repository maintained either by the government or by the institution they’re affiliated with.

They must do so within two weeks of the paper being accepted for publication. If the publisher has instituted an embargo period, then the paper will be made available on the repository after the embargo lifts. CIS, which advised the committee, has recommended that this period not exceed one year.

As of now, according to the draft, “Papers resulting from funds received from the fiscal year 2012-13 onwards are required to be deposited.” A footnote in the draft says that papers under embargo can still be viewed by individuals if the papers’ authors permit it.

The DBT repository is available here, and the DST repository here. All institutional repositories will be available as sub-domains on sciencecentral.in (e.g., xyz.sciencecentral.in), while the domain itself will lead to the text and metadata harvester.

The drafting committee also intends to inculcate a healthier Open Access culture in the country. It writes in the draft that “Every year each DBT and DST institute will celebrate “Open Access Day” during the International Open Access Week by organizing sensitizing lectures, programmes, workshops and taking new OA initiatives.”

Predatory publishing, vulnerable prey

On December 29, the International Conference on Recent Innovations in Engineering, Science and Technology (ICRIEST) will kick off in Pune. It’s not a very well-known conference, though it might as well be – for all the wrong reasons.

On December 16 and 20, Navin Kabra, from Pune, submitted two papers to ICRIEST. Both were accepted and, following a notification from the conference’s organizers, Mr. Kabra was told he could present the papers on December 29 if he registered himself at a cost of Rs. 5,000.

Herein lies the rub. The papers that Mr. Kabra submitted are meaningless. They claim to be about computer science, but were created entirely by the SCIGen fake-paper generator available here. The first one, titled “Impact of Symmetries on Cryptoanalysis”, is rife with tautological statements and could not possibly have cleared peer-review. However, in the acceptance letter that Mr. Kabra received by email, the paper is claimed to have been accepted after being subjected to some process of scrutiny, with scores of 60, 70, 80 and 90.75 from different reviewers.

Why did the conference fail to reject such a paper, then? Is it subsisting on the incompetence of secretarial staff? Or is it so desperate for papers that its rejection rates are absurdly low?

Mr. Kabra’s second paper, “Use of cloud-computing and social media to determine box office performance”, might say otherwise. This one is even more brazen, containing these lines in its introduction:

As is clear from the title of this paper, this paper deals with the entertainment industry. So, we do provide entertainment in this paper. So, if you are reading this paper for entertainment, we suggest a heuristic that will allow you to read this paper efficiently. You should read any paragraph that starts with the first 4 words in bold and italics – those have been written by the author in painstaking detail. However, if a paragraph does not start with bold and italics, feel free to skip it because it is gibberish auto-generated by the good folks at SCIGen.

If this paragraph went through, the administrators of ICRIEST likely possess no semblance of interest in academic research. In fact, they could be running the conference as a front to make some quick bucks.

Mr. Kabra offers an immediate reason for perpetrating this scheme. “Lots of students are falling prey to such scams, and I want to raise awareness amongst students,” he wrote in an email.

He tells me that for the last three years, students pursuing a Bachelor of Engineering in a college affiliated with the University of Pune have been required to submit their final project to a conference – “a ridiculous requirement”, thinks Mr. Kabra. As usual, not all colleges are enforcing this rule; those that are, on the other hand, are pushing students toward conferences. Beyond falsifying data and plagiarizing reports to get them past evaluators, the next best thing students can do to secure a good grade is to sneak their project into some conference.

Research standards in the university are likely not helping, either. The successful submissions that teachers at Indian institutions hope for will never happen as long as the quality of research in the institution itself is low. Enough scientometric data exists from the last decade to support this, although I don’t know if it breaks down into graduate and undergraduate research.

(While it may be argued that scientific output is not the only way to measure the quality of scientific research at an institution, you should know something’s afoot when the quantity of output is either very high or very low relative to, say, the corresponding number of citations and the country’s R&D expenditure.)

Another reason to think neither the university nor the students’ ‘mentors’ are helping is that someone who spoke to Mr. Kabra on behalf of the University had no idea about ICRIEST. To quote from the Mid-Day article that covered this incident,

“I don’t know of any research organisation named IRAJ. I am sorry, I am just not aware about any such conference happening in the city,” said Dr Gajanan Kharate, dean of engineering in the University of Pune.

Does the University of Pune care if students have submitted papers to bogus journals? Does it check the contents of the research itself, or does it simply rely on whether students’ ‘papers’ are accepted? No matter; what will change hence? I’m not sure. I won’t be surprised if nothing changes at all. However, there is a place to start.

Prof. Jeffrey Beall is the Scholarly Initiatives Librarian at the University of Colorado, Denver, and he maintains an exhaustive list of questionable journals and publishers. The list is well-referenced, constantly updated, and commonly consulted to check whether dubious characters have approached research scholars.

On the list is the Institute for Research and Journals (IRAJ), which is organizing ICRIEST. In an article in The Hindu on September 26, 2012, Prof. Beall says, “They want others to work for free, and they want to make money off the good reputations of honest researchers.”

Mr. Kabra told me he had registered himself for the presentation—and not before he was able to bargain with them, “like … with a vegetable vendor”, and avail a 50 per cent discount on the fees. As silly as it sounds, this is not the mark of a reputable institution but a telltale sign of a publisher incapable of understanding the indignity of such bargains.

Another publisher on Prof. Beall’s list, Asian Journal of Mathematical Sciences, is sly enough to offer a 50 per cent fee-waiver because they “do not want fees to prevent the publication of worthy work”. Yet another journal, Academy Publish, is just honest: “We currently offer a 75 per cent discount to all invitees.”

Other signs, of course, include misspellings, as in “Dear Sir/Mam”.

At the end of the day, Mr. Kabra was unable to go ahead with the presentation because, he said, he was depressed by the sight of Masters students at ICRIEST – some of whom had come there, in western India, from the eastern state of Odisha. That’s the journey they’re willing to make when pushed by the lure of grades on one side and the existence of conferences like ICRIEST on the other.