Why it’s important to address plagiarism

Plagiarism is a tricky issue. If it seems straightforward to you, ask yourself if you’re assuming that the plagiariser (plagiarist?) is fluent in reading and writing English – especially writing. The answer’s probably ‘yes’. For someone entering an English-using universe for the first time, certain turns of phrase and certain ways to articulate complicated concepts stick the first time you read them, and when the time comes for you to spell out the same ideas and concepts, you passively, inadvertently recall them and reuse them. You don’t think – at least at first – that they’re someone else’s words, more so if you haven’t been taught, through no fault of your own, what academic plagiarism is and/or that it’s bad.

This is also why there’s a hierarchy of plagiarism. For example, if you’re writing a scientific paper and you copy another paper’s results, that’s worse than if you copy verbatim the explanation of a certain well-known idea. This is why the physicist Praveen Chaddah, a former director of the UGC-DAE Consortium for Scientific Research, wrote in 2014:

There are worse offences than text plagiarism — such as taking credit for someone else’s research ideas and lifting their results. These are harder to detect than copy-and-pasted text, so receive less attention. This should change. To help, academic journals could, for instance, change the ways in which they police and deal with such cases.

But if you’re fluent in writing English, if you know what plagiarism is and plagiarise anyway (without seeking resources to help you beat its temptation), and/or if you’re stealing someone else’s idea and calling it your own, you deserve the flak and (proportionate) sanctions coming your way. In this context, a new Retraction Watch article by David Sanders makes for interesting reading. According to Sanders, in 2018, he wrote to the editors of a journal that had published a paper in 2011 with lots of plagiarised text. After a back-and-forth, the editors told Sanders they’d look into it. He asked them again in 2019 and May 2021 and received the same reply on both occasions. Then on July 26 the journal published a correction to the 2011 article. Sanders wasn’t happy and wrote back to the editors, one of whom replied thus:

Thank you for your email. We went through this case again, and discussed whether we may have made the wrong decision. We did follow the COPE guidelines step by step and used several case studies for further information. This process confirmed that an article should be retracted when it is misleading for the reader, either because the information within is incorrect, or when an author induces the reader to think that the data presented is his own. As this is a Review, copied from other Reviews, the information within does not per se mislead the reader, as the primary literature is still properly cited. We agree that this Review was not written in a desirable way, and that the authors plagiarised a large amount of text, but according to the guidelines the literature must be considered from the point of view of the reader, and retractions should not be used as a tool to punish authors. We therefore concluded that a corrigendum was the best way forward. Hence, we confirm our decision on this case.

Thank you again for flagging this case in the first place, which allowed us to correct the record and gain deeper insights into publishing ethics, even though this led to a solution we do not necessarily like.

Sanders wasn’t happy: he wrote on Retraction Watch that “the logic of [the editor’s] message is troubling. The authors engaged in what is defined by COPE (the Committee on Publication Ethics) as ‘Major Plagiarism’ for which the prescribed action is retraction of the published article and contacting the institution of the authors. And yet the journal did not retract.” The COPE guidelines summarise the differences between minor and major plagiarism this way:

Source: https://publicationethics.org/files/COPE_plagiarism_disc%20doc_26%20Apr%2011.pdf

Not being fluent in English could render the decisions made using this table less than fair, for example because an author could plagiarise several paragraphs yet honestly have no intention to deceive – simply because they didn’t think they needed to be that careful. I know this might sound laughable to a scientist operating in the US or Europe, out of a better-run, better-organised and better-funded institute, who has been properly schooled in the ins and outs of academic ethics. But it’s true: the bulk of India’s scientists work outside the IITs, IISERs, DAE/DBT/DST-funded institutes and the more progressive private universities (although only one – Ashoka – comes to mind). Their teachers before them worked in the same resource-constrained environments, and for most of them the purpose of scientific work wasn’t science as much as an income. Most of them probably never used plagiarism-checking tools either, at least not until they got into trouble one time and then found out about such things.

I myself found out about such tools in an interesting way – when I reported that Appa Rao Podile, the former vice-chancellor of the University of Hyderabad, had plagiarised in some of his papers, around the time students at the university were protesting the university’s response to the death of Rohith Vemula. When I emailed Podile for his response, he told me he would like my help with the tools with which he could spot plagiarism. I thought he was joking, but after a series of unofficial enquiries over the next year or so, I learnt that plagiarism-checking software was not at all the norm in state-funded colleges and second-tier universities around the country, even though solutions like Copyscape were relatively cheap. I had no reason to let Podile off the hook – not because he hadn’t used plagiarism-checking software but because he was the vice-chancellor of a major university and should have done better than claim ignorance.

(I also highly recommend this November 2019 article in The Point, asking whether plagiarism is wrong.)

According to Sanders, the editor who replied didn’t retract the paper because he thought it wasn’t ‘major plagiarism’, according to COPE – whereas Sanders thought it was. The editor appears to have reasoned his way out of the allegation, in his own view at least, by saying that the material printed in the paper wasn’t misleading because it had been copied from non-misleading original material, and that the supposedly lesser issue was that while it had been cited, it hadn’t been syntactically attributed as such (placed between double quotes, for example). The issue for Sanders, with whom I agree here, is that the authors had copied the material and presented it in a way that indicated they were its original creators. The lengths to which journal editors can go to avoid retracting papers, and therefore protect their journal’s reputation, ranking or whatever, are astounding. I also agree with Sanders when he says that by refusing to retract the article, the editors are practically encouraging misconduct.

I’d like to go a step further and ask: when journal editors think like this, where does that leave Indian scientists of the sort I’ve described above – who are likely to do better with the right help and guidance? In 2018, Rashmi Raniwala and Sudhir Raniwala wrote in The Wire Science that the term ‘predatory’, in ‘predatory journals’, was a misnomer:

… it is incorrect to call them ‘predatory’ journals because the term predatory suggests that there is a predator and a victim. The academicians who publish in these journals are not victims; most often, they are self-serving participants. The measure of success is the number of articles received by these journals. The journals provide a space to those who wanted easy credit. And a large number of us wanted this easy credit because we were, to begin with, not suitable for the academic profession and were there for the job. In essence, these journals could not have succeeded without an active participation and the connivance of some of us.

It was a good article at the time, especially in the immediate context of the Raniwalas’ fight to have known defaulters suitably punished. There are many bad-faith actors in the Indian scientific community and what the Raniwalas write about applies to them without reservation (ref. the cases of Chandra Krishnamurthy, R.A. Mashelkar, Deepak Pental, B.S. Rajput, V. Ramakrishnan, C.N.R. Rao, etc.). But I’m also confident enough to say now that predatory journals exist, typified by editors who place the journal before the authors of the articles that constitute it, who won’t make good-faith efforts to catch and correct mistakes at the time they’re pointed out. It’s marginally more disappointing that the editor who replied to Sanders replied at all; most don’t, as Elisabeth Bik has repeatedly reminded us. He bothered enough to engage – but not enough to give a real damn.

The awesome limits of superconductors

On June 24, a press release from CERN said that scientists and engineers working on upgrading the Large Hadron Collider (LHC) had “built and operated … the most powerful electrical transmission line … to date”. The transmission line consisted of four cables – two capable of transporting 20 kA of current and two, 7 kA.

The ‘A’ here stands for ‘ampere’, the SI unit of electric current. Twenty kilo-amperes is an extraordinary amount of current, nearly equal to the amount in a single lightning strike.

In the particulate sense: one ampere is the flow of one coulomb per second. One coulomb is equal to around 6.24 quintillion elementary charges, where each elementary charge is the charge of a single proton or electron (with opposite signs). So a cable capable of carrying a current of 20 kA can essentially transport 124.8 sextillion electrons per second.
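
As a quick sanity check of that arithmetic, here’s a minimal sketch; the elementary charge is the standard SI value, not a figure from the CERN release.

```python
# Back-of-the-envelope check: how many electrons per second does 20 kA correspond to?

ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs per electron (exact SI value)

current = 20e3  # amperes, i.e. coulombs per second

charges_per_coulomb = 1 / ELEMENTARY_CHARGE          # ~6.24e18: the '6.24 quintillion'
electrons_per_second = current * charges_per_coulomb

print(f"{charges_per_coulomb:.3e} elementary charges per coulomb")
print(f"{electrons_per_second:.4e} electrons per second")  # ~1.248e23, i.e. 124.8 sextillion
```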

According to the CERN press release (emphasis added):

The line is composed of cables made of magnesium diboride (MgB2), which is a superconductor and therefore presents no resistance to the flow of the current and can transmit much higher intensities than traditional non-superconducting cables. On this occasion, the line transmitted an intensity 25 times greater than could have been achieved with copper cables of a similar diameter. Magnesium diboride has the added benefit that it can be used at 25 kelvins (-248 °C), a higher temperature than is needed for conventional superconductors. This superconductor is more stable and requires less cryogenic power. The superconducting cables that make up the innovative line are inserted into a flexible cryostat, in which helium gas circulates.

The part in bold – that the line ‘can transmit much higher intensities than traditional non-superconducting cables’ – could have been more explicit and noted that superconductors, including magnesium diboride, can’t carry an arbitrarily higher amount of current than non-superconducting conductors. There is a limit, and it arises for much the same reason that a normal conductor’s current-carrying capacity is limited.

This explanation wouldn’t change the impressiveness of this feat and could even interfere with readers’ impression of the most important details, so I can see why the person who drafted the statement left it out. Instead, I’ll take this matter up here.

An electric current is generated between two points when electrons move from one point to the other. The direction of current is opposite to the direction of the electrons’ movement. A metal that conducts electricity does so because its constituent atoms have one or more valence electrons that can flow throughout the metal. So if a voltage arises between two ends of the metal, the electrons can respond by flowing around, birthing an electric current.

This flow isn’t perfect, however. Sometimes, a valence electron can bump into atomic nuclei, impurities – atoms of other elements in the metallic lattice – or be thrown off course by vibrations in the lattice of atoms, produced by heat. Such disruptions across the metal collectively give rise to the metal’s resistance. And the more resistance there is, the less current the metal can carry.

These disruptions often heat the metal as well. This happens because electrons don’t just flow between the two points across which a voltage is applied. They’re accelerated. So as they’re speeding along and suddenly bump into an impurity, they’re scattered into random directions. Their kinetic energy then no longer contributes to the electric energy of the metal and instead manifests as thermal energy – or heat.

If the electrons bump into nuclei, they could impart some of their kinetic energy to the nuclei, causing the latter to vibrate more, which in turn means they heat up as well.
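
To get a sense of how much heat this represents at the currents in question, here’s a rough sketch. The resistivity is copper’s textbook room-temperature value; the cable length and cross-section are illustrative assumptions of mine, not the dimensions of any real transmission line.

```python
# Rough Joule-heating estimate for a copper cable carrying 20 kA.

RESISTIVITY_COPPER = 1.68e-8  # ohm-metre at room temperature (textbook value)

length = 100.0   # metres of cable (assumed)
area = 8e-4      # square metres of copper cross-section, i.e. 8 cm^2 (assumed)
current = 20e3   # amperes

resistance = RESISTIVITY_COPPER * length / area  # R = rho * L / A
power = current**2 * resistance                  # P = I^2 * R, dissipated as heat

print(f"resistance: {resistance * 1e3:.2f} milliohm")
print(f"heat dissipated: {power / 1e3:.0f} kW")  # several hundred kilowatts over just 100 m
```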

Copper and silver have high conductance because their valence electrons are readily available to conduct electricity and are scattered to a lesser extent than in other metals. As a result, these two also don’t heat up as quickly as other metals might, allowing them to transport a higher current for longer. Copper in particular has a relatively long mean free path: the average distance an electron travels before being scattered.
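
The link between scattering and conductance can be made a little more concrete with the Drude model, in which the conductivity is set by how long an electron travels between scattering events. The copper figures below are approximate textbook values I’m supplying for illustration, not numbers from the post.

```python
# Drude-model estimate: conductivity from carrier density and mean free path.

ELECTRON_CHARGE = 1.602e-19  # coulombs
ELECTRON_MASS = 9.109e-31    # kilograms

n = 8.5e28        # free-electron density of copper, per m^3 (approximate)
v_fermi = 1.57e6  # Fermi velocity of copper, m/s (approximate)
mfp = 39e-9       # mean free path at room temperature, metres (approximate)

tau = mfp / v_fermi                                   # average time between scattering events
sigma = n * ELECTRON_CHARGE**2 * tau / ELECTRON_MASS  # Drude conductivity, S/m

print(f"estimated conductivity: {sigma:.2e} S/m")  # ~6e7 S/m, close to copper's measured value
```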

In superconductors, the picture is quite different because quantum physics assumes a more prominent role. There are different types of superconductors according to the theories used to understand how they conduct electricity with zero resistance and how they behave in different external conditions. The electrical behaviour of magnesium diboride, the material used to transport the 20 kA current, is described by Bardeen-Cooper-Schrieffer (BCS) theory.

According to this theory, when certain materials are cooled below a certain temperature, the residual vibrations of their atomic lattice encourage their valence electrons to overcome their mutual repulsion and become correlated, especially in terms of their movement. That is, the electrons pair up.

While individual electrons belong to a class of particles called fermions, these electron pairs – a.k.a. Cooper pairs – belong to another class called bosons. One difference between these two classes is that bosons don’t obey Pauli’s exclusion principle: that no two fermions in the same quantum system (like an atom) can have the same set of quantum numbers at the same time.

As a result, all the electron pairs in the material are now free to occupy the same quantum state – which they will when the material is supercooled. When they do, the pairs collectively make up an exotic state of matter called a Bose-Einstein condensate: the electron pairs now flow through the material as if they were one cohesive liquid.

In this state, even if one pair gets scattered by an impurity, the current doesn’t experience resistance because the condensate’s overall flow isn’t affected. In fact, because all the pairs occupy a single collective state, an impurity can’t simply knock one pair out of the flow: disrupting one pair means disturbing the condensate as a whole, which costs far more energy than routine scattering events can supply. This feature affords the condensate a measure of robustness.

But while current can keep flowing through a BCS superconductor with zero resistance, the superconducting state itself doesn’t have infinite persistence. It can break if it stops being cooled below a specific temperature, called the critical temperature; if the material is too impure, contributing to a sufficient number of collisions to ‘kick’ all electron pairs out of their condensate reverie; or if the current density crosses a particular threshold.
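
In code form, these knobs look something like the toy check below. The critical temperature of magnesium diboride (about 39 K) is a well-established figure; the critical field and critical current density here are placeholders I’ve made up for illustration, since the real values depend on how the material is made.

```python
# Toy check of when a BCS superconducting state survives, per the conditions above.
# (Impurity isn't a single number, so it isn't modelled here.)

def is_superconducting(temperature_K, field_T, current_density_A_mm2,
                       critical_temperature_K=39.0,       # MgB2's Tc, roughly 39 K
                       critical_field_T=15.0,             # placeholder upper critical field
                       critical_current_density=1000.0):  # placeholder Jc, A/mm^2
    """True only while temperature, field and current density all stay below critical values."""
    return (temperature_K < critical_temperature_K
            and field_T < critical_field_T
            and current_density_A_mm2 < critical_current_density)

print(is_superconducting(25, 0.5, 200))  # operating around 25 K: True
print(is_superconducting(45, 0.5, 200))  # too warm: False
```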

At the LHC, the magnesium diboride cables will be wrapped around electromagnets. When a large current flows through the cables, the electromagnets will produce a magnetic field. The LHC uses a circular arrangement of such magnetic fields to bend the beam of protons it will accelerate into a circular path. The more powerful the magnetic field, the more energetic the protons it can keep on that path. The current operational field strength is 8.36 tesla, about 128,000 times more powerful than Earth’s magnetic field. The cables will be insulated but they will still be exposed to a large magnetic field.
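
A rough calculation shows why fields of this order are needed. For an ultra-relativistic proton, the bending radius is r = p/(qB); the constants below are standard values, and the 8.36 T figure is the one quoted above.

```python
# Why bending 7 TeV protons takes ~8 T dipoles: r = p / (qB) for an ultra-relativistic proton.

SPEED_OF_LIGHT = 2.998e8   # m/s
PROTON_CHARGE = 1.602e-19  # coulombs

beam_energy_eV = 7e12  # 7 TeV design energy per proton
field_T = 8.36         # dipole field strength quoted above

momentum = beam_energy_eV * PROTON_CHARGE / SPEED_OF_LIGHT  # kg·m/s, since E >> m_p c^2
bending_radius = momentum / (PROTON_CHARGE * field_T)       # metres

print(f"bending radius: {bending_radius:.0f} m")
# ~2,800 m. The ring's overall radius is larger (~4.2 km) because the dipoles
# occupy only part of the 27-km circumference.
```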

Type I superconductors completely expel an external magnetic field when they transition to their superconducting state. That is, the magnetic field can’t penetrate the material’s surface and enter the bulk. Type II superconductors are slightly more complicated. Below one critical temperature and one critical magnetic field strength, they behave like type I superconductors. Below the same temperature but in a somewhat stronger magnetic field, they remain superconducting while allowing the field to penetrate their bulk to a certain extent. This is called the mixed state.

A hand-drawn phase diagram showing the conditions in which a mixed-state type II superconductor exists. Credit: Frederic Bouquet/Wikimedia Commons, CC BY-SA 3.0

Say a uniform magnetic field is applied over a mixed-state superconductor. The field will plunge into the material’s bulk in the form of vortices. All these vortices will carry the same magnetic flux – the amount of magnetic field passing through a given area – and will repel each other, settling down in a triangular pattern, equidistant from each other.

An annotated image of vortices in a type II superconductor. The scale is specified at the bottom right. Source: A set of slides entitled ‘Superconductors and Vortices at Radio Frequency Magnetic Fields’ by Ernst Helmut Brandt, Max Planck Institute for Metals Research, October 2010.
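
Each vortex carries exactly one quantum of magnetic flux, Φ0 = h/2e, and for a given applied field this fixes how densely the vortices pack. A minimal sketch, using standard constants and an illustrative field of 8 T (my choice, simply of the order used at the LHC):

```python
import math

# Flux quantum and the spacing of the triangular (Abrikosov) vortex lattice.

PLANCK = 6.62607015e-34            # J·s
ELEMENTARY_CHARGE = 1.602176634e-19  # C

flux_quantum = PLANCK / (2 * ELEMENTARY_CHARGE)  # ~2.07e-15 weber carried by each vortex

field_T = 8.0  # tesla, illustrative value
spacing = math.sqrt(2 * flux_quantum / (math.sqrt(3) * field_T))  # lattice spacing, metres

print(f"flux quantum: {flux_quantum:.3e} Wb")
print(f"vortex spacing at {field_T} T: {spacing * 1e9:.0f} nm")  # a few tens of nanometres
```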

When an electric current passes through this material, the vortices are slightly displaced, and also begin to experience a force proportional to how closely they’re packed together and their pattern of displacement. As a result, to quote from this technical (yet lucid) paper by Praveen Chaddah:

This force on each vortex … will cause the vortices to move. The vortex motion produces an electric field [1] parallel to [the direction of the existing current], thus causing a resistance, and this is called the flux-flow resistance. The resistance is much smaller than the normal state resistance, but the material no longer [has] infinite conductivity.

1. According to Maxwell’s equations of electromagnetism, a changing magnetic field produces an electric field.

The vortices’ displacement depends on the current density: the greater the number of electrons being transported, the more flux-flow resistance there is. So the magnesium diboride cables can’t simply carry more and more current. At some point, setting aside other sources of resistance, the flux-flow resistance itself will damage the cable.
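
One common way to estimate the size of this effect is the Bardeen–Stephen approximation, in which the flux-flow resistivity scales as the normal-state resistivity times B/Bc2 once the vortices are actually moving. This is a standard textbook estimate, not something from Chaddah’s paper or CERN, and the numbers below are illustrative choices of mine rather than measured values for the LHC’s cables.

```python
# Bardeen-Stephen estimate of flux-flow resistivity in the mixed state:
# rho_flux_flow ~ rho_normal * (B / B_c2), valid once the vortices are moving.

def flux_flow_resistivity(rho_normal, field_T, upper_critical_field_T):
    return rho_normal * field_T / upper_critical_field_T

rho_normal = 1e-7  # ohm-metre, illustrative normal-state resistivity
field_T = 1.0      # tesla, applied field (illustrative)
B_c2 = 15.0        # tesla, illustrative upper critical field

print(f"{flux_flow_resistivity(rho_normal, field_T, B_c2):.2e} ohm-metre")
# Much smaller than the normal-state value, but not zero - which is the point.
```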

There are ways to minimise this resistance. For example, the material can be doped with impurities that will ‘pin’ the vortices to fixed locations and prevent them from moving around. However, optimising these solutions for a given magnetic field and other conditions involves complex calculations that we don’t need to get into.

The point is that superconductors have their limits too. And knowing these limits could improve our appreciation for the feats of physics and engineering that underlie achievements like cables being able to transport 124.8 sextillion electrons per second with zero resistance. In fact, according to the CERN press release,

The [line] that is currently being tested is the forerunner of the final version that will be installed in the accelerator. It is composed of 19 cables that supply the various magnet circuits and could transmit intensities of up to 120 kA!

§

While writing this post, I was frequently tempted to quote from Lisa Randall’s excellent book-length introduction to the LHC, Knocking on Heaven’s Door (2011). Here’s a short excerpt:

One of the most impressive objects I saw when I visited CERN was a prototype of LHC’s gigantic cylindrical dipole magnets. Even with 1,232 such magnets, each of them is an impressive 15 metres long and weighs 30 tonnes. … Each of these magnets cost EUR 700,000, making the net cost of the LHC magnets alone more than a billion dollars.

The narrow pipes that hold the proton beams extend inside the dipoles, which are strung together end to end so that they wind through the extent of the LHC tunnel’s interior. They produce a magnetic field that can be as strong as 8.3 tesla, about a thousand times the field of the average refrigerator magnet. As the energy of the proton beams increases from 450 GeV to 7 TeV, the magnetic field increases from 0.54 to 8.3 teslas, in order to keep guiding the increasingly energetic protons around.

The field these magnets produce is so enormous that it would displace the magnets themselves if no restraints were in place. This force is alleviated through the geometry of the coils, but the magnets are ultimately kept in place through specially constructed collars made of four-centimetre thick steel.

… Each LHC dipole contains coils of niobium-titanium superconducting cables, each of which contains stranded filaments a mere six microns thick – much smaller than a human hair. The LHC contains 1,200 tonnes of these remarkable filaments. If you unwrapped them, they would be long enough to encircle the orbit of Mars.

When operating, the dipoles need to be extremely cold, since they work only when the temperature is sufficiently low. The superconducting wires are maintained at 1.9 degrees above absolute zero … This temperature is even lower than the 2.7-degree cosmic microwave background radiation in outer space. The LHC tunnel houses the coldest extended region in the universe – at least that we know of. The magnets are known as cryodipoles to take into account their special refrigerated nature.

In addition to the impressive filament technology used for the magnets, the refrigeration (cryogenic) system is also an imposing accomplishment meriting its own superlatives. The system is in fact the world’s largest. Flowing helium maintains the extremely low temperature. A casing of approximately 97 metric tonnes of liquid helium surrounds the magnets to cool the cables. It is not ordinary helium gas, but helium with the necessary pressure to keep it in a superfluid phase. Superfluid helium is not subject to the viscosity of ordinary materials, so it can dissipate any heat produced in the dipole system with great efficiency: 10,000 metric tonnes of liquid nitrogen are first cooled, and this in turn cools the 130 metric tonnes of helium that circulate in the dipoles.

Featured image: A view of the experimental MgB2 transmission line at the LHC. Credit: CERN.

Dealing with plagiarism? Look at thy neighbour

Four doctors affiliated with Kathmandu University (KU) in Nepal are going to be fired because they plagiarised data in two papers. The papers were retracted last year from the Bali Medical Journal, where they had been published. A dean at the university, Dipak Shrestha, told a media outlet that the matter will be settled within two weeks. A total of six doctors, including the four above, are also going to be blacklisted by the journal. This is remarkably swift and decisive action against a problem that refuses to go away in India for many reasons. But I’m not an apologist; one of those reasons is that many teachers at colleges and universities seem to think “plagiarism is okay”. And for as long as that attitude persists, academicians are going to be able to plagiarise and flourish in the country.

One of the other reasons plagiarism is rampant in India is the language problem. As Praveen Chaddah, a former director of the UGC-DAE Consortium for Scientific Research, has written, there is a form of plagiarism that can be forgiven – the form at play when a paper’s authors find it difficult to articulate themselves in English but have original ideas all the same. The unforgivable form is when the ideas are plagiarised as well. According to a retraction notice supplied by the Bali Medical Journal, the KU doctors indulged in plagiarism of the unforgivable kind, and were duly punished. In India, however, I’m yet to hear of an instance where researchers found to have been engaging in such acts were pulled up as swiftly as their Nepali counterparts were, or had sanctions imposed on their work within a finite period and in a transparent manner.

The production and dissemination of scientific knowledge should not have to suffer because some scientists aren’t fluent in a language. Who knows, India might already be the ‘science superpower’ everyone wants it to be if we were able to account for information and knowledge produced in all its languages. But this does not mean India’s diversity affords it the license to challenge the use of English as the de facto language of science; that would be stupid. English is prevalent, dominant, even hegemonic (as K. VijayRaghavan has written). So if India is to make it to the Big League, then officials must consider doing these things:

  1. Inculcate the importance of communicating science. Writing a paper is also a form of communication. Teach how to do it along with technical skills.
  2. Set aside money – as some Australian and European institutions do [1] – to help those for whom English isn’t their first, or even second, language write papers that will be appreciated for their science instead of rejected for their language (unfair though this may be).
  3. DO WHAT NEPAL IS DOING – Define reasonable consequences for plagiarising (especially of the unforgivable kind), enumerate them in clear and cogent language, ensure these sanctions are easily accessible by scientists as well as the public, and enforce them regularly.

Researchers ought to know better – especially the more prominent, more influential ones. The more well-known a researcher is, the less forgivable their offence should be, at least because they set important precedents that others will follow. And to be able to remind them effectively when they act carelessly, an independent body should be set up at the national level, particularly for institutions funded by the central government, instead of expecting the offender’s host institution to be able to effectively punish someone well-embedded in the hierarchy of the institution itself.

1. Hat-tip to Chitralekha Manohar.

Featured image credit: xmex/Flickr, CC BY 2.0.

Plagiarism is plagiarism

In a Nature article, Praveen Chaddah argues that textual plagiarism warrants only a correction to the offending paper, not a retraction, because retracting it makes the useful ideas and results in the paper unavailable. On the face of it, this is an argument that draws a distinction between the writing of a paper and the production of its technical contents.

Chaddah proposes to preserve the distinction for the benefit of science by punishing plagiarists only for what they plagiarized. If they pinched text, then issue a correction and an apology but let the results stay. If they pinched the hypothesis or results, then retract the paper. He thinks this line of thought is justifiable because, this way, one does not retard the introduction of new ideas into the pool of knowledge, and one does not harm the notion of “research as a creative enterprise” as long as the hypothesis, method and/or results are original.

I disagree. Textual plagiarism is also the violation of an important creative enterprise that, in fact, has become increasingly relevant to science today: communication. Scientists have to use communication effectively to convince people that their research deserves tax-money. Scientists have to use communication effectively to make their jargon understandable to others. Plagiarizing the ‘descriptive’ part of papers, in this context, is to disregard the importance of communication, and copying the communicative bits should be tantamount to copying the results, too.

He goes on to argue that if textual plagiarism has been detected but the hypothesis/results are original, the latter must be allowed to stand. His argument appears to assume that scientific journals are the same as specialist forums that prioritize results over a full package: introduction, formulation, description, results, discussion, conclusion, etc. But scientific journals are not just the “guarantors of the citizen’s trust in science” (The Guardian); they are also resources that people like journalists, analysts and policy-makers use to understand the extent of the guarantee.

What journalist doesn’t appreciate a scientist who’s able to articulate his/her research well – to say nothing of the publicity that good communication brings him/her?

In September 2013, the journal PLoS ONE retracted a paper by a group of Indian authors for textual plagiarism. The incident exemplified a disturbing attitude toward plagiarism: one of the authors of the paper, Ram Dhaked, complained that it was the duty of PLoS ONE to detect their plagiarism before publishing it, glibly abdicating responsibility for his own conduct.

As Chaddah argues, authors of a paper could be plagiarizing text for a variety of reasons – but somehow they believe lifting chunks of text from other papers during the paper-production process is allowable or will go unchecked. As an alternative to this, publishers could consider – or might already be considering – the ethics of ghost-writing.

He finally posits that papers with plagiarized text should be made available along with the correction, too. That would increase the visibility of the offense and, over time, presumably shame scientists into not plagiarizing – but that’s not the point. The point is to get scientists to understand why it is important to think about what they’ve done and to communicate their thoughts. That journals retract both the text and the results when only the text was plagiarized is an important way to reinforce that point. If anything, Chaddah could have argued instead for reducing the implications of having a retraction against one’s name.