Citations and media coverage

According to a press release accompanying a just-published study in PLOS ONE:

Highly cited papers also tend to receive more media attention, although the cause of the association is unclear.

One reason I can think of is a confounding factor that serves as the hidden cause of both phenomena. Discoverability matters just as much as the quality of a paper, and conventional journals invested in sustaining notions like ‘prestige’ (Nature, Science, Cell, The Lancet, etc.) have been known to prefer more sensational positive results. And among researchers who still value publishing in these journals, such papers attract more notice, creating a ‘buzz’ that a reporter can pick up on.

Second, sensational results also lend themselves easily to sensational stories in the press, which has long been addicted to the same ‘positivity bias’ the scientific literature harboured for decades. In effect, highly cited papers are simply highly visible, and highly visibilised, papers – both to other scientists and to journalists.

The press release continues:

The authors add: “Results from this study confirm the idea that media attention given to scientific research is strongly related to scientific citations for that same research. These results can inform scientists who are considering using popular media to increase awareness concerning their work, both within and outside the scientific community.”

I’m not sure what this comment means (I haven’t gone through the paper, and it’s possible its authors discuss this in more detail), but there is already evidence that studies for which preprints are available receive more citations than those published behind a paywall. So perhaps scientists expecting more media coverage of their work should simply make their research more accessible. (It’s also a testament to how entrenched the methods of ‘conventional’ publishers have become – including concepts like ‘reader pays’ and the journal impact factor, accentuated by notions like ‘prestige’ – that this common-sense solution is not so common.)

On the flip side, journalists also need to be weaned away from ‘top’ journals – I receive a significantly higher number of pitches offering to cover papers published in Nature journals – and retrained to spot interesting results in less well-known journals as well as, on a slightly separate note, to situate the results of one study in a larger context instead of hyper-focusing on one context-limited set of results.

The work seems interesting; perhaps one of you would like to give it a comb.

Preference for OA research by income group

Two researchers from Rwanda performed a “systematic computational analysis of the biomedical literature” and concluded in their paper that:

… papers with authors based in sub-Saharan Africa, papers with authors based in low income countries, and papers resulting from international collaboration are all much more likely to be made openly accessible than papers that don’t have these properties.

They analysed 547,404 papers indexed in PubMed, which is:

… a free resource developed and maintained by the National Center for Biotechnology Information (NCBI) at the National Library of Medicine (NLM). PubMed provides free access to MEDLINE, NLM’s database of citations and abstracts in the fields of medicine, nursing, dentistry, veterinary medicine, health care systems, and preclinical sciences.

Source

The researchers also found that, after scientists from low-income countries, those in high-income countries exhibited the next highest preference for publishing in open-access (OA) journals, and that scientists from lower- and upper-middle-income countries – such as India – came last. It is important to acknowledge here that while there is a marked (inverse) correlation between GDP per capita and the number of publications in OA journals, causation is harder to pin down because GDP figures are influenced by a large array of factors.
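To make the correlation-versus-causation point concrete, here is a minimal sketch of the kind of check involved, with made-up country-group figures (none of these numbers are from the paper): a rank correlation only says that GDP per capita and the OA share move together or apart; it says nothing about why.

```python
# Hypothetical figures for illustration only - not data from the study.
from scipy.stats import spearmanr

gdp_per_capita = [800, 2_100, 6_500, 42_000]   # assumed USD, low- to high-income groups
oa_share       = [0.62, 0.41, 0.38, 0.55]      # assumed fraction of papers that are OA

rho, p_value = spearmanr(gdp_per_capita, oa_share)
print(f"Spearman rho = {rho:.2f}, p-value = {p_value:.2f}")
# Even a strong rho would not establish causation: GDP per capita is itself
# shaped by many factors that could also influence publishing choices.
```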

At the same time, given the strength of the correlation, their conclusion – that scientists from middle-income countries are associated with the fewest OA papers in their sample – seems curious. The article processing charge (APC) levied by some journals to make a paper openly accessible immediately after publication is only marginally more affordable in middle-income countries than in low-income countries. However, the availability of repositories, funder policies and fee waivers, discussed below, seems to allay some of this confusion.

There are two popular ways, or routes, to publish OA papers. In the ‘gold’ route, the authors of a paper pay the APC to the journal, which in turn makes the paper openly accessible once it is published. A common example is PLOS ONE, whose APC is at the lower end, at $1,595 (Rs 1.13 lakh); on the other hand, Nature Communications charges a stunning EUR 4,290 (Rs 3.4 lakh) per paper for submissions from India. In the ‘green’ route, the authors or publishers upload the paper to a publicly accessible repository apart from formally publishing it; a common example is the arXiv preprint server, which is moderated by volunteers.

There is also ‘hybrid’ OA, whereby a part of the journal’s contents is openly available and the rest is behind a paywall. In one review published in February 2018, researchers also pointed out a ‘bronze’ route: “articles made free-to-read on the publisher website” but “without an explicit [OA] license”.
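As an aside, one way to check which of these routes a given paper took is to look its DOI up in a database like Unpaywall, which classifies papers as ‘gold’, ‘green’, ‘hybrid’, ‘bronze’ or ‘closed’. The sketch below is my own illustration, not something from the study discussed here; the DOI and email address are placeholders.

```python
# A rough sketch: asking the Unpaywall API how (and whether) a paper is openly accessible.
# The DOI and email address are placeholders, not references to any particular paper.
import requests

def oa_status(doi: str, email: str) -> str:
    """Return Unpaywall's OA classification for a DOI: gold, green, hybrid, bronze or closed."""
    response = requests.get(f"https://api.unpaywall.org/v2/{doi}",
                            params={"email": email}, timeout=10)
    response.raise_for_status()
    return response.json().get("oa_status", "unknown")

print(oa_status("10.1371/journal.pone.0123456", "you@example.com"))  # e.g. 'gold'
```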

The authors of the current paper reason that researchers from high-income countries might be ranking higher in their preference for OA papers because the “‘green’ route of OA has been encouraged by an enormous growth in the number of OA repositories, particularly in Europe and North America”; they also note that Africa was home to only 4% of such repositories in 2018. In the same vein, they continue, “the vast majority of funding organizations with OA policies as of 2018 were based in Europe and North America, with less than 3% of total OA policies originating from organizations based in Africa”.

Additionally, many journals frequently waive APCs for submissions from authors in low-income countries, whereas those from lower- and upper-middle income countries – again, including India – do not qualify as frequently to have their papers published without a fee. A very conservative, back-of-the-envelope estimate suggests India spends at least Rs 600 crore every year as APCs.
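To show how a number like that could come about (the inputs below are my own assumptions for illustration, not necessarily the ones behind the Rs 600 crore figure): 40,000 papers a year paying an average APC of Rs 1.5 lakh already adds up to it.

```python
# A hypothetical back-of-the-envelope calculation; both inputs are assumptions for illustration.
papers_paying_apc_per_year = 40_000   # assumed Indian-authored papers paying an APC each year
average_apc_rupees = 150_000          # assumed average APC, i.e. Rs 1.5 lakh

total_rupees = papers_paying_apc_per_year * average_apc_rupees
total_crore = total_rupees / 1e7      # 1 crore = 10^7 rupees
print(f"Estimated annual APC outgo: Rs {total_crore:.0f} crore")  # -> Rs 600 crore
```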

It was to reduce this burden that K. VijayRaghavan, the principal scientific adviser to the Government of India, announced earlier this year that India was joining Plan S, a coalition of research-funders that aims to have all the research its members fund made openly accessible to the public by 2021. As a result, researchers funded by Plan S members will have to submit to journals that offer the gold/green routes, and/or journals will have to make exceptions to publish research funded by Plan S members.

This is going to take a bit of hammering out because the Plan S concept has many problems. Perhaps the most frustrating among them are its Eurocentric priorities. Other commentators have acknowledged that this limits Plan S’s ability to meaningfully serve the interests of researchers from South/Southeast Asia, Africa and Latin America. In July, two Argentinian researchers lambasted just this aspect and accused Plan S of ignoring “the reality of Latin America”. They wrote that Plan S views “scientific publishing and scholarly publications … as a commodity prone to commercialization” whereas in Latin America, they “are conceived as the community sharing of public goods”.

The latter is more in line with the interests of the developing world as well as with the spirit of knowledge-sharing more generally. At present, a little over 50% of research articles are not openly accessible, although this is changing thanks to the increasing recognition of OA’s merits, including the debatable citation advantage. Research-funders devised Plan S to “accelerate this transition”, as Jon Tennant wrote, but its implementation guidelines need tweaking.

Another problem with Plan S is that it keeps the focus on the ‘gold’ OA route and does little to address many researchers’ bias against less prestigious, but no less credible, journals. For example, while Plan S specifies that it will have gold-OA journals cap their APCs, scientists have said that this would be unenforceable. So, as I wrote in February:

… if Plan S has to work, research-funders also have to help reform scientists’ and administrators’ attitude towards notions like prestige. A top-down mandate to publish only in certain journals won’t work if the institutions aren’t equipped, for example, to evaluate research based on factors other than ‘prestige’.

To this end, the study by the researchers in Rwanda offers a useful suggestion: that the presence or absence of policies might not be the real problem.

There was no clear relationship between the number of open access policies in a region and the percentage of open access publications in that region. … The finding that open access publication rates are highest in sub-Saharan Africa and low income countries suggests that factors other than open access policy strongly influence authors’ decisions to make their work openly accessible.

When you crack your knuckles, you’re creating bubbles

The next time you crack your knuckles, know that you’re actually creating little gas-filled bubbles in the fluid that lubricates the joints. The cavities appear because the bones at the joint separate rapidly, creating a low-pressure volume that is filled by gas coming out of the higher-pressure fluid around it.

This is what Greg Kawchuk, a professor at the University of Alberta, Canada, and his colleagues discovered by observing a participant crack their knuckles under the gaze of an MRI scanner. Their findings were reported in a paper in PLOS ONE, published on April 15, 2015. Kawchuk attributed the noise specifically to the sudden formation of the cavity in the synovial fluid, “a little bit like forming a vacuum”.

For all its apparent simplicity, the study actually seeks to lay to rest the question of what causes the sound when knuckles are cracked. Since the early 1900s, multiple explanations have been advanced. All of them agreed that a cavity was involved; the primary contention was whether the cracking sound was caused by a cavity forming or a cavity collapsing.

Kawchuk’s use of the MRI rules in favour of cavity formation. In fact, it also shows that the cavity persists well after the cracking is done, or as the paper puts it, “past the point of sound production”. So the cracking couldn’t have arisen from a collapsing cavity. Here’s a film – the first of a knuckle being cracked – showing what the MRI revealed.

By way of an application in diagnosis, the PLOS ONE paper also notes,

… cine MRI revealed a new phenomenon preceding joint cracking; a transient bright signal in the intra-articular space. While not likely visualized gas given the imaging parameters employed, we do not have direct evidence to explain this observation. We speculate this phenomenon may be related to changes in fluid organization between cartilaginous joint surfaces and specifically may result from evacuation of fluid out of the joint cartilage with increasing tension. If so, this sign may be indicative of cartilage health and therefore provide a non-invasive means of characterizing joint status.

The bit about “non-invasive means” is enticing, although these are still early days and Kawchuk and co.’s words are purely speculative. Another diagnostic avenue pursued in the past has sought to understand the link between knuckle-cracking and osteoarthritis. On this, the last word remains elusive.

A 1989 study found that the energy released during knuckle-cracking was more than enough to damage cartilage, while a 2011 study found that the habit didn’t actually affect the risk of developing osteoarthritis.

Plagiarism is plagiarism

In a Nature article, Praveen Chaddah argues that a paper guilty only of textual plagiarism should carry a correction rather than be retracted, because retraction makes the useful ideas and results in the paper unavailable. On the face of it, this is an argument that draws a distinction between the writing of a paper and the production of its technical contents.

Chaddah proposes to preserve the distinction for the benefit of science by punishing plagiarists only for what they plagiarized. If they pinched text, issue a correction and an apology but let the results stay; if they pinched the hypothesis or results, retract the paper. He thinks this line of thought is justified because it does not retard the introduction of new ideas into the pool of knowledge, and because it does not harm the notion of “research as a creative enterprise” as long as the hypothesis, method and/or results are original.

I disagree. Textual plagiarism is also the violation of an important creative enterprise that, in fact, has become increasingly relevant to science today: communication. Scientists have to use communication effectively to convince people that their research deserves tax-money. Scientists have to use communication effectively to make their jargon understandable to others. Plagiarizing the ‘descriptive’ part of papers, in this context, is to disregard the importance of communication, and copying the communicative bits should be tantamount to copying the results, too.

He goes on to argue that if textual plagiarism has been detected but the hypothesis/results are original, the latter must be allowed to stand. His argument appears to assume that scientific journals are the same as specialist forums that prioritize results over the full package: introduction, formulation, description, results, discussion, conclusion, etc. Scientific journals are not just the “guarantors of the citizen’s trust in science” (The Guardian) but also resources that journalists, analysts and policy-makers use to understand the extent of that guarantee.

What journalist doesn’t appreciate a scientist who’s able to articulate his/her research well, much less begrudge the publicity it will bring him/her?

In September 2013, the journal PLOS ONE retracted a paper by a group of Indian authors for textual plagiarism. The incident exemplified a disturbing attitude toward plagiarism: one of the paper’s authors, Ram Dhaked, complained that it was PLOS ONE’s duty to detect their plagiarism before publication, glibly shrugging off his own culpability.

As Chaddah argues, the authors of a paper could be plagiarizing text for a variety of reasons – but somehow they believe lifting chunks of text from other papers during the production of a manuscript is allowable or will go unchecked. As an alternative, publishers could consider – or might already be considering – the ethics of ghost-writing.

He finally posits that papers with plagiarized text should remain available along with the correction. That would increase the visibility of the offense and, over time, presumably shame scientists into not plagiarizing – but that’s not the point. The point is to get scientists to understand why it is important to think about what they’ve done and to communicate their thoughts. That journals retract both the text and the results even when only the text was plagiarized is an important way to reinforce that point. If anything, Chaddah’s contention could instead have been that we reduce the stigma of having a retraction against one’s bio.