On resource constraints and merit

In the face of complaints about how few women have been awarded this year's Swarnajayanti Fellowships in India, some scientists pushed back by asking which of the male laureates who had been selected should have been left out instead.

This is a version of the merit argument commonly applied to demands for reservations and quotas in higher education – and it's also a form of the argument that often raises its head in seemingly resource-constrained environments.

India is often referred to as a country with 'finite' resources, typically when people are discussing how best to put these resources to use. There are even romantic ideals associated with working in such environments, such as doing more with less – as ISRO has done for many decades – and the popular concept of jugaad.

But while holding one variable fixed and altering the other would make any problem more solvable, in India it's almost always the resource variable that is presumed to be fixed. For example, a common refrain is that ISRO's allocation is nowhere near NASA's, so ISRO must figure out how best to use its limited funds – and can't afford luxuries like a full-fledged outreach team.

There are two problems with this framing of resource availability: 1. a proper outreach team is implied to be the product of a much higher allocation than has been made, i.e. one comparable to NASA's, and 2. incremental increases in allocation are precluded. Neither assumption is right, of course: ISRO doesn't have to wait for NASA's volume of resources to set up an outreach team.

The deeper issue here is not that ISRO doesn't have the requisite funds but that it doesn't think a better outreach unit is necessary. Here, it pays to acknowledge that ISRO has received not inconsiderable allocations over the years, and has enjoyed bipartisan support and (relative) freedom from bureaucratic interference, so it cops much of the blame as well. But in the rest of India, the situation is flipped: many institutions, and their members, have fewer resources than they have ideas, and that affects research in its own way.

For example, in the context of grants and fellowships, there's the obviously illusory 'prestige constraint' at the international level – whereby award-winners and self-proclaimed hotshots wield power by presuming prestige to be tied to a few accomplishments, such as winning a Nobel Prize, publishing papers in The Lancet and Nature or maintaining an h-index of 150. These journals and award-giving committees in turn boast of their selectivity and elitism. (Note: don't underestimate the influence of these journals.)

Then there's the financial constraint on Big Science projects. Some such constraints may be necessary to keep, say, enthusiastic particle physicists from getting carried away. But more broadly, a gross mismatch between the availability of resources and the scale of expectations may ultimately be detrimental to science itself.

These markers of prestige and power are all essentially instruments of control – and there is no reason this equation should be different in India. Funding for science in India is only resource-constrained to the extent to which the government, which is the principal funder, deems it to be.

The Indian government’s revised expenditure on ‘scientific departments’ in 2019-2020 was Rs 27,694 crore. The corresponding figure for defence was Rs 3,16,296 crore. If Rs 1,000 crore were moved from the latter to the former, the defence spend would have dropped only by 0.3% but the science spend would have increased by 3.6%. Why, if the money spent on the Statue of Unity had instead been diverted to R&D, the hike would have nearly tripled.
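To make the arithmetic explicit, here is a minimal back-of-the-envelope check using only the budget figures quoted above; the snippet and its variable names are mine, purely for illustration:

# A minimal check of the reallocation claim above, using only the
# figures quoted in this post (all amounts in Rs crore).
science = 27_694    # revised expenditure on 'scientific departments', 2019-2020
defence = 316_296   # corresponding expenditure on defence
moved = 1_000       # hypothetical amount shifted from defence to science

print(f"defence spend falls by {moved / defence:.1%}")   # ~0.3%
print(f"science spend rises by {moved / science:.1%}")   # ~3.6%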

Effectively, the argument that 'India's resources are limited' is tenable only when resources are constrained on all fronts, or on specific fronts as determined by circumstances – and not when it is being used to gaslight an entire sector. How these circumstances are determined should in turn be completely transparent; keeping them opaque will simply create more ground for arbitrary decisions.

Of course, in a pragmatic sense it's best to use one's resources wisely – but this position can't be generalised to the point where optimising for what's available becomes morally superior to demanding more (even as we retain the right to ask how much money is being given to whom). That is, constantly making the system work more efficiently is a sensible aspiration, but it shouldn't come – as it often does at the moment, perhaps most prominently in the case of CSIR – at the cost of asking for more resources. If people are discontented because they don't have enough, their ire should be directed at the total allocation itself more than at how a part of it is being apportioned.

In a different context, a physicist pointed out a few years ago that when the US government finally scrapped the proposed Superconducting Super Collider in the early 1990s, the freed-up funds weren't directed back into other areas of science, as scientists had expected they would be. (I couldn't find the link to this comment, nor can I recall its originator – I think it was either Sabine Hossenfelder or Sean Carroll; I'll update this post when I do.) I suspect that if the scientists who had argued for the cancellation on those grounds had known this would happen, they might have argued differently.

I don't know if a similar story has played out in India; I certainly don't know if any Big Science projects have been commissioned and then scrapped. In fact, the opposite has happened more often: projects have done more with less by repurposing an existing resource (examples here, here and here). (Having to fight so hard to realise such mega-projects in India could be motivating those who undertake one not to give up!)

In the non-Big-Science and more general sense, an efficiency problem raises its head. One variant of this is about research v. teaching: what does India need more of, or what’s a more efficient expense, to achieve scientific progress – institutions where researchers are free to conduct experiments without being saddled with teaching responsibilities or institutions where teaching is just as important as research? This question has often been in the news in India in the last few years, given the erstwhile HRD Ministry’s flip-flops on whether teachers should conduct research. I personally agree that we need to ‘let teachers teach’.

The other variant is concerned with blue-sky research: when are scientists more productive – when the government allows a "free play of free intellects" or when it railroads them into tackling particular problems? Given the fabled shortage of teachers at many teaching institutions, it's easy to conclude that a combination of economic and policy decisions has funnelled India's scholars into neglecting their teaching responsibilities. In turn, rejigging the ratio of teaching and teaching-cum-research institutions to research-only institutions in India in favour of the former, which are less resource-intensive, could free up some funds.

But this is also more about pragmatism than anything else – somewhat like untangling a bundle of wires before straightening them out, instead of doing it the other way around or trying to do both at once. As things stand, India's teaching institutions also need more money. Teachers are in short supply for several reasons: they are often not paid well or on time, especially if they are employed at state-funded colleges; the institutions' teaching facilities are subpar (or non-existent); jobs are located in remote places and the institutions haven't had the leeway to upgrade recreational facilities; and so on.

Teaching at the higher-education level in India is also harder because of the poor state of government schools, especially outside tier I cities. This brings with it a separate raft of problems, a lack of money among them.

Finally, a more 'local' example of prestige as well as financial constraints – one that also illustrates the importance of this point of view – is the question of why the Swarnajayanti Fellowships have been awarded to so few women, and how this problem can be 'fixed'.

If the query about which men should be excluded to accommodate women sounds like a reasonable question, you're probably assuming that the number of fellows has to be capped, with the cap dictated in turn by the amount of money the government has said can be awarded through these fellowships. But if the government allocated more money, we could appreciate all the current laureates as well as many others, and arguably without diluting the 'quality' of the competition (given just how many scholars there are).

Resource constraints obviously can't explain or resolve everything that stands in the way of more women, trans people, gender-non-binary and gender-non-conforming scholars receiving scholarships, fellowships, awards and prominent positions within academia. But axiomatically, it's important to see that 'fixing' this problem requires action on two fronts instead of just one: make academia less sexist and misogynistic, and secure more funds. The constraints are certainly part of the problem, particularly when they are wielded as an excuse to concentrate more resources, and more power, in the hands of the already privileged, even when the constraints themselves may not be real.

In the final analysis, science doesn't have to be a power play, and we don't have to honour anyone at the expense of another. But deferring to such wisdom could let the fundamental causes of this issue off the hook.

The scientist as inadvertent loser

Twice this week, I had occasion to write about how science is an immutably human enterprise, and therefore how some of its loftier ideals are aspirational at best, and about how transparency is one of the chief USPs of preprint repositories and post-publication peer-review. As if on cue, I stumbled upon a strange case of extreme scientific malpractice that bears out both points of view.

In an article published January 30, three editors of the Journal of Theoretical Biology (JTB) reported that one of their handling editors had engaged in the following acts:

  1. “At the first stage of the submission process, the Handling Editor on multiple occasions handled papers for which there was a potential conflict of interest. This conflict consisted of the Handling Editor handling papers of close colleagues at the Handling Editor’s own institute, which is contrary to journal policies.”
  2. “At the second stage of the submission process when reviewers are chosen, the Handling Editor on multiple occasions selected reviewers who, through our investigation, we discovered was the Handling Editor working under a pseudonym…”
  3. Many forms of reviewer coercion
  4. “In many cases, the Handling Editor was added as a co-author at the final stage of the review process, which again is contrary to journal policies.”

On the back of these acts of manipulation, this individual – whom the editors chose not to name, for unknown reasons, but whom one of them all but identified on Twitter as Kuo-Chen Chou, an identification backed up by an independent user – proudly trumpets the following 'achievement' on his website:

The same webpage also declares that Chou “has published over 730 peer-reviewed scientific papers” and that “his papers have been cited more than 71,041 times”.

Without transparency [a] and without the right incentives, the scientific process – which I use loosely to denote all activities and decisions associated with synthesising, assimilating and organising scientific knowledge – becomes just as conducive to misconduct and unscrupulousness as any other enterprise, if only because it allows people with even a little more power to exploit others' relative powerlessness.

a. Ironically, the JTB article lies behind a paywall.

In fact, Chou had also been found guilty of similar practices when working with a different journal, Bioinformatics, and an article its editors published last year is cited prominently in the article by JTB's editors.

Even if the JTB and Bioinformatics cases seem exceptional for their editors having failed to weed out gross misconduct shortly after its first occurrence – they're not; but although there are many such cases, they are still likely to be in the minority (an assumption on my part) – a completely transparent review process eliminates such possibilities and, more importantly, naturally renders the process trustless [b]. That is, you shouldn't have to trust a reviewer to do right by your paper; the system itself should be designed such that there is no opportunity for a reviewer to do wrong.

b. As in trustlessness, not untrustworthiness.

Second, it seems Chou accrued over 71,000 citations because the number of citations has become a proxy for research excellence irrespective of whether the underlying research is actually excellent – a product of the unavoidable growth of a system in which evaluators replaced a complex combination of factors with a single number. As a result, Chou and others like him have been able to 'hack' the system, so to speak, and distort the scientific literature (which you might picture as the stacks of journals in a library, together representing troves of scientific knowledge).

But as long as the science is fine, no harm done, right? Wrong.

If you visualised the various authors of research papers as points and the citations connecting them to each other as lines, an inordinate number of lines would converge on the point representing Chou – and they would be misleading, drawn there not by Chou's prowess as a scientist but by his abilities as a credit-thief and extortionist.

This graphing exercise isn't simply a form of visual communication. Imagine your life as a scientist as a series of opportunities, where each opportunity is contested by multiple people and the people in charge of deciding who 'wins' at each stage may not be well-trained, well-compensated or well-supported. If X 'loses' at one of the early stages and Y 'wins', Y has a commensurately greater chance of winning a subsequent contest, and X a lower one. Such contests often determine the level of funding, access to suitable guidance and even networking possibilities. So over multiple rounds – with the evaluators at each step having more reasons to be impressed by Y's CV because, say, Y has more citations, and fewer reasons to be impressed with X's – X ends up with more reasons to exit science and switch careers.
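To see how quickly such small, early differences compound, consider a toy simulation – entirely my own construction, not drawn from any study – in which two otherwise identical scientists contest a series of opportunities and each win slightly improves the winner's odds in the next round:

import random

def toy_careers(rounds=20, bump=0.05, seed=42):
    # X and Y start with equal odds; every win nudges the winner's chance
    # of winning the next contest up by `bump` (and the loser's down).
    rng = random.Random(seed)
    p_y = 0.5
    wins = {"X": 0, "Y": 0}
    for _ in range(rounds):
        if rng.random() < p_y:
            wins["Y"] += 1
            p_y = min(p_y + bump, 0.95)
        else:
            wins["X"] += 1
            p_y = max(p_y - bump, 0.05)
    return wins

print(toy_careers())  # a trivial early edge typically snowballs into a lopsided tally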

Additionally, because of the resources Y has had the opportunity to amass, they're in a better position to conduct even more research, ascend to even more influential positions and – if they're so inclined – accrue even more citations through means both straightforward and dubious. To me, such prejudicial biasing resembles the evolution of trajectories on a Lorenz attractor: the initial conditions might appear to be the same to some approximation, but for a single trivial choice, one scientist ends up being disproportionately more successful than another.
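The analogy can be made concrete. The sketch below integrates the classic Lorenz system (σ = 10, ρ = 28, β = 8/3) from two starting points that differ by one part in a million; the crude Euler stepping and the run length are arbitrary choices made purely for illustration:

def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz equations (crude, but enough to illustrate)
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

a = (1.0, 1.0, 1.0)          # one trajectory's initial conditions
b = (1.0 + 1e-6, 1.0, 1.0)   # a trajectory that starts almost, but not quite, identically
for step in range(1, 6001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 2000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"after {step} steps, the trajectories are {gap:.6f} apart")
# The gap grows roughly exponentially before saturating at the size of the attractor.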

The answer, of course, comprises many things, including better ways to evaluate and reward research – and two of them in turn have to be eliminating the use of single numbers to denote human abilities and making the journey of a manuscript from the lab to the wild as free of opaque, and therefore potentially arbitrary, decision-making as possible.

Featured image: A still from an animation showing the divergence of nearby trajectories on a Lorenz system. Caption and credit: MicoFilós/Wikimedia Commons, CC BY-SA 3.0.

Another controversy, another round of blaming preprints

On February 1, Anand Ranganathan, the molecular biologist more popular as a columnist for Swarajya, amplified a new preprint paper from scientists at IIT Delhi that (purportedly) claims the Wuhan coronavirus's (2019-nCoV's) genome appears to contain some sequences also found in the human immunodeficiency virus but not in any other coronaviruses. Ranganathan also chose to magnify the preprint paper's claim that the sequences' presence was "non-fortuitous".

To be fair, the IIT Delhi group did not properly qualify what they meant by this term, but that doesn't exculpate Ranganathan and others who followed him: first, for amplifying with alarmist language a claim that did not deserve such treatment, and then, once he discovered his mistake, for wondering out loud whether such "non-peer reviewed studies" about "fast-moving, in-public-eye domains" should be published before scientific journals have subjected them to peer-review.

https://twitter.com/ARanganathan72/status/1223444298034630656
https://twitter.com/ARanganathan72/status/1223446546328326144
https://twitter.com/ARanganathan72/status/1223463647143505920

The more conservative scientist is likely to find ample room here to revive the claim that preprint papers only promote shoddy journalism, and that preprint papers that are part of the biomedical literature should be abolished entirely. This is bullshit.

The 'print' in 'preprint' refers to the act of a traditional journal printing a paper for publication after peer-review. A paper is designated a 'preprint' if it hasn't yet undergone peer-review, whether or not it has been submitted to a scientific journal for consideration. To quote from an article championing the use of preprints during a medical emergency, by three of the six cofounders of medRxiv, the preprints repository for the biomedical literature:

The advantages of preprints are that scientists can post them rapidly and receive feedback from their peers quickly, sometimes almost instantaneously. They also keep other scientists informed about what their colleagues are doing and build on that work. Preprints are archived in a way that they can be referenced and will always be available online. As the science evolves, newer versions of the paper can be posted, with older historical versions remaining available, including any associated comments made on them.

First: Ranganathan's decision to ring the alarm bells (with language like "oh my god") the first time he tweeted the link to the preprint paper, without sufficiently evaluating the attendant science, was his own – not something prompted by the paper's status as a preprint. Second, the bioRxiv preprint repository where the IIT Delhi document showed up has a comments section, and it was brimming with discussion within minutes of the paper being uploaded. More broadly, preprint repositories are equipped to accommodate peer-review. So if anyone had looked in the comments section before tweeting, they wouldn't have had reason to jump the gun.

Third, and most important: peer-review is not foolproof. Instead, it is a legacy method employed by scientific journals to filter legitimate from illegitimate research and, more recently, higher quality from lower quality research (using 'quality' from the journals' oft-twisted points of view, not as an objective standard of any kind).

This framing supports three important takeaways from this little scandal.

A. Much like preprint repositories, peer-reviewed journals also regularly publish rubbish. (Axiomatically, just as conventional journals also regularly publish the outcomes of good science, so do preprint repositories; in the case of 2019-nCoV alone, bioRxiv, medRxiv and SSRN together published at least 30 legitimate and noteworthy research articles.) It is just that conventional scientific journals conduct the peer-review before publication and preprint repositories (and research-discussion platforms like PubPeer), after. And, in fact, conducting the review after publication allows it to be a continuous process able to respond to new information, and not a one-time event that culminates with the act of printing the paper.

But notably, preprint repositories can recreate journals' ability to closely control the review process, and ensure only experts' comments are in the fray, by enrolling a team of voluntary curators. The arXiv preprint server has been successfully using such a team of volunteer moderators to carefully weed out manuscripts advancing pseudoscientific claims. So it makes more sense to ensure people are familiar with the preprint and post-publication review paradigm than to take advantage of their confusion and call for preprint papers to be eliminated altogether.

B. Those who support the idea that preprint papers are dangerous, and argue that peer-review is a better way to protect against unsupported claims, are by proxy advocating for the persistence of a knowledge hegemony. Peer-review is opaque, sustained by unpaid and overworked labour, and discharges the same function that an open discussion often does at larger scale and with greater transparency. Indeed, the transparency represents the most important difference: since peer-review has traditionally been the demesne of journals, supporting peer-review is tantamount to designating journals as the sole and unquestionable arbiters of what knowledge enters the public domain and what doesn't.

(Here’s one example of how such gatekeeping can have tragic consequences for society.)

C. Given these safeguards and perspectives, and as I have written before, bad journalists and bad comments will be bad irrespective of the window through which an idea has presented itself in the public domain. There is a way to cover different types of stories, and the decision to abdicate one's responsibility to think carefully about the implications of what one is writing can never have a causal relationship with the subject matter. The Times of India and the Daily Mail will continue to publicise every new paper discussing whatever coffee, chocolate and/or wine does to the heart, and The Hindu and The Wire Science will publicise research published in preprint papers because we know how to be careful and which risks to protect ourselves against.

By extension, ‘reputable’ scientific journals that use pre-publication peer-review will continue to publish many papers that will someday be retracted.

An ongoing scandal concerning spider biologist Jonathan Pruitt offers a useful parable – that journals don't always publish bad science due to wilful negligence or poor peer-review alone, but that such failures still do well to highlight the shortcomings of the latter. A string of papers based on work that Pruitt led was found to contain implausible data in support of some significant conclusions. Dan Bolnick, the editor of The American Naturalist, which became the first journal to retract Pruitt's papers that it had published, wrote on his blog on January 30:

I want to emphasise that regardless of the root cause of the data problems (error or intent), these people are victims who have been harmed by trusting data that they themselves did not generate. Having spent days sifting through these data files I can also attest to the fact that the suspect patterns are often non-obvious, so we should not be blaming these victims for failing to see something that requires significant effort to uncover by examining the data in ways that are not standard for any of this. … The associate editor [who Bolnick tasked with checking more of Pruitt’s papers] went as far back as digging into some of Pruitt’s PhD work, when he was a student with Susan Riechert at the University of Tennessee Knoxville. Similar problems were identified in those data… Seeking an explanation, I [emailed and then called] his PhD mentor, Susan Riechert, to discuss the biology of the spiders, his data collection habits, and his integrity. She was shocked, and disturbed, and surprised. That someone who knew him so well for many years could be unaware of this problem (and its extent), highlights for me how reasonable it is that the rest of us could be caught unaware.

Why should we expect peer-review – or any kind of review, for that matter – to be better? The only thing we can do is be honest, transparent and reflexive.

The virtues and vices of reestablishing contact with Vikram

There was a PTI report yesterday that the Indian Space Research Organisation (ISRO) is still trying to reestablish contact with the Vikram lander of the Chandrayaan 2 mission. The lander had crashed onto the lunar surface on September 7 instead of touching down. The incident severed its communications link with ISRO ground control, leaving the organisation unsure about the lander's fate, although all signs pointed to it being kaput.

Subsequent attempts to photograph the designated landing site using the Chandrayaan 2 orbiter as well as the NASA Lunar Reconnaissance Orbiter didn’t provide any meaningful clues about what could’ve happened except that the crash-landing could’ve smashed Vikram to pieces too small to be observable from orbit.

When reporting on ISRO or following the news about developments related to it, the outside-in view is everything. It’s sort of like a mapping between two sets. If the first set represents the relative significance of various projects within ISRO and the second the significance as perceived by the public according to what shows up in the news, then Chandrayaan 2, human spaceflight and maybe the impending launch of the Small Satellite Launch Vehicle are going to look like moderately sized objects in set 1 but really big in set 2.

The popular impression of what ISRO is working on is skewed towards projects that have received greater media coverage. This is a pithy truism but it’s important to acknowledge because ISRO’s own public outreach is practically nonexistent, so there are no ‘normalising’ forces working to correct the skew.

This is why it seems like a problem when ISRO – after spending over a week refusing to admit that the Chandrayaan 2 mission's surface component had failed, and after its chairman K. Sivan echoed an internal review's claim that the mission had in fact succeeded to the extent of 98% – says it's still trying to reestablish contact without properly describing what that means.

These updates are all you hear about vis-à-vis the Indian space programme in the news these days – if not about astronaut training or about how the 'mini-PSLV' had a customer even before it had a test flight – and they potentially contribute to the unfortunate impression that these are ISRO's priorities at the moment, when in fact the relative significance of these missions – i.e. their size within set 1 – is arranged differently.

For example, the idea of trying to reestablish contact with the Vikram lander has featured in at least three news reports in the last week, subsequently amplified through republishing and syndication – whereas the act of reestablishing contact could be as simple as one person pointing an antenna in the general direction of the Vikram lander, blasting a loud 'what's up' message at the right radio frequency and listening intently for a 'not much' reply. On the other hand, there is a lot of R&D, manufacturing work and space-science discussion that ISRO is currently engaged in but which receives little to no coverage in the mainstream press.

So when Sivan repeatedly states across many days that they're still trying to reestablish contact with Vikram, or when he's repeatedly asked the same question by journalists with no imagination about ISRO's breadth and scope, it may not necessarily signal a reluctance to admit failure in the face of overwhelming evidence that the mission has in fact failed (e.g., apart from not being able to visually spot the lander, the lander's batteries aren't designed to survive the long and freezing lunar night, so it's extremely unlikely that it has the power to respond to the 'what's up'). It could just be that either Sivan, the journalists or both – but it's unlikely to be the journalists unless they're aware of the resources it takes to attempt to reestablish contact – are happy to keep reminding people that ISRO is going to try very, very hard before it can abandon the lander.

Such metronomic messaging is also politically favourable, helping maintain the Chandrayaan 2 mission's place in the nationalist techno-pantheon. But it should also be abundantly clear at this point that Sivan's decision to position himself as the organisation's sole point of contact for media professionals at the first hint of trouble, his organisation's increasing opacity to public view, if not scrutiny, and many journalists' inexplicable lack of curiosity about what to ask the chairman all feed one another, ultimately sidelining other branches of ISRO and the public interest itself.

Alibaba IPO – A vindication of China’s Internet?

This is a guest post contributed by Anuj Srivas, a technology journalist and blogger, and until recently the author of Hypertext at The Hindu.

The differences between Jack Ma – the founder of Chinese e-commerce giant Alibaba – and an average Silicon Valley CEO are numerous and far-reaching. Mr. Ma's knowledge of mathematics, for instance, was once so poor that it almost prevented him from attending college. Contrast this with the technological genius of Apple co-founder Steve Wozniak or the academic origins of Google's search algorithm.

His background as an English teacher who dabbled in a number of different sectors before becoming fascinated with the Internet industry is more characteristic of the average American investor duped by the dot-com bubble than of a Bill Gates or a Mark Zuckerberg.

And yet, today, Alibaba stands shoulder-to-shoulder with much of Silicon Valley. Its recently launched initial public offering (IPO) raked in a little over $20 billion, turning it into the world’s biggest technology flotation.

Is this event an inflection point? To some, it may seem to be a natural course of affairs after Yahoo! threw Alibaba a lifeline back in 2005. But is there something else to take away from it other than the obvious comparisons with India’s fledgling Internet industry?

Foremost, it is enormously pleasing to see Jack Ma, like Lenovo's YY, clearly avoid subscribing to the Silicon Valley ideology of 'transparency through opacity'. The CEOs of Google, Yahoo!, Facebook and Microsoft paint a picture of openness, sharing and transparency wherever they go. The world of the cloud seems to make life easier ("look, no wires!") but in fact wraps its users in an opaque black box. We have no tools that allow us to track our information and data, let alone take charge of them.

Of course, Mr. Ma (who sticks to doling out life and management tips in his speeches) is clearly constrained by the circumstances that allowed Alibaba to become what it is today: namely, the way China views, approaches and governs its Internet. This brings us to one of the more interesting implications of Alibaba’s IPO.

For well over a decade now, China has been the poster boy for how the Internet would look if we stopped fighting for a transparent, open and censorship-free system. The Great Firewall of China has continued to stand, quite proudly, in the face of international criticism.

The country itself has managed to make more than one US technology company come around to its way of thinking. As US Congressman Tom Lantos commented after Yahoo actively helped China in its censorship efforts, "While technologically and financially you [Yahoo!] are giants, morally you are pygmies."

What are we to take away from the fact that China is undergoing one of its harshest Internet censorship crackdowns since 2003 (when it started constructing its Firewall) even as Alibaba may yet go down in history as the biggest technology IPO ever? China's approach to the Internet is a deadly mixture of censorship, propaganda and protectionism. The victory of Alibaba at the New York Stock Exchange provides fodder for three takeaways.

First, that China’s protectionism-censorship stance (there cannot be one without the other) works. Despite years of criticism and threatened sanctions, China currently houses three of the world’s ten most valuable technology companies. After Alibaba’s IPO, how can Beijing look at its Internet governance approach with anything but approval? This is a moment of triumph for the country’s Internet regulators.

Second, that investors do not, and will not ever, care about censorship.

Third: will other countries, already outraged by the NSA and the Snowden incident, be emboldened to take China-like steps when it comes to governing their local Internet industries? There is little doubt that most countries need to build their own digital infrastructure, but China and Russia have shown us that their version of digital sovereignty comes with a lack of privacy and the introduction of a censorship regime. Asian, African and Latin American countries will have to escape this trap; the success of Alibaba does not help.

On the other hand, this will also prove to be the biggest challenge for China’s Internet. If the country wants its Internet firms to go international, it will find it tough to take refuge behind its current Internet governance policies. Companies like Huawei and ZTE, which are in the telecommunication business, have to constantly defend themselves every time they enter a new country. Alibaba, which of course will not be plagued with national security issues, will have to consciously and unconsciously defend the Chinese Internet wherever it goes.

It would be instructive to monitor Mr. Ma and whichever ideology he chooses to adopt and market in the near future. I have a feeling it will tell us quite a bit about the fate of China’s Internet.

More by Anuj Srivas:

And now, a tweet from our sponsor