Getting rid of the GRE

An investigation by Science has found that, today, just 3% of “PhD programs in eight disciplines at 50 top-ranked US universities” require applicants’ GRE scores, “compared with 84% four years ago”. This is good news about a test whose purpose I could never understand: first as a student who had to take it to apply to journalism programmes, then as a journalist who couldn’t unsee the barriers the test imposed on students from poorer countries with locally tailored learning systems and, yes, not fantastic English. (Before the test’s format was changed in 2011, preparing for it required memorising long lists of obscure English words, an exercise devoid of purpose because takers would never remember most of those words.) Obviously many institutes still require prospective students to take the GRE, but the fact that many others are alive to questions about the utility of standardised tests and the barriers they impose on students from different socioeconomic backgrounds is heartening. The Science article also briefly explored what proponents of the GRE have to say, and I’m sure you’ll see (below), as I did, that the reasons are flimsy – either because that is the actual strength of the arguments on offer or, as seems more likely to me, because Science hasn’t sampled all the available arguments in favour. This said, the reason offered by a senior member of the company that devises and administers the GRE is instructive.

“I think it’s a mistake to remove GRE altogether,” says Sang Eun Woo, a professor of psychology at Purdue University. Woo is quick to acknowledge the GRE isn’t perfect and doesn’t think test scores should be used to rank and disqualify prospective students – an approach many programs have used in the past. But she and some others think the GRE can be a useful element for holistic reviews, considered alongside qualitative elements such as recommendation letters, personal statements, and CVs. “We’re not saying that the test is the only thing that graduate programs should care about,” she says. “This is more about, why not keep the information in there because more information is better than less information, right?”

Removing test scores from consideration could also hurt students, argues Alberto Acereda, associate vice president of global higher education at the Educational Testing Service, the company that runs the GRE. “Many students from underprivileged backgrounds so often don’t have the advantage of attending prestigious programs or taking on unpaid internships, so using their GRE scores serves [as a] way to supplement their application, making them more competitive compared to their peers.”

Both arguments come across as reasonable – but they’re both undermined by the result of an exercise that the department of Earth and atmospheric sciences at Cornell University conducted in 2020: a group evaluated prospective students’ applications for MS and PhD programmes while keeping the GRE scores hidden. When the scores were revealed, the evaluations weren’t “materially affected”. Obviously the department’s findings are not generalisable – but they indicate the GRE’s redundancy, with the added benefit that the pool of applicants is no longer narrowed by the test’s exorbitant fee (around Rs 8,000 in 2014 and $160 internationally, up to $220 today) and the other pitfalls of using the GRE to ‘rank’ students’ suitability for a PhD programme. Some others quoted in the Science article vouched for “rubric-based holistic reviews”. The meaning of “rubric” in context isn’t clear from the article itself, but the phrase as a whole seems to mean considering students on a variety of fronts, one of which is their performance on the GRE. This also seems reasonable, but it’s not clear what the GRE brings to the table. One 2019 study found that GRE scores couldn’t usefully predict PhD outcomes in the biomedical sciences. In this context, including the GRE – even as an option – in the application process could discourage some students from applying and/or disadvantage them during admission, both because of the test’s requirements (including the fee) and because, as a counterexample to Acereda’s reasoning, their scores on the test may not faithfully reflect their ability to complete a biomedical research degree. But in another context – of admissions to the Texas MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences (GSBS) – researchers reported in 2019 that the GRE might be useful to “extract meaning from quantitative metrics” when employed as part of a “multitiered holistic” admissions process, but that by itself it could disproportionately triage out Black, Native and Hispanic applicants. Taken together, more information is not necessarily better than less information, especially when there are other barriers to acquiring the ‘more’ bits.

Finally, while evaluators might enjoy the marginal utility of redundancy, as a way to ‘confirm’ their decisions, the test is an additional and significant source of stress, and a consumer of time, for all test-takers. This is in addition to a seemingly inescapable diversity-performance tradeoff, which strikes beyond the limited question of whether one standardised test is a valid predictor of students’ future performance and at the heart of what the purpose of a higher-education course is. That is, should institutes consider diversity at the expense of students’ performance? The answer depends on the way each institute is structured, what its goal is and what it measures to that end. One that is focused on its members publishing papers in ‘high IF’ journals, securing high-value research grants, developing high h-indices and maintaining the institute’s own glamorous reputation is likely to see a ‘downside’ to increasing diversity. An institute focused on engendering curiosity, adherence to critical thinking and research methods, and developing blue-sky ideas is likely not to. But while the latter sounds great (strictly in the interests of science), it may be impractical from the point of view of helping tackle society’s problems and of fostering accountability in the scientific enterprise at large. The ideal institute lies somewhere in between these extremes: its admission process will need to take on a little more work – work that the GRE currently abstracts away into a single score – in exchange for the liberty to decouple from university rankings, impact factors, ‘prestige’ and other such preoccupations.

A question about India’s new science prizes

really deserving candidates

In a meeting chaired by Union home secretary Ajay Bhalla on September 16 and attended by senior members of the various science departments of the national government (DST, DBT, etc.), the Union government eliminated hundreds of awards given to the country’s scientists for achievements in various fields. Governing a country the size of India is bound to result in bloat, so this move can’t be dismissed out of hand. However, the three words above make an appearance among Bhalla’s many utterances in the meeting, and they are worthy of suspicion.

The Indian government under Narendra Modi has regularly used vague language to accommodate a diversity of possibilities instead of committing to one course of action over another. Perhaps the best-known example is its use of the “national security” excuse to refuse answers to questions under the RTI Act, such as what the scientific payloads of the Chandrayaan 2 and 3 missions were or why the FCR Act was amended. Other examples include almost any assurance made by Prime Minister Modi, such as on the occasion he was forced to repeal the regrettable farm laws.

In December 2019, physicist Brian Skinner uploaded a preprint paper to the arXiv server in which he quantified the effect of a “prestige bias” on the professional trajectories of scientists who are subjected to multiple rounds of evaluation. I’ve returned to this analysis on multiple occasions because, to me, it arrives at an essential, irreducible truth of the world: that keeping the conditions of entry to some space vague doesn’t just allow for arbitrary decision-making but inevitably causes it. As Skinner wrote:

For example, two applicants for graduate school may have similar grades and exam scores, but if one candidate comes from a more prestigious university then their application will, in general, be evaluated more highly. This ‘prestige bias’ arises naturally, since metrics like grades and exam scores are imprecise measures of a student’s ability, and thus the evaluator looks for any other information available to help with their decision. Belonging to a prestigious group suggests that the candidate was ranked highly by some other evaluator in the past, and this provides a prior expectation (like a second opinion) that biases the decision in their favor.

Vagueness when the stakes are high can’t be innocent, especially once it has been identified, because the more powerful can and will use the resulting uncertainty to their advantage. Here as well, when Bhalla has determined that a small number of new prizes should replace the plethora of the now-extinct prizes and that they ought to be given to “really deserving candidates”, it brings to mind the “really deserving” corporations that are winning contracts for mines, ports and defence manufacturing, the “really deserving” businessmen whose wealth has increased disproportionately to that of their peers, and the “really deserving” ministries and departments that are receiving an increasing fraction of the Union government’s budgetary allocations.

Granted, drafting and holding on to a fixed definition of the term ‘deserving’ can only be bad for the people and the government both. But when any doubts or uncertainties about its ambit are likely to be abused by the government – by awarding India’s top honour for scientific work to, say, Appa Rao Podile or M. Jagadesh Kumar over Gagandeep Kang or Rakesh Mishra – our options are limited to either a meaningless science prize that represents, above all else, the BJP’s successful subversion of another science-related space (after the IITs) for the nationalist project, or a prize that is much more meaningful but whose terms are rigid and unresponsive to the times.

The Merge

Earlier this month, a major event called the ‘Merge’ took place in the cryptocurrency space. In this event, the ethereum blockchain changed the way it achieves consensus – from a proof-of-work mechanism to a proof-of-stake mechanism.

A blockchain is, in effect, a spreadsheet that maintains a record of all the transactions between users of that blockchain. Every user on a blockchain basically possesses an up-to-date copy of that spreadsheet and helps validate others’ transactions on the blockchain. The rewards that the blockchain produces for desirable user behaviour are called its tokens. For example, tokens on the ethereum blockchain are called ether and those on the bitcoin blockchain are called… well, bitcoins. These tokens are also what users transact with on the blockchain.

(See here for a more thorough yet accessible intro to blockchains and NFTs.)

As a result of the ‘Merge’, according to the foundation that manages the cryptocurrency, the blockchain’s energy consumption dropped by 99.95%.

The blockchain on which users transact ether, plus the network of computers that maintains it, is called the ethereum mainnet. During the ‘Merge’, the mainnet’s proof-of-work consensus mechanism was retired and the chain was joined to the Beacon Chain, a proof-of-stake chain that had been running in parallel since December 2020.

Imagine the blockchain to be a bridge that moves traffic across a river. Ahead of the ‘Merge’, operators erected a parallel bridge and allowed traffic over it as well. Then, on September 15, 2022, they merged traffic from the first bridge with the traffic on the new one. Once all the vehicles were off the old bridge, it was destroyed.

Source: ethereum.org

Each of the vehicles here was an ethereum transaction. During the ‘Merge’, the operators had to ensure that all the vehicles continued to move, none got hijacked and none of them broke down.

(Sharding – which is expected to roll out in 2023 – is the act of splitting the blockchain up into multiple pieces that different parts of the network use. This way, each part of the network will require fewer resources to use the blockchain, even as the network as a whole continues to use the whole blockchain.)

Blockchains like those of bitcoin and ethereum need a ‘proof of x’ because they are decentralised: they have no central authority that decides whether a transaction is legitimate. Instead, the validation mechanisms are baked into the processes by which users mine and exchange the coins. Proof-of-work and proof-of-stake are two flavours of one such mechanism. To understand what it does, let’s consider one of the problems it protects a blockchain against: double-spending.

Say Selvi wants to send 100 rupees to Gokul. Double-spending is the threat of sending the same 100 rupees to Gokul twice, thus converting 100 rupees to 200 rupees. When Selvi uses a bank: she logs into her netbanking account and transfers the funds or she withdraws some cash from the ATM and gives Gokul the notes. Either way, once she’s withdrawn the money from her account, the bank records it and she can’t withdraw the same funds again.

When she takes the cryptocurrency route: Selvi transfers some ethereum tokens to Gokul over the blockchain. Here, the blockchain requires some way to verify and record the transaction so that it doesn’t recur. If it used proof-of-work, it would require users on the network to share their computing power to solve a complex mathematical problem. The operation produces a numeric result that uniquely identifies the transaction as well as appends the transaction’s details to the blockchain. A copy of the updated blockchain is shared with all the users so that they are all on the same page. If Selvi tries to spend the same coins again – to transfer them to someone else, say – she won’t be able to: the blockchain ‘knows’ now that Selvi no longer has the funds in her wallet.
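Here is a toy sketch of the bookkeeping idea in Python (the names and balances are made up, and a real blockchain doesn’t keep balances in a dictionary): once the ledger records Selvi’s transfer, a second attempt to spend the same funds fails.

```python
# Toy ledger: once a transfer is recorded, the same funds can't be spent
# again. The names and balances are made up for illustration.
ledger = []                               # ordered record of confirmed transfers
balances = {"selvi": 100, "gokul": 0}

def transfer(sender: str, receiver: str, amount: int) -> bool:
    if balances[sender] < amount:
        return False                      # double-spend attempt: funds already gone
    balances[sender] -= amount
    balances[receiver] += amount
    ledger.append((sender, receiver, amount))
    return True

print(transfer("selvi", "gokul", 100))    # True: the first spend goes through
print(transfer("selvi", "gokul", 100))    # False: the same 100 rupees can't be spent twice
```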

The demand for computing power to acknowledge a transaction and add it to the blockchain constitutes proof-of-work: when you supply that power, which is used to do work, you have provided that proof. In exchange, the blockchain rewards you with a coin. (If many people provided computing power, they split the coins released by the blockchain.)
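To make the ‘work’ concrete, here is a minimal proof-of-work sketch in Python: the work is a brute-force search for a nonce that makes a block’s hash begin with a chosen number of zeros. This only illustrates the idea; real networks use different hashing schemes, difficulty targets and data formats.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Search for a nonce such that sha256(block_data + nonce) begins with
    `difficulty` zero characters. The brute-force search is the 'work'."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("selvi pays gokul 100 rupees")
print(nonce, digest)  # anyone can verify the proof cheaply by hashing just once
```

Note the asymmetry this creates: finding the nonce takes many attempts, but checking someone else’s nonce takes a single hash.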

The reason the ethereum folks claim their post-Merge blockchain consumes 99.95% less energy is that it doesn’t use proof-of-work to verify transactions. Instead, it uses proof-of-stake: users stake their ethereum tokens for each transaction. Put another way, proof-of-work requires users to prove they have computing power to lose; proof-of-stake requires users to prove they have coins – or wealth – to lose.

Before each transaction, a validator places some coins as collateral in a ‘smart contract’. This is essentially an algorithm that will not return the coins to the validator if they don’t perform their task properly. Right now, aspiring validators need to deposit 32 ethereum tokens to qualify and join a queue. The network limits the rate at which new validators are added to the network.

Once a validator is admitted, they are allotted blocks (transactions to be verified) at regular intervals. If a block checks out, the validator casts a vote in favour of that block that is transmitted across the network. Once every 12 seconds, the network randomly chooses a group of validators whose votes are used to make a final determination on whether a block is valid.
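The committee idea can be sketched in a few lines of Python. This is a toy illustration only, not Ethereum’s actual selection or attestation algorithm, and the validator names are made up.

```python
import random

# Made-up validators, each having staked 32 ether to join the network.
validators = {"anu": 32, "bala": 32, "chitra": 32, "devi": 32, "eswar": 32}

def choose_committee(k: int = 3) -> list[str]:
    # A random subset of validators is asked to attest to the block.
    return random.sample(list(validators), k)

def block_accepted(votes: dict[str, bool]) -> bool:
    # The block is accepted if a majority of the committee vouches for it.
    return sum(votes.values()) * 2 > len(votes)

committee = choose_committee()
votes = {name: True for name in committee}   # everyone attests honestly here
print(committee, block_accepted(votes))
```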

Proof-of-stake is less energy-intensive than proof-of-work but it keeps the ethereum blockchain tethered to the same requirement: the proof of preexisting wealth. In the new paradigm, the blockchain releases new coins as reward when transactions are verified, and those who have staked more stand to gain more – i.e. the rich get richer.

Note that when the blockchain used the proof-of-work consensus mechanism, a big problem was that a very small number of users provided a very large fraction of the computing power (contrary to cryptocurrencies’ promise to decentralise finance). Proof-of-stake is expected to increase this centralisation of validatory power because the blockchain now favours validators who have more to stake, and rewards them more. Over time, as richer validators stake more, the cost of validating a transaction will also go up – and the ‘poorer’ validators will be forced to drop out.

Second, the proof-of-stake system can only penalise problematic transactions that are flagged while the validators still have their ether staked. Once they have withdrawn their stakes, they can’t be penalised. This in turn revives the risk of the double-spending problem, as set out in some detail here.

The energy consumption of cryptocurrency transactions was and remains a major criticism of this newfangled technological solution to a problem that the world doesn’t have – and that’s the point that sticks with me. The ‘Merge’ was laudable to the extent that it reduced the consumption of energy and mining hardware at a time when the wealthy desperately need to reduce all forms of consumption, but while the ‘cons’ column is one row shorter, the ‘pros’ column remains just as empty.

Awaiting more info on WP.com’s new pricing

After my last blog post on WordPress.com’s bizarre paid-plans rejig, which stayed on top of Hacker News for a few hours and eventually caught the attention of the CEOs of WordPress.com and Automattic, the former, Dave Martin, said the company was listening to bloggers’ feedback and would incorporate it into the new options. I also suggested on the same day, Sunday, that they publish a post on the WP.com blog allaying many fears about the new Pro plan’s adjustments for “just there to blog” bloggers and for those in non-Western markets, including India.

This post appeared on the WP.com blog yesterday but, disappointingly, it only repeated what Dave had said on the HN forum; in fact, 75% of it is advertisement for the new Pro plan, and the remaining 25% rephrases Dave’s words – that the adjustments in question are coming soon.

Both Dave and Matt Mullenweg, the CEO of Automattic, which owns WP.com, have said that it’s customary for them to roll out the changes first before issuing any kind of statement formally announcing them, so that they have time to fix any bugs in production. I suppose this makes sense for technological fixes, but it doesn’t for a change that will ultimately determine whether a person is able to use WP.com at all.

I like that the statement has increased the free plan’s storage limit to 1 GB and has removed the ill-conceived traffic limit from both the free and the Pro plans. To me these changes also raise a deeper question: if these settings (more storage, no traffic limit) are feasible now, why weren’t they feasible earlier? They must have been. So why weren’t they implemented at that stage? WP.com says in its statement, as Dave Martin did as well, that the Pro plan was the product of listening to users’ feedback. I doubt this bit – or at least, WP.com only incorporated feedback that was in line with its own sensibilities, sensibilities that I suspect continue to ignore the needs of those for whom paying $5-8 a month is more feasible than paying $180 a year. Second, the post promises: (quoted verbatim)

  • Additional storage will be available for purchase at a very reasonable price, very soon.
  • As-you-need them add-ons for both plans, to give you a la carte upgrades. Coming soon.

Considering one of the stated reasons for introducing the Pro plan is to reduce the number of options and enable users to make decisions more easily, wouldn’t the availability of “a la carte” options reintroduce the same ‘complexity’? I realise these options won’t be available for all features, but depending on which ones they are, WP.com might as well retain the personal and premium plans of old (the ones, along with the business plan, that the Pro plan has replaced).

Second, how will these “a la carte upgrades” be priced? A related issue here is that WP.com recently started allowing users to attach images to their posts without leaving the WP.com editor, through the Pexels and Openverse integrated photo libraries. When you select an image to add to your post, WP.com imports it into your blog’s media gallery. Most of the photos on my blog were imported this way. If WP.com isn’t going to compress these images in any way, then they will hasten the user’s consumption of the 1 GB of free space. If WP.com is going to charge more for storage (it did earlier as well but then the personal and premium plans existed), it should also provide image compression or downsizing measures.

And third, the post doesn’t even mention if WP.com plans to tweak its prices for the India market. This was and is a big thorn in my side, because the India rates differ significantly from the rates in the US. The business plan, which the new Pro plan by and large is, cost Rs 7,680 a year in India – about $101.80 – whereas the Pro plan costs $80 (about Rs 6,033) more. These aren’t small sums of money in India. Dave had said in his HN post that his team had missed out on adjusting the Pro rates for India (and Brazil) and would do so. But the statement doesn’t mention anything about this, even as superficially as it has touched on the other issues.

While Dave’s, and Mullenweg’s, words in response to my blog post seemed reassuring at the time, I’m yet to be convinced that WP.com still cares about its “just there to blog” bloggers. We still need more clarity and information.

Intro to NFTs

First

I wrote this piece for a friend who wanted to understand what NFTs were. I have considerably simplified many points and omitted many others to keep the explanation below (relatively) short. If you’re interested, you can read the following articles/sites as well as find links to more discussion on this topic from there.

  1. https://digiconomist.net/bitcoin-versus-gold
  2. https://rpr2.wordpress.com/tag/nft/
  3. https://blog.dshr.org/2022/02/ee380-talk.html (I left out talking about scammers – this post has great explanations and additional learning resources on this front)
  4. https://caesuramag.org/posts/laurie-rojas-why-no-good-nft-yet

Background info

What is an NFT?

To understand NFTs, we need to understand the ‘T’ first: tokens.

And to understand the Ts, we need to understand the reason they exist: the blockchain.

The blockchain is widely touted to be a ledger of transactions. But I – a person who has struggled to understand banking and finance terminologies – have found it more useful to understand this technology in terms of the fundamentally new thing it facilitates.

In ‘conventional’ banking, banks – state-owned and otherwise – validate financial transactions. If I transfer money from my wallet to yours online, the bank knows a) whether money has been deducted from my wallet, b) whether money has been credited to your wallet, and c) whether I, the wallet’s owner, performed the transaction in question.

The blockchain is a database that, together with a bunch of algorithms, offers a way to perform these tasks without requiring a centralised authority. Instead, it helps the people who are transacting with each other to ensure the security and integrity of their transactions.

Say 10 people have already been using a blockchain to validate their transactions. Each row in this database is called a block. When one of the 10 performs a new transaction, it is added as a new block in the database along with some data pertaining to the previous block. This bit of data is called a cryptographic hash. Using the hash, all the blocks in the database are linked together: every new block contains a cryptographic hash of the previous block, all the way back to the very first block. This chain of blocks is called the blockchain.
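Here is a minimal sketch, in Python, of the linking just described: each block stores the hash of the previous block, so tampering with any earlier block breaks the chain after it. The field names are illustrative, not those of any real blockchain.

```python
import hashlib, json, time

def make_block(transaction: str, prev_hash: str) -> dict:
    """A block records a transaction, a timestamp and the previous block's
    hash; its own hash covers all three fields."""
    block = {"tx": transaction, "time": time.time(), "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

chain = [make_block("genesis", prev_hash="0" * 64)]
chain.append(make_block("selvi pays gokul 100", prev_hash=chain[-1]["hash"]))
chain.append(make_block("gokul pays anu 40", prev_hash=chain[-1]["hash"]))

# Tampering with an old block is detectable: its recomputed hash no longer
# matches the prev_hash stored in the block that follows it.
chain[1]["tx"] = "selvi pays gokul 1000"
recomputed = hashlib.sha256(json.dumps(
    {k: chain[1][k] for k in ("tx", "time", "prev_hash")}, sort_keys=True
).encode()).hexdigest()
print(recomputed == chain[2]["prev_hash"])  # False
```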

Every time a new transaction is performed, and a new block has to be added to the blockchain, some algorithms kick in to validate the transaction. Once it has been validated, the block is added, a timestamp is affixed to the operation, and a copy of the blockchain in that instance is shared with all the 10 people using it.

This validation process doesn’t happen in a vacuum. You need computing power to perform it, drawn from the machines owned and operated by some or all of the 10 people. To incentivise these people to donate their computing power, the blockchain releases some files at periodic intervals. These files denote value on the blockchain, and the people who get them can use them gainfully. These files are called tokens.

Different blockchains have different validation incentives. For example, the bitcoin blockchain releases its tokens, the bitcoins, as rewards to those who have provided computing power to validate new transactions.

The bitcoin protocol states that the number of bitcoins released drops by half for every 210,000 blocks added. In May 2020, this reward stood at 6.25 bitcoins per block. The blockchain will also stop releasing new bitcoins once it has released 21 million of them.
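That schedule fits in a few lines of Python – a sketch that ignores the fact that real rewards are denominated in satoshis and truncated:

```python
def block_reward(height: int) -> float:
    """Bitcoin's block subsidy: 50 coins to begin with, halved every
    210,000 blocks."""
    return 50 / 2 ** (height // 210_000)

print(block_reward(0))        # 50.0
print(block_reward(630_000))  # 6.25, the reward in effect from May 2020
# Summing the rewards over all halvings gives the hard cap of (just under)
# 21 million bitcoins.
```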

Technically speaking, both centralised and decentralised validation systems use blockchains. The one that uses a central authority is called a permissioned blockchain. The one without a centralised authority is called a permissionless blockchain.

This is useful to know if only to understand two things:

  1. The concept of blockchains has existed since the early 1980s in the form of permissioned systems, and
  2. Permissionless blockchains need tokens to incentivise users to share computing power whereas permissioned blockchains don’t need tokens

The demand for bitcoins has caused the price of each such token to rise to $43,925, or Rs 33.47 lakh, today (March 25, 2022, 9:06 am).

The tokens on a blockchain can be fungible or non-fungible. An example of a fungible token is bona fide currency: one one-rupee note can be replaced by another (equally legitimate) one-rupee note and not make any difference to a transaction. Bitcoins are also fungible tokens for the same reason. On the other hand, NFTs are tokens that can’t be interchanged. Each NFT is unique – it has to be because this characteristic defines NFTs. They are non-fungible tokens.

Bitcoins are basically files. You write an article and store it as a docx file. This file contains text. A bitcoin is a file that contains alphanumeric data and is stored in a certain way. You can save a docx file on your laptop’s hard-disk or on Google Drive, and you can only open it with software that can read docx files. Similarly, you can store bitcoins in wallets on the internet, and they can be ‘read’ only by special software that works with blockchains.

Similarly, NFTs are also files. The alphanumeric code they contain is linked in a unique way to another file. These other files can be pictures, videos, docx files, bits of text, anything at all that can be stored as digital data.

When one person transfers an NFT to another person over a blockchain, they are basically transferring ownership of the file to which the NFT is linked. Put another way, NFTs facilitate the trade of goods and value that can’t directly be traded over blockchains by tokenising these goods/value. This is what NFTs fundamentally offer.
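A toy sketch of what that amounts to, in Python: a registry that maps each unique token to a pointer (usually just a URL) and an owner, where a transfer changes only the owner. The names are made up, and real NFT contracts (such as those following the ERC-721 standard) live on the blockchain itself rather than in a dictionary.

```python
# Toy NFT registry: each unique token points to a file (usually via a URL)
# and has exactly one owner. Transferring the NFT changes the owner, not
# the file. All names here are made up.
registry = {
    "token-001": {"points_to": "https://example.org/collage.jpg", "owner": "metakovan"},
}

def transfer_nft(token_id: str, new_owner: str) -> None:
    registry[token_id]["owner"] = new_owner

transfer_nft("token-001", "selvi")
print(registry["token-001"]["owner"])  # 'selvi'
# If example.org ever disappears, the token still exists but points at
# nothing - the vacuity discussed further below.
```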

Emergent facts

This background info leads to some implications:

  • Bitcoins have been exploding in value because a) their supply is limited, b) investors in bitcoins and/or blockchain technology have built hype around this technology, and c) taken together, the rising value of each bitcoin has encouraged the rise of many Ponzi schemes that require more people to get in on cryptocurrencies, forcing demand to rise, which further pushes up the coin value, allowing investors to buy low and sell high.
  • The demand for bitcoins, and other cryptocurrencies more broadly, has obscured the fact that a) permissionless blockchains need tokens to exist, b) these tokens in turn need to be convertible to bona fide currencies, and c) there needs to be speculative valuation of these tokens in order for their value over time to increase. Otherwise, the tokens hold no value – especially to pay for the real-world costs of computing power.
  • This computing power is very costly. It is highly energy-intensive – if it weren’t, anybody could validate any transaction and add it to the blockchain. In fact, one of the purposes of the compute cost is to prevent a hack called the Sybil attack. A copy of the blockchain is shared with all members participating in the chain. Say my copy gets corrupted for some reason; when the system encounters it, it will check it against the copy that exists on the majority of computers on the network. When it doesn’t match, I will have forked out of the blockchain and no longer be a part of it. A Sybil attack happens when multiple users work together to modify their copies of the blockchain (to, say, give themselves more money), confusing the system into believing the corrupted version is the actual version. A high computing power demand would ensure that the cost of mounting a Sybil attack is higher than the benefits it will reap. This power is also what leads to the cryptocurrencies’ enormous carbon footprint.
  • If you provide more computing power to the pool of power available to validate transactions, you have provided the system with proof of work. Another way to validate transactions is through proof of stake: the more tokens you have locked up as a stake in the blockchain’s proper operation, the likelier it is that you will be chosen to validate transactions (and earn the rewards for doing so). Proof of stake is less energy-intensive, but its flaw is that it’s a ‘rich get richer’ paradigm. From a social justice point of view, both proof of work and proof of stake have the same outcome: wealth inequality. Indeed, a principal failing of the ethereum and bitcoin blockchains today is that a very small number of individuals around the world own more than half of all the computing power available to these networks – a fact that directly undermines the existential purpose of these networks: decentralisation.
  • NFTs differ in their uniqueness, but other than that, they also require the use of blockchains and thus inherit all of the problems of permissionless blockchains.
  • NFTs also have two problems that are specific to their character: a) they have to be scarce in order to be valuable, and this scarcity is artificially imposed – by investors but more broadly by tech-bros and their capitalist culture, in order to keep NFTs exclusive and valuable; b) the items that NFTs currently tokenise are simple crap made with conventional software. For example, the user named Metakovan purchased last year an NFT associated with a big collage by an artist named Beeple for ether worth about $69 million. This collage was just a collage, nothing special, made with Photoshop (or similar). Now, if I uploaded an image on a server and linked it to an NFT, and one day the server goes down, the NFT will exist but it will point to nothing, and thus be useless. This vacuity at the heart of NFTs – that they contain no value of their own and that whatever value they contain is often rooted in conventional systems – is emblematic of a bigger issue with cryptocurrencies: they have no known application. They are a solution in search of a problem.
  • For example, Metakovan said last year that using cryptocurrencies to trade in art was a way to use the anonymity afforded by cryptocurrencies to evade the gatekeepers of the art world, who, in his words, had thus far kept out the non-white, non-rich from owning the masters’ paintings. But many, many art critics have ridiculed this. I like to quote Laurie Rojas: “Even with all the financial speculation around NFTs, the point that Art’s value is determined within the parameters of a society in which commodification is the dominant form of social relations (i.e., capitalism) has too easily been abandoned for poorly defined neologisms. … NFTs are the latest phenomenon to express this.”
  • NFTs’ newfound association with artistic works gives NFTs something to do; otherwise they have no purpose. In addition, small-time and/or indie artists have criticised NFTs because they don’t solve the more fundamental problem of people not funding artists like them or protecting their work from copyright violations in the first place – a problem that doesn’t stem from potential funders lacking the requisite technologies. This criticism also speaks to the criticism of the bitcoin network itself: to quote Alex De Vries, “One bitcoin transaction requires … several thousands of times more than what’s required by traditional payment systems” to perform a transaction of the same value. Therefore bitcoin can’t be a functional substitute for the world’s existing banking system. And we’ve seen in a previous point that these networks aren’t decentralised either.

Two last issues – one about a new way in which blockchain tech is trying to find relevance and one about a pernicious justification to allow this technology to persist.

  • The first is what has come to be called “web3”. The current iteration of our web is known as web2, supposed to have begun around the mid-2000s. Web1 was the first iteration, when the web was full of websites that offered content for us to consume. Web2 was about content production – social media, blogs, news sites, etc. Web3 is supposed to be about participation – based on Metakovan’s logic. In this paradigm, web3 is to be powered by blockchains. This is a stupid idea for all the reasons permissionless blockchains and NFTs are stupid ideas, and others besides.
  • Second, some entrepreneurs have started to buy carbon credits from various parts of the world and offer them for a price to blockchain entrepreneurs, to help ‘neutralise’ the carbon footprint of the latter’s efforts. This is wrong and evil because it’s a wasteful use of carbon credits that diverts them away from more socially responsible uses. It’s also evil because, in this paradigm, cryptocurrencies and NFTs foster two paths towards greater inequality. First, as mentioned before, they impose a prohibitive energy cost to use them. Second, developed countries need to cut down on their carbon emissions right away – but many developing countries and most under-developed countries (in the economic sense) still have room to emit some more before they can peak. Carbon credits, the demand for which cryptocurrencies are increasing, reverse these outcomes – allowing the former to keep emitting while purchasing ‘room to emit’ from less developed nations, and thus lowering the latter’s emissions ceiling.
  • Finally, a fundamental flaw of the carbon credits system is that it assumes that emissions over one part of the world can be compensated by supporting forests in another. So carbon credits may in fact make the problem worse by allowing cryptocurrency folks to keep kicking the can down the road.

Ads on The Wire Science

Sometime this week, but quite likely tomorrow, advertisements will begin appearing on The Wire Science. The Wire’s, and by extension The Wire Science’s, principal source of funds is donations from our readers. We also run ads as a way to supplement this revenue; they’re especially handy to make up small shortfalls in monthly donations. Even so, many of these ads look quite ugly – individually, often with a garish choice of colours, but more so all together, by the very fact that they’re advertisements, representing a business model often rightly blamed for the dilution of good journalism published on the internet.

But I offer all of these opinions as caveats because I’m quite looking forward to having ads on The Wire Science. At least one reason must be obvious: while The Wire’s success itself, for being an influential and widely read, respected and shared publication that runs almost entirely on readers’ donations, is inspiring, The Wire Science as a niche publication focusing on science, health and the environment (in its specific way) has a long way to go before it can be fully reader-funded. This is okay if only because it’s just six months old – and The Wire got to its current pride of place after more than four years, with six major sections and millions of loyal readers.

As things stand, The Wire Science receives its funds as a grant of sorts from The Wire (technically, it’s a section with a subdomain). We don’t yet have a section-wise breakdown of where on the site people donate from, so while The Wire Science also solicits donations from readers (at the bottom of every article), it’s perhaps best to assume it doesn’t funnel much. Against this background, the fact that The Wire Science will run ads from this week is worth celebrating for two reasons: 1. that it’s already a publication where ads are expected to bring in a not insubstantial amount of money, and 2. that a part of this money will be reinvested in The Wire Science.

I’m particularly excited about reason no. 1. Yes, ads suck, but I think that’s truer in the specific context of ads being the principal source of funds – when editors are subordinated to business managers and editorial decisions serve the bottomline. But our editorial standards won’t be diluted by the presence of ads because of how little ads will contribute to our revenue mix. (I admit that psychologically it’s going to take some adjusting.) The Wire Science is already accommodated in The Wire’s current outlay, which means ad revenue is opportunistic, and an opportunity in itself to commission an extra story now and then, get more readers to the site and have a fraction of them donate.

I hope you’ll be able to see it the same way, and skip the ad-blocker if you can. 🙂

The weekly linklist – July 25, 2020

I’ve decided to publish this linklist via Substack. Next weekend onwards, it will only be available on https://linklist.substack.com. And this is why the list exists and what kind of articles you can find in it.

  • Want to buy a parrot? Please login via Facebook. – “F-commerce emerged in Bangladesh largely because there was no major e-commerce platform to absorb all the business. But although it’s biggest there, this form of selling isn’t exclusive to the country, or even the region: globally, 160 million small stores operate on Facebook, and in countries like Thailand, almost half of all online sales happen through social media.”
  • The history of climate science – “The fact that carbon dioxide is a ‘greenhouse gas’ – a gas that prevents a certain amount of heat radiation escaping back to space and thus maintains a generally warm climate on Earth, goes back to an idea that was first conceived, though not specifically with respect to CO2, nearly 200 years ago. The story of how this important physical property was discovered, how its role in the geological past was evaluated and how we came to understand that its increased concentration, via fossil fuel burning, would adversely affect our future, covers about two centuries of enquiry, discovery, innovation and problem-solving.”
  • The story of cryptomining in Europe’s most disputed state – “In early 2018, millions of digital clocks across Europe began falling behind time. Few took notice at first as slight disruptions in the power supply caused bedside alarms and oven timers running on the frequency of electric current to begin lagging. … European authorities soon traced the power fluctuations to North Kosovo, a region commonly described as one of Europe’s last ganglands. Since 2015, its major city, Mitrovica, has been under the control of Srpska Lista, a mafia masquerading as a political party. Around the time Srpska came to power, North Kosovo’s electricity consumption surged. Officials at the Kosovo Electricity Supply Company in Prishtina, Kosovo’s capital city, told me that the region now requires 20 percent more power than it did five years ago. Eventually, it became clear why: across the region, from the shabby apartment blocks of Mitrovica to the cellars of mountain villages, Bitcoin and Ethereum rigs were humming away, fueling a shadow economy of cryptocurrency manufacturing.”
  • Electromagnetic pulses are the last thing you need to worry about in a nuclear explosion – “The electromagnetic pulse that comes from the sundering of an atom, potentially destroying electronics within the blast radius with some impact miles away from ground zero, is just one of many effects of every nuclear blast. What is peculiar about these pulses, often referred to as EMPs, is the way the side effect of a nuclear blast is treated as a special threat in its own right by bodies such as the Task Force on National and Homeland Security, which, despite the official-sounding name, is a privately funded group. These groups continue a decadelong tradition of obsession over EMPs, one President Donald Trump and others have picked up on.”
  • India’s daunting challenge: There’s water everywhere, and nowhere – “I am walking across the world. Over the past seven years I have retraced the footsteps of Homo sapiens, who roamed out of Africa in the Stone Age and explored the primordial world. En route, I gather stories. And nowhere on my foot journey—not in any other nation or continent—have I encountered an environmental reckoning on the scale of India’s looming water crisis. It is almost too daunting to contemplate.”
  • Here be black holes – “During the 15th and 16th centuries, when oceans were the spaces between worlds, marine animals, often so prodigious that they were termed sea monsters, were difficult to see and even harder to analyse, their very existence uncertain. Broadly construed, the history of space science is also a story of looking across and into the ocean – that first great expanse of space rendered almost unknowable by an alien environment. Deep space, like the deep sea, is almost inaccessible, with the metaphorical depth of space echoing the literal depth of oceans. These cognitive and psychic parallels also have an analogue in the practicalities of survival, and training for space missions routinely includes stints under water.”
  • Birds bear the warnings but humans are responsible for the global threat – “Bird omens of a sort are the subject of two recent anthropological studies of avian flu preparedness in Asia. Both Natalie Porter, in Viral Economies, and Frédéric Keck, in Avian Reservoirs, convey the ominousness suffusing poultry farming, using birds as predictors. As both demonstrate, studying how birds interact with human agriculture can provide early warnings of a grim future. Indeed, Keck in Avian Reservoirs explicitly compares public-health surveillance (which he studies in the book) to augury, tracing ‘the idea that birds carry signs of the future that humans should learn to read … back to Roman divination.'”
  • Fiction as a window into the ethics of testing the Bomb – “The stuff that surprised me was on the American side. For example, the assessment by Curtis LeMay [the commander who led US air attacks on Japan] where he basically says, “We’ve bombed the shit out of Japan. Hurry up with your atomic bomb, because there’s going to be nothing left if you don’t.” That shocked me, and also that they deliberately left those cities pristine because they wanted to show the devastation. They wanted, I believe, to kill innocent people, because they were already moving on to the Cold War.”
  • The idea of entropy has led us astray – “Perhaps physics, in all its rigors, is deemed less susceptible to social involvement. In truth, though, Darwinian and thermodynamic theories served jointly to furnish a propitious worldview—a suitable ur-myth about the universe—for a society committed to laissez-faire competition, entrepreneurialism, and expanding industry. Essentially, under this view, the world slouches naturally toward a deathly cold state of disorder, but it can be salvaged—illuminated and organized—by the competitive scrabble of creatures fighting to survive and get ahead.”
  • How massive neutrinos broke the Standard Model – “Niels Bohr … had the radical suggestion that maybe energy and momentum weren’t really conserved; maybe they could somehow be lost. But Wolfgang Pauli had a different — arguably, even more radical — thought: that perhaps there was a novel type of particle being emitted in these decays, one that we simply didn’t yet have the capacity to see. He named it “neutrino,” which is Italian for “little neutral one,” and upon hypothesizing it, remarked upon the heresy he had committed: ‘I have done a terrible thing, I have postulated a particle that cannot be detected.'”
  • How a small Arab nation built a Mars mission from scratch in six years – “When the UAE announced in 2014 that it would send a mission to Mars by the country’s 50th birthday in December 2021, it looked like a bet with astronomically tough odds. At the time, the nation had no space agency and no planetary scientists, and had only recently launched its first satellite. The rapidly assembled team of engineers, with an average age of 27, frequently heard the same jibe. ‘You guys are a bunch of kids. How are you going to reach Mars?’ says Sarah Al Amiri, originally a computer engineer and the science lead for the project.”
  • The pandemic has made concentrated reading difficult. How are book reviewers dealing with this? – “To read good and proper, I needed to disconnect from the terrible reality of the present – wishful thinking with the always-on-alert mode that the pandemic thrust upon us. A few pages in, my mind would wander, snapping out of the brief, quiet moment and I’d find myself reaching for my phone. … But as neuroscientists world over have told us, it’s been hard for most people to focus, with our brain in fight-or-flight mode to the threat of the virus. An activity like deep reading is especially difficult because it requires a high level of engagement and quiet. So it wasn’t just me.”
  • Facebook’s employees reckon with the social network they’ve built – “Why was Zuckerberg only talking about whether Trump’s comments fit the company’s rules, and not about fixing policies that allowed for threats that could hurt people in the first place, he asked. ‘Watching this just felt like someone was sort of slowly swapping out the rug from under my feet,’ Wang said. ‘They were swapping concerns about morals or justice or norms with this concern about consistency and logic, as if it were obviously the case that ‘consistency’ is what mattered most.'”

The occasional linklist – July 19, 2020

I have been pondering creating a column on my blog where I share links to articles I read and liked. I perform this function on Twitter at the moment, but the attention some links attract there is rubbish, and I reflexively share only relatively bland things there these days as a result. I’m also starting to relish the privilege of not having a shitstorm erupt in my notifications just because I shared something – a link or a viewpoint – that someone disagreed with, and who is now giving me headaches because I no longer have the option of ignoring them.

So here goes, the first instalment of articles I recently read and liked. 🙂

An introduction to physics that contains no equations is like an introduction to French that contains no French words, but tries instead to capture the essence of the language by discussing it in English. Of course, popular writers on physics must abide by that constraint because they are writing for mathematical illiterates, like me, who wouldn’t be able to understand the equations. … Such books don’t teach physical truths; what they teach is that physical truth is knowable in principle, because physicists know it. Ironically, this means that a layperson in science is in basically the same position as a layperson in religion.

The sea of metal

Two of the most decisive moments of the Second World War that I can’t get enough of are the Battle of Stalingrad and the D-Day landings. In the Battle of Stalingrad, Adolf Hitler’s army suffered its first major defeat, signalling to Nazi Germany that it was just as capable of bleeding as any other regime, that its forces – despite the individual formidability of each German soldier – were capable of defeat. The D-Day landings were the proximate beginning of the end, allowing Allied forces to penetrate Hitler’s Atlantic Wall and, in due course, bring the fight to Germany.

These two battles played out differently in one way (among others, of course). The Battle of Stalingrad began on German initiative but turned into a Soviet siege that slowly but continuously drove home the point to the German soldiers trapped in the Soviet city that they couldn’t possibly win. Eventually, on January 31, 1943, the Germans surrendered together with their commander, Friedrich Paulus, who became the first field marshal of the German armed forces to be captured by the enemy during the war. Operation Overlord – of which the D-Day landings were part – on the other hand hinged on a single, potentially decisive event: blowing a hole in the Atlantic Wall at Normandy on June 6, 1944, and securing it for long enough for more Allied troops to land ashore as well as for those already inside France to assemble and establish communications.

The Allies succeeded, of course, although more slowly than planned at first, going on to liberate France and push into Germany, while Soviet forces took Berlin on May 2, 1945 (Hitler had committed suicide on April 30 to avoid capture), effectively ending the war in Europe.

Operation Overlord is well-documented, particularly so from the Allied point of view, with records as well as video footage describing the great lengths to which American, Australian, Belgian, British, Canadian, Czech, Danish, Dutch, French, Greek, Luxembourger, New Zealander, Norwegian and Polish forces went to ensure it was a success. The Allies had to do five things: keep Hitler in the dark, or at least confused, about where the Allies were going to attack the Atlantic Wall; sabotage the Germans’ ability to respond quickly to wherever the Allies attacked; transport an army across the English Channel and land it ashore on a heavily fortified beach; establish and then link five beachheads; and capture the city of Caen. The documentary Greatest Events of WWII in Colour narrates these events to the accompaniment of riveting visual detail – a must-watch for anyone interested in military history, especially the Second World War.

I enjoyed some bits of it more than others, one of them about Operation Overlord itself. The Allied beach-landing at Normandy is perhaps the most important event of the Second World War, and it’s quite easy to find popular historical material about it; the opening scenes of Saving Private Ryan (1998) come to mind. However, I’ve always wondered how the German soldiers sitting in their bunkers and pill-boxes on the shores of Normandy might have felt. To behold one of the largest armies in modern history rise unexpectedly out of the horizon is no trivial thing. Greatest Events of WWII in Colour documents this.

Narrator: As the dawn breaks, it’s the German soldiers in Normandy, not Calais [where Hitler et al were made to believe the Allies would attack], who witness the enormity of the Allied invasion fleet for the first time.

Peter Lieb, historian: For the Germans sitting in their bunkers in Normandy, the sight of the Allied armada must have been terrifying. A sea full of metal.

Geoffrey Wawro, professor of military history: Witnesses recall just absolute stunned disbelief. This was the greatest armada assembled in world history, and this thing suddenly appears out of the darkness off the coast of Normandy.

A sea of metal!

There’s a certain masculinity imbued in the picture, a grand combination of brawn, self-righteousness and exhibition that wartime rhetoric prizes because its adrenaline elides the tragedy of war itself. The Second World War was a particularly brutal affair, with crimes against humanity perpetrated by the Allied and Axis powers both, and continuing even after 1945 across multiple continents. However, it is also tempting to believe that the start of Operation Overlord, by striking fear into the Germans and bearing down upon a fascist government that had to be destroyed, is one of those rare acts of war that deserves to be recounted with this rousing rhetoric. Greatest Events of WWII in Colour is shrewd enough to play along.

Distracting from the peer-review problem

From an article entitled ‘The risks of swiftly spreading coronavirus research’, published by Reuters:

A Reuters analysis found that at least 153 studies – including epidemiological papers, genetic analyses and clinical reports – examining every aspect of the disease, now called COVID-19 – have been posted or published since the start of the outbreak. These involved 675 researchers from around the globe. …

Richard Horton, editor-in-chief of The Lancet group of science and medical journals, says he’s instituted “surge capacity” staffing to sift through a flood of 30 to 40 submissions of scientific research a day to his group alone.

… much of [this work] is raw. With most fresh science being posted online without being peer-reviewed, some of the material lacks scientific rigour, experts say, and some has already been exposed as flawed, or plain wrong, and has been withdrawn.

“The public will not benefit from early findings if they are flawed or hyped,” said Tom Sheldon, a science communications specialist at Britain’s non-profit Science Media Centre. …

Preprints allow their authors to contribute to the scientific debate and can foster collaboration, but they can also bring researchers almost instant, international media and public attention.

“Some of the material that’s been put out – on pre-print servers for example – clearly has been… unhelpful,” said The Lancet’s Horton.

“Whether it’s fake news or misinformation or rumour-mongering, it’s certainly contributed to fear and panic.” …

Magdalena Skipper, editor-in-chief of Nature, said her group of journals, like The Lancet’s, was working hard to “select and filter” submitted manuscripts. “We will never compromise the rigour of our peer review, and papers will only be accepted once … they have been thoroughly assessed,” she said.

When Horton or Sheldon say some of the preprints have been “unhelpful” and that they cause panic among the people – which people do they mean? No non-expert person is hitting up bioRxiv looking for COVID-19 papers. They mean some lazy journalists and some irresponsible scientists are spreading misinformation, and frankly those habits are the more pressing problem to solve, rather than pointing fingers at preprints.

The Reuters analysis also says nothing about how well preprint repositories as well as scientists on social media platforms are conducting open peer-review, instead cherry-picking reasons to compose a lopsided argument against greater transparency in the knowledge economy. Indeed, crisis situations like the COVID-19 outbreak often seem to become ground zero for contemplating the need for preprints but really, no one seems to want to discuss “peer-reviewed” disasters like the one recently publicised by Elisabeth Bik. To quote from The Wire (emphasis added),

[Elisabeth] Bik, @SmutClyde, @mortenoxe and @TigerBB8 (all Twitter handles of unidentified persons), report – as written by Bik in a blog post – that “the Western blot bands in all 400+ papers are all very regularly spaced and have a smooth appearance in the shape of a dumbbell or tadpole, without any of the usual smudges or stains. All bands are placed on similar looking backgrounds, suggesting they were copy-pasted from other sources or computer generated.”

Bik also notes that most of the papers, though not all, were published in only six journals: Artificial Cells, Nanomedicine, and Biotechnology; Journal of Cellular Biochemistry; Biomedicine & Pharmacotherapy; Experimental and Molecular Pathology; Journal of Cellular Physiology; and Cellular Physiology and Biochemistry – all maintained by reputed publishers and, importantly, all of them peer-reviewed.