The ‘Climate Overshoot’ website

Earlier this evening, as I was working on my laptop, I noticed that it was rapidly heating up, to the extent that it was burning my skin through two layers of cloth. This is a device that routinely runs a half-dozen apps simultaneously without breaking a sweat, and the browser (Firefox) also seldom struggles to handle the few score tabs I have open at all times. Since I’d only been browsing until then, I checked about:processes to see whether any of the tabs was the culprit, and one was: the Climate Overshoot Commission (COC) website. Which is ironic, because the COC was recently in the news for a report detailing its (prominent) members’ deliberations on how the world’s countries could accelerate emission cuts and avoid overshooting emissions thresholds.

The world can reduce the risk of temperature overshoot by, among other things, building better websites. What even is the video of a random forest doing in the background?

The COC itself was the product of deliberations among some scientists who wished to test solar geoengineering but couldn’t. And though the report advises against deploying this risky technology without thoroughly studying it first, it also insists that it remain on the table alongside other climate mitigation strategies, including good ol’ cutting emissions. Irrespective of what its support for geoengineering implies for scientific and political consensus on the idea, the COC can also help by considerably simplifying its website, so it doesn’t guzzle more computing power than the 56 other tabs combined and draw around 3 W just to stay open. The findings aren’t even that sensible.

What the bitcoin price drop reveals about ‘crypto’

One of the defining downsides of cryptocurrencies reared its head this week when the nosediving price of bitcoin – brought on by the Luna/Terra crash and its cascading effects – rendered bitcoin mining less profitable. One bitcoin still costs $19,410 today, so it may be hard to see how this state of affairs has come to pass – but this is exactly why understanding the ‘permissionless’ nature of cryptocurrency blockchains is important.

Verifying bitcoin transactions requires computing power. Computing power (think of the processing units in your CPU) costs money. So the bitcoin users who provide this power need to be compensated for the expense, or the bitcoin ecosystem will make no financial sense. This is why the bitcoin blockchain generates a token when users provide computing power to verify transactions. This process is called mining: the computing power verifies each transaction by solving a complex math problem whose end result adds the transaction to the blockchain, in return for which the blockchain spits out a token (or a fraction of one, averaged over time).
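To make the ‘complex math problem’ concrete, here’s a minimal proof-of-work sketch in Python. It illustrates the principle only (repeatedly hash the block’s contents with a changing nonce until the hash falls below a difficulty target) and is not Bitcoin’s actual block format, hashing scheme or consensus code.

```python
import hashlib

def mine(block_data: str, difficulty_bits: int = 18):
    """Toy proof-of-work: find a nonce so that SHA-256(block_data + nonce)
    has `difficulty_bits` leading zero bits. A stand-in for the 'complex
    math problem' miners solve; real Bitcoin mining differs in the details."""
    target = 1 << (256 - difficulty_bits)  # hashes below this value "win"
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest           # puzzle solved; block can be appended
        nonce += 1                         # otherwise keep guessing

# Example: "mining" a toy block of pending transactions.
nonce, digest = mine("alice->bob:0.5;carol->dave:1.2")
print(f"nonce={nonce}, hash={digest}")
```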

The idea is that these users should be able to use this token to pay for the computing power they’re providing. Obviously this means these tokens should have real value, like dollar value. And this is why bitcoin’s price dropping below a certain figure is bad news for those providing the computing power – i.e. the miners.

Bitcoin mining is currently the preserve of a few mining conglomerates, instead of being distributed across thousands of individual miners, because these conglomerates sought to cash in on bitcoin’s dollar value. So if they quit the game or reduce their commitment to mining, the rate of production of new bitcoins will slow, but that’s a highly secondary outcome; the primary outcome will be less power being available to verify transactions, which will considerably slow the ability to use bitcoins to do cryptocurrency things.

Bitcoin’s dropping value also illustrates why so many cryptocurrency investment schemes – including those based on bitcoin – are practically Ponzi schemes. In the real world (beyond blockchains), the cost of computing power will only increase over time: because of inflation, because of the rising cost of the carbon footprint, and because the blockchain produces tokens less often over time. So to keep mining profits from declining, the price of bitcoin has to increase, which implies the need for speculative valuation, which then paves the way for pump-and-dump and Ponzi schemes.
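A rough back-of-the-envelope calculation shows the squeeze. The hash-rate share, power draw and electricity tariff below are assumed purely for illustration; only the bitcoin price (quoted above) and the post-2020 block reward of 6.25 BTC reflect the period being discussed.

```python
# Toy mining-profitability estimate. The hash-rate share, power draw and
# electricity tariff below are assumptions chosen for illustration only.

btc_price_usd       = 19_410    # price quoted above
block_reward_btc    = 6.25      # per-block reward in the post-2020-halving era
blocks_per_day      = 144       # roughly one block every 10 minutes
miner_share         = 1e-4      # assumed fraction of total network hash rate

power_draw_kw       = 300       # assumed power draw of the mining operation
electricity_usd_kwh = 0.08      # assumed electricity tariff

revenue_per_day = btc_price_usd * block_reward_btc * blocks_per_day * miner_share
cost_per_day    = power_draw_kw * 24 * electricity_usd_kwh

print(f"revenue/day ~ ${revenue_per_day:,.0f}, electricity/day ~ ${cost_per_day:,.0f}")
# If the bitcoin price falls, revenue falls with it while the electricity
# bill stays put; the cost side only climbs over time, as argued above.
```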

A permissioned blockchain, as I have written before, does not provide rewards for contributing computing power because it doesn’t need to constantly incentivise its users to keep using the blockchain and verifying transactions. Specifically, a permissioned blockchain uses a central authority that verifies all transactions, whereas a permissionless blockchain delegates this responsibility to the users themselves. Think of millions of people exchanging money with each other through a bank – the bank is the authority and the system is a permissioned blockchain; in the case of cryptocurrencies, which are defined by permissionless blockchains, the people exchanging the money also verify each other’s transactions.
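The contrast can be sketched in a few lines of Python. The class and method names are invented for illustration and don’t correspond to any real blockchain library, and the majority vote is a crude stand-in for the far more involved consensus mechanisms permissionless chains actually use.

```python
# Toy contrast between the two models; names are invented for illustration.

class PermissionedLedger:
    """One designated authority (e.g. a bank) checks and records every transaction."""
    def __init__(self, authority):
        self.authority = authority
        self.chain = []

    def submit(self, tx):
        if self.authority.validate(tx):   # only the authority's verdict matters
            self.chain.append(tx)

class PermissionlessLedger:
    """Anyone can join; a transaction is recorded once enough peers agree."""
    def __init__(self, peers):
        self.peers = peers
        self.chain = []

    def submit(self, tx):
        votes = sum(peer.validate(tx) for peer in self.peers)
        if votes > len(self.peers) / 2:   # simple majority as a stand-in for consensus
            self.chain.append(tx)
```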

This is what leads to the complexity of cryptocurrencies and, inevitably, together with real-world cynicism, an abundance of opportunities to fail. Or, as Robert Reich put it, “all Ponzi schemes topple eventually”.

Note: The single quotation marks around ‘crypto’ in the headline are there because I think the term ‘crypto’ belongs to ‘cryptography’, not ‘cryptocurrency’.

An Indian supercomputer by 2017. Umm…

This is a tricky question. And for background, here’s the tweet from IBN Live that caught my eye.

(If you didn’t read the IBN piece, here’s the gist: India, or rather Kapil Sibal, our present telecom minister, says the country will have a state-of-the-art supercomputer, 61 times faster than the current leader Sequoia, built indigenously by 2017 at a cost of Rs. 4,700 crore over five years.)


India already has many supercomputers: NAL’s Flosolver, C-DAC’s PARAM, DRDO’s PACE/ANURAG, BARC’s Anupam, IMS’s Kabru-Linux cluster and CRL’s Eka (both versions of PARAM), and ISRO’s Saga 220.

The most powerful among them, PARAM (through its latest version), is ranked 58th in the world. It was designed and deployed by the Pune-based Centre for Development of Advanced Computing (C-DAC) and the Department of Electronics and Information Technology (DEITY – how apt) in 1991. Its first version, PARAM 8000, used 8,000 Inmos transputers (a microprocessor architecture built with parallel processing in mind); subsequent versions include PARAM 10000, Padma, and the latest, Yuva. Yuva came into operation in November 2008 and boasts a peak speed of 54 teraflops (1 teraflops = 1 trillion floating point operations per second; floating point is a data type that stores numbers as significand × base^exponent).
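For the curious, the significand-and-exponent form of floating point can be seen directly in Python, and the same snippet does the simple arithmetic for what a 54-teraflops peak buys; the timing figure is the theoretical peak only, which real codes never reach.

```python
import math

# A float is stored as significand * base**exponent; frexp exposes the
# base-2 form that the hardware actually uses.
m, e = math.frexp(6.022e23)
print(f"6.022e23 = {m} * 2**{e}")   # significand in [0.5, 1), integer exponent

# What a 54-teraflops peak means: time to perform 10^18 floating-point
# operations, ignoring memory, I/O and parallelisation overheads.
peak_flops = 54e12
ops = 1e18
print(f"{ops:.0e} operations at peak: {ops / peak_flops / 3600:.1f} hours")
```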

Interestingly, in July 2009, C-DAC had announced that a new version of PARAM was in the works and that it would be deployed in 2012 with a computing power of more than 1 petaflops (1 petaflops = 1,000 teraflops) at a cost of Rs. 500 crore. Where is it?

Then, in May 2011, it was announced that India would spend Rs. 10,000 crore on building a 132.8-exaflops supercomputer by 2017. Does that make today’s announcement an effective reduction in budget as well as a diminishing of ambitions? If so, then why? If not, are we going to have two high-power supercomputers?!

The high-power supercomputers that the proposed 2017 machine will compete with usually find use in computational fluid dynamics simulations, weather forecasting, finite element analysis, seismic modelling, e-governance, telemedicine, and administering high-speed network activities. Obviously, these are tasks that operate with a lot of probabilities thrown into the simulation and calculation mix, and require hundreds of millions of operations per second to be solved within an "acceptable" chance of the answer being right. As a result, and because of the broad scale of these applications, such supercomputers are built only when the need for the answers is already present. They are not installed to create needs but only to satisfy them.
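To make the ‘lot of probabilities’ point concrete, here’s a toy Monte Carlo estimate in Python: even this trivial calculation needs millions of random samples to pin its answer down to a few decimal places, and weather or fluid-dynamics models multiply that cost by many orders of magnitude. It illustrates the class of workload, not any actual supercomputing code.

```python
import random

# Toy Monte Carlo: estimate pi by sampling random points in the unit square.
# The answer is only probably right, and tightening the error bar means
# throwing far more operations at the problem, which is, in miniature, the
# trade-off the simulation workloads listed above face at enormous scale.
def estimate_pi(samples: int) -> float:
    inside = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
                 for _ in range(samples))
    return 4 * inside / samples

for n in (10_000, 1_000_000):
    print(f"{n:>9,} samples -> pi ~ {estimate_pi(n):.4f}")
```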

So, that said, why does India need such a high-power supercomputer? Deploying a supercomputer is no easy task, and deploying one that’s so far ahead of the field also involves an overhaul of the existing system and network architectures. What needs does the government have that might require so much power? Will we be able to afford it?

In fact, I worry that Mr. Kapil Sibal has announced the decision to build such a device simply because India doesn’t feature in the list of top 10 countries that have high-power supercomputers. Because, beyond being able to predict weather patterns and further extend the country’s space-faring capabilities, what will the device be used for? Are there records that the ones already in place are being used effectively?

Eigenstates of the human mind

  1. Would a mind’s computing strength be determined by its ability to make sense of counter-intuitive principles (Type I) or by its ability to solve an increasing number of simple problems in a second (Type II)?
  2. Would Type I and Type II strengths translate into the same computing strength?
  3. Does either Type I or Type II metric possess a local inconsistency that prevents its state-function from being continuous at all points?
  4. Does either Type I or Type II metric possess an inconsistency that manifests as probabilistic eigenstates?