Neuromorphic hype

We all know there’s a difference between operating an Indica Diesel car and a WDP 4 diesel locomotive. The former has two cylinders and the latter 16. But that doesn’t mean the WDP 4 simply has eight times as many components as the Indica. This is what comes to mind when I come across articles that trumpet an achievement without paying any attention to its context.

In an example from yesterday, IEEE Spectrum published an article with the headline ‘Nanowire Synapses 30,000x Faster Than Nature’s’. An artificial neural network is a network of small data-processing components called neurons. Once the neurons are fed data, they work together to analyse it and solve problems (like spotting the light from one star in a picture of a galaxy). The network also iteratively adjusts the connections between neurons, called synapses, so that the neurons cooperate more efficiently. The architecture and the process broadly mimic the way the human brain works, so they’re also collected under the label ‘neuromorphic computing’.
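To make ‘adjusting synapses’ a little more concrete, here is a minimal, purely illustrative sketch in Python – a toy single-neuron network on a made-up task, with no connection to the superconducting circuit in the Spectrum article – whose ‘synapses’ (connection weights) are nudged iteratively until the neuron learns the logical OR of its inputs:

```python
# Toy example only: a single artificial neuron learning logical OR.
# Nothing here models the photonic circuit described in the article.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy dataset: pairs of inputs and the target output (logical OR).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# 'Synapses': the connection strengths feeding one output neuron, plus a bias.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 0.5  # learning rate

for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)  # the neuron 'fires'
        grad = (target - out) * out * (1 - out)   # error times the sigmoid's slope
        # Iteratively adjust each 'synapse' so the neuron's answers improve.
        w[0] += lr * grad * x1
        w[1] += lr * grad * x2
        b += lr * grad

for (x1, x2), target in data:
    print((x1, x2), "->", round(sigmoid(w[0] * x1 + w[1] * x2 + b), 2), "target:", target)
```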

Now consider this excerpt:

“… a new superconducting photonic circuit … mimics the links between brain cells—burning just 0.3 percent of the energy of its human counterparts while operating some 30,000 times as fast. … the synapses are capable of [producing output signals at a rate] exceeding 10 million hertz while consuming roughly 33 attojoules of power per synaptic event (an attojoule is 10⁻¹⁸ of a joule). In contrast, human neurons have a maximum average [output] rate of about 340 hertz and consume roughly 10 femtojoules per synaptic event (a femtojoule is 10⁻¹⁵ of a joule).”
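Taken at face value, the quoted figures do bear out the headline’s multipliers; a quick check of the arithmetic, using only the numbers above:

\[
\frac{10^{7}\ \text{Hz}}{340\ \text{Hz}} \approx 2.9 \times 10^{4} \approx 30{,}000,
\qquad
\frac{33\ \text{aJ}}{10\ \text{fJ}} = \frac{33 \times 10^{-18}\ \text{J}}{10 \times 10^{-15}\ \text{J}} = 3.3 \times 10^{-3} \approx 0.3\%.
\]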

The article, however, skips the fact that the researchers operated only four circuit blocks in their experiment – while there are 86 billion neurons on average in the human brain working at the ‘lower’ efficiency. When such a large assemblage functions together, there are emergent problems that aren’t present when a smaller assemblage is at work, like removing heat and clearing cellular waste. (The human brain also contains “85 billion non-neuronal cells”, including the glial cells that support neurons.) The energy efficiency of the neurons must be seen in this context, instead of being directly compared to a bespoke laboratory setup.

Philip W. Anderson’s ‘more is different’ argument offers a more insightful case against such reductive thinking. In a 1972 essay, Anderson, a theoretical physicist, wrote:

“The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. In fact, the more the elementary particle physicists tell us about the nature of the fundamental laws the less relevance they seem to have to the very real problems of the rest of science, much less to those of society.”

He contended that the constructionist hypothesis – that you can start from first principles and arrive straightforwardly at a cutting-edge discovery in any given field – “breaks down” when confronted with “the twin difficulties of scale and complexity”. That is, things that operate on a larger scale and with more individual parts are physically greater than the sum of those parts. (I like to think of Anderson’s insight as the spatial analogue of L.P. Hartley’s time-related statement of the same nature: “The past is a foreign country: they do things differently there.”)

So let’s not celebrate something because it’s “30,000x faster than” the same thing in nature – as the Spectrum article’s headline goes – but because it represents good innovation in and of itself. Indeed, the researchers who conducted the new study and are quoted in the article don’t make the comparison themselves but focus on the leap forward their innovation portends in the field of neuromorphic computing.

Faulty comparisons, on the other hand, could inflate readers’ expectations of what future innovations might achieve, and when those innovations (almost) inevitably fall behind nature’s achievements, the unmet expectations could seed disillusionment. We’ve already had this happen with quantum computing. Spectrum’s choice could have been motivated by wanting to pique readers’ interest, which is a fair thing to aspire to, but the fact remains that the headline employed a clichéd comparison with nature instead of expending more effort to frame the idea right.

A tale of two myopias, climate change and the present participle

The Assam floods are going on. One day, they will stop. The water will subside in many parts of the state but the things that caused the floods will continue to work, ceaselessly, and will cause them to occur again next year, and the year after and so on for the foreseeable future.

Journalists, politicians and even civil society members have become adept at seeing the floods in space. Every year, as if on cue, there have been reports on the cusp of summer of floodwaters inundating many districts in the state, including those containing and surrounding the Kaziranga national park; displacing lakhs of people and killing hundreds; destroying home, crop, cattle and soil; encouraging the spread of diseases; eroding banks and shores; and prompting political leaders to promise all the help that they can muster for the affected people. But the usefulness of the spatial cognition of the Assam floods has run its course.

Instead, we now need to inculcate a temporal cognition – whether on its own or as part of a spatio-temporal one. The reason is that, more than by the floods themselves, we are currently submerged by the effects of two myopias, like two rocks tied around our necks dragging us to the bottom. The first one is sustained by members of our political class, such as Assam CM Himanta Biswa Sarma and Union home minister Amit Shah, when they say that they will extend all possible support and restitution to displaced people and the relatives of those killed directly or indirectly by the floods.

The floods are not the product of climate change but of mindless infrastructure ‘development’ – the construction of dikes and embankments, the encroachment of wetlands and plains, the destruction of forests, the over-extraction of resources – and its consequences. A flood happens when the water levels rise, but destruction is the result of objects of human value being in the waters’ way. More and more human property is being located in places where the water used to go, and more and more of it is being rendered vulnerable to being washed away.

When political leaders offer support to the people after every flood (which is the norm), it is akin to saying, “I will shoot you with a gun and then I will pay for your care.” Offering people support is not helpful, at least not when it stops there, followed by silence. Everyone – from parliamentary committees to civil society members – should follow the utterances of Shah, Sarma & co. (both BJP and non-BJP leaders, including those of the Congress, CPI(M), DMK, TMC, etc.) through time, acknowledge the seasonality of their proclamations, and bring them to book for failing to prevent the floods from occurring every year, instead of giving them brownie points for providing support on each occasion post facto.

The second myopia belongs to many journalists, especially in the Indian mainstream press, and concerns their attitude towards cyclones – an attitude that can be easily and faithfully extrapolated to floods as well. Every year for the last two decades at least, there has been a cyclone or two that ravaged two states in particular: Andhra Pradesh and West Bengal (the list once included Odisha, but that state has done well to mitigate the consequences). And on every occasion, and for some time after, reports have appeared in newspapers and magazines of fisherpeople in dire straits with their boats broken, nets torn and stomachs empty; of coastal properties laid to waste; and, soon after, of fuel and power subsidies, loan waivers and – if you wait long enough – sobering stories of younger fishers migrating to other parts of the country looking for other jobs.

These stories are all important and necessary – but they are not sufficient. We also need stories about something new – stories that are mindful of the passage of time, of people growing old, the rupee becoming less valuable, the land becoming more recalcitrant, and of the world itself passing them all by. We need the present participle.

This is not a plea for media houses to commoditise tragedy and trade in interestingness but a plea to consider that these stories miss something: the first myopia, the one that our political leaders espouse. By keeping the focus on problem X, we also keep the focus on the solutions for X. Now ask yourself what X might be if all the stories appearing in the mainstream press are about post-disaster events, and thus which solutions – or, indeed, points of accountability – we tend to focus on to the exclusion of others. We also need stories – ranging in type from staff reports to reported features, from hyperlocal dispatches to literary essays – of everything that has happened in the aftermath of a cyclone making landfall near, say, Nellore or North 24 Parganas; whether things have got better or worse with time; whether politicians have kept their promises to ameliorate the conditions of the people there (especially those not living inside concrete structures and/or whose livelihoods depend directly on natural resources); and whether, by restricting ourselves to supporting a people after a storm or a flood has wreaked havoc, we are actually dooming them.

We need timewise data and we need timewise first-hand accounts. To adapt the wisdom of Philip Warren Anderson, we may know how a shrinking wetland may exacerbate the intensity of the next flood, but we cannot ever derive from this relationship knowledge of the specific ways in which people, and then the country, suffer, diminish and fade away.

The persistence of these two myopias also feeds the bane of incrementalism. By definition, incremental events occur orders of magnitude more often than significant events (so to speak), so it is more efficient to evolve to monitor and record the former. This applies as much to our memories as it does to the economics of newsrooms. We tend to get caught up in the day-to-day and are capable, within weeks, of forgetting something that happened last year; unscrupulous politicians play to this gallery by lying through their teeth about something happening when it didn’t (or vice versa), offending the memories of all those who have died because of a storm or a flood, and of others who survived but live on the brink of tragedy. Newsrooms, on the other hand, are staffed with more journalists attuned to the small details than with those able to piece them all together into the politically and economically inconvenient big picture (there are exceptions, of course).

I am not sure when we entered the crisis period of climate change but in mid-2022, it is a trivial fact that we are in the thick of it – the thick of a beast that assails us both in space and through time. In response, we must change the way we cognise disasters. The Assam floods are ongoing – and so are the Kosi, the Sabarmati and the Cauvery floods. We just haven’t seen the waters go wild yet.

The constructionist hypothesis and expertise during the pandemic

Now that COVID-19 cases are rising again in the country, the trash talk against journalists has been rising in tandem. The Indian government was unprepared and hapless last year, and it is this year as well, if only in different ways. In this environment, journalists have come under criticism along two equally unreasonable lines. First, many people, typically supporters of the establishment, either don’t or can’t see the difference between good journalism and contrarianism, and don’t or can’t acknowledge the need for expertise in the practice of journalism.

Second, the recognition of expertise itself has been sorely lacking across the board. Just like last year, when lots of scientists dropped what they were doing and started churning out disease transmission models, each one more ridiculous than the last, this time – in response to a more complex ‘playing field’ involving new and more numerous variants, intricate immunity-related mechanisms and labyrinthine clinical trial protocols – too many people have been shooting their mouths off, and getting most of it wrong. All of these misfires have reminded us, again and again, of two things: that expertise matters, and that unless you’re an expert on something, you’re unlikely to know how deep it runs. The latter isn’t trivial.

There’s what you know you don’t know, and what you don’t know you don’t know. The former is the birthplace of learning. It’s the perfect place from which to ask questions and fill gaps in your knowledge. The latter is the verge of presumptuousness — a very good place from which to make a fool of yourself. Of course, this depends on your attitude: you can always be mindful of the Great Unknown, such as it is, and keep quiet.

As these tropes have played out in the last few months, I have been reminded of an article written by the physicist Philip Warren Anderson, called ‘More is Different’, and published in 1972. His idea here is simple: that the statement “if everything obeys the same fundamental laws, then the only scientists who are studying anything really fundamental are those who are working on those laws” is false. He goes on to explain:

“The main fallacy in this kind of thinking is that the reductionist hypothesis does not by any means imply a ‘constructionist’ one: The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. … The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. The behaviour of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear, and the understanding of the new behaviours requires research which I think is as fundamental in its nature as any other.”

The seemingly endless intricacies that beset the interaction of a virus, a human body and a vaccine are proof enough that the “twin difficulties of scale and complexity” are present in epidemiology, immunology and biochemistry as well – and testament to the foolishness of any claims that the laws of conservation, thermodynamics or motion can help us say, for example, whether a particular variant infects people ‘better’ because it escapes the immune system better or because the immune system’s protection is fading.

But closer to my point: not even all epidemiologists, immunologists and/or biochemists can meaningfully comment on every form or type of these interactions at all times. I’m not 100% certain, but at least from what I’ve learnt reporting topics in physics (and conceding happily that covering biology seems more complex), scale and complexity work not just across but within fields as well. A cardiologist may be able to comment meaningfully on COVID-19’s effects on the heart in some patients, or a neurologist on the brain, but they may not know how the infection got there even if all these organs are part of the same body. A structural biologist may have deciphered why different mutations change the virus’s spike protein the way they do, but she can’t be expected to comment meaningfully on how epidemiological models will have to be modified for each variant.

To people who don’t know better, a doctor is a doctor and a scientist is a scientist, but as journalists plumb the deeper, more involved depths of a new yet specific disease, we bear from time to time a secret responsibility to be constructive and not reductive, and this is difficult. It becomes crucial for us to draw on the wisdom of the right experts, who wield the right expertise, so that we’re moving as much and as often as possible away from the position of what we don’t know we don’t know even as we ensure we’re not caught in the traps of what experts don’t know they don’t know. The march away from complete uncertainty and towards the names of uncertainty is precarious.

Equally, at this time – if only to make our own jobs that much easier, or at least less acerbic – it’s important for everyone else to know this as well: that more is vastly different.

Peter Higgs, self-promoter

I was randomly rewatching The Big Bang Theory on Netflix today when I spotted this gem:

Okay, maybe less a gem and more a shiny stone, but still. The screenshot, taken from the third episode of the sixth season, shows Sheldon Cooper mansplaining to Penny the work of Peter Higgs, whose name is most famously associated with the scalar boson whose discovery the Large Hadron Collider collaborations announced to great fanfare in 2012.

My fascination pertains to Sheldon’s description of Higgs as an “accomplished self-promoter”. Higgs, in real life, is extremely reclusive and self-effacing and journalists have found him notoriously hard to catch for an interview, or even a quote. His fellow discoverers of the Higgs boson, including François Englert, the Belgian physicist with whom Higgs won the Nobel Prize for physics in 2013, have been much less media-shy. Higgs has even been known to suggest that a mechanism in particle physics involving the Higgs boson should really be called the ABEGHHK’tH mechanism, to include the names of everyone who hit upon its theoretical idea in the 1960s (Philip Warren Anderson, Robert Brout, Englert, Gerald Guralnik, C.R. Hagen, Higgs, Tom Kibble and Gerardus ’t Hooft), instead of just the Higgs mechanism.

No doubt Sheldon thinks Higgs did right by choosing not to give public interviews or write articles in the press himself, considering such extreme self-effacement is also Sheldon’s modus of choice. At the same time, Higgs may have lucked out in being recognised for work he conducted 50 years prior, probably because he’s white and from an affluent country – attributes that nearly guarantee fewer, if any, systemic barriers to international success. Self-promotion is an important part of the modern scientific endeavour, as it is of most modern endeavours, even for an accomplished scientist.

All this said, it is notable that Higgs is also a conscientious person. When he was awarded the Wolf Prize in 2004 – a prestigious award in the field of physics – he refused to receive it in person in Jerusalem because it was a state function and he has protested Israel’s war against Palestine. He was a member of the Campaign for Nuclear Disarmament until the group extended its opposition to nuclear power as well; then he resigned. He also stopped supporting Greenpeace after they became opposed to genetic modification. If it is for these actions that Sheldon deemed Higgs an “accomplished self-promoter”, then I stand corrected.

Featured image: A portrait of Peter Higgs by Lucinda Mackay hanging at the James Clerk Maxwell Foundation, Edinburgh. Caption and credit: FF-UK/Wikimedia Commons, CC BY-SA 4.0.

“Maybe the Higgs boson is fictitious!”

That’s an intriguing and, as he remarks, plausible speculation by the noted condensed-matter physicist Philip Warren Anderson. It appears in a short article penned by him in Nature Physics on January 26, in which he discusses how the Higgs mechanism in particle physics was inspired by a similar phenomenon observed in superconductors.

According to the Bardeen-Cooper-Schrieffer (BCS) theory, certain materials completely lose their resistance to the flow of electric current and become superconductors below a critical temperature. Specifically, below this temperature, a very weak yet persistent attraction between electrons – mediated by vibrations of the material’s atomic lattice – wins out over their mutual Coulomb repulsion and encourages them to team up in pairs called Cooper pairs (named for Leon Cooper).
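For a sense of the energy scales involved – this is the standard weak-coupling BCS relation, not anything specific to the materials discussed in this post – the gap Δ that holds a Cooper pair together is tied to the critical temperature Tc roughly as

\[
2\Delta(0) \approx 3.5\, k_B T_c,
\]

where k_B is Boltzmann’s constant. A critical temperature of a few tens of kelvin thus corresponds to a pairing energy of the order of millielectronvolts, consistent with the ~0.002 eV scale that comes up further below.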

If even one Cooper pair is disrupted, all Cooper pairs in the superconductor will break, and the material will cease to be a superconductor. As a result, the energy needed to break one pair is equivalent to the energy needed to break all pairs – a coercive state of affairs that keeps the pairs paired up despite energetic vibrations from the atoms in the material’s lattice. In this environment, the Cooper pairs all behave as if they were part of a single collective (described as a Bose-Einstein condensate).

This transformation can be understood as the spontaneous breaking of a symmetry: the gauge symmetry of electromagnetism, which dictates that the electromagnetic potentials can be redefined in certain coordinated ways without changing any measurable physics. Inside a superconductor below its critical temperature, however, this symmetry is effectively broken. And when a gauge symmetry breaks, a massive¹ boson appears. In the case of BCS superconductivity, however, it is not an actual particle as much as a collective mode of the condensate.

In particle physics, a similar example exists in the form of electroweak symmetry breaking. While we are aware of four fundamental forces in play around us (strong, weak, electromagnetic and gravitational), at higher energies they are thought to become unified into one ‘common’ force. On the road to unification, the first step is the merger of the electromagnetic and weak forces into the electroweak force. Conversely, as the universe cooled, the electroweak symmetry broke to yield the separate electromagnetic and weak forces – and the massive Higgs boson.
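For context – these are the textbook Standard Model relations rather than anything from Anderson’s article – the W and Z bosons acquire their masses from the value v that the Higgs field settles into once the electroweak symmetry breaks:

\[
m_W = \tfrac{1}{2}\, g v, \qquad m_Z = \tfrac{1}{2}\, v \sqrt{g^2 + g'^2}, \qquad v \approx 246\ \text{GeV},
\]

where g and g′ are the electroweak coupling constants.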

Anderson, who first discussed the ‘Higgs mode’ in superconductors in a paper in 1958, writes in his January 26 article (titled ‘Higgs, Anderson and all that’):

… Yoichiro Nambu, who was a particle theorist and had only been drawn into our field by the gauge problem, noticed in 1960 that a BCS-like theory could be used to create mass terms for massless elementary particles out of their interactions. After all, one way to describe the energy gap in BCS is that it represents a mass term for every point on the Fermi surface, mixing the particle with its opposite spin and momentum antiparticle. In 1960 Nambu and Jona-Lasinio developed a theory in which most of the mass of the nucleon comes from interactions — this theory is still considered partially correct.

But the real application of the idea of a superconductivity-like broken symmetry as a source of the particle spectrum came with the electroweak theory — which unified the electromagnetic and weak interactions — of Sheldon Glashow, Abdus Salam and Steven Weinberg.

What is fascinating is that these two phenomena transpire at vastly different energy scales. The unification of the electromagnetic and weak forces into the electroweak force happens beyond 100 GeV. The energy scale at which the electrons in magnesium diboride become superconducting is around 0.002 eV. As Terry Pratchett would have it, the “aching gulf” of energy in between spans nearly 14 orders of magnitude.
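The gulf follows directly from the two numbers just quoted:

\[
\frac{100\ \text{GeV}}{0.002\ \text{eV}} = \frac{10^{11}\ \text{eV}}{2 \times 10^{-3}\ \text{eV}} = 5 \times 10^{13},
\]

i.e. nearly 14 orders of magnitude.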

At the same time, the parallels between superconductivity and electroweak symmetry breaking are more easily drawn than those between other, more disparate fields of study because both phenomena are understood in terms of the behaviour of fundamental particles, especially bosons and fermions. It is this equivalence that makes Anderson’s speculative remark more attractive:

If superconductivity does not require an explicit Higgs in the Hamiltonian to observe a Higgs mode, might the same be true for the 126 GeV mode? As far as I can interpret what is being said about the numbers, I think that is entirely plausible. Maybe the Higgs boson is fictitious!

To help us along, all we have at the moment is the latest in an increasingly asymptotic series of confirmations: as reported by CERN, “the results draw a picture of a particle that – for the moment – cannot be distinguished from the Standard Model predictions for the Higgs boson.”

¹ Massive as in having mass, not as in a giant boson.