A new source of cosmic rays?

The International Space Station carries a suite of instruments conducting scientific experiments and measurements in low-Earth orbit. One of them is the Alpha Magnetic Spectrometer (AMS), which studies antimatter particles in cosmic rays to understand how the universe has evolved since its birth.

Cosmic rays are particles or particle clumps flying through the universe at nearly the speed of light. Since the mid-20th century, scientists have found cosmic-ray particles are emitted during supernovae and in the centres of galaxies that host large black holes. Scientists installed the AMS in May 2011, and by April 2021, it had tracked more than 230 billion cosmic-ray particles.

When scientists from the Massachusetts Institute of Technology (MIT) recently analysed these data – the results were published on June 25 – they found something odd. Roughly one in 10,000 of the cosmic-ray particles were neutron-proton pairs, a.k.a. deuterons. The universe contains only a small number of these particles – they make up around 0.002% of all atoms – because they were created only during a roughly 10-minute window shortly after the universe was born.

Yet cosmic rays streaming past the AMS seemed to have around five times the expected concentration of deuterons. The implication, according to the MIT team’s paper, is that something in the universe – some event or some process – is producing high-energy deuterons.

Before coming to this conclusion, the researchers considered and eliminated some alternative explanations. Chief among them concerns how deuterons are thought to become cosmic rays in the first place. When primary cosmic rays, produced by some process in outer space, smash into matter, they produce a shower of energetic particles called secondary cosmic rays. Thus far, scientists have considered deuterons to be secondary cosmic rays, produced when helium-4 ions smash into atoms in the interstellar medium (the space between stars).

This event also produces helium-3 ions. So if the deuteron flux in cosmic rays is high, and if we believe more helium-4 ions are smashing into the interstellar medium than expected, the AMS should have detected more helium-3 cosmic rays than expected as well. It didn’t.

To make sure, the researchers also checked the AMS’s instruments and the shared properties of the cosmic-ray particles. Two in particular are time and rigidity. Time deals with how the flux of deuterons changes with respect to the flux of other cosmic-ray particles, especially protons and helium-4 ions. Rigidity measures how hard a charged particle is to deflect with a magnetic field – such as the Sun’s, which turns away lower-rigidity cosmic rays before they can reach Earth. (Equally rigid particles behave the same way in a given magnetic field.) Rigidity is expressed in volts: the higher the value, the less the particle is deflected.
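For reference – this definition isn’t spelled out in the article above but is standard in cosmic-ray physics – rigidity is a particle’s momentum per unit charge:

$$R = \frac{pc}{Ze}$$

Here p is the particle’s momentum, c the speed of light, Z its charge number and e the elementary charge; the ratio comes out in volts. Two particles with the same rigidity bend identically in a given magnetic field, which is why the AMS team reports its fluxes as functions of rigidity rather than energy.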

The researchers analysed deuterons with rigidity from 1.9 billion to 21 billion V and found that “over the entire rigidity range the deuteron flux exhibits nearly identical time variations with the proton, 3-He, and 4-He fluxes.” At rigidity greater than 4.5 billion V, the fluxes of deuterons and helium-4 ions varied together whereas those of helium-3 and helium-4 didn’t. At rigidity beyond 13 billion V, “the rigidity dependence of the D and p fluxes [was] nearly identical”.

Similarly, they found the change in the deuteron flux was greater than the change in the helium-3 flux, both relative to the helium-4 flux. The statistical significance of this conclusion far exceeded the threshold particle physicists use to check whether an anomaly in the data is real rather than the result of a statistical fluke. Finally, “independent analyses were performed on the same data sample by four independent study groups,” the paper added. “The results of these analyses are consistent with this Letter.”

The MIT team ultimately couldn’t find a credible alternative explanation, leaving their conclusion: deuterons could be primary cosmic rays, and we don’t (yet) know the process that could be producing them.

MIT develops thermo-PV cell with 40% efficiency

Researchers at MIT have developed a heat engine that can convert heat to electricity with 40% efficiency. Unlike traditional heat engines – a common example is the internal combustion engine inside a car – this device doesn’t have any moving parts. It has also been designed to work with a heat source at a temperature of 1,900º to 2,400º C. Effectively, it’s like a solar cell that has been optimised to work with photons from vastly hotter sources – although its efficiency still sets it apart. If you know the history, you’ll understand why 40% is a big deal. And if you know a bit of optics and some materials science, you’ll understand how this device could be an important part of the world’s efforts to decarbonise its power sources. But first the history.

We’ve known how to build heat engines for almost two millennia. They were first built to convert heat, generated by burning a fuel, into mechanical energy – so they’ve typically had moving parts. For example, the internal combustion engine combusts petrol or diesel and harnesses the energy produced to move a piston. However, the engine can only extract mechanical work from the fuel – it can’t put the heat back. If it did, it would have to ‘give back’ the work it just extracted, nullifying the engine’s purpose. So once the piston has been moved, the engine dumps the heat and begins the next cycle of heat extraction from more fuel. (In the parlance of thermodynamics, the origin of the heat is called the source and its eventual resting place is called the sink.)

The inevitability of this waste heat keeps the heat engine’s efficiency from ever reaching 100% – and the efficiency is dragged down further by the mechanical losses implicit in the moving parts (the piston, in this case). In 1824, the French engineer Sadi Carnot derived the formula to calculate the maximum possible efficiency of a heat engine that works in this way. (The formula also assumes that the engine is reversible – i.e. that it can pump heat from a colder source to a hotter sink.) The number spit out by this formula is called the Carnot efficiency. No heat engine can have an energy efficiency greater than its Carnot efficiency. The internal combustion engines of today operate at an efficiency of around 37%. A steam generator at a large power plant can go up to 51%. Against this background, the heat engine that the MIT team has developed has a celebration-worthy efficiency of 40%.
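For completeness, here is the textbook form of Carnot’s result – standard thermodynamics, not something quoted from the MIT paper – with temperatures in kelvin:

$$\eta_{\text{Carnot}} = 1 - \frac{T_{\text{sink}}}{T_{\text{source}}}$$

For a source at 2,400º C (about 2,673 K) and a room-temperature sink near 300 K, the ceiling works out to roughly 1 − 300/2673 ≈ 0.89, or 89%. The new device’s 40% therefore sits well below the theoretical limit, but it is remarkable for a converter with no moving parts.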

The other notable thing about it is the amount of heat with which it can operate. Two potential applications of the new device come immediately to mind: using the waste heat from something that operates at 1,900-2,400º C and taking the heat from something that stores energy at those temperatures. There aren’t many entities in the world that maintain a temperature of 1,900-2,400º C, let alone dump waste heat at those temperatures. Work on the device caught my attention after I spotted a press release from MIT. The release described one application that combined both possibilities in the form of a thermal battery system. Here, heat from the Sun is concentrated (using lenses and mirrors) in graphite blocks located in a highly insulated chamber. When the need arises, the insulation can be removed to a suitable extent for the graphite to lose some heat, which the new device then converts to electricity.

On Twitter, user Scott Leibrand (@ScottLeibrand) also pointed me to a similar technology called FIRES – short for ‘Firebrick Resistance-Heated Energy Storage’ – proposed by MIT researchers in 2018. According to a paper they wrote, it “stores electricity as … high-temperature heat (1000–1700 °C) in ceramic firebrick, and discharges it as a hot airstream to either heat industrial plants in place of fossil fuels, or regenerate electricity in a power plant.” They add that “traditional insulation” could limit heat leakage from the firebricks to less than 3% per day, and estimate a storage cost of $10/kWh – “substantially less expensive than batteries”. This is where the new device could shine, or better yet enable a complete power-production system: by converting heat deliberately leaked from the graphite blocks or firebricks to electricity at 40% efficiency. Even given the fact that heat transfer is more efficient at higher temperatures, this is impressive – more so since such energy storage options are also geared for the long term.
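To get a feel for what these numbers mean together, here is a minimal sketch that combines the figures quoted above – 40% conversion efficiency and a heat leak of under 3% per day – with an assumed storage size and holding period. The storage size and holding period are illustrative assumptions, not figures from either paper.

```python
# Back-of-the-envelope round trip for a firebrick/graphite thermal battery
# discharged through a 40%-efficient TPV cell. The 3%/day leak and the 40%
# efficiency come from the article; the storage size and holding period are
# assumptions picked purely for illustration.

stored_heat_kwh = 1000.0   # assumed thermal capacity of the storage block
daily_leak = 0.03          # "less than 3% per day" through traditional insulation
tpv_efficiency = 0.40      # efficiency of the MIT thermophotovoltaic cell
days_held = 7              # assumed holding period before discharge

heat_remaining = stored_heat_kwh * (1 - daily_leak) ** days_held
electricity_out = heat_remaining * tpv_efficiency

print(f"Heat remaining after {days_held} days: {heat_remaining:.0f} kWh")
print(f"Electricity recovered: {electricity_out:.0f} kWh "
      f"({electricity_out / stored_heat_kwh:.0%} of the stored heat)")
```

In this toy setup, roughly a third of the heat stored a week earlier comes back out as electricity – which is the sense in which such storage is geared for the long term.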

Let’s also take a peek at how the device works. It’s called a thermophotovoltaic (TPV) cell. The “photovoltaic” in the name indicates that it uses the photovoltaic effect to create an electric current. This effect is closely related to the photoelectric effect: in both cases, an incoming photon knocks an electron out of its place in the material, creating a voltage that then supports an electric current. In the photoelectric effect, the electron is ejected from the material entirely; in the photovoltaic effect, it stays within the material and can be recaptured. Next, in order to achieve the high efficiency, the research team wrote in its paper that it did three things. It’s a bunch of big words but they actually have straightforward implications, as I explain, so don’t back down.

1. “The usage of higher bandgap materials in combination with emitter temperatures between 1,900 and 2,400 °C” – Band gap refers to the energy difference between two bands of allowed electron energies: the valence band and the conduction band. In semiconductors, for example, when electrons in the valence band are imparted enough energy, they can jump across the band gap into the conduction band, where they can flow through the material, conducting electricity. The same thing happens in the TPV cell, where incoming photons can ‘kick’ electrons into the material’s conduction band if they have the right amount of energy. Because the photon source is a very hot object, most of its photons arrive at near-infrared wavelengths and carry around 1-1.5 electron-volts (eV) of energy; the rough calculation after this list puts numbers on this. So the corresponding TPV material also needs to have a bandgap of 1-1.5 eV. This brings us to the second point.

2. “High-performance multi-junction architectures with bandgap tunability enabled by high-quality metamorphic epitaxy” – Architecture refers to the configuration of the cell’s physical, electrical and chemical components, and epitaxy refers to the way the cell’s layers are grown. In the new TPV cell, the MIT team used a multi-junction architecture that allows the device to ‘accept’ photons across a range of wavelengths (corresponding to the temperature range). This is important because an incoming photon can have one of two effects: either kick an electron across the band gap or simply heat up the material. The latter is undesirable and should be avoided, so the multi-junction setup is designed to usefully absorb as many photons as possible. A related issue is that the power output per unit area of an object radiating heat scales with the fourth power of its temperature – if the temperature increases by a factor of x, the radiated power increases by a factor of x^4 (the same calculation after this list illustrates this, too). Since the heat source of the TPV cell is so hot, it has a very high power output, again favouring the multi-junction architecture. The epitaxy is not interesting to me, so I’m skipping it. But I should note that cells like this one aren’t ubiquitous because making them is a highly intricate process.

3. “The integration of a highly reflective back surface reflector (BSR) for band-edge filtering” – The MIT press release explains this part clearly: “The cell is fabricated from three main regions: a high-bandgap alloy, which sits over a slightly lower-bandgap alloy, underneath which is a mirror-like layer of gold” – the BSR. “The first layer captures a heat source’s highest-energy photons and converts them into electricity, while lower-energy photons that pass through the first layer are captured by the second and converted to add to the generated voltage. Any photons that pass through this second layer are then reflected by the mirror, back to the heat source, rather than being absorbed as wasted heat.”
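Here is the rough calculation promised above. It is my own illustration, not something from the MIT paper: Wien’s law gives the wavelength at which the emitter’s spectrum peaks (and hence the typical photon energy), and the Stefan-Boltzmann law gives the power radiated per unit area.

```python
# Back-of-the-envelope numbers for the emitter temperatures quoted above
# (1,900-2,400 deg C). The constants are standard physical constants; the
# temperatures come from the article; everything else is a worked example.

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
WIEN_B = 2.897771955e-3  # Wien displacement constant, m K
PLANCK = 6.62607015e-34  # Planck constant, J s
LIGHT = 2.99792458e8     # speed of light, m/s
EV = 1.602176634e-19     # joules per electron-volt

for t_celsius in (1900, 2400):
    t_kelvin = t_celsius + 273.15
    # Wien's law: the wavelength at which the blackbody spectrum peaks
    peak_wavelength = WIEN_B / t_kelvin                       # metres
    photon_energy = PLANCK * LIGHT / peak_wavelength / EV     # eV
    # Stefan-Boltzmann law: radiated power per unit area scales as T^4
    radiant_exitance = SIGMA * t_kelvin ** 4                  # W/m^2
    print(f"{t_celsius} deg C: spectrum peaks at {peak_wavelength * 1e6:.2f} um, "
          f"~{photon_energy:.2f} eV per photon, "
          f"~{radiant_exitance / 1e3:.0f} kW/m^2 radiated")
```

At these temperatures the spectrum peaks at roughly 1.1-1.3 µm, i.e. photons of about 0.9-1.1 eV, which is why band gaps in the 1-1.5 eV range are in the right neighbourhood; and going from 1,900º C to 2,400º C more than doubles the power radiated per unit area – the fourth-power scaling at work.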

While it seems obvious that technology like this will play an important part in humankind’s future, particularly given the attractiveness of maintaining a long-term energy store as well as the use of a higher-efficiency heat engine, the economics matter a great deal. I don’t know how much the new TPV cell will cost, especially since it isn’t being mass-produced yet; in addition, the design of the thermal battery system will determine how many square feet of TPV cells will be required, which in turn will affect the cells’ design as well as the economics of the overall facility. This said, the fact that the system as a whole will have so few moving parts, together with the ready availability of sunlight and of graphite, firebricks or even molten silicon (which has a high heat capacity), keeps the allure of MIT’s high-temperature TPVs alive.

Featured image: A thermophotovoltaic cell (size 1 cm x 1 cm) mounted on a heat sink designed to measure the TPV cell efficiency. To measure the efficiency, the cell is exposed to an emitter and simultaneous measurements of electric power and heat flow through the device are taken. Caption and credit: Felice Frankel/MIT, CC BY-NC-ND.

Another exit from MIT Media Lab

J. Nathan Matias, a newly minted faculty member at Cornell University and a visiting scholar at the MIT Media Lab, has announced that he will cut all ties with the lab at the end of the academic year over its director Joi Ito’s association with Jeffrey Epstein. His announcement comes on the heels of one by Ethan Zuckerman, a philosopher and director of the lab’s Center for Civic Media, who also said he’d leave at the end of the academic year despite not having any job offers. Matias wrote on Medium on August 21:

During my last two years as a visiting scholar, the Media Lab has continued to provide desk space, organizational support, and technical infrastructure to CivilServant, a project I founded to advance a safer, fairer, more understanding internet. As part of our work, CivilServant does research on protecting women and other vulnerable people online from abuse and harassment. I cannot with integrity do that from a place with the kind of relationship that the Media Lab has had with Epstein. It’s that simple.

Zuckerman had alluded to a similar problem with a different group of people:

I also wrote notes of apology to the recipients of the Media Lab Disobedience Prize, three women who were recognized for their work on the #MeToo in STEM movement. It struck me as a terrible irony that their work on combatting sexual harassment and assault in science and tech might be damaged by their association with the Media Lab.

On the other hand, Ito’s note of apology on August 15, which precipitated these high-profile resignations and put the future of the lab in jeopardy, didn’t mention any regret over what his fraternising with Epstein could mean for the lab’s employees, many of whom work on sensitive projects. Instead, Ito only said that he would return the money Epstein donated to the lab – a sum of $200,000 (about Rs 1.43 crore) according to the Boston Globe – while pleading ignorance of Epstein’s crimes.

Remembering John Nash, mathematician who unlocked game theory for economics

The Wire
May 25, 2015

The economist and Nobel Laureate Robert Solow once said, “It wasn’t until Nash that game theory came alive for economists.” He was speaking of the work of John Forbes Nash, Jr., a mathematician whose 27-page PhD thesis from 1950 transformed a chapter in mathematics from a novel idea to a powerful tool in economics, business and political science.

At the time, Nash was only 21, his age a telltale mark of genius that had accompanied and would accompany him for the rest of his life.

That life was brought to a tragic close on May 23, when he and his wife Alicia Nash were killed in a car accident on the New Jersey Turnpike. He was 86 and she was 82; they are survived by two children.

Alicia (née Larde) met Nash when she took an advanced calculus class from him at the Massachusetts Institute of Technology in the mid-1950s. He had received his PhD in 1950 from Princeton University, spent some time as an instructor there and as a consultant at the Rand Corporation, and had moved to MIT in 1951 determined to take on the biggest problems in mathematics.

Between then and 1959, Nash made a name for himself as possibly one of the greatest mathematicians since Carl Friedrich Gauss. He solved what was until then believed to be an unsolvable problem in geometry dating from the 19th century. He worked on a cryptography machine he’d invented while at Rand and tried to get the NSA to use it. He worked with the Canadian-American mathematician Louis Nirenberg on the theory of non-linear partial differential equations (in recognition, the duo was awarded the coveted Abel Prize in 2015).

He made significant advances in geometry and analysis that – in the eyes of other mathematicians – easily overshadowed his work from the previous decade. After Nash was awarded the Nobel Prize for economics in 1994 for transforming the field of game theory, the joke was that he’d won the prize for his most trivial work.

In 1957, Nash took a break from the Institute for Advanced Study in Princeton, during which he married Alicia. In 1958, she became pregnant with John Charles Martin Nash. Then, in 1959, misfortune struck when Nash was diagnosed with paranoid schizophrenia. Over the next 20 years, the illness would transform him, his work and the community of his peers far beyond putting a dent in his professional career – even as it exposed the superhuman commitments of those who stood by him.

This group included his family, his friends at Princeton and MIT, and the Princeton community at large, even as Nash was as good as dead for the world outside.

His colleagues were no longer able to understand his work. He stopped publishing papers after 1958. He was committed to psychiatric hospitals many times but treatment didn’t help. Psychoanalysis was still in vogue in the 1950s and 1960s – while it has been discredited since, its unsurprising inability to get through to Nash ground away at people’s hopes. In these trying times, Alicia Nash became a great source of support.

Although the couple had divorced in 1963, he continued to write her strange letters – while roaming around Europe, while absconding from Princeton to Roanoke (Virginia), while convinced that the American government was spying on him.

She later let him live in her house along with their son, paying the bills by working as a computer programmer. Many believe that his eventual remission – in the 1980s – had been the work of Alicia. She had firmly believed that he would feel better if he could live in a quiet, friendly environment, occasionally bumping into old friends, walking familiar walkways in peace. Princeton afforded him just these things.

The remission was considered miraculous because it was wholly unexpected. The sense of Nash’s affliction was heightened by the genius tag – by how much of his brilliance the world was being deprived of – and that deprivation in turn intensified the sensation of loss, drawing out each day he was unable to make sense when he spoke or worked. John Moore, a mathematician and friend of the Nashes, thought those could have been his most productive years.

After journalist Sylvia Nasar’s book A Beautiful Mind, and then the Academy Award-winning movie based on it, his story became a part of popular culture – but the man himself withdrew from society. Ron Howard, who directed the movie, mentioned in a 2002 interview that Nash couldn’t remember large chunks of his life from the 1970s.

While mood disorders like depression strike far more people – and are these days almost commonplace – schizophrenia is more ruthless and debilitating. Even as scientists think it has a firm neurological basis, a perfect cure is yet to be invented because schizophrenia damages a victim’s mind as much as her/his ability to process social stimuli.

In Nash’s case, his family and friends among the professors of Princeton and MIT protected him from succumbing to his own demons – the voices in his head, the ebb of reason, the tendency to isolate himself – which together are often the first step toward suicide in people less cared for. Moreover, Nash’s own work played a role in his illness. He was convinced for a time that a new global government was on the horizon, a probable outcome in game theory that his work had made possible, and tried to give up his American citizenship. As a result, his re-emergence from two decades of mental torture was as much about escaping the vile grip of irrationality and paranoia as about regaining a sense of certainty in the face of his mathematics’ enchanting possibilities.

A Beautiful Mind closes with Nash’s peers at Princeton learning of his being awarded the Nobel in 1994 and walking up to his table to congratulate him. On screen, Russell Crowe smiles the smile of a simple man, a certain man, revealing nothing of the once-brazen virtuosity that had him dashing into classrooms at Princeton just to scribble equations on the boards, dismissing his colleagues’ work, raring to have a go at the next big thing in science. By then, that brilliance lay firmly trapped within John Nash’s beautiful but unsettled mind. With his death, and that of Alicia, that mind will now always be known and remembered by the brilliant body of work it produced.

A leap forward in ‘flow’ batteries

Newly constructed windmills D4 (nearest) to D1 on the Thornton Bank, 28 km off shore, on the Belgian part of the North Sea. The windmills are 157 m (+TAW) high, 184 m above the sea bottom.

Polymer-based separators in conventional batteries bring their share of structural and operational defects to the table, reducing the efficiency and lifetime of the battery. To circumvent this issue, researchers at the Massachusetts Institute of Technology (MIT) have developed a membrane-less ‘flow’ battery. It stores and releases energy using electrochemical reactions between hydrogen and bromine. Within the battery, bromine and hydrogen bromide are pumped through a channel between the electrodes. The researchers keep the flow rate very low, prompting the fluids to achieve laminar flow: in this state, they flow in parallel layers instead of mixing with each other, creating a ‘natural’ membrane that still keeps the ion-transfer channel open. The researchers, led by doctoral student William Braff, estimate that the battery, if scaled up to megawatts, could incur a one-time cost of as little as $100/kWh – a value that’s quite attractive to the emerging renewable-energy economy. From a purely research perspective, this H-Br variant is significant also for being the first rechargeable membrane-less ‘flow’ battery. I covered this development for The Hindu.
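Why does keeping the flow rate low produce laminar flow? The deciding quantity is the Reynolds number. The sketch below is a generic illustration with made-up but plausible channel dimensions and water-like fluid properties; none of the numbers are taken from Braff and co.’s paper.

```python
# Rough check that flow in a small battery channel is laminar.
# All values below are assumed for illustration; they are not from
# the MIT team's actual design.

density = 1000.0          # kg/m^3, water-like electrolyte (assumption)
viscosity = 1.0e-3        # Pa*s, water-like viscosity (assumption)
flow_speed = 0.005        # m/s, a deliberately low flow rate (assumption)
channel_height = 0.001    # m, a ~1 mm channel (assumption)

reynolds = density * flow_speed * channel_height / viscosity
print(f"Reynolds number ~ {reynolds:.0f}")
# Values far below ~2,000 mean viscous forces dominate: the two streams
# slide past each other in parallel layers instead of mixing, which is
# what lets the battery do without a physical membrane.
```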


Most of the principles of the MIT Media Lab, I think, can be adopted by young professionals looking to make it big. Doing so isn’t safe, and it isn’t a sure thing either, but it definitely re-establishes the connection with intuitive thought (“compasses”) instead of the process-entombed kind (“maps”) that’s driving many good ideas and initiatives – like the newspaper – into the ground.

Aaron Swartz is dead.

This article, as written by me and a friend, appeared in The Hindu on January 16, 2013.

In July 2011, Aaron Swartz was indicted in the District of Massachusetts for allegedly stealing more than 4.8 million articles from the online academic literature repository JSTOR via the computer network at the Massachusetts Institute of Technology. He was charged with, among other things, wire fraud, computer fraud and obtaining information from a protected computer; the indictment also sought criminal forfeiture.

After being released on a $100,000 bond, he was expected to stand trial in early 2013 to face the charges and, if found guilty, up to 35 years in prison and $1 million in fines. More than the likelihood of the sentence, however, what rankled him most was that he was labelled a “felon” by his government.

On Friday, January 11, Swartz’s fight – against the locking away of information as well as against the label given to him – ended when he hanged himself in his New York apartment. He was only 26. At the time of his death, JSTOR did not intend to press charges and had decided to make some 4.5 million of its articles freely available to the public. It seems as though this crime had no victims.

But he was so much more than an alleged thief of intellectual property. His life was a perfect snapshot of the American Dream. But the nature of his demise shows that dreams are not always what they seem.

At the age of 14, Swartz became a co-author of the RSS (RDF Site Summary) 1.0 specification, now a widely used method for subscribing to web content. He went on to attend Stanford University, dropped out, founded a popular social news website and then sold it — leaving him a near millionaire a few days short of his 20th birthday.

A recurring theme in his life and work, however, was internet freedom and public access to information, which led him to political activism. An activist organisation he founded campaigned heavily against the Stop Online Piracy Act (SOPA) bill and helped kill it. If passed, SOPA would have affected much of the world’s browsing.

At a time that is rife with talk of American decline, Swartz’s life reminds us that for now, the United States still remains the most innovative society on Earth, while his death tells us that it is also a place where envelope pushers discover, sometimes too late, that the line between what is acceptable and what is not is very thin.

The charges that he faced, in the last two years before his death, highlight the misunderstood nature of digital activism — an issue that has lessons for India. For instance, with Section 66A of the Indian IT Act in place, there is little chance of organising an online protest and blackout on par with the one that took place over the SOPA bill.

While civil disobedience and street protests usually carry light penalties, why should Swartz have faced long-term incarceration just because he used a computer instead? In an age of Twitter protests and online blackouts, his death sheds light on the disparities that digital activism is subjected to.

His act of trying to liberate millions of scholarly articles was undoubtedly political activism. But had he undertaken such an act in the physical world, he would have faced only light penalties for trespassing as part of a political protest. One could even argue that MIT encouraged such free exchange of information — it is no secret that its campus network has long been extraordinarily open with minimal security.

What then was the point of the public prosecutors highlighting his intent to profit from stolen property worth “millions of dollars” when Swartz’s only aim was to make the articles public, as a statement on the problems facing the academic publishing industry? After all, any academic would tell you that there is no way to profit off a hoard of scientific literature unless you dammed the flow and then released it for a payment.

In fact, JSTOR’s decision to not press charges against him came only after it had reclaimed its “stolen” articles – even though Laura Brown, the managing director of JSTOR, had announced in September 2011 that journal content from 500,000 articles would be released for free public viewing and download. In the meantime, Swartz was made to face 13 charges anyway.

Assuming the charges were reasonable at all, his demise would then mean that the gap between those who hold on to information and those who would use it is spanned only by what the government thinks is criminal. That the hammer fell so heavily on someone who tried to bridge this gap is tragic. Worse, long-drawn-out, expensive court cases are becoming roadblocks on the path towards change, especially when they involve prosecutors incapable of judging the difference between innovation and damage on the digital frontier. It doesn’t help that such prosecution also neatly avoids the aura of illegitimacy that imprisoning peaceful activists would carry for any government.

Today, Aaron Swartz is dead. All that it took to push a brilliant mind over the edge was a case threatening to wipe out his fortunes and ruin the rest of his life. In the words of Lawrence Lessig, the American academic and activist who was his mentor at Harvard University’s Edmond J. Safra Center for Ethics: “Somehow, we need to get beyond the ‘I’m right so I’m right to nuke you’ ethics of our time. That begins with one word: Shame.”

Problems associated with studying the brain

Paul Broca announced in 1861 that the region of the brain now named after him was the “seat of speech”. In a seminal study published on October 11, 2012, researchers Nancy Kanwisher and Evelina Fedorenko from MIT announced that Broca’s area actually consists of two sub-units, and that one of them specifically handles cognition when the body performs demanding tasks.

As researchers explore the subject further, two things become clear.

The first: the more we think we know about the brain and go on to study it, the more we discover things we never knew existed. This is significant because, apart from giving researchers more avenues through which to explore the brain, it also reveals their – rather, our – limits in being able to predict how things really might work.

The biology is, after all, intact. Cells are cells, muscles are muscles, but through their complex interactions are born entirely new functionalities.

The second: how the cognitive-processing and the language-processing networks might communicate internally is unknown to us. This means we’ll have to devise new ways of studying the brain, forcing it to flex some ‘muscles’ over others by having it perform carefully crafted tasks.

Placing a person under an fMRI scanner reveals a lot about which parts of the brain are being used at each moment – but now we realise we have no clue how many functionally distinct parts are actually there. This places an onus on the researcher to devise tests that:

  1. affect only specific areas of the brain; and
  2. if they end up affecting other areas as well, allow the researcher to distinguish between those areas based on how they handle the test (a toy sketch below illustrates this).

Once this is done, we will finally understand both the functions and the limits of Broca’s area, and also acquire pointers as to how it communicates with the rest of the brain.
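To make the second criterion a little more concrete, here is a toy sketch of the general ‘contrast’ logic such tests rely on: have the same areas respond to two different tasks and compare. Everything in it – the region names, the numbers – is invented for illustration; it is not how Kanwisher and Fedorenko actually analysed their data.

```python
# Toy illustration of distinguishing brain areas by how they respond to two
# tasks. This is a simplified contrast ("subtraction") exercise with made-up
# numbers, not the actual analysis from the MIT study.
import numpy as np

rng = np.random.default_rng(0)

# Simulated mean responses of two hypothetical sub-regions to two tasks,
# measured over several runs (arbitrary units; all values invented).
language_task = {"sub_region_A": rng.normal(2.0, 0.3, 20),
                 "sub_region_B": rng.normal(0.5, 0.3, 20)}
demanding_task = {"sub_region_A": rng.normal(0.6, 0.3, 20),
                  "sub_region_B": rng.normal(1.9, 0.3, 20)}

for region in ("sub_region_A", "sub_region_B"):
    contrast = language_task[region].mean() - demanding_task[region].mean()
    print(f"{region}: language minus demanding-task response = {contrast:+.2f}")
# A strongly positive contrast suggests a region specialised for language;
# a strongly negative one suggests a region recruited by demanding tasks
# in general.
```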

A lot of predictive power, and the research that could build on it, is held back by humankind’s still-inchoate visualisation of the brain.