New LHC data puts ‘new physics’ lead to bed

One particle in the big zoo of subatomic particles is the B meson. It has a very short lifetime once it’s created. In rare instances it decays to three lighter particles: a kaon, a lepton and an anti-lepton. There are many types of leptons and anti-leptons. Two are electrons/anti-electrons and muons/anti-muons. According to the existing theory of particle physics, the two pairs should appear as decay products with equal probability: a B meson should decay to a kaon, electron and anti-electron as often as it decays to a kaon, muon and anti-muon (after adjusting for mass, since the muon is heavier).

In the last 13 years, physicists studying B meson decays had found on four occasions that it decayed to a kaon, electron and anti-electron more often than to a kaon, muon and anti-muon. They were glad for it, in a way. They had worked out the existing theory, called the Standard Model of particle physics, from the mid-20th century in a series of Nobel Prize-winning papers and experiments. Today, it stands complete, explaining the properties of a variety of subatomic particles. But it still can’t explain what dark matter is, why the Higgs boson is so heavy or why there are three ‘generations’ of quarks, not more or fewer. If the Standard Model is old physics, particle physicists believe there could be a ‘new physics’ out there – some particle or force they haven’t discovered yet – which could really complete the Standard Model and settle the unresolved mysteries.

Over the years, they have explored various leads for ‘new physics’ in different experiments, but eventually, with more data, the findings have all been found to be in line with the predictions of the Standard Model. Until 2022, the anomalous B meson decays were thought to be a potential source of ‘new physics’ as well. A 2009 study in Japan found that some B meson decays created electron/anti-electron pairs more often than muon/anti-muon pairs – as did a 2012 study in the US and a 2014 study in Europe. The last one involved the Large Hadron Collider (LHC), operated by the European Organisation for Nuclear Research (CERN) in France, and a detector on it called LHCb. Among other things, the LHCb tracks B mesons. In March 2021, the LHCb collaboration released data statistically significant enough to claim ‘evidence’ that some B mesons were decaying to electron/anti-electron pairs more often than to muon/anti-muon pairs.

But the latest data from the LHC, released on December 20, appears to settle the question: it’s still old physics. The formation of different types of lepton/anti-lepton particle pairs with equal probability is called lepton-flavour universality. Since 2009, physicists had been recording data that suggested some B meson decays were violating lepton-flavour universality – a hint that a previously unknown particle or force was acting on the decay process. In the new data, physicists analysed B meson decays in the pathway discussed above as well as in one other pathway, and at two different energy levels – thus, as the official press release put it, “yielding four independent comparisons of the decays”. The more data there is to compare, the more robust the findings will be.

This data was collected over the last five years. Every time the LHC operates, it’s called a ‘run’. Each run generates several terabytes of data that physicists, with the help of computers, comb through in search of evidence for different hypotheses. The data for the new analysis was collected over two runs. And it led physicists to conclude that B mesons’ decay does not violate lepton-flavour universality. The Standard Model still stands and, perhaps equally importantly, a 13-year-old ‘new physics’ lead has been returned to dormancy.

The LHC is currently in its third run; scientists and engineers working with the machine perform maintenance and install upgrades between runs, so each new cycle of operations is expected to produce more as well as more precise data, leading to more high-precision analyses that could, physicists hope, one day reveal ‘new physics’.

The awesome limits of superconductors

On June 24, a press release from CERN said that scientists and engineers working on upgrading the Large Hadron Collider (LHC) had “built and operated … the most powerful electrical transmission line … to date”. The transmission line consisted of four cables – two capable of transporting 20 kA of current and two, 7 kA.

The ‘A’ here stands for ‘ampere’, the SI unit of electric current. Twenty kilo-amperes is an extraordinary amount of current, nearly equal to the amount in a single lightning strike.

In the particulate sense: one ampere is the flow of one coulomb per second. One coulomb is equal to around 6.24 quintillion elementary charges, where each elementary charge is the charge of a single proton or electron (with opposite signs). So a cable capable of carrying a current of 20 kA can essentially transport 124.8 sextillion electrons per second.
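
A quick back-of-the-envelope check of that figure, using only the numbers already quoted above:

```python
# Rough check of the electrons-per-second figure quoted above.
ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs per electron

current = 20e3  # 20 kA, i.e. 20,000 coulombs per second
electrons_per_second = current / ELEMENTARY_CHARGE

print(f"{electrons_per_second:.3e} electrons per second")
# ~1.248e+23, i.e. about 124.8 sextillion electrons every second
```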

According to the CERN press release (emphasis added):

The line is composed of cables made of magnesium diboride (MgB2), which is a superconductor and therefore presents no resistance to the flow of the current and can transmit much higher intensities than traditional non-superconducting cables. On this occasion, the line transmitted an intensity 25 times greater than could have been achieved with copper cables of a similar diameter. Magnesium diboride has the added benefit that it can be used at 25 kelvins (-248 °C), a higher temperature than is needed for conventional superconductors. This superconductor is more stable and requires less cryogenic power. The superconducting cables that make up the innovative line are inserted into a flexible cryostat, in which helium gas circulates.

The part in bold could have been more explicit and noted that superconductors, including magnesium diboride, can’t carry an arbitrarily larger amount of current than non-superconducting conductors. There is in fact a limit, and it arises for much the same reason that a normal conductor’s current-carrying capacity is limited.

This explanation wouldn’t change the impressiveness of the feat, and could even have distracted readers from the most important details, so I can see why the person who drafted the statement left it out. Instead, I’ll take the matter up here.

An electric current is generated between two points when electrons move from one point to the other. The direction of current is opposite to the direction of the electrons’ movement. A metal that conducts electricity does so because its constituent atoms have one or more valence electrons that can flow throughout the metal. So if a voltage arises between two ends of the metal, the electrons can respond by flowing around, birthing an electric current.

This flow isn’t perfect, however. Sometimes, a valence electron can bump into atomic nuclei, impurities – atoms of other elements in the metallic lattice – or be thrown off course by vibrations in the lattice of atoms, produced by heat. Such disruptions across the metal collectively give rise to the metal’s resistance. And the more resistance there is, the less current the metal can carry.

These disruptions often heat the metal as well. This happens because electrons don’t just flow between the two points across which a voltage is applied. They’re accelerated. So as they’re speeding along and suddenly bump into an impurity, they’re scattered into random directions. Their kinetic energy then no longer contributes to the electric energy of the metal and instead manifests as thermal energy – or heat.

If the electrons bump into nuclei, they could impart some of their kinetic energy to the nuclei, causing the latter to vibrate more, which in turn means they heat up as well.

Copper and silver have high conductivity because they have more valence electrons available to conduct electricity, and these electrons are scattered to a lesser extent than in other metals. As a result, these two metals also don’t heat up as quickly as others might, allowing them to transport a higher current for longer. Copper in particular has a long mean free path: the average distance an electron travels before being scattered.
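
To make the link between mean free path and conductivity a little more concrete, here’s a rough sketch using the Drude model and typical textbook values for copper. The exact numbers vary from source to source, so treat this as an estimate rather than a definitive calculation:

```python
# Drude-model estimate of copper's conductivity: sigma = n * e^2 * tau / m,
# where tau is the average time between scattering events. A longer mean free
# path means a longer tau, and therefore a higher conductivity.

e = 1.602e-19        # elementary charge, C
m_e = 9.109e-31      # electron mass, kg
n = 8.5e28           # free-electron density of copper, per m^3 (textbook value)
mean_free_path = 39e-9   # ~39 nm at room temperature (approximate)
fermi_velocity = 1.57e6  # m/s (approximate)

tau = mean_free_path / fermi_velocity   # mean time between collisions, s
sigma = n * e**2 * tau / m_e            # conductivity, S/m

print(f"tau   ~ {tau:.2e} s")
print(f"sigma ~ {sigma:.2e} S/m")  # ~6e7 S/m, close to copper's measured ~5.96e7 S/m
```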

In superconductors, the picture is quite different because quantum physics assumes a more prominent role. There are different types of superconductors according to the theories used to understand how they conduct electricity with zero resistance and how they behave in different external conditions. The electrical behaviour of magnesium diboride, the material used to transport the 20 kA current, is described by Bardeen-Cooper-Schrieffer (BCS) theory.

According to this theory, when certain materials are cooled below a certain temperature, the residual vibrations of their atomic lattice encourage their valence electrons to overcome their mutual repulsion and become correlated, especially in terms of their movement. That is, the electrons pair up.

While individual electrons belong to a class of particles called fermions, these electron pairs – a.k.a. Cooper pairs – belong to another class called bosons. One difference between these two classes is that bosons don’t obey Pauli’s exclusion principle: that no two fermions in the same quantum system (like an atom) can have the same set of quantum numbers at the same time.

As a result, all the electron pairs in the material are now free to occupy the same quantum state – which they will when the material is supercooled. When they do, the pairs collectively make up an exotic state of matter called a Bose-Einstein condensate: the electron pairs now flow through the material as if they were one cohesive liquid.

In this state, even if one pair gets scattered by an impurity, the current doesn’t experience resistance because the condensate’s overall flow isn’t affected. In fact, given that breaking up one pair will cause all other pairs to break up as well, the energy required to break up one pair is roughly equal to the energy required to break up all pairs. This feature affords the condensate a measure of robustness.

But while current can keep flowing through a BCS superconductor with zero resistance, the superconducting state itself doesn’t persist indefinitely. It can break if the material stops being cooled below a specific temperature, called the critical temperature; if the material is too impure, leading to enough collisions to ‘kick’ all the electron pairs out of their condensate reverie; or if the current density crosses a particular threshold.
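
Put differently, a superconductor stays superconducting only inside a region bounded by a critical temperature, a critical magnetic field and a critical current density. Here’s a toy sketch of that idea – magnesium diboride’s critical temperature of about 39 K is a real figure, but the field and current-density limits below are purely illustrative placeholders, since the real values depend heavily on how the material is made and used:

```python
# Toy illustration of the 'critical surface' idea: a superconductor remains
# superconducting only if temperature, magnetic field and current density all
# stay below their critical values (which in reality also depend on one another).

T_C = 39.0    # K     -- critical temperature of MgB2
B_C = 10.0    # T     -- illustrative placeholder, not a measured value
J_C = 1.0e9   # A/m^2 -- illustrative placeholder, not a measured value

def is_superconducting(temperature_k: float, field_t: float, current_density: float) -> bool:
    """Crude check: every operating parameter must stay under its critical value."""
    return temperature_k < T_C and field_t < B_C and current_density < J_C

print(is_superconducting(25.0, 1.0, 1.0e7))  # True: 25 K is below MgB2's critical temperature
print(is_superconducting(25.0, 1.0, 5.0e9))  # False: current density crosses the threshold
```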

At the LHC, the magnesium diboride cables will carry current to electromagnets. When a large current flows through an electromagnet’s coils, it produces a magnetic field. The LHC uses a circular arrangement of such magnetic fields to bend the beams of protons it accelerates into a circular path. The more powerful the magnetic field, the more tightly it can bend the paths of the highly energetic protons. The current operational field strength is 8.36 tesla, about 128,000-times more powerful than Earth’s magnetic field. The cables will be insulated but they will still be exposed to a large magnetic field.
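
For a sense of why such strong fields are needed: a charged particle in a magnetic field moves along a circle of radius r = p/qB. A rough estimate for a 7 TeV proton in an 8.3 T field:

```python
# Bending radius r = p / (q * B) for a proton in an LHC-strength dipole field.
c = 299792458.0        # speed of light, m/s
e = 1.602176634e-19    # proton charge, C

energy_eV = 7e12       # 7 TeV; at this energy the proton is ultra-relativistic,
                       # so its momentum is approximately E/c
p = energy_eV * e / c  # momentum, kg*m/s
B = 8.3                # tesla

r = p / (e * B)
print(f"bending radius ~ {r/1000:.1f} km")  # ~2.8 km, close to the LHC's actual bending radius
```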

Type I superconductors completely expel an external magnetic field when they transition to their superconducting state. That is, the magnetic field can’t penetrate the material’s surface and enter the bulk. Type II superconductors are slightly more complicated. Below a critical temperature and a lower critical field strength, they behave like type I superconductors. Below the same temperature but between the lower and an upper critical field strength, they remain superconducting but allow the field to penetrate their bulk to a certain extent. This is called the mixed state.

A hand-drawn phase diagram showing the conditions in which a mixed-state type II superconductor exists. Credit: Frederic Bouquet/Wikimedia Commons, CC BY-SA 3.0

Say a uniform magnetic field is applied over a mixed-state superconductor. The field will plunge into the material’s bulk in the form of vortices. All these vortices will carry the same amount of magnetic flux – the amount of magnetic field passing through a given area – and will repel each other, settling into a triangular pattern, equidistant from each other.
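
Each such vortex carries a fixed, quantised amount of flux – the magnetic flux quantum, Φ0 = h/2e, the 2 reflecting the charge of a Cooper pair. For reference:

```python
# The magnetic flux quantum carried by each vortex in a type II superconductor:
# phi_0 = h / (2e).
h = 6.62607015e-34    # Planck constant, J*s
e = 1.602176634e-19   # elementary charge, C

phi_0 = h / (2 * e)
print(f"flux quantum ~ {phi_0:.3e} Wb")  # ~2.07e-15 weber
```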

An annotated image of vortices in a type II superconductor. The scale is specified at the bottom right. Source: A set of slides entitled ‘Superconductors and Vortices at Radio Frequency Magnetic Fields’ by Ernst Helmut Brandt, Max Planck Institute for Metals Research, October 2010.

When an electric current passes through this material, the vortices are slightly displaced, and also begin to experience a force proportional to how closely they’re packed together and their pattern of displacement. As a result, to quote from this technical (yet lucid) paper by Praveen Chaddah:

This force on each vortex … will cause the vortices to move. The vortex motion produces an electric field1 parallel to [the direction of the existing current], thus causing a resistance, and this is called the flux-flow resistance. The resistance is much smaller than the normal state resistance, but the material no longer [has] infinite conductivity.

1. According to Maxwell’s equations of electromagnetism, a changing magnetic field produces an electric field.

The vortices’ displacement depends on the current density: the greater the number of electrons being transported, the more flux-flow resistance there is. So the magnesium diboride cables can’t simply carry more and more current. At some point, setting aside other sources of resistance, the flux-flow resistance itself will damage the cable.

There are ways to minimise this resistance. For example, the material can be doped with impurities that will ‘pin’ the vortices to fixed locations and prevent them from moving around. However, optimising these solutions for a given magnetic field and other conditions involves complex calculations that we don’t need to get into.

The point is that superconductors have their limits too. And knowing these limits could improve our appreciation for the feats of physics and engineering that underlie achievements like cables being able to transport 124.8 sextillion electrons per second with zero resistance. In fact, according to the CERN press release,

The [line] that is currently being tested is the forerunner of the final version that will be installed in the accelerator. It is composed of 19 cables that supply the various magnet circuits and could transmit intensities of up to 120 kA!

§

While writing this post, I was frequently tempted to quote from Lisa Randall‘s excellent book-length introduction to the LHC, Knocking on Heaven’s Door (2011). Here’s a short excerpt:

One of the most impressive objects I saw when I visited CERN was a prototype of LHC’s gigantic cylindrical dipole magnets. Even with 1,232 such magnets, each of them is an impressive 15 metres long and weighs 30 tonnes. … Each of these magnets cost EUR 700,000, making the net cost of the LHC magnets alone more than a billion dollars.

The narrow pipes that hold the proton beams extend inside the dipoles, which are strung together end to end so that they wind through the extent of the LHC tunnel’s interior. They produce a magnetic field that can be as strong as 8.3 tesla, about a thousand times the field of the average refrigerator magnet. As the energy of the proton beams increases from 450 GeV to 7 TeV, the magnetic field increases from 0.54 to 8.3 teslas, in order to keep guiding the increasingly energetic protons around.

The field these magnets produce is so enormous that it would displace the magnets themselves if no restraints were in place. This force is alleviated through the geometry of the coils, but the magnets are ultimately kept in place through specially constructed collars made of four-centimetre thick steel.

… Each LHC dipole contains coils of niobium-titanium superconducting cables, each of which contains stranded filaments a mere six microns thick – much smaller than a human hair. The LHC contains 1,200 tonnes of these remarkable filaments. If you unwrapped them, they would be long enough to encircle the orbit of Mars.

When operating, the dipoles need to be extremely cold, since they work only when the temperature is sufficiently low. The superconducting wires are maintained at 1.9 degrees above absolute zero … This temperature is even lower than the 2.7-degree cosmic microwave background radiation in outer space. The LHC tunnel houses the coldest extended region in the universe – at least that we know of. The magnets are known as cryodipoles to take into account their special refrigerated nature.

In addition to the impressive filament technology used for the magnets, the refrigeration (cryogenic) system is also an imposing accomplishment meriting its own superlatives. The system is in fact the world’s largest. Flowing helium maintains the extremely low temperature. A casing of approximately 97 metric tonnes of liquid helium surrounds the magnets to cool the cables. It is not ordinary helium gas, but helium with the necessary pressure to keep it in a superfluid phase. Superfluid helium is not subject to the viscosity of ordinary materials, so it can dissipate any heat produced in the dipole system with great efficiency: 10,000 metric tonnes of liquid nitrogen are first cooled, and this in turn cools the 130 metric tonnes of helium that circulate in the dipoles.

Featured image: A view of the experimental MgB2 transmission line at the LHC. Credit: CERN.

Atoms within atoms

It’s a matter of some irony that forces that act across larger distances also give rise to lots of empty space – although the more you think about it, the more it makes sense. The force of gravity, for example, can act across millions of kilometres, but this only means two massive objects can influence each other across this distance instead of having to get closer to do so. Thus, you have galaxies with a lot more space between stars than stars themselves.

The electromagnetic force, like the force of gravity, also follows an inverse-square law: its strength falls off as the square of the distance – but never fully reaches zero. So you can have an atom with a nucleus of protons and neutrons held tightly together but electrons located so far away that each atom is more than 90% empty space.

In fact, you can use the rules of subatomic physics to make atoms even more vacuous. Electrons orbit the nucleus in an atom at fixed distances, and when an electron gains some energy, it jumps into a higher orbit. Physicists have been able to excite electrons to such high energies that the atom itself becomes thousands of times larger than an atom of hydrogen.
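
To get a rough sense of the scale: the radius of a hydrogen-like atom grows as the square of the principal quantum number n, so an electron excited to, say, n = 100 sits about 10,000-times farther from the nucleus than in ground-state hydrogen. A quick sketch:

```python
# Size of a Rydberg atom, using the hydrogen-like scaling r_n ~ a_0 * n^2.
BOHR_RADIUS = 5.29177e-11   # metres (~0.53 angstrom)

def rydberg_radius(n: int) -> float:
    """Approximate orbital radius for principal quantum number n."""
    return BOHR_RADIUS * n**2

for n in (1, 10, 100):
    r = rydberg_radius(n)
    print(f"n = {n:3d}: r ~ {r:.2e} m ({r / BOHR_RADIUS:,.0f}x ground-state hydrogen)")
# n = 100 gives a radius of about half a micrometre -- thousands of times
# larger than an ordinary atom, which is the regime described above.
```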

This is the deceptively simple setting for the Rydberg polaron: the atom inside another atom, with some features added.

In January 2018, physicists from Austria, Brazil, Switzerland and the US reported creating the first Rydberg polaron in the lab, based on theoretical predictions that another group of researchers had advanced in October 2015. The concept, as usual, is far simpler than the execution, so exploring the latter should provide a good sense of the former.

The January 2018 group first created a Bose-Einstein condensate, a state of matter in which a dilute gas of particles called bosons is maintained in an ultra-cold container. Bosons are particles whose quantum spin takes integer values. (Other particles called fermions have half-integer spin). As the container is cooled to near absolute zero, the bosons begin to collectively display quantum mechanical phenomena at the macroscopic scale, essentially becoming a new form of matter and displaying certain properties that no other form of matter has been known to exhibit.

Atoms of strontium-84, -86 and -88 have zero spin, so the physicists used them to create the condensate. Next, they used lasers to bombard some strontium atoms with photons to impart energy to electrons in the outermost orbits (a.k.a. valence electrons), forcing them to jump to an even higher orbit. Effectively, the atom expands, becoming a so-called Rydberg atom[1]. In this state, if the distance between the nucleus and an excited electron is greater than the average distance between the other strontium atoms in the condensate, then some of the other atoms could technically fit into the Rydberg atom, forming the atom-within-an-atom.

[1] Rydberg atoms are called so because many of their properties depend on the value of the principal quantum number, which the Swedish physicist Johannes Robert Rydberg first (inadvertently) described in a formula in 1888.

Rydberg atoms are gigantic relative to other atoms; some are even bigger than a virus, and their interactions with their surroundings can be observed under a simple light microscope. They are relatively long-lived, in that the excited electron decays to its ground state slowly. Astronomers have found them in outer space. However, Rydberg atoms are also fragile: because the electron is already so far from the nucleus, other particles in the vicinity, a weak electromagnetic field or even a slight rise in temperature could easily knock the excited electron out, ending the Rydberg state.

Some clever physicists took advantage of this property and used Rydberg atoms as sensitive detectors of single photons of light. They won the Nobel Prize for physics for such work in 2012.

However, simply sticking one atom inside a Rydberg atom doth not a Rydberg polaron make. A polaron is a quasiparticle, which means it isn’t an actual particle by itself, as the –on suffix might suggest, but an entity that scientists study as if it were a particle. Quasiparticles are thus useful because they simplify the study of more complicated entities by allowing scientists to apply the rules of particle physics to arrive at equally correct solutions.

That said, a polaron is a quasiparticle built around an actual particle. Specifically, physicists describe electrons inside a solid as polarons because, as the electrons interact with the atomic lattice, they behave in ways that electrons usually don’t. So polarons combine the study of electrons and of electrons-interacting-with-atoms into a single subject.

Similarly, a Rydberg polaron is formed when the electron inside the Rydberg atom interacts with the trapped strontium atom. While an atom within an atom is cool enough, the January 2018 group wanted to create a Rydberg polaron because it’s considered to be a new state of matter – and they succeeded. The physicists found that the excited electron did develop a loose interaction with the strontium atoms lying between itself and the Rydberg atom’s nucleus – so loose that even as they interacted, the electron could still remain part of the Rydberg atom without getting kicked out.

In effect, since the Rydberg atom and the strontium atoms inside it influence each other’s behaviour, they altogether made up one larger complicated assemblage of protons, neutrons and electrons – a.k.a. a Rydberg polaron.

Good writing is an atom

https://twitter.com/HochTwit/status/1174875013708746752

The act of writing well is like an atom, or the universe. There is matter but it is thinly distributed, with lots of empty space in between. Removing this seeming nothingness won’t help, however. Its presence is necessary for things to remain the way they are and work just as well. Similarly, writing is not simply the deployment of words. There is often the need to stop mid-word and take stock of what you have composed thus far and what the best way to proceed could be, even as you remain mindful of the elegance of the sentence you are currently constructing and its appropriate situation in the overarching narrative. In the end, there will be lots of words to show for your effort but you will have spent even more time thinking about what you were doing and how you were doing it. Good writing, like the internal configuration of a set of protons, neutrons and electrons, is – physically speaking – very little about the labels attached to describe them. And good writing, like the vacuum energy of empty space, acquires its breadth and timelessness because it encompasses a lot of things that one cannot directly see.

‘Weak charge’ measurement holds up SM prediction

Various dark matter detectors around the world, massive particle accelerators and colliders, powerful telescopes on the ground and in space all have their distinct agendas but ultimately what unites them is humankind’s quest to understand what the hell this universe is on about. There are unanswered questions in every branch of scientific endeavour that will keep us busy for millennia to come.

Among them, physics seems to be suffering uniquely, as it stumbles even as we speak through a ‘nightmare scenario’: the most sensitive measurements we have made of the physical reality around us, at the largest and smallest scales, don’t agree with what physicists have been able to work out on paper. Something’s gotta give – but scientists don’t know where or how they will find their answers.

The Qweak experiment at the Jefferson Lab, Virginia, is one of scores of experiments around the world trying to find a way out of the nightmare scenario. And Qweak is doing that by studying how the rate at which electrons scatter off a proton is affected by the electrons’ polarisation (a.k.a. spin polarisation: whether the spin of each electron is “left” or “right”).

Unlike instruments like the Large Hadron Collider, which are very big, operate at much higher energies, are expensive and are used to look for new particles hiding in spacetime, Qweak and others like it make ultra-precise measurements of known values, in effect studying the effects of particles both known and unknown on natural phenomena.

And if these experiments find that these values deviate, at some level, from those predicted by the theory, physicists will have the break they’re looking for. For example, if Qweak is the one to break new ground, then physicists will have reason to suspect that the two nuclear forces of nature, simply called strong and weak, hold some secrets.

However, Qweak’s latest – and possibly its last – results don’t break new ground. In fact, they assert that the current theory of particle physics is correct, the same theory that physicists are trying to break free of.

Most of us are familiar with protons and electrons: they’re subatomic particles, carry positive and negative charges respectively, and are the stuff of one chapter of high-school physics. What students of science find out only later is that electrons are fundamental particles – they’re not made up of smaller particles – but protons are not. Protons are made up of quarks and gluons.

Interactions between electrons and quarks/gluons are mediated by two fundamental forces: the electromagnetic and the weak nuclear. The electromagnetic force is much stronger than the aptly named weak nuclear force. On the other hand, it is agnostic to the electron’s polarisation, while the weak nuclear force is sensitive to it. In fact, the weak nuclear force is known to respond differently to left- and right-handed particles.

When electrons are bombarded at protons, some of them are scattered off. Scientists then measure how often this happens and at what angle, together with the electrons’ polarisation – and try to find correlations between the two sets of data.

An illustration showing the expected outcomes when left- and right-handed electrons, visualised as mirror-images of each other, scatter off of a proton. Credit: doi:10.1038/s41586-018-0096-0

At Qweak, the electrons were accelerated to 1.16 GeV and bombarded at a tank of liquid hydrogen. A detector positioned near the tank picked up on electrons scattered at angles between 5.8º and 11.6º. By finely tuning different aspects of this setup, the scientists were able to up the measurement precision to 10 parts per billion.

For example, they were able to achieve a detection rate of 7 billion per second, a target luminosity of 1.7 × 10³⁹ cm⁻² s⁻¹ and provide a polarised beam of electrons at 180 µA – all considered high for an experiment of this kind.

The scientists were looking for patterns in the detector data that would tell them something about the proton’s weak charge: the strength with which it interacts with electrons via the weak nuclear force. (Its notation is Qweak, hence the experiment’s name.)

At Qweak, they’re doing this by studying how the electrons are scattered versus their polarisation. The Standard Model (SM) of particle physics, the theory that physicists work with to understand the behaviour of elementary particles, predicts that the number of left- and right-handed electrons scattered should differ by one for every 10 million interactions. If this number is found to be bigger or smaller than predicted when measured in the wild, then the Standard Model will be in trouble – much to physicists’ delight.

SM’s corresponding value for the proton’s weak charge is 0.0708. At Qweak, the value was measured to be 0.0719 ± 0.0045, i.e. between 0.0674 and 0.0764, completely agreeing with the SM prediction. Something’s gotta give – but it’s not going to be the proton’s weak charge for now.
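
A quick check of how compatible those two numbers are, using nothing but the figures quoted above:

```python
# How far is the Qweak measurement from the Standard Model prediction,
# in units of the measurement's own uncertainty?
measured = 0.0719
uncertainty = 0.0045
predicted = 0.0708

n_sigma = abs(measured - predicted) / uncertainty
print(f"difference = {measured - predicted:+.4f}, i.e. ~{n_sigma:.2f} sigma")
# ~0.24 sigma: well within the error bar, hence 'completely agreeing' with the SM.
```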

Paper: Precision measurement of the weak charge of the proton

Featured image credit: Pexels/Unsplash.

Weyl semimetals make way for super optics

In 2015, materials scientists made an unexpected discovery. In a compound of the metals tantalum and arsenic, they discovered a quasiparticle called a Weyl fermion. A quasiparticle is a packet of energy trapped in a system, like a giant cage of metal atoms, that in some ways moves around and interacts like a particle would. A fermion is a type of elementary particle that makes up matter; it includes electrons. A Weyl fermion, however, is a collection of electrons that behaves as if it is one big fermion – and as if it has no mass.

In June 2017, physicists reported that they had discovered another kind of Weyl fermion, dubbed a type-II Weyl fermion, in a compound of aluminium, germanium and lanthanum. It differed from other Weyl fermions in that it violated Lorentz symmetry. According to Wikipedia, Lorentz symmetry is the fact that “the laws of physics stay the same for all observers that are moving with respect to one another within an inertial frame”.

Both ‘regular’ and type-II Weyl fermions can do strange things. By extension, the solid substance engineered to be hospitable to Weyl fermions can be a strange thing itself. For example, when an electrical conductor is placed within a magnetic field, the current flowing through it faces more resistance. However, in a conductor conducting electricity using the flow of Weyl fermions, the resistance drops when a magnetic field is applied. When there are type-II Weyl fermions, resistance drops if the magnetic field is applied one way and increases if the field is applied the other way.

In the case of a Weyl semimetal, things get weirder.

Crystals are substances whose atoms are arranged in a regular, repeating pattern throughout. They’re almost always solids (which makes LCD displays cooler). Sometimes, this arrangement of atoms carries a tension, as if the atoms themselves were beads on a taut guitar string. If the string is plucked, it begins to vibrate at a particular note. Similarly, a crystal lattice vibrates at a particular note in some conditions, as if thrumming with energy. As the thrum passes through the crystal carrying this energy, it is as if a quasiparticle is making its way. Such quasiparticles are called phonons.

A Weyl semimetal is a crystal that hosts a quasiparticle analogous to a phonon, except that it is a Weyl fermion. So instead of carrying vibrational energy, a Weyl semimetal’s lattice carries electrical energy. Mindful of this uncommon ability, a group of physicists reported a unique application of Weyl semimetals on June 5, with a paper in the journal Physical Review B.

It’s called a superlens. A more historically aware name is the Veselago lens, for the Russian physicist Viktor Veselago, who didn’t create the lens itself but laid the theoretical foundations for its abilities in a 1967 paper. The underlying physics is in fact high-school stuff.

When light passes through a rarer medium into a denser medium, its path becomes bent towards the normal (see image below).

Credit: Wikimedia Commons

How much the path bends depends on the refractive indices of the two mediums. In nature, the indices are always positive, and this angle of deflection is always positive as well. The light ray coming in through the second quadrant (in the image) will either pass through the fourth quadrant, as depicted, or, if the denser medium is too dense, be largely reflected back into the third quadrant.

But if the denser medium has a negative refractive index, then the ray entering from the second quadrant will exit through the first quadrant, like so:

The left panel depicts refraction when the refractive indices are positive. In the right panel, the ‘green’ medium has a negative refractive index, causing the light to bend inward. Credit: APS/Alan Stonebraker
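
Snell’s law – n1 sin θ1 = n2 sin θ2 – covers both cases: with a positive n2 the refracted ray lands in the fourth quadrant, while a negative n2 flips the sign of the refraction angle and sends the ray into the first quadrant. A small sketch:

```python
# Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
# A negative refractive index flips the sign of the refraction angle,
# i.e. the ray emerges on the same side of the normal it came in on.
import math

def refraction_angle_deg(n1: float, n2: float, theta1_deg: float) -> float:
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s))

print(refraction_angle_deg(1.0,  1.5, 30.0))  # ~+19.5 deg: ordinary refraction, towards the normal
print(refraction_angle_deg(1.0, -1.5, 30.0))  # ~-19.5 deg: 'negative' refraction, to the other side
```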

Using computer simulations built on Veselago’s insights, the British physicist J.B. Pendry showed in 2000 that such mediums could be used to refocus light diverging from a point. (I highly recommend giving his paper a read if you’ve studied physics at the undergraduate level.)

Credit: APS

This is a deceptively simple application. It stands for much more in the context of how microscopes work.

A light microscope, of the sort used in biology labs, has a maximum magnification of about 1,500x. This is because the microscope is limited by the size of the thing it uses to study its sample: light itself. Specifically, visible light as a wave has a wavelength of roughly 400 nanometers (corresponding to bluer colours) to 700 nanometers (redder colours). The microscope will be blind to anything smaller than these wavelengths, imposing a limit on the size of the sample. So physicists use an electron microscope. As waves, electrons have a wavelength about 100,000-times shorter than that of visible-light photons. This allows electron microscopes to magnify objects by 10,000,000-times and probe samples a few dozen picometers wide. But as it happens, scientists are still disappointed: they want to probe even smaller samples now.
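
The claim about electron wavelengths is easy to sanity-check with the de Broglie relation. A rough, non-relativistic estimate, assuming electrons accelerated through 100 kV (a typical electron-microscope voltage, used here only as an assumed figure):

```python
# De Broglie wavelength of an electron accelerated through a potential V:
# lambda = h / sqrt(2 * m * e * V)   (non-relativistic approximation).
import math

h = 6.626e-34      # Planck constant, J*s
m_e = 9.109e-31    # electron mass, kg
e = 1.602e-19      # elementary charge, C

V = 100e3          # 100 kV accelerating voltage (assumed figure)
wavelength = h / math.sqrt(2 * m_e * e * V)

visible = 550e-9   # a middle-of-the-range visible wavelength, m
print(f"electron wavelength ~ {wavelength*1e12:.1f} pm")         # ~3.9 pm
print(f"visible light is ~{visible / wavelength:,.0f}x longer")  # ~140,000x
# Same ballpark as the '100,000-times shorter' figure above
# (a fully relativistic treatment shifts this a little).
```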

To overcome this, Pendry had proposed in his 2000 study that a material with a negative refractive index could be used to focus light – rather, electromagnetic radiation – in a way that was independent of its wavelength. In 2007, British and American physicists found a way to achieve this in graphene, a two-dimensional, single-atom-thick layer of carbon atoms – but using electrons instead of photons. Scientists had previously noted that some electrons in graphene can flow through the material as if they had no mass. In the 2007 study, when these electrons were passed through a p-n junction, the kind of junction commonly found in semiconductor electronics, their paths bent inward on the other side as if the refractive index were negative.

In the June 5 paper in Physical Review B, physicists demonstrated the same phenomenon – using electrons – in a three-dimensional material: a Weyl semimetal. According to them, a stack of two Weyl semimetals can be engineered such that the Weyl fermions from one semimetal compound can enter the other as if the latter had a negative refractive index. With this in mind, Adolfo Grushin and Jens Bardarson write in Physics:

Current [scanning tunnelling electron microscopes (STMs)] use a sharp metallic tip to focus an electron beam onto a sample. Since STM’s imaging resolution is limited by the tip’s geometry and imperfections, it ultimately depends on the tip manufacturing process, which today remains a specialised art, unsuitable for mass production. According to [the paper’s authors], replacing the STM tip with their multilayer Weyl structure would result in a STM whose spatial resolution is limited only by how accurately the electron beam can be focused through Veselago lensing. A STM designed in this way could focus electron beams onto sub-angstrom regions, which would boost STM’s precision to levels at which the technique could routinely see individual atomic orbitals and chemical bonds.

This is the last instalment in a loose trilogy of pieces documenting the shape of the latest research on topological materials. You can read the other two here and here.

Amorphous topological insulators

A topological insulator is a material that conducts electricity only on its surface. Everything below, through the bulk of the material, is an insulator. An overly simplified way to understand this is in terms of the energies and momenta of the electrons in the material.

The electrons that an atom can spare to share with other atoms – and so form chemical bonds – are called valence electrons. In a metal, these electrons can have various momenta, but unless they have a sufficient amount of energy, they’re going to stay near their host atoms – i.e. within the valence band. If they do have energies over a certain threshold, then they can graduate from the valence band to the conduction band, flowing through the metal and conducting electricity.

In a topological insulator, the energy gap between the valence band and the conduction band is occupied by certain ‘states’ that represent the material’s surface. The electrons in these states aren’t part of the valence band but they’re not part of the conduction band either, and can’t flow through the entire bulk.

The electrons within these states, i.e. on the surface, display a unique property. Their spins (on their own axis) are coupled strongly with their motion around their host atoms. As a result, their spins become aligned perpendicularly to their momentum, the direction in which they can carry electric charge. Such coupling staves off an energy-dissipation process called Umklapp scattering, allowing them to conduct electricity. Detailed observations have shown that the spin-momentum coupling necessary to achieve this is present only in a few-nanometre-thick layer on the surface.

If you’re talking about this with a physicist, she will likely tell you at this point about time-reversal symmetry. It is a symmetry of nature that is said to (usually) ‘protect’ a topological insulator’s unique surface states.

There are many fundamental symmetries in nature. In particle physics, if a force acts similarly on left- and right-handed particles, it is said to preserve parity (P) symmetry. If the dynamics of the force are similar when it acts on positively and negatively charged particles, then charge conjugation (C) symmetry is said to be preserved. And if you videotaped the force acting on a particle, played the recording backwards, and the force appeared to act just as it does when the recording is played forwards, the force would be preserving time-reversal (T) symmetry.

Physicists know of some phenomena that break C and P symmetry simultaneously. T symmetry is broken continuously by the second law of thermodynamics: if you filmed the entropy of a universe and then played the recording backwards, entropy would be seen to be decreasing. However, the combined CPT symmetry cannot be broken (we think).

Anyway, the surface states of a topological insulator are protected by T symmetry. This is because the electrons’ wave-functions, the mathematical equations that describe some of the particles’ properties, do not ‘flip’ going backwards in time. As a result, a topological insulator cannot lose its surface states unless it undergoes some sort of transformation that breaks time-reversal symmetry. (One example of such a transformation is a phase transition.)

This laboured foreword is necessary – at least IMO – to understand what it is that scientists look for when they’re hunting for topological insulators among all the materials we have synthesised or will be able to synthesise. It seems they’re looking for materials that have surface states, with spin-momentum coupling, that are protected by T symmetry.


Physicists from the Indian Institute of Science, Bengaluru, have found that topological insulators needn’t always be crystals – as has been thought. Instead, using a computer simulation, Adhip Agarwala and Vijay Shenoy, of the institute’s physics department, have shown that a kind of glass also behaves as a topological insulator.

The band theory described earlier is usually formulated with crystals in mind, wherein the material’s atoms are arranged in a well-defined pattern. This allows physicists to determine, with some amount of certainty, how the atoms’ electrons interact and give rise to the material’s topological states. In an amorphous material like glass, on the other hand, the constituent atoms are arranged randomly. How then can something as well-organised as a surface with spin-momentum coupling be possible on it?

As Michael Schirber wrote in Physics magazine,

In their study, [Agarwala and Shenoy] assume a box with a large number of lattice sites arranged randomly. Each site can host electrons in one of several energy levels, and electrons can hop between neighboring sites. The authors tuned parameters, such as the lattice density and the spacing of energy levels, and found that the modeled materials could exhibit symmetry-protected surface currents in certain cases. The results suggest that topological insulators could be made by creating glasses with strong spin-orbit coupling or by randomly placing atoms of other elements inside a normal insulator.
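
To make that setup a little more concrete, here’s a minimal sketch of a random-lattice hopping model of the kind the excerpt describes: sites scattered at random in a box, with hopping amplitudes that fall off with distance. It deliberately leaves out the spin-orbit terms Agarwala and Shenoy need to actually produce a topological phase, so treat it as an illustration of the setting, not of their model:

```python
# Minimal sketch: electrons hopping between randomly placed lattice sites.
# (No spin-orbit coupling here, so no topology -- just the random-lattice setting.)
import numpy as np

rng = np.random.default_rng(0)

n_sites = 200
box_size = 10.0
decay_length = 1.0      # hopping falls off over this distance
cutoff = 2.5            # ignore hops longer than this

positions = rng.uniform(0.0, box_size, size=(n_sites, 2))  # random 2D sites

# Build the tight-binding Hamiltonian: H[i, j] = -exp(-r_ij / decay_length)
# for nearby sites, zero otherwise.
H = np.zeros((n_sites, n_sites))
for i in range(n_sites):
    for j in range(i + 1, n_sites):
        r = np.linalg.norm(positions[i] - positions[j])
        if r < cutoff:
            H[i, j] = H[j, i] = -np.exp(-r / decay_length)

energies = np.linalg.eigvalsh(H)
print(f"{n_sites} random sites, energy band spans "
      f"[{energies.min():.2f}, {energies.max():.2f}]")
```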

The duo’s paper was published in the journal Physical Review Letters on June 8. The arXiv preprint is available to read here. The latter concludes,

The possibility of topological phases in a completely random system opens up several avenues both from experimental and theoretical perspectives. Our results suggest some new routes to the laboratory realization of topological phases. First, two dimensional systems can be made by choosing an insulating surface on which suitable [atoms or molecules] with appropriate orbitals are deposited at random (note that this process will require far less control than conventional layered materials). The electronic states of these motifs will then [interact in a certain way] to produce the required topological phase. Second is the possibility of creating three dimensional systems starting from a suitable large band gap trivial insulator. The idea then is to place “impurity atoms”, again with suitable orbitals and “friendly” chemistry with the host… The [interaction] of the impurity orbitals would again produce a topological insulating state in the impurity bands under favourable conditions.

Agarwala/Shenoy also suggest that “In realistic systems the temperature scales over which one will see the topological physics … may be low”, although this is not unusual. However, they don’t suggest which amorphous materials could be suitable topological insulators.

Thanks to penflip.com and its nonexistent autosave function, I had to write the first half of this article twice. Not the sort of thing I can forgive easily, less so since I’m loving everything else about it.

Physicists could have to wait 66,000 yottayears to see an electron decay

The longest coherently described span of time I’ve encountered is from Hindu cosmology. It concerns the age of Brahma, one of Hinduism’s principal deities, who is described as being 51 years old (with 49 more to go). But these are no simple years. Each day in Brahma’s life lasts for a period called the kalpa: 4.32 billion Earth-years. In 51 years, he will actually have lived for almost 80 trillion Earth-years. In 100 years, he will have lived for 157 trillion Earth-years.

157,000,000,000,000. That’s stupidly huge. Forget astronomy – I doubt even economic crises have use for such numbers.

On December 3, scientists announced that something we’ve all known about will live for even longer: the electron.

Yup, the same tiny lepton that zips around inside atoms with gay abandon, that’s swimming through the power lines in your home, has been found to be stable for at least 66,000 yottayears – yotta- being the largest available prefix in the decimal system.

In stupidly huge terms, that’s 66,000,000,000,000,000,000,000,000,000 (66,000 trillion trillion) years. Brahma just slipped to second place among the mortals.

But why were scientists making this measurement in the first place?

Because they’re desperately trying to disprove a prevailing theory in physics. Called the Standard Model, it describes how fundamental particles interact with each other. Though it was meticulously studied and built over a period of more than 30 years to explain a variety of phenomena, the Standard Model hasn’t been able to answer some of the more important questions. For example, why is gravity, among the four fundamental forces, so much weaker than the rest? Or why is there more matter than antimatter in the universe? Or why does the Higgs boson not weigh more than it does? Or what is dark matter?

Silence.

The electron belongs to a class of particles called leptons, which in turn is well described by the Standard Model. So if physicists are able to find that the electron is less stable than the model predicts, it’d be a breakthrough. But despite multiple attempts to spot such a freak event, physicists haven’t succeeded – not even with the LHC (though hopeful rumours are doing the rounds that that could change soon).

The measurement of 66,000 yottayears was published in the journal Physical Review Letters on December 3 (a preprint copy is available on the arXiv server dated November 11). It was made at the Borexino neutrino experiment buried under the Gran Sasso mountain in Italy. The value itself hinges on a simple idea: the conservation of charge.

If an electron becomes unstable and has to break down, it’ll break down into a photon and a neutrino. There are almost no other options because the electron is the lightest charged particle and whatever it breaks down into has to be even lighter. However, neither the photon nor the neutrino has an electric charge so the breaking-down would violate a fundamental law of nature – and definitely overturn the Standard Model.

The Borexino experiment is actually a solar neutrino detector, using 300 tonnes of a petroleum-based liquid to detect and study neutrinos streaming in from the Sun. When a neutrino strikes the liquid, it knocks out an electron in a tiny flash of energy. Some 2,210 photomultiplier tubes surrounding the tank amplify this flash for examination. The energy released is about 256 keV (by the mass-energy equivalence, corresponding to about a 4,000th the mass of a proton).
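
The 256 keV figure follows from the electron’s rest energy of 511 keV: if the electron decayed into a photon and a (nearly massless) neutrino, the two would fly apart with equal and opposite momenta, so the photon would carry roughly half. A quick check:

```python
# Energy of the photon from a hypothetical electron decay e -> photon + neutrino.
# In the electron's rest frame the photon and the (nearly massless) neutrino fly
# apart with equal and opposite momenta, so each carries about half the rest energy.
ELECTRON_REST_ENERGY_keV = 511.0
PROTON_REST_ENERGY_keV = 938272.0

photon_energy = ELECTRON_REST_ENERGY_keV / 2
print(f"photon energy ~ {photon_energy:.0f} keV")                      # ~256 keV
print(f"that's about 1/{PROTON_REST_ENERGY_keV / photon_energy:.0f} "
      f"of the proton's rest energy")                                  # ~1/3700, 'about a 4,000th'
```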

However, the innards of the mountain where the detector is located also produce photons thanks to the radioactive decay of bismuth and polonium in it. So the team making the measurement used a simulator to calculate how often photons of 256 keV are logged by the detector against the ‘background’ of all the photons striking the detector. Kinda like a filter. They used data logged over 408 days (January 2012 to May 2013).

The answer: once every 66,000 yotta-years (that’s about 420 trillion of Brahma’s 100-year lifetimes).

Physics World reports that if photons from the ‘background’ radiation could be eliminated further, the electron’s lifetime could probably be increased by a thousand times. But there’s historical precedent that to some extent encourages stronger probes of the humble electron’s properties.

In 2006, another experiment situated under the Gran Sasso mountain tried to measure the rate at which electrons violated a defining rule in particle physics called Pauli’s exclusion principle. All electrons can be described by four distinct attributes called their quantum numbers, and the principle holds that no two electrons can have the same four numbers at any given time.

The experiment was called DEAR (DAΦNE Exotic Atom Research). It energised electrons and then measured how much energy was released when the particles returned to a lower-energy state. After three years of data-taking, its team announced in 2009 that the principle was being violated once every 570 trillion trillion measurements (another stupidly large number).

That’s a violation 0.0000000000000000000000001% of the time – but it’s still something. And it could amount to more when compared to the Borexino measurement of an electron’s stability. In March 2013, the team that worked on DEAR submitted a proposal to build an instrument that would improve the measurement 100-fold, and in May 2015, reported that such an instrument was under construction.

Here’s hoping they don’t find what they were looking for?