Using superconductors to measure electric current

Simply place two superconductors very close to each other, separated by a small gap, and you’ll have taken a big step towards an important piece of technology called a Josephson junction.

When the two superconductors are close to each other and exposed to electromagnetic radiation at microwave frequencies (0.3-30 GHz), a small voltage develops across the gap. As the waves of radiation rise and fall, so too does the voltage. And it so happens that the voltage can be calculated exactly from the frequency of the microwave radiation.

A Josephson junction is also created when two superconductors are brought very close and a current is passed through one of them. Now, their surfaces form a capacitor: a device that builds up and holds electric charge. When the charge on the surface of the current-bearing superconductor builds past a threshold, the voltage between the surfaces rises enough for a current to jump from this surface to the other, across the gap. Then the voltage drops and the surface starts building charge again. This process keeps going, the voltage rising and falling in turn.

This undulating rise and fall is called a Bloch oscillation. It’s only apparent when the Josephson junction is really small, on the order of micrometres. Since the Bloch oscillation is like a wave, it has a frequency and an amplitude. It so happens that the frequency is equal to the value of the current flowing in the superconductor divided by 2e, where e is the smallest unit of electric charge (1.602 × 10⁻¹⁹ coulomb).
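The relationship above can be sketched in a few lines of Python (the 1-nanoampere current below is a made-up illustrative value, not a figure from any experiment):

```python
# Frequency of a Bloch oscillation: f = I / (2e),
# where I is the current and e is the elementary charge.
E = 1.602e-19  # elementary charge, in coulombs

def bloch_frequency(current_amperes):
    """Return the Bloch oscillation frequency in hertz for a given current."""
    return current_amperes / (2 * E)

# A hypothetical current of 1 nanoampere:
f = bloch_frequency(1e-9)
print(f"{f / 1e9:.2f} GHz")  # prints "3.12 GHz"
```

Note how even a tiny current corresponds to a gigahertz-scale frequency – one reason frequency-based measurement is attractive.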

The amazing thing about a Josephson junction is that the current jumping between the two surfaces is entirely due to quantum effects, and it’s visible to the naked eye – which is to say the junction shows quantum mechanics at work at the macroscopic scale. This is rare and extraordinary. Usually, observing quantum mechanics’ effects requires sophisticated microscopes and measuring devices.

Josephson junctions are powerful detectors of magnetic fields because of the ways in which they’re sensitive to external forces. For example, devices called SQUIDs (short for ‘superconducting quantum interference devices’) use Josephson junctions to detect magnetic fields that are a trillion times weaker than the field produced by a refrigerator magnet.

They do this by passing an electric current through a superconductor that forks into two, with a Josephson junction at the end of each path. If there’s a magnetic field nearby, even a really small one, it will distort the amount of current passing in each path to a different degree. The resulting current mismatch will be sufficient to trigger a voltage rise in one of the junctions and a current will jump. Such SQUIDs are used, among other things, to detect dark matter.

Shapiro steps

The voltage and current in a Josephson junction share a peculiar relationship. As the current in one of the superconductors is increased smoothly, the voltage doesn’t increase smoothly but in small jumps. On a graph (see below), the rise in the voltage looks like a staircase. The steps here are called Shapiro steps. Each step corresponds to a moment when the current in the superconductor equals a multiple of the Bloch oscillation frequency multiplied by 2e.

I’ve misplaced the source of this graph in my notes. If you know it, please share; if I find it, I will update the article asap.

In a new study, published in Physical Review Letters on January 12, physicists from Germany reported finding a way to determine the amount of electric current passing in the superconductor by studying the Bloch oscillation. This is an important feat because it could close the gap in the metrology triangle.

The metrology triangle

Josephson junctions are also useful because they provide a precise relationship between frequency and voltage. If a junction is made to develop Bloch oscillations of a specific frequency, it will develop a specific voltage. The US National Institute of Standards and Technology (NIST) uses a circuit of Josephson junctions to define the standard volt, a.k.a. the Josephson voltage standard.

We say 1 V is the potential difference between two points if 1 ampere (A) of current dissipates 1 W of power when moving between those points. How do we make sure what we say is also how things work in reality? Enter the Josephson voltage standard.

In fact, decades of advancements in science and technology have led to a peculiar outcome: the tools scientists have today to measure the frequency of waves are just phenomenal – so much so that scientists have been able to measure other properties of matter more accurately by linking them to some frequency and measuring that frequency instead.

This is true of the Josephson voltage standard. The NIST’s setup consists of 20,208 Josephson junctions. Each junction has two small superconductors separated by a few nanometres and is irradiated by microwave radiation. The resulting voltage is equal to the microwave frequency multiplied by a proportionality constant. (For example, when the frequency is around 70 GHz, the gap between each pair of Shapiro steps is around 150 microvolts.) This way, the setup can track the voltage with a precision of up to 1 nanovolt.

The proportionality constant is the Planck constant divided by two times the basic electric charge e. These two numbers are fundamental constants of our universe. Their values are the same for both macroscopic objects and subatomic particles.
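The arithmetic behind the 70-GHz figure above checks out in a few lines of Python:

```python
# Josephson relation: V = n * h * f / (2e), so the spacing between
# adjacent Shapiro steps at a drive frequency f is h * f / (2e).
H = 6.626e-34   # Planck constant, in joule-seconds
E = 1.602e-19   # elementary charge, in coulombs

def shapiro_step_spacing(freq_hz):
    """Voltage gap between adjacent Shapiro steps, in volts."""
    return H * freq_hz / (2 * E)

dv = shapiro_step_spacing(70e9)  # 70 GHz, as in the text
print(f"{dv * 1e6:.0f} microvolts")  # prints "145 microvolts", i.e. ~150 µV
```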

Voltage, resistance, and current together make up Ohm’s law – the statement that voltage is equal to current multiplied by resistance (V = IR). Scientists would like to link all three to fundamental constants because they know Ohm’s law works in the classical regime, in the macroscopic world of wires that we can see and hold. They don’t know for sure if the law holds in the quantum regime of individual atoms and subatomic particles as well, but they’d like to.

Measuring things in the quantum world is much more difficult than in the classical world, and it will help greatly if scientists can track voltage, resistance, and current by simply calculating them from some fundamental constants or by tracking some frequencies.

Josephson junctions make this possible for voltage.

For resistance, there’s the quantum Hall effect. Say there’s a two-dimensional sheet of electrons held at an ultracold temperature. When a magnetic field is applied perpendicular to this sheet, an electrical resistance develops across the breadth of the sheet. The amount of resistance depends on a combination of fundamental constants. The formation of this quantised resistance is the quantum Hall effect.
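That combination of constants is h/(ne²), where n is an integer; for n = 1 it is the von Klitzing constant, about 25,812.8 ohms. A quick numerical sketch:

```python
# Quantum Hall resistance: R = h / (n * e^2), with n an integer.
H = 6.626e-34   # Planck constant, in joule-seconds
E = 1.602e-19   # elementary charge, in coulombs

def hall_resistance(n):
    """Quantised Hall resistance in ohms for integer filling factor n."""
    return H / (n * E**2)

print(hall_resistance(1))  # about 25,818 ohms with these rounded constants
```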

The new study makes the case that the Josephson junction setup it describes could pave the way for scientists to measure electric currents better using the frequency of Bloch oscillations.

Scientists have often referred to this pending task as a gap in the ‘metrology triangle’. Metrology is the science of the way we measure things. And Ohm’s law links voltage, resistance, and current in a triangular relationship.

A JJ + SQUID setup

In their experiment, the physicists coupled a Bloch oscillation in a Josephson junction to a SQUID in such a way that the SQUID would also have Bloch oscillations of the same frequency.

The coupling happens via a capacitor, as shown in the circuit schematic below. This setup is just a few micrometres wide. When a current enters the Josephson junction and crosses the threshold, electrons jump across and produce a current in one direction. In the SQUID, this causes electrons to jump and induce a current in the opposite direction (a.k.a. a mirror current).

I1 and I2 are biasing currents, which are direct currents supplied to make the circuit work as intended. The parallel lines that form the ‘bridge’ on the left denote a capacitor. The large ‘X’ marks denote the Josephson junction and the SQUID. The blue blocks are resistors. The ellipses containing two dots each denote pairs of electrons that ‘jump’. Source: Phys. Rev. Lett. 132, 027001

This setup requires the use of resistors connected to the circuit, shown as blue blocks in the schematic. The resistance they produce suppresses certain quantum effects that get in the way of the circuit’s normal operation. However, resistors also produce heat, which could interfere with the Josephson junction’s normal operation as well.

The team had to balance these two requirements with a careful choice of the resistor material, rendering the circuit operational in a narrow window of conditions. For good measure, the team also cooled the entire circuit to 0.1 K to further suppress noise.

In their paper, the team reported that it could observe Bloch oscillations and the first Shapiro step in its setup, indicating that the junction operated as intended. The team also found it could accurately simulate its experimental results using computer models – meaning the theories and assumptions the team was using to explain what could be going on inside the circuit were on the right track.

Recall that the frequency of a Bloch oscillation can be computed by dividing the amount of current flowing in the superconductor by 2e. So by tracking these oscillations with the SQUID, the team wrote in its paper, it should soon be able to accurately calculate the current – once it has found ways to further reduce noise in the setup.

For now, they have a working proof of concept.

A tale of vortices, skyrmions, paths and shapes

There are many types of superconductors. Some of them can be explained by an early theory of superconductivity called Bardeen-Cooper-Schrieffer (BCS) theory.

In these materials, vibrations in the atomic lattice force the electrons in the material to overcome their mutual repulsion and team up in pairs, if the material’s temperature is below a particular (very low) threshold. These pairs of electrons, called Cooper pairs, have some properties that individual electrons can’t have. One of them is that all Cooper pairs together form an exotic state of matter called a Bose-Einstein condensate, which can flow through the material with much less resistance than individual electrons experience. This is the gist of BCS theory.

When the Cooper pairs are involved in the transmission of an electric current through the material, the material is an electrical superconductor.

Some of the properties of the two electrons in each Cooper pair can influence the overall superconductivity itself. One of them is the orbital angular momentum. If both electrons have orbital angular momenta of equal magnitude pointing in opposite directions, the pair’s relative orbital angular momentum is 0. Such materials are called s-wave superconductors.

Sometimes, in s-wave superconductors, some of the electric current – or supercurrent – starts flowing in a vortex within the material. If these vortices can be coupled with a magnetic structure called a skyrmion, physicists believe they can give rise to some new behaviour previously not seen in materials, some of them with important applications in quantum computing. Coupling here implies that a change in the properties of the vortex should induce changes in the skyrmion, and vice versa.

However, physicists have had a tough time creating a vortex-skyrmion coupling that they can control. As Gustav Bihlmayer, a staff scientist at the Jülich Research Centre, Germany, wrote for APS Physics, “experimental studies of these systems are still rare. Both parts” of the structures bearing these features “must stay within specific ranges of temperature and magnetic-field strength to realise the desired … phase, and the length scales of skyrmions and vortices must be similar in order to study their coupling.”

In a new paper, a research team from Nanyang Technological University, Singapore, has reported achieving just such a coupling: the researchers created a skyrmion in a chiral magnet and used it to induce the formation of a supercurrent vortex in an s-wave superconductor. In their observations, they found this coupling to be stable and controllable – important attributes to have if the setup is to find practical application.

A chiral magnet is a material whose internal magnetic field “typically” has a spiral or swirling pattern. A supercurrent vortex in an electrical superconductor is analogous to a skyrmion in a chiral magnet; a skyrmion is a “knot of twisting magnetic field lines” (source).

The researchers sandwiched an s-wave superconductor and a chiral magnet together. When the magnetic field of a skyrmion in the chiral magnet interacted with the superconductor at the interface, it induced a spin-polarised supercurrent (i.e. the participating electrons’ spins are aligned along a certain direction). This phenomenon is called the Rashba-Edelstein effect, and it essentially converts electric charge to electron spin and vice versa. To do so, the effect requires the two materials to be in contact and depends among other things on properties of the skyrmion’s magnetic field.

There’s another mechanism of interaction in which the chiral magnet and the superconductor don’t have to be in touch, and which the researchers successfully attempted to recreate. They preferred this mechanism, called stray-field coupling, to demonstrate a skyrmion-vortex system for a variety of practical reasons. For example, the chiral magnet is placed in an external magnetic field during the experiment. Taking the Rashba-Edelstein route means that, to achieve “stable skyrmions at low temperatures in thin films”, the field needs to be stronger than 1 T. (Earth’s magnetic field measures 25-65 µT.) Such a field could damage the s-wave superconductor.

For the stray-field coupling mechanism, the researchers inserted an insulator between the chiral magnet and the superconductor. Then, when they applied a small magnetic field, Bihlmayer wrote, the field “nucleated” skyrmions in the structure. “Stray magnetic fields from the skyrmions [then] induced vortices in the [superconducting] film, which were observed with scanning tunnelling spectroscopy.”


Experiments like this one reside at the cutting edge of modern condensed-matter physics. A lot of their complexity lies in scientists being able to closely control the conditions in which different quantum effects play out, in using similarly advanced tools and techniques to understand what could be going on inside the materials, and in picking the right combination of materials to use.

For example, the heterostructure the physicists used to manifest the stray-field coupling mechanism had the following composition, from top to bottom:

  • Platinum, 2 nm (layer thickness)
  • Niobium, 25 nm
  • Magnesium oxide, 5 nm
  • Platinum, 2 nm

The next four layers are repeated 10 times in this order:

  • Platinum, 1 nm
  • Cobalt, 0.5 nm
  • Iron, 0.5 nm
  • Iridium, 1 nm

Back to the overall stack:

  • Platinum, 10 nm
  • Tantalum, 2 nm
  • Silicon dioxide (substrate)

The first three make up the superconductor, the magnesium oxide is the insulator, and the rest (except the substrate) make up the chiral magnet.

It’s possible to erect a stack like this through trial and error, with no deeper understanding dictating the choice of materials. But when the universe of possibilities – of elements, compounds and alloys, their shapes and dimensions, and ambient conditions in which they interact – is so vast, the exercise could take many decades. Yet here we are, at a time when scientists have explored various properties of materials and their interactions, and are able to engineer novel behaviours into existence, blurring the line between discovery and invention. Even in the absence of applications, such observations are nothing short of fascinating.

Applications aren’t wanting, however.


A quasiparticle is a packet of energy that behaves like a particle in a specific context even though it isn’t actually one. For example, the proton is a quasiparticle because it’s really a clump of smaller particles (quarks and gluons) that together behave in a fixed, predictable way. A phonon is a quasiparticle that represents some vibrational (or sound) energy being transmitted through a material. A magnon is a quasiparticle that represents some magnetic energy being transmitted through a material.

On the other hand, an electron is said to be a particle, not a quasiparticle – as are neutrinos, photons, Higgs bosons, etc.

Now and then physicists abstract packets of energy as particles in order to simplify their calculations.

(Aside: I’m aware of the blurred line between particles and quasiparticles. For a technical but – if you’re prepared to Google a few things – fascinating interview with condensed-matter physicist Vijay Shenoy on this topic, see here.)

We understand how these quasiparticles behave in three-dimensional space – the space we ourselves occupy. Their properties are likely to change if we study them in lower or higher dimensions. (Even if directly studying them in such conditions is hard, we know their behaviour will change because the theory describing their behaviour predicts it.) But there is one kind of quasiparticle that exists only in two dimensions and is, in a strange way, quite different from the others: the anyon.

Say you have two electrons in an atom orbiting the nucleus. If you exchanged their positions with each other, the measurable properties of the atom will stay the same. If you swapped the electrons once more to bring them back to their original positions, the properties will still remain unchanged. However, if you switched the positions of two anyons in a quantum system, something about the system will change. More broadly, if you started with a bunch of anyons in a system and successively exchanged their positions until they had a specific final arrangement, the system’s properties will have changed differently depending on the sequence of exchanges.

This is called path dependence, and anyons that possess this property are called non-Abelian quasiparticles. They’re interesting for many reasons, but one application stands out. Quantum computers are devices that use the quantum mechanical properties of particles, or quasiparticles, to execute logical decisions (the same way ‘classical’ computers use semiconductors). Anyons’ path dependence is useful here. Arranging anyons in one sequence to achieve a final arrangement can be mapped to one piece of information (e.g. 1), and arranging anyons by a different sequence to achieve the same final arrangement can be mapped to different information (e.g. 0). This way, what information can be encoded depends on the availability of different paths to a common final state.

In addition, an important issue with existing quantum computers is that they are too fragile: even a slight interaction with the environment can cause the devices to malfunction. Using anyons for the qubits could overcome this problem because the information stored doesn’t depend on the qubits’ existing states but on the paths they took to get there. So as long as the paths have been executed properly, environmental interactions that may disturb the anyons’ final states won’t matter.

However, creating such anyons isn’t easy.

Now, recall that s-wave superconductors are characterised by the relative orbital angular momentum of electrons in the Cooper pairs being 0 (i.e. equal but in opposite directions). In some other materials, it’s possible that the relative value is 1. These are the p-wave superconductors. And at the centre of a supercurrent vortex in a p-wave superconductor, physicists expect to find non-Abelian anyons.

So the ability to create and manipulate these vortices in superconductors, as well as, more broadly, explore and understand how magnet-superconductor heterostructures work, is bound to be handy.


The Nanyang team’s paper calls the vortices and skyrmions “topological excitations”. An ‘excitation’ here is an accumulation of energy in a system over and above what the system has in its ground state. Ergo, it’s excited. A topological excitation refers to energy manifested in changes to the system’s topology.

On this subject, one of my favourite bits of science is topological phase transitions.

I usually don’t quote from Wikipedia but communicating condensed-matter physics is exacting. According to Wikipedia, “topology is concerned with the properties of a geometric object that are preserved under continuous deformations, such as stretching, twisting, crumpling and bending”. For example, no matter how much you squeeze or stretch a donut (without breaking it), it’s going to be a ring with one hole. Going one step further, your coffee mug and a donut are topologically similar: they’re both objects with one hole.

I also don’t like the Nobel Prizes but some of the research that they spotlight is nonetheless awe-inspiring. In 2016, the prize was awarded to Duncan Haldane, John Kosterlitz and David Thouless for “theoretical discoveries of topological phase transitions and topological phases of matter”.

David Thouless in 1995. Credit: Mary Levin/University of Washington

Quoting myself from 2016:

There are four popularly known phases of matter: plasma, gas, liquid and solid. If you cooled plasma, its phase would transition to that of a gas; if you cooled gases, you’d get a liquid; if you cooled liquids, you’d get a solid. If you kept cooling a solid until you were almost at absolute zero, you’d find substances behaving strangely because, suddenly, quantum mechanical effects show up. These phases of matter are broadly called quantum phases. And their phase transitions are different from when plasma becomes a gas, a gas becomes a liquid, and so on.

A Kosterlitz-Thouless transition describes a type of quantum phase transition. A substance in the quantum phase, like all substances, tries to possess as little energy as possible. When it gains some extra energy, it sheds it. And how it sheds it depends on what the laws of physics allow. Kosterlitz and Thouless found that, at times, the surface of a flat quantum phase – like the surface of liquid helium – develops vortices, akin to a flattened tornado. These vortices always formed in pairs, so the surface always had an even number of vortices. And at very low temperatures, the vortices were always tightly coupled: they remained close to each other even when they moved across the surface.

The bigger discovery came next. When Kosterlitz and Thouless raised the temperature of the surface, the vortices moved apart and moved around freely, as if they no longer belonged to each other. In terms of thermodynamics alone, the vortices being alone or together wouldn’t depend on the temperature, so something else was at play. The duo had found a kind of phase transition – because it did involve a change in temperature – that didn’t change the substance itself, only a topological feature of how it behaved. In other words, the substance was able to shed energy by coupling the vortices.

Reality is so wonderfully weird. It’s also curious that some concepts that seemed significant when I was learning science in school (like invention versus discovery) and in college (like particle versus quasiparticle) – concepts that seemed meaningful and necessary to understand what was really going on – don’t really matter in the larger scheme of things.

The awesome limits of superconductors

On June 24, a press release from CERN said that scientists and engineers working on upgrading the Large Hadron Collider (LHC) had “built and operated … the most powerful electrical transmission line … to date”. The transmission line consisted of four cables – two capable of transporting 20 kA of current and two, 7 kA.

The ‘A’ here stands for ‘ampere’, the SI unit of electric current. Twenty kilo-amperes is an extraordinary amount of current, nearly equal to the amount in a single lightning strike.

In the particulate sense: one ampere is the flow of one coulomb per second. One coulomb is equal to around 6.24 quintillion elementary charges, where each elementary charge is the charge of a single proton or electron (with opposite signs). So a cable capable of carrying a current of 20 kA can essentially transport 124.8 sextillion electrons per second.
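That multiplication is easy to verify:

```python
# Electrons per second carried by a 20 kA current.
E = 1.602e-19          # elementary charge, in coulombs
current = 20_000       # 20 kA, in amperes (coulombs per second)

electrons_per_second = current / E
print(f"{electrons_per_second:.4e}")  # prints "1.2484e+23" – 124.8 sextillion
```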

According to the CERN press release (emphasis added):

The line is composed of cables made of magnesium diboride (MgB2), which is a superconductor and therefore presents no resistance to the flow of the current and can transmit much higher intensities than traditional non-superconducting cables. On this occasion, the line transmitted an intensity 25 times greater than could have been achieved with copper cables of a similar diameter. Magnesium diboride has the added benefit that it can be used at 25 kelvins (-248 °C), a higher temperature than is needed for conventional superconductors. This superconductor is more stable and requires less cryogenic power. The superconducting cables that make up the innovative line are inserted into a flexible cryostat, in which helium gas circulates.

The part in bold could have been more explicit and noted that superconductors, including magnesium diboride, can’t carry arbitrarily more current than non-superconducting conductors. There is a limit, just as there is a limit to the current-carrying capacity of a normal conductor.

This explanation wouldn’t change the impressiveness of this feat and could even interfere with readers’ impression of the most important details, so I can see why the person who drafted the statement left it out. Instead, I’ll take this matter up here.

An electric current is generated between two points when electrons move from one point to the other. The direction of current is opposite to the direction of the electrons’ movement. A metal that conducts electricity does so because its constituent atoms have one or more valence electrons that can flow throughout the metal. So if a voltage arises between two ends of the metal, the electrons can respond by flowing around, birthing an electric current.

This flow isn’t perfect, however. Sometimes, a valence electron can bump into atomic nuclei, impurities – atoms of other elements in the metallic lattice – or be thrown off course by vibrations in the lattice of atoms, produced by heat. Such disruptions across the metal collectively give rise to the metal’s resistance. And the more resistance there is, the less current the metal can carry.

These disruptions often heat the metal as well. This happens because electrons don’t just flow between the two points across which a voltage is applied. They’re accelerated. So as they’re speeding along and suddenly bump into an impurity, they’re scattered into random directions. Their kinetic energy then no longer contributes to the electric energy of the metal and instead manifests as thermal energy – or heat.

If the electrons bump into nuclei, they could impart some of their kinetic energy to the nuclei, causing the latter to vibrate more, which in turn means they heat up as well.

Copper and silver have high conductance because they have more valence electrons available to conduct electricity and these electrons are scattered to a lesser extent than in other metals. Therefore, these two also don’t heat up as quickly as other metals might, allowing them to transport a higher current for longer. Copper in particular has a longer mean free path: the average distance an electron travels before being scattered.

In superconductors, the picture is quite different because quantum physics assumes a more prominent role. There are different types of superconductors according to the theories used to understand how they conduct electricity with zero resistance and how they behave in different external conditions. The electrical behaviour of magnesium diboride, the material used to transport the 20 kA current, is described by Bardeen-Cooper-Schrieffer (BCS) theory.

According to this theory, when such materials are cooled below a threshold temperature, the residual vibrations of their atomic lattice encourage their valence electrons to overcome their mutual repulsion and become correlated, especially in terms of their movement. That is, the electrons pair up.

While individual electrons belong to a class of particles called fermions, these electron pairs – a.k.a. Cooper pairs – belong to another class called bosons. One difference between these two classes is that bosons don’t obey Pauli’s exclusion principle: that no two fermions in the same quantum system (like an atom) can have the same set of quantum numbers at the same time.

As a result, all the electron pairs in the material are now free to occupy the same quantum state – which they will when the material is supercooled. When they do, the pairs collectively make up an exotic state of matter called a Bose-Einstein condensate: the electron pairs now flow through the material as if they were one cohesive liquid.

In this state, even if one pair gets scattered by an impurity, the current doesn’t experience resistance because the condensate’s overall flow isn’t affected. In fact, given that breaking up one pair will cause all other pairs to break up as well, the energy required to break up one pair is roughly equal to the energy required to break up all pairs. This feature affords the condensate a measure of robustness.

But while current can keep flowing through a BCS superconductor with zero resistance, the superconducting state itself doesn’t have infinite persistence. It can break if it stops being cooled below a specific temperature, called the critical temperature; if the material is too impure, contributing to a sufficient number of collisions to ‘kick’ all electron pairs out of their condensate reverie; or if the current density crosses a particular threshold.

At the LHC, the magnesium diboride cables will be wrapped around electromagnets. When a large current flows through the cables, the electromagnets will produce a magnetic field. The LHC uses a circular arrangement of such magnetic fields to bend the beam of protons it accelerates into a circular path. The more powerful the magnetic field, the tighter the bending. The current operational field strength is 8.36 tesla, about 128,000 times stronger than Earth’s magnetic field. The cables will be insulated but they will still be exposed to a large magnetic field.
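The “128,000 times” comparison can be checked against the upper end of Earth’s field (the 25-65 µT range quoted earlier in this article):

```python
lhc_field = 8.36       # tesla, the operational dipole field quoted above
earth_field = 65e-6    # tesla, the upper end of Earth's magnetic field

print(f"{lhc_field / earth_field:,.0f}")  # prints "128,615" – roughly 128,000
```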

Type I superconductors completely expel an external magnetic field when they transition to their superconducting state. That is, the magnetic field can’t penetrate the material’s surface and enter the bulk. Type II superconductors are slightly more complicated. Below one critical temperature and one critical magnetic field strength, they behave like type I superconductors. Below the same temperature but at slightly stronger magnetic fields, they remain superconducting while allowing the field to penetrate their bulk to a certain extent. This is called the mixed state.

A hand-drawn phase diagram showing the conditions in which a mixed-state type II superconductor exists. Credit: Frederic Bouquet/Wikimedia Commons, CC BY-SA 3.0

Say a uniform magnetic field is applied over a mixed-state superconductor. The field will plunge into the material’s bulk in the form of vortices. All these vortices will carry the same amount of magnetic flux – a measure of the magnetic field passing through a given area – and will repel each other, settling down in a triangular pattern, equidistant from each other.

An annotated image of vortices in a type II superconductor. The scale is specified at the bottom right. Source: A set of slides entitled ‘Superconductors and Vortices at Radio Frequency Magnetic Fields’ by Ernst Helmut Brandt, Max Planck Institute for Metals Research, October 2010.

When an electric current passes through this material, the vortices are slightly displaced, and also begin to experience a force proportional to how closely they’re packed together and their pattern of displacement. As a result, to quote from this technical (yet lucid) paper by Praveen Chaddah:

This force on each vortex … will cause the vortices to move. The vortex motion produces an electric field1 parallel to [the direction of the existing current], thus causing a resistance, and this is called the flux-flow resistance. The resistance is much smaller than the normal state resistance, but the material no longer [has] infinite conductivity.

1. According to Maxwell’s equations of electromagnetism, a changing magnetic field produces an electric field.

The vortices’ displacement depends on the current density: the greater the number of electrons being transported, the more flux-flow resistance there is. So the magnesium diboride cables can’t simply carry more and more current. At some point, setting aside other sources of resistance, the flux-flow resistance itself will damage the cable.

There are ways to minimise this resistance. For example, the material can be doped with impurities that will ‘pin’ the vortices to fixed locations and prevent them from moving around. However, optimising these solutions for a given magnetic field and other conditions involves complex calculations that we don’t need to get into.

The point is that superconductors have their limits too. And knowing these limits could improve our appreciation for the feats of physics and engineering that underlie achievements like cables being able to transport 124.8 sextillion electrons per second with zero resistance. In fact, according to the CERN press release,

The [line] that is currently being tested is the forerunner of the final version that will be installed in the accelerator. It is composed of 19 cables that supply the various magnet circuits and could transmit intensities of up to 120 kA!
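The sextillion figure above is just a unit conversion: an ampere is a coulomb per second, and each electron carries 1.602 × 10-19 coulomb. A quick sketch of the arithmetic, assuming a 20 kA current:

```python
e = 1.602176634e-19  # elementary charge in coulombs

def electrons_per_second(current_amperes):
    """One ampere is one coulomb per second, so dividing by the
    charge per electron gives electrons transported per second."""
    return current_amperes / e

# 20 kA works out to the post's "124.8 sextillion electrons per
# second" (1 sextillion = 1e21 on the short scale).
print(f"{electrons_per_second(20e3):.3e}")  # 1.248e+23
```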

§

While writing this post, I was frequently tempted to quote from Lisa Randall‘s excellent book-length introduction to the LHC, Knocking on Heaven’s Door (2011). Here’s a short excerpt:

One of the most impressive objects I saw when I visited CERN was a prototype of LHC’s gigantic cylindrical dipole magnets. Even with 1,232 such magnets, each of them is an impressive 15 metres long and weighs 30 tonnes. … Each of these magnets cost EUR 700,000, making the net cost of the LHC magnets alone more than a billion dollars.

The narrow pipes that hold the proton beams extend inside the dipoles, which are strung together end to end so that they wind through the extent of the LHC tunnel’s interior. They produce a magnetic field that can be as strong as 8.3 tesla, about a thousand times the field of the average refrigerator magnet. As the energy of the proton beams increases from 450 GeV to 7 TeV, the magnetic field increases from 0.54 to 8.3 teslas, in order to keep guiding the increasingly energetic protons around.

The field these magnets produce is so enormous that it would displace the magnets themselves if no restraints were in place. This force is alleviated through the geometry of the coils, but the magnets are ultimately kept in place through specially constructed collars made of four-centimetre thick steel.

… Each LHC dipole contains coils of niobium-titanium superconducting cables, each of which contains stranded filaments a mere six microns thick – much smaller than a human hair. The LHC contains 1,200 tonnes of these remarkable filaments. If you unwrapped them, they would be long enough to encircle the orbit of Mars.

When operating, the dipoles need to be extremely cold, since they work only when the temperature is sufficiently low. The superconducting wires are maintained at 1.9 degrees above absolute zero … This temperature is even lower than the 2.7-degree cosmic microwave background radiation in outer space. The LHC tunnel houses the coldest extended region in the universe – at least that we know of. The magnets are known as cryodipoles to take into account their special refrigerated nature.

In addition to the impressive filament technology used for the magnets, the refrigeration (cryogenic) system is also an imposing accomplishment meriting its own superlatives. The system is in fact the world’s largest. Flowing helium maintains the extremely low temperature. A casing of approximately 97 metric tonnes of liquid helium surrounds the magnets to cool the cables. It is not ordinary helium gas, but helium with the necessary pressure to keep it in a superfluid phase. Superfluid helium is not subject to the viscosity of ordinary materials, so it can dissipate any heat produced in the dipole system with great efficiency: 10,000 metric tonnes of liquid nitrogen are first cooled, and this in turn cools the 130 metric tonnes of helium that circulate in the dipoles.

Featured image: A view of the experimental MgB2 transmission line at the LHC. Credit: CERN.

When cooling down really means slowing down

Consider this post the latest in a loosely defined series about atomic cooling techniques that I’ve been writing since June 2018.

Atoms can’t run a temperature, but things made up of atoms, like a chair or table, can become hotter or colder. This is because what we observe as the temperature of macroscopic objects is, at the smallest level, the kinetic energy of the atoms they are made up of. If you were to cool such an object, you’d have to reduce the average kinetic energy of its atoms. By the same token, if you had to cool a small group of atoms trapped in a container, you’d simply have to make sure they – all told – slow down.
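The temperature-speed connection can be made concrete with the kinetic-theory relation v = √(3kT/m). A small illustrative calculation for sodium atoms – the 300 K and 220 nK temperatures are chosen here as examples, the latter anticipating a result discussed later in this post:

```python
from math import sqrt

k_B = 1.380649e-23  # Boltzmann constant, J/K

def v_rms(T, mass_amu):
    """Root-mean-square speed of an atom of the given mass (in
    atomic mass units) at temperature T kelvin: sqrt(3*k_B*T/m)."""
    m = mass_amu * 1.66053906660e-27  # convert amu to kg
    return sqrt(3 * k_B * T / m)

# Sodium (23 amu) at room temperature vs. at 220 nK:
print(f"{v_rms(300, 23):.0f} m/s")            # 570 m/s
print(f"{v_rms(220e-9, 23) * 100:.1f} cm/s")  # 1.5 cm/s
```

Cooling a sodium atom from room temperature to 220 nK slows it from roughly the speed of a rifle bullet to a slow crawl.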

Over the years, physicists have figured out more and more ingenious ways to cool atoms and molecules this way to ultra-cold temperatures. Such states are of immense practical importance because at very low energy, these particles (an umbrella term) start displaying quantum mechanical effects, which are too subtle to show up at higher temperatures. And different quantum mechanical effects are useful to create exotic things like superconductors, topological insulators and superfluids.

One of the oldest modern cooling techniques is laser-cooling. Here, a laser beam of a certain frequency is fired at an atom moving towards the beam. Electrons in the atom absorb photons in the beam, acquire energy and jump to a higher energy level. A short time later, the electrons lose the energy by emitting a photon and drop back to the lower energy level. But since the photons are absorbed from only one direction while they are emitted in arbitrarily different directions, the atom steadily loses momentum in one direction but gains momentum in a variety of directions (by the conservation of momentum). The latter largely cancel each other out, leaving the atom with considerably lower kinetic energy, and therefore cooler than before.
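The momentum kick per photon is tiny, which is why laser-cooling takes many thousands of absorption-emission cycles to stop an atom moving at hundreds of metres per second. A back-of-the-envelope sketch for sodium, assuming its standard ~589 nm cooling transition (a textbook value, not stated in this post):

```python
h = 6.62607015e-34  # Planck constant, J*s

def recoil_velocity(wavelength_m, mass_amu):
    """Velocity change per absorbed photon: the photon's momentum
    h/lambda divided by the atom's mass (momentum conservation)."""
    m = mass_amu * 1.66053906660e-27  # convert amu to kg
    return h / (wavelength_m * m)

# Sodium's cooling transition sits near 589 nm:
print(f"{recoil_velocity(589e-9, 23) * 100:.1f} cm/s per photon")  # 2.9 cm/s
```

At about 3 cm/s per photon, stopping a room-temperature sodium atom (moving at ~570 m/s) takes on the order of 20,000 absorption events.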

In collisional cooling, an atom is made to lose momentum by colliding not with a laser beam but with other atoms, which are maintained at a very low temperature. This technique works better if the ratio of elastic to inelastic collisions is much greater than 50. In elastic collisions, the total kinetic energy of the system is conserved; in inelastic collisions, the total energy is conserved but not the kinetic energy alone. In effect, collisional cooling works better if almost all collisions – if not all of them – conserve kinetic energy. Since the other atoms are maintained at a low temperature, they have little kinetic energy to begin with. So collisional cooling works by bouncing warmer atoms off of colder ones such that the colder ones take away some of the warmer atoms’ kinetic energy, thus cooling them.
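For a head-on elastic collision with a nearly stationary cold partner, classical mechanics gives the fraction of kinetic energy handed over as 4m₁m₂/(m₁ + m₂)². A minimal sketch using approximate masses for the particles discussed below – NaLi (~30 amu) and Na (~23 amu):

```python
def energy_transfer_fraction(m1, m2):
    """Fraction of kinetic energy a moving particle of mass m1
    hands to a stationary particle of mass m2 in a head-on
    elastic collision: 4*m1*m2 / (m1 + m2)**2."""
    return 4 * m1 * m2 / (m1 + m2) ** 2

# A warm NaLi diatom (~30 amu) striking a cold, nearly
# stationary Na atom (~23 amu):
print(f"{energy_transfer_fraction(30, 23):.1%}")  # 98.3%
```

Because the two masses are comparable, a single head-on elastic collision can strip away almost all of the warmer particle’s kinetic energy – one reason this pairing works so well.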

In a new study, a team of scientists from MIT, Harvard University and the University of Waterloo reported that they were able to cool a pool of NaLi diatoms (molecules with only two atoms) this way to a temperature of 220 nK. That’s 220-billionths of a kelvin, about 12-million-times colder than deep space. They achieved this feat by colliding the warmer NaLi diatoms with five-times as many colder Na (sodium) atoms through two cycles of cooling.

Their paper, published online on April 8 (preprint here), indicates that their feat is notable for three reasons.

First, it’s easier to cool particles (atoms, ions, etc.) in which as many electrons as possible are paired to each other. A particle in which all electrons are paired is called a singlet; ones that have one unpaired electron each are called doublets; those with two unpaired electrons – like NaLi diatoms – are called triplets. Doublets and triplets can also absorb and release more of their energy by modifying the spins of individual electrons, which messes with collisional cooling’s need to modify a particle’s kinetic energy alone. The researchers from MIT, Harvard and Waterloo overcame this barrier by applying a ‘bias’ magnetic field across their experiment’s apparatus, forcing all the particles’ spins to align along a common direction.

Second: Usually, when Na and NaLi come in contact, they react and the NaLi molecule breaks down. However, the researchers found that in the so-called spin-polarised state, the Na and NaLi didn’t react with each other, preserving the latter’s integrity.

Third, and perhaps most importantly, this is not the coldest temperature to which we have been able to cool quantum particles, but it still matters because collisional cooling offers unique advantages that make it attractive for certain applications. Perhaps the most well-known of them is quantum computing. Simply speaking, physicists prefer ultra-cold molecules to atoms for use in quantum computers because they can control the behaviour of molecules more precisely than they can that of atoms. But molecules that have doublet or triplet states or are otherwise reactive can’t be cooled to a few billionths of a kelvin with laser-cooling or other techniques. The new study shows they can, however, be cooled to 220 nK using collisional cooling. The researchers predict that in future, they may be able to cool NaLi molecules even further with better equipment.

Note that the researchers didn’t cool the NaLi molecules from room temperature to 220 nK but from 2 µK. Nonetheless, their achievement remains impressive because there are other well-established techniques to cool atoms and molecules from room temperature to a few microkelvin. The lower temperatures are harder to reach.

One of the researchers involved in the current study, Wolfgang Ketterle, is celebrated for his contributions to understanding and engineering ultra-cold systems. He led an effort in 2003 to cool sodium atoms to 0.5 nK – a record. He, Eric Cornell and Carl Wieman won the Nobel Prize for physics two years before that: Cornell, Wieman and their team created the first Bose-Einstein condensate in 1995, and Ketterle created ‘better’ condensates that allowed for closer inspection of their unique properties. A Bose-Einstein condensate is a state of matter in which multiple particles called bosons are ultra-cooled in a container, at which point they occupy the same quantum state – something they don’t do in nature (even as they comply with the laws of nature) – and give rise to strange quantum effects that can be observed without a microscope.

Ketterle’s attempts make for a fascinating tale; I collected some of them plus some anecdotes together for an article in The Wire in 2015, to mark the 90th year since Albert Einstein had predicted their existence, in 1924-1925. A chest-thumper might be cross that I left Satyendra Nath Bose out of this citation. It is deliberate. Bose-Einstein condensates are named for their underlying theory, called Bose-Einstein statistics. But while Bose had the idea for the theory to explain the properties of photons, Einstein generalised it to more particles, and independently predicted the existence of the condensates based on it.

This said, if it is credit we’re hungering for: the history of atomic cooling techniques includes the brilliant but little-known S. Pancharatnam. His work in wave physics laid the foundations of many of the first cooling techniques, and was credited as such by Claude Cohen-Tannoudji in the journal Current Science in 1994. Cohen-Tannoudji would win a piece of the Nobel Prize for physics in 1997 for inventing a technique called Sisyphus cooling – a way to cool atoms by converting more and more of their kinetic energy to potential energy, and then draining the potential energy.

Indeed, the history of atomic cooling techniques is, broadly speaking, a history of physicists uncovering newer, better ways to remove just a little bit more energy from an atom or molecule that’s already lost a lot of its energy. The ultimate prize is absolute zero, the lowest temperature possible, at which the atom retains only the energy it can in its ground state. However, absolute zero is neither practically attainable nor – more importantly – the goal in and of itself in most cases. Instead, the experiments in which physicists have achieved really low temperatures are often pegged to an application, and getting below a particular temperature is the goal.

For example, niobium nitride becomes a superconductor below 16 K (-257° C), so applications using this material are designed to maintain this temperature during operation. For another, as the MIT-Harvard-Waterloo group of researchers write in their paper, “Ultra-cold molecules in the micro- and nano-kelvin regimes are expected to bring powerful capabilities to quantum emulation and quantum computing, owing to their rich internal degrees of freedom compared to atoms, and to facilitate precision measurement and the study of quantum chemistry.”

A stinky superconductor

The next time you smell a whiff of rot in your morning’s eggs, you might not want to throw them away. Instead, you might do better to realise what you’re smelling could be a superconductor (under the right conditions) that’s, incidentally, riled up the scientific community.

The source of excitement is a paper published in Nature on August 17, penned by a group of scientists in Germany, describing an experiment in which the compound hydrogen sulphide conducts electricity with zero resistance under a pressure of 90 gigapascals (about 888,231-times the atmospheric pressure) – when it turns into a metal – and at a temperature of 203.5 kelvin, about -70° C. The discovery makes it an unexpected high-temperature superconductor, doubly so for becoming one under conditions physicists don’t find too esoteric.
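The unit conversions in the paragraph above are easy to verify. A minimal sketch:

```python
ATM = 101325.0  # standard atmosphere, in pascals

def gpa_to_atm(p_gpa):
    """Express a pressure in gigapascals as multiples of atmospheric pressure."""
    return p_gpa * 1e9 / ATM

def kelvin_to_celsius(T):
    """Convert a temperature from kelvin to degrees Celsius."""
    return T - 273.15

print(f"{gpa_to_atm(90):,.0f} atm")        # 888,231 atm
print(f"{kelvin_to_celsius(203.5):.0f} C")  # about -70 C
```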

The tag of ‘high-temperature’ may be unfit for something operating at -70° C, but in superconductivity, -70° C approaches summer in the Atacama. When the phenomenon was first discovered – by the Dutch physicist Heike Kamerlingh Onnes in 1911 – it required the liquid metal mercury to be cooled to 4.2 kelvin, about -269° C. What happened in those conditions was explained by an American trio with a theory of superconductivity in 1957.

The explanation lies in quantum mechanics, where all particles have a characteristic ‘spin’ number. And QM allows all those particles with integer spin (0, 1, 2, …) to – in some conditions – cohere into one bigger ‘particle’ with enough collective energy to avoid being disturbed by things like friction or atomic vibrations*. Electrons, however, have half-integer (1/2) spin, so can’t slip into this state. In 1957, John Bardeen, Leon Cooper and Robert Schrieffer proposed that at very low temperatures – like 4 K – the electrons in a metal interact with the positively charged latticework of atoms around them to pair up with each other. These electronic pairs are called Cooper pairs, kept twinned by vibrations of the lattice. Each pair’s total spin is an integer – zero, in conventional superconductors – allowing all of them to condense into one cohesive sea of electrons that then flows through the metal unhindered.

The BCS theory soon became a ‘conventional’ theory of superconductivity, able to explain the behaviour of many metals cooled to cryogenic temperatures. The German team’s hydrogen sulphide system is also one such conventional scenario – in which the gas had to be compressed to form a metal before its superconducting abilities were teased out.

The team, led by Mikhail Eremets and Alexander Drozdov from the Max Planck Institute for Chemistry in Mainz, first made its claims last year, that under heavy pressure hydrogen sulphide becomes sulphur hydride (H2S → H3S), which in turn is a superconductor. At the time their experiment showed only one of two typical properties of a superconducting system, however: that its electrical resistance vanished at 190 K, higher than the previous record of 164 K.

Their August 17 paper reports that the second property has since been observed, too: that pressurised hydrogen sulphide doesn’t allow any external magnetic field to penetrate beyond its surface. This effect, called the Meissner effect, is observed only in superconductors. For Eremets, Drozdov et al, this is the full monty: a superconductor functioning at temperatures that actually exist on Earth. But for the broader scientific community, the paper marks the frenzied beginning of a new wave of experiments in the field.

Given the profundity of the findings – of a hydrogen-based high-temperature superconductor – they won’t enter the canon just yet but will require independent verification from other teams. A report by Edwin Cartlidge in Nature already notes five other teams around the world working on replicating the discovery. If and when they succeed, the implications will be wide-ranging – for physics as well as historical traditions of physical chemistry.

The BCS theory of superconductivity provided a precise mechanism of action that allowed scientists to predict the critical temperature (Tc) – below which a material becomes superconducting – of all materials that abided by the theory. Nonetheless, by 1957, the highest Tc reached had been 10 K despite scientists’ best efforts; so great was their frustration that in 1972, Philip Warren Anderson and Marvin Cohen predicted that there could be a natural limit at 30 K.

However, just a few years earlier – in 1968 – two physicists, Neil Ashcroft and Vitaly Ginzburg, refusing to subscribe to a natural limit on the critical temperature, proposed that the Tc could be very high in substances in which the vibrations of the atomic latticework surrounding the electrons were especially energetic. Such vigour is typically found in the lighter elements like hydrogen and helium. Thus, the Ashcroft-Ginzburg idea effectively set the theoretical precedent for Eremets and Drozdov’s work.

But between the late 1960s and 2014, when hydrogen sulphide entered the fray of experiments, two discoveries threw the BCS theory off kilter. In 1986, scientists discovered cuprates, a class of copper-based compounds that eventually proved to be superconductors at up to 133 K (164 K under pressure) but didn’t function according to the BCS theory. Thus, they came to be called unconventional superconductors. The second discovery was of another class of unconventional superconductors, this time in compounds of iron and arsenic called pnictides, in 2008. The highest Tc among them was less than that of the cuprates. And because cuprates under pressure could muster a Tc of 164 K, scientists pinned their hopes on them of breaching the room-temperature barrier, and worked on developing an unconventional theory of superconductivity.

But for those choosing to persevere with the conventional order of things, there was a brief flicker of hope in 2001 with the discovery of magnesium diboride superconductors: they had a Tc of 39 K, an important but not very substantial improvement on previous records among conventional materials.

The work of Eremets & Drozdov was also indirectly assisted by a group of Chinese researchers in 2014, who were able to anticipate hydrogen sulphide’s superconducting abilities using the conventional BCS theory. According to them, hydrogen sulphide would become a metal under the application of 111 gigapascals of pressure, with a Tc between 191 K and 204 K. And once it survives independent experimental scrutiny intact, the Chinese theoretical work will prove valuable as scientists confront their next big challenge: pressure.

The ultimate fantasy would be a Tc in the range of ambient temperatures. Imagine leagues of superconducting cables radiating out from coal-choked power plants, a gigawatt of power transmitted for a gigawatt of power produced**, or maglev trains running on superconducting tracks at lower costs and currents, or the thousands of superconducting electromagnets around the LHC that won’t have to be supercooled using jackets of liquid helium. Sadly, that Eremets & Drozdov have (probably) achieved a Tc of 203.5 K doesn’t mean that the engineering is accessible or affordable. In fact, what allowed them to fetch 203.5 K is also the barrier to the technology being ubiquitously used, making their feat a herald of possibilities rather than a demonstration in itself.

It wasn’t possible until the 1970s to achieve pressures of a few gigapascals in the lab, and similar processes today are confined to industrial purposes. A portable device that’d sustain that pressure across large areas is difficult to build – yet that’s when metallic sulphur hydride shows itself. In their experiment, Eremets and Drozdov packed a cold mass of hydrogen sulphide against a stainless steel gasket using some insulating material like teflon, and then sandwiched the pellet between two diamond anvils that pressurised it. The entire apparatus was a little more than 100 micrometres across. Moreover, they also note in their paper that the ‘loading’ of the hydrogen sulphide between the anvils needs to be done at a low temperature – before pressurisation – so that the gas doesn’t decompose before the superconducting transition can set in.

These are impractical conditions if hydrogen sulphide cables have to be handled by a crew of non-specialists, in conditions nowhere near as controllable as the insides of a small steel gasket. As an alternative, should independent verification of the Eremets & Drozdov experiment happen, scientists will use it as a validation of the Chinese theorists’ calculations and extend that to fashion a material more suited to their purposes.

*The foundation for this section of QM was laid by Satyendra Nath Bose, and later expanded by Albert Einstein to become the Bose-Einstein statistics.

**But not a gigawatt of power consumed, thanks to power thefts to the tune of Rs.2.52 lakh crore.