The awesome limits of superconductors

On June 24, a press release from CERN said that scientists and engineers working on upgrading the Large Hadron Collider (LHC) had “built and operated … the most powerful electrical transmission line … to date”. The transmission line consisted of four cables – two capable of transporting 20 kA of current and two, 7 kA.

The ‘A’ here stands for ‘ampere’, the SI unit of electric current. Twenty kilo-amperes is an extraordinary amount of current, nearly equal to the amount in a single lightning strike.

In the particulate sense: one ampere is the flow of one coulomb per second. One coulomb is equal to around 6.24 quintillion elementary charges, where each elementary charge is the charge of a single proton or electron (with opposite signs). So a cable capable of carrying a current of 20 kA can essentially transport 124.8 sextillion electrons per second.
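
For readers who want to check the arithmetic, here's a minimal sketch in Python; the only input is the SI value of the elementary charge.

```python
# Back-of-the-envelope check of the figures above.
elementary_charge = 1.602176634e-19   # coulombs per electron (exact SI value)

current = 20e3                        # amperes: 20 kA = 20,000 coulombs per second

charges_per_coulomb = 1 / elementary_charge            # ~6.24e18, the "6.24 quintillion" above
electrons_per_second = current * charges_per_coulomb   # ~1.25e23

print(f"{charges_per_coulomb:.3e} elementary charges per coulomb")
print(f"{electrons_per_second:.3e} electrons per second at 20 kA")   # ~1.248e23, i.e. 124.8 sextillion
```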

According to the CERN press release (emphasis added):

The line is composed of cables made of magnesium diboride (MgB2), which is a superconductor and therefore presents no resistance to the flow of the current and can transmit much higher intensities than traditional non-superconducting cables. On this occasion, the line transmitted an intensity 25 times greater than could have been achieved with copper cables of a similar diameter. Magnesium diboride has the added benefit that it can be used at 25 kelvins (-248 °C), a higher temperature than is needed for conventional superconductors. This superconductor is more stable and requires less cryogenic power. The superconducting cables that make up the innovative line are inserted into a flexible cryostat, in which helium gas circulates.

The emphasised claim – that the line “can transmit much higher intensities than traditional non-superconducting cables” – could have been more explicit and noted that superconductors, including magnesium diboride, can’t carry arbitrarily more current than non-superconducting cables. There is in fact a limit, for much the same reason that there is a limit to the current-carrying capacity of a normal conductor.

This explanation wouldn’t change the impressiveness of this feat and could even interfere with readers’ impression of the most important details, so I can see why the person who drafted the statement left it out. Instead, I’ll take this matter up here.

An electric current is generated between two points when electrons move from one point to the other. The direction of current is opposite to the direction of the electrons’ movement. A metal that conducts electricity does so because its constituent atoms have one or more valence electrons that can flow throughout the metal. So if a voltage arises between two ends of the metal, the electrons can respond by flowing around, birthing an electric current.

This flow isn’t perfect, however. Sometimes, a valence electron can bump into an atomic nucleus or an impurity – an atom of another element in the metallic lattice – or be thrown off course by heat-induced vibrations of the lattice. Such disruptions across the metal collectively give rise to the metal’s resistance. And the more resistance there is, the less current the metal can carry.

These disruptions often heat the metal as well. This happens because electrons don’t just drift between the two points across which a voltage is applied – they’re accelerated. So when they’re speeding along and suddenly bump into an impurity, they’re scattered in random directions. Their kinetic energy then no longer contributes to the current and instead manifests as thermal energy – heat.

If the electrons bump into nuclei, they could impart some of their kinetic energy to the nuclei, causing the latter to vibrate more, which in turn means they heat up as well.

Copper and silver have high conductivity because they have more valence electrons available to conduct electricity and because these electrons are scattered less than in other metals. As a result, these two metals also don’t heat up as quickly as others might, allowing them to transport a higher current for longer. Copper in particular has a long mean free path: the average distance an electron travels before being scattered.

In superconductors, the picture is quite different because quantum physics assumes a more prominent role. There are different types of superconductors according to the theories used to understand how they conduct electricity with zero resistance and how they behave in different external conditions. The electrical behaviour of magnesium diboride, the material used to transport the 20 kA current, is described by Bardeen-Cooper-Schrieffer (BCS) theory.

According to this theory, when certain materials are cooled below a certain temperature, the residual vibrations of their atomic lattice encourage their valence electrons to overcome their mutual repulsion and become correlated, especially in terms of their movement. That is, the electrons pair up.

While individual electrons belong to a class of particles called fermions, these electron pairs – a.k.a. Cooper pairs – belong to another class called bosons. One difference between these two classes is that bosons don’t obey Pauli’s exclusion principle: that no two fermions in the same quantum system (like an atom) can have the same set of quantum numbers at the same time.

As a result, all the electron pairs in the material are now free to occupy the same quantum state – which they will when the material is supercooled. When they do, the pairs collectively make up an exotic state of matter called a Bose-Einstein condensate: the electron pairs now flow through the material as if they were one cohesive liquid.

In this state, even if one pair gets scattered by an impurity, the current doesn’t experience resistance because the condensate’s overall flow isn’t affected. In fact, given that breaking up one pair will cause all other pairs to break up as well, the energy required to break up one pair is roughly equal to the energy required to break up all pairs. This feature affords the condensate a measure of robustness.

But while current can keep flowing through a BCS superconductor with zero resistance, the superconducting state itself doesn’t have infinite persistence. It can break if the material stops being cooled below a specific temperature, called the critical temperature; if the material is too impure, contributing to a sufficient number of collisions to ‘kick’ all the electron pairs out of their condensate reverie; or if the current density crosses a particular threshold.

At the LHC, the magnesium diboride cables will supply current to electromagnets (the CERN statement notes that the final line will “supply the various magnet circuits”). When a large current flows through a magnet’s coils, the electromagnet produces a magnetic field. The LHC uses a circular arrangement of such magnetic fields to bend the beam of protons it will accelerate into a circular path. The more powerful the magnetic field, the more energetic the protons it can hold on that path. The current operational field strength is 8.36 tesla, about 128,000 times more powerful than Earth’s magnetic field. The cables will be insulated but they will still be exposed to a large magnetic field.
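
As a rough check of that comparison: Earth’s surface field varies between roughly 25 and 65 microtesla, so the exact ratio depends on which value you assume. The sketch below uses a value near the upper end of that range.

```python
# Rough check of the field-strength comparison (Earth-field value assumed, see note above).
lhc_dipole_field = 8.36    # tesla, as quoted above
earth_field = 65e-6        # tesla, an assumed value near the upper end of Earth's range

print(f"Ratio: {lhc_dipole_field / earth_field:,.0f}")   # ~128,600, i.e. about 128,000 times
```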

Type I superconductors completely expel an external magnetic field when they transition to their superconducting state. That is, the magnetic field can’t penetrate the material’s surface and enter the bulk. Type II superconductors are slightly more complicated. Below one critical temperature and one critical magnetic field strength, they behave like type I superconductors. At the same temperature but in a somewhat stronger magnetic field – up to a second, higher critical field strength – they remain superconducting but allow the field to penetrate their bulk to a certain extent. This is called the mixed state.

A hand-drawn phase diagram showing the conditions in which a mixed-state type II superconductor exists. Credit: Frederic Bouquet/Wikimedia Commons, CC BY-SA 3.0

Say a uniform magnetic field is applied over a mixed-state superconductor. The field will plunge into the material’s bulk in the form of vortices. All these vortices will carry the same amount of magnetic flux – a measure of the magnetic field passing through a given area – and will repel each other, settling into a triangular pattern, equidistant from each other.
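
For the curious, the “same magnetic flux” part is a standard result: each vortex carries exactly one quantum of flux, h/(2e). A small sketch, with an assumed illustrative field value (not a figure from this post), shows how densely the vortices pack.

```python
# Each vortex carries one flux quantum, h/(2e); the vortex density at a field B is B / flux_quantum.
h = 6.62607015e-34    # Planck constant, J*s
e = 1.602176634e-19   # elementary charge, C

flux_quantum = h / (2 * e)   # ~2.07e-15 weber per vortex

B = 1.0                      # tesla, an assumed illustrative value
vortices_per_sq_metre = B / flux_quantum

print(f"Flux quantum: {flux_quantum:.3e} Wb")
print(f"Vortex density at {B} T: {vortices_per_sq_metre:.2e} per square metre")
```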

An annotated image of vortices in a type II superconductor. The scale is specified at the bottom right. Source: A set of slides entitled ‘Superconductors and Vortices at Radio Frequency Magnetic Fields’ by Ernst Helmut Brandt, Max Planck Institute for Metals Research, October 2010.

When an electric current passes through this material, the vortices are slightly displaced, and also begin to experience a force proportional to how closely they’re packed together and their pattern of displacement. As a result, to quote from this technical (yet lucid) paper by Praveen Chaddah:

This force on each vortex … will cause the vortices to move. The vortex motion produces an electric field[1] parallel to [the direction of the existing current], thus causing a resistance, and this is called the flux-flow resistance. The resistance is much smaller than the normal state resistance, but the material no longer [has] infinite conductivity.

[1] According to Maxwell’s equations of electromagnetism, a changing magnetic field produces an electric field.

The vortices’ displacement depends on the current density: the greater the number of electrons being transported, the more flux-flow resistance there is. So the magnesium diboride cables can’t simply carry more and more current. At some point, setting aside other sources of resistance, the flux-flow resistance itself will damage the cable.
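
To get a feel for why this matters, here's a toy estimate (not a design calculation) using the Bardeen-Stephen picture, in which the flux-flow resistivity is roughly the normal-state resistivity scaled by B/Bc2. All the numbers below are assumed for illustration, not measured values for these cables; the point is that the dissipated power grows with the square of the current density.

```python
# Toy estimate of flux-flow dissipation; all numbers below are assumed, not measured.
rho_normal = 1e-7    # ohm*metre, assumed normal-state resistivity
B = 1.0              # tesla, assumed applied field
Bc2 = 15.0           # tesla, assumed upper critical field

# Bardeen-Stephen estimate: much smaller than the normal-state value, but not zero.
rho_flux_flow = rho_normal * B / Bc2

# Dissipated power per unit volume is rho * J^2, so it grows quadratically with current density J.
for J in (1e7, 1e8, 1e9):   # amperes per square metre
    print(f"J = {J:.0e} A/m^2 -> power density ~ {rho_flux_flow * J**2:.2e} W/m^3")
```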

There are ways to minimise this resistance. For example, the material can be doped with impurities that will ‘pin’ the vortices to fixed locations and prevent them from moving around. However, optimising these solutions for a given magnetic field and other conditions involves complex calculations that we don’t need to get into.

The point is that superconductors have their limits too. And knowing these limits could improve our appreciation for the feats of physics and engineering that underlie achievements like cables being able to transport 124.8 sextillion electrons per second with zero resistance. In fact, according to the CERN press release,

The [line] that is currently being tested is the forerunner of the final version that will be installed in the accelerator. It is composed of 19 cables that supply the various magnet circuits and could transmit intensities of up to 120 kA!

§

While writing this post, I was frequently tempted to quote from Lisa Randall’s excellent book-length introduction to the LHC, Knocking on Heaven’s Door (2011). Here’s a short excerpt:

One of the most impressive objects I saw when I visited CERN was a prototype of LHC’s gigantic cylindrical dipole magnets. Even with 1,232 such magnets, each of them is an impressive 15 metres long and weighs 30 tonnes. … Each of these magnets cost EUR 700,000, making the net cost of the LHC magnets alone more than a billion dollars.

The narrow pipes that hold the proton beams extend inside the dipoles, which are strung together end to end so that they wind through the extent of the LHC tunnel’s interior. They produce a magnetic field that can be as strong as 8.3 tesla, about a thousand times the field of the average refrigerator magnet. As the energy of the proton beams increases from 450 GeV to 7 TeV, the magnetic field increases from 0.54 to 8.3 teslas, in order to keep guiding the increasingly energetic protons around.

The field these magnets produce is so enormous that it would displace the magnets themselves if no restraints were in place. This force is alleviated through the geometry of the coils, but the magnets are ultimately kept in place through specially constructed collars made of four-centimetre thick steel.

… Each LHC dipole contains coils of niobium-titanium superconducting cables, each of which contains stranded filaments a mere six microns thick – much smaller than a human hair. The LHC contains 1,200 tonnes of these remarkable filaments. If you unwrapped them, they would be long enough to encircle the orbit of Mars.

When operating, the dipoles need to be extremely cold, since they work only when the temperature is sufficiently low. The superconducting wires are maintained at 1.9 degrees above absolute zero … This temperature is even lower than the 2.7-degree cosmic microwave background radiation in outer space. The LHC tunnel houses the coldest extended region in the universe – at least that we know of. The magnets are known as cryodipoles to take into account their special refrigerated nature.

In addition to the impressive filament technology used for the magnets, the refrigeration (cryogenic) system is also an imposing accomplishment meriting its own superlatives. The system is in fact the world’s largest. Flowing helium maintains the extremely low temperature. A casing of approximately 97 metric tonnes of liquid helium surrounds the magnets to cool the cables. It is not ordinary helium gas, but helium with the necessary pressure to keep it in a superfluid phase. Superfluid helium is not subject to the viscosity of ordinary materials, so it can dissipate any heat produced in the dipole system with great efficiency: 10,000 metric tonnes of liquid nitrogen are first cooled, and this in turn cools the 130 metric tonnes of helium that circulate in the dipoles.

Featured image: A view of the experimental MgB2 transmission line at the LHC. Credit: CERN.

My heart of physics

Every July 4, I have occasion to remember two things: the discovery of the Higgs boson, and my first published byline for an article about the discovery of the Higgs boson. I have no trouble believing it’s been eight years since we discovered this particle, using the Large Hadron Collider (LHC) and its ATLAS and CMS detectors, in Geneva. I’ve greatly enjoyed writing about particle physics in this time, principally because closely engaging with new research and the scientists behind it allowed me to learn more about a subject that high school and college had let me down on: physics.

In 2020, I haven’t been able to focus much on the physical sciences in my writing, thanks to the pandemic, the lockdown, their combined effects and one other reason. This has been made doubly sad by the fact that the particle physics community at large is at an interesting crossroads.

In 2012, the LHC fulfilled the principal task it had been built for: finding the Higgs boson. After that, physicists imagined the collider would discover other unknown particles, allowing theorists to expand their theories and answer hitherto unanswered questions. However, the LHC has since done the opposite: it has narrowed the possibilities of finding new particles that physicists had argued should exist according to their theories (specifically supersymmetric partners), forcing them to look harder for mistakes they might’ve made in their calculations. But thus far, physicists have neither found mistakes nor made new findings, leaving them stuck in an unsettling knowledge space from which it seems there might be no escape (okay, this is sensationalised, but it’s also kinda true).

Right now, the world’s particle physicists are mulling building a collider larger and more powerful than the LHC, at a cost of billions of dollars, in the hope that it will find the particles they’re looking for. Not all physicists agree, of course. If you’re interested in reading more, I’d recommend articles by Sabine Hossenfelder and Nirmalya Kajuri and spiralling out from there. But notwithstanding the opposition, CERN – which coordinates the LHC’s operations with tens of thousands of personnel from scores of countries – recently updated its strategy vision to recommend the construction of such a machine, with the ability to produce copious amounts of Higgs bosons in collisions between electrons and positrons (a.k.a. ‘Higgs factories’). China has also announced plans of its own to build something similar.

Meanwhile, scientists and engineers are busy upgrading the LHC itself to a ‘high luminosity’ version, where luminosity is a measure of the number of collisions – and therefore of interesting events available for further study – the machine can produce. This version will operate until 2038. That isn’t as far away as it sounds: it took more than a decade to build the LHC, and it will certainly take longer to plan, convince lawmakers, secure the funds for and build something bigger and more complicated.

There have also been some recent developments that point to other ways of discovering ‘new physics’ – the collective name for phenomena that would violate our existing theories’ predictions and show us where we’ve gone wrong in our calculations.

The most recent one, I think, was the ‘XENON excess’, which refers to a moderately strong signal recorded by the XENON1T detector in Italy that physicists think could be evidence of a class of particles called axions. I say ‘moderately strong’ because the statistical significance of the signal’s strength is just barely above the threshold used to denote evidence and not anywhere near the threshold that denotes a discovery proper.

It’s evoked a fair bit of excitement because axions count as new physics – but when I asked two physicists (one after the other) to write an article explaining this development, they refused on similar grounds: that the significance makes it seem likely that the signal will be accounted for by some other well-known event. I was disappointed of course but I wasn’t surprised either: in the last eight years, I can count at least four instances in which a seemingly inexplicable particle physics related development turned out to be a dud.

The most prominent one was the ‘750 GeV excess’ at the LHC in December 2015, which seemed to be a sign of a new particle about six-times heavier than a Higgs boson and 800-times heavier than a proton (at rest). But when physicists analysed more data, the signal vanished – a.k.a. it wasn’t there in the first place and what physicists had seen was likely a statistical fluke of some sort. Another popular anomaly that went the same way was the one at Atomki.

But while all of this is so very interesting, today – July 4 – also seems like a good time to admit I don’t feel as invested in the future of particle physics anymore (the ‘other reason’). Some might say, and have said, that I’m abandoning ship just as the field’s central animus is moving away from the physics and more towards sociology and politics, and some might be right. I get enough of the latter subjects when I work on the non-physics topics that interest me, like research misconduct and science policy. My heart of physics itself is currently tending towards quantum mechanics and thermodynamics (although not quantum thermodynamics).

One peer had also recommended in between that I familiarise myself with quantum computing while another had suggested climate-change-related mitigation technologies, which only makes me wonder now if I’m delving into those branches of physics that promise to take me farther away from what I’m supposed to do. And truth be told, I’m perfectly okay with that. 🙂 This does speak to my privileges – modest as they are on this particular count – but when it feels like there’s less stuff to be happy about in the world with every new day, it’s time to adopt a new hedonism and find joy where it lies.

A gear-train for particle physics

It has come under scrutiny at various times from multiple prominent physicists and thinkers, but it’s not hard to see why, when the idea of ‘grand unification’ was first set out, it seemed plausible to so many. The first time it was seriously considered was about four decades ago, shortly after physicists had realised that two of the four fundamental forces of nature were in fact a single unified force if you ramped up the energy at which they acted (electromagnetic + weak = electroweak). The thought that followed was simply logical: what if, at some extremely high energy (like that in the Big Bang), all four forces unified into one? This was 1974.

There has been no direct evidence of such grand unification yet. Physicists don’t know how the electroweak force will unify with the strong nuclear force – let alone gravity, a problem that actually birthed one of the most powerful mathematical tools in an attempt to solve it. Nonetheless, they think they know the energy at which such grand unification should occur if it does: the Planck scale, around 10^19 GeV. This is roughly as much energy as is contained in a car’s tank of petrol, but it’s stupefyingly large when you have to accommodate all of it in a particle that’s 10^-15 metres wide.

This is where particle accelerators come in. The most powerful of them, the Large Hadron Collider (LHC), uses electric fields to accelerate protons to close to light-speed – and powerful magnetic fields to keep them on track – until their energy approaches about 7,000 GeV. But the Planck energy is still more than a million billion times higher, about 15 orders of magnitude, which means it’s not something we might ever be able to attain on Earth. Nonetheless, physicists’ theories show that that’s where all of our physical laws should be created, where the commandments by which all that exists does should be written.
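
A quick sketch of just how far apart those two energies are:

```python
import math

planck_energy = 1.22e19      # GeV, the Planck scale
lhc_proton_energy = 7e3      # GeV, roughly what a single LHC proton reaches

ratio = planck_energy / lhc_proton_energy
print(f"Ratio: {ratio:.2e}")                             # ~1.7e15, over a million billion
print(f"Orders of magnitude: {math.log10(ratio):.1f}")   # ~15
```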

… Or is it?

There are many outstanding problems in particle physics, and physicists are desperate for a solution. They have to find something wrong with what they’ve already done, something new or a way to reinterpret what they already know. The clockwork theory is of the third kind – and its reinterpretation begins by asking physicists to dump the idea that new physics is born only at the Planck scale. So, for example, it suggests that the effects of quantum gravity (a quantum-mechanical description of gravity) needn’t necessarily become apparent only at the Planck scale but at a lower energy itself. But even if it then goes on to solve some problems, the theory threatens to present a new one. Consider: If it’s true that new physics isn’t born at the highest energy possible, then wouldn’t the choice of any energy lower than that just be arbitrary? And if nothing else, nature is not arbitrary.

To its credit, clockwork sidesteps this issue by simply not trying to find ‘special’ energies at which ‘important’ things happen. Its basic premise is that the forces of nature are like a set of interlocking gears moving against each other, transmitting energy – rather potential – from one wheel to the next, magnifying or diminishing the way fundamental particles behave in different contexts. Its supporters at CERN and elsewhere think it can be used to explain some annoying gaps between theory and experiment in particle physics, particularly the naturalness problem.

Before the Higgs boson was discovered, physicists predicted based on the properties of other particles and forces that its mass would be very high. But when the boson’s discovery was confirmed at CERN in January 2013, its mass implied that the universe would have to be “the size of a football” – which is clearly not the case. So why is the Higgs boson’s mass so low, so unnaturally low? Scientists have fronted many new theories that try to solve this problem but their solutions often require the existence of other, hitherto undiscovered particles.

Clockwork’s solution is a way in which the Higgs boson’s interaction with gravity – rather gravity’s associated energy – is mediated by a string of effects described in quantum field theory that tamp down the boson’s mass. In technical parlance, the boson’s mass becomes ‘screened’. An explanation for this that’s both physical and accurate is hard to draw up because of various abstractions. So as University of Bruxelles physicist Daniele Teresi suggests, imagine this series: Χ = 0.5 × 0.5 × 0.5 × 0.5 × … × 0.5. Even if each step reduces Χ’s value by only a half, it is already an eighth after three steps; after four, a sixteenth. So the effect can get quickly drastic because it’s exponential.
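
A tiny sketch of Teresi's toy series makes the point about compounding; the numbers are purely illustrative.

```python
import math

# Multiplying by 0.5 at every step shrinks X exponentially.
x = 1.0
for step in range(1, 6):
    x *= 0.5
    print(f"after step {step}: X = {x}")   # 0.5, 0.25, 0.125 (an eighth), 0.0625 (a sixteenth), ...

# Roughly 50 halvings already suppress X by 15 orders of magnitude:
print(math.ceil(math.log(1e15, 2)))   # 50
```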

And the theory provides a mathematical toolbox that allows for all this to be achieved without the addition of new particles. This is advantageous because it makes clockwork relatively more elegant than another theory that seeks to solve the naturalness problem, called supersymmetry, SUSY for short. Physicists also like SUSY because it allows for a large energy hierarchy: a distribution of particles and processes at energies between electroweak unification and grand unification, instead of leaving the region bizarrely devoid of action like the Standard Model does. But then SUSY predicts the existence of 17 new particles, none of which have been detected yet.

Even more, as Matthew McCullough, one of clockwork’s developers, showed at an ongoing conference in Italy, its solutions for a stationary particle in four dimensions exhibit conceptual similarities to Maxwell’s equations for an electromagnetic wave in a conductor. The existence of such analogues is reassuring because it recalls nature’s tendency to be guided by common principles in diverse contexts.

This isn’t to say clockwork theory is it. As physicist Ben Allanach has written, it is a “new toy” and physicists are still playing with it to solve different problems. Just that in the event it has an answer to the naturalness problem – as well as to the question of why dark matter doesn’t decay, for example – it is notable. But is this enough: to say that clockwork theory mops up the math cleanly in a bunch of problems? How do we make sure that this is how nature works?

McCullough thinks there’s one way, using the LHC. Very simplistically: clockwork theory induces fluctuations in the probabilities with which pairs of high-energy photons are created at some energies at the LHC. These should be visible as wavy squiggles in a plot with energy on the x-axis and events on the y-axis. If these plots can be obtained and analysed, and the results agree with clockwork’s predictions, then we will have confirmed what McCullough calls an “irreducible prediction of clockwork gravity”, the case of using the theory to solve the naturalness problem.

To recap: No free parameters (i.e. no new particles), conceptual elegance and familiarity, and finally a concrete and unique prediction. No wonder Allanach thinks clockwork theory inhabits fertile ground. On the other hand, SUSY’s prospects have been bleak since at least 2013 (if not earlier) – and it is one of the more favoured theories among physicists to explain physics beyond the Standard Model, physics we haven’t observed yet but generally believe exists. At the same time, and it bears reiterating, clockwork theory will also have to face down a host of challenges before it can be declared a definitive success. Tik tok tik tok tik tok

Some notes and updates

Four years of the Higgs boson

Missed this, didn’t I. On July 4, 2012, physicists at CERN announced that the Large Hadron Collider had found a Higgs-boson-like particle. Though the confirmation would only come in January 2013 (that it was the Higgs boson and not any other particle), July 4 is the celebrated date. I don’t exactly mark the occasion every year except to recap whatever’s been happening in particle physics. And this year: everyone’s still looking for supersymmetry; there was widespread excitement about a possible new fundamental particle weighing about 750 GeV when data-taking began at the LHC in late May, but strong rumours from within CERN have it that such a particle probably doesn’t exist (i.e. it’s vanishing in the new data-sets). Pity. The favoured way to anticipate what might emerge, well before the final announcements are made in August, is to keep an eye out for conference announcements in mid-July. If they’re made, it’s a strong giveaway that something’s been found.

Live-tweeting and timezones

I’ve a shitty internet connection at home in Delhi, which means I couldn’t get to see the live-stream NASA put out of its control room or whatever as Juno executed its orbital insertion manoeuvre this morning. Fortunately, Twitter came to the rescue; NASA’s social media team had done such a great job of hyping up the insertion (deservedly so) that it seemed as if all the 480 accounts I followed were tweeting about it. I don’t believe I missed anything at all, except perhaps the sounds of applause. Twitter’s awesome that way, and I’ll say that even if it means I’m stating the obvious. One thing did strike me: all times (of the various events in the timeline) were published in UTC and EDT. This makes sense because converting from UTC to a local timezone is easy (IST = UTC + 5.30) while EDT corresponds to the US east coast. However, the fact that IST is UTC + 5.30 isn’t immediately apparent to everyone (at least not to me), and every so often I wish an account tweeting from India, such as a news agency’s, would use IST. I do it every time.

New music

https://www.youtube.com/watch?v=F4IwxzU3Kv8

I don’t know why I hadn’t found Yat-kha earlier considering I listen to Huun Huur Tu so much, and Yat-kha is almost always among the recommendations (all bands specialising in throat-singing). And while Huun Huur Tu likes to keep their music traditional and true to its original compositional style, Yat-kha takes it a step further, blending its sound with rock, and this tastes much better to me. With a voice like Albert Kuvezin’s, keeping things traditional can be a little disappointing – you can hear why in the song above. It’s called Kaa-khem; the same song by Huun Huur Tu is called Mezhegei. Bass evokes megalomania in me, and it’s all the more sensual when its rendition is accomplished with human voice, rising and falling. Another example of what I’m talking about is called Yenisei punk. Finally, this is where I’d suggest you stop if you’re looking for throat-singing made to sound more belligerent: I stumbled upon War horse by Tengger Cavalry, classified as nomadic folk metal. It’s terrible.

Fall of Light, a part 2

In fantasy trilogies, the first part benefits from establishing the premise and the third, from the denouement. If the second part has to benefit from anything at all, then it is the story itself, not the intensity of the stakes within its narrative. At least, that’s my takeaway from Fall of Light, the second book of Steven Erikson’s Kharkanas trilogy. Its predecessor, Forge of Darkness, established the kingdom of Kurald Galain and the various forces that shape its peoples and policies. Because the trilogy has been described as being a prequel (note: not the prequel) to Erikson’s epic Malazan Book of the Fallen series, and because of what we know about Kurald Galain in the series, the last book of the trilogy has its work cut out for it. But in the meantime, Fall of Light was an unexpectedly monotonous affair – and that was awesome. As a friend of mine has been wont to describe the Malazan series: Erikson is a master of raising the stakes. He does that in all of his books (including the Korbal Broach short-stories) and he does it really well. However, Fall of Light rode with the stakes as they were laid down at the end of the first book, through a plot that maintained the tension at all times. It’s neither eager to shed its burden nor is it eager to take on new ones. If you’ve read the Malazan series, I’d say he’s written another Deadhouse Gates, but better.

Oh, and this completes one of my bigger goals for 2016.

Money for science

Spending money on science has been tied to evaluating the value of spin-offs, assessing the link between technological advancement and GDP, and dissecting the metrics of productivity – but the debate never quite settles, no matter how convincingly it seems to be resolved each time.

For a piece titled The Telescope of the 2030s, Dennis Overbye writes in The New York Times,

I used to think $10 billion was a lot of money before TARP, the Troubled Asset Relief Program, the $700 billion bailout that saved the banks in 2008 and apparently has brought happy days back to Wall Street. Compared with this, the science budget is chump change, lunch money at a place like Goldman Sachs. But if you think this is not a bargain, you need look only as far as your pocket. Companies like Google and Apple have leveraged modest investments in computer science in the 1960s into trillions of dollars of economic activity. Not even Arthur C. Clarke, the vaunted author and space-age prophet, saw that coming.

Which is to say that all that NASA money — whether for planetary probes or space station trips — is spent on Earth, on things that we like to say we want more of: high technology, education, a more skilled work force, jobs, pride in American and human innovation, not to mention greater cosmic awareness, a dose of perspective on our situation here among the stars.

And this is a letter from Todd Huffman, a particle physicist at Oxford, to The Guardian:

Simon Jenkins parrots a cry that I have heard a few times during my career as a research scientist in high-energy physics (Pluto trumps prisons when we spend public money, 17 July). He is unimaginatively concerned that the £34m a year spent by the UK at Cern (and a similar amount per year would have been spent on the New Horizons probe to Pluto) is not actually money well spent.

Yet I read his article online using the world wide web, which was developed initially by and for particle physicists. I did this using devices with integrated circuits partly perfected for the aerospace industry. The web caused the longest non-wartime economic boom in recorded history, during the 90s. The industries spawned by integrated circuits are simply too numerous to count and would have been impossible to predict when that first transistor was made in the 50s. It is a failure of society that funnels such economic largesse towards hedge-fund managers and not towards solving the social ills Mr Jenkins rightly exposes.

Conflict of interest? Not really. Science is being cornered from all sides and if anyone’s going to defend its practice, it’s going to be scientists. But we’re often so ready to confuse participation for investment, and at the first hint of an allegation of conflict, we don’t wait to verify matters for ourselves.

I’m sure Yuri Milner’s investment of $100 million today to help the search for extra-terrestrial intelligence will be questioned, too, despite Stephen Hawking’s moving endorsement of it:

Somewhere in the cosmos, perhaps, intelligent life may be watching these lights of ours, aware of what they mean. Or do our lights wander a lifeless cosmos — unseen beacons, announcing that here, on one rock, the Universe discovered its existence. Either way, there is no bigger question. It’s time to commit to finding the answer – to search for life beyond Earth. We are alive. We are intelligent. We must know.

Pursuits like exploring the natural world around us are, I think, what we’re meant to do as humans, what we must do when we can, and what we must ultimately aspire to.

The Large Hadron Collider is back online, ready to shift from the “what” of reality to “why”

The world’s single largest science experiment will restart on March 23 after a two-year break. Scientists and administrators at the European Organization for Nuclear Research – known by its French acronym CERN – have announced the status of the agency’s upgrades on its Large Hadron Collider (LHC) and its readiness for a new phase of experiments running from now until 2018.

Before the experiment was shut down in early 2013, the LHC became famous for helping discover the elusive Higgs boson, a fundamental (that is, indivisible) particle that gives other fundamental particles their mass through a complicated mechanism. The find earned two of the physicists who had thought up the mechanism in 1964, Peter Higgs and Francois Englert, the Nobel Prize for physics in 2013.

Though the LHC had fulfilled one of its more significant goals by finding the Higgs boson, its purpose is far from complete. In its new avatar, the machine boasts of the energy and technical agility necessary to answer questions that current theories of physics are struggling to make sense of.

As Alice Bean, a particle physicist who has worked with the LHC, said, “A whole new energy region will be waiting for us to discover something.”

The finding of the Higgs boson laid to rest speculations of whether such a particle existed and what its properties could be, and validated the currently reigning set of theories that describe how various fundamental particles interact. This is called the Standard Model, and it has been successful in predicting the dynamics of those interactions.

From the what to the why

But having assimilated all this knowledge, what physicists don’t know, but desperately want to, is why those particles’ properties have the values they do. They have realized the implications are numerous and profound: ranging from the possible existence of more fundamental particles we are yet to encounter to the nature of the substance known as dark matter, which makes up a great proportion of matter in the universe while we know next to nothing about it. These mysteries were first conceived to plug gaps in the Standard Model but they have only been widening since.

With an experiment now able to better test theories, physicists have started investigating these gaps. For the LHC, the implication is that in its second edition it will not be looking for something as much as helping scientists decide where to look to start with.

As Tara Shears, a particle physicist at the University of Liverpool, told Nature, “In the first run we had a very strong theoretical steer to look for the Higgs boson. This time we don’t have any signposts that are quite so clear.”

Higher energy, luminosity

The upgrades to the LHC that would unlock new experimental possibilities were evident in early 2012.

The machine works by using electric fields to accelerate two trains, or beams, of protons in opposite directions, within a ring 27 km long, to almost the speed of light – and powerful magnetic fields to keep them on course – before colliding them head-on. The result is a particulate fireworks of such high energy that the rarest, most short-lived particles are brought into existence before they promptly decay into lighter, more common particles. Particle detectors straddling the LHC at four points on the ring record these collisions and their effects for study.

So, to boost its performance, upgrades to the LHC were of two kinds: increasing the collision energy inside the ring and increasing the detectors’ abilities to track more numerous and more powerful collisions.

The collision energy has been nearly doubled in its second life, from 7-8 TeV to 13-14 TeV. The frequency of collisions has also been doubled, from one set every 50 nanoseconds to one every 25 nanoseconds (a nanosecond is a billionth of a second). Steve Myers, CERN’s director for accelerators and technology, had said in December 2012, “More intense beams mean more collisions and a better chance of observing rare phenomena.”
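
In other words, halving the bunch spacing doubles the rate at which bunches of protons cross each other (each crossing produces multiple proton-proton collisions). A one-line check:

```python
spacing_old = 50e-9   # seconds between bunch crossings before the upgrade
spacing_new = 25e-9   # seconds between bunch crossings after the upgrade

print(f"Old rate: {1 / spacing_old / 1e6:.0f} million crossings per second")   # 20
print(f"New rate: {1 / spacing_new / 1e6:.0f} million crossings per second")   # 40
```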

The detectors have received new sensors, neutron shields to protect from radiation damage, cooling systems and superconducting cables. An improved fail-safe system has also been installed to forestall accidents like the one in 2008, when failing to cool a magnet led to a shut-down for eight months.

In all, the upgrades cost approximately $149 million, and will increase CERN’s electricity bill by 20% to $65 million. A “massive debugging exercise” was conducted last week to ensure all of it clicked together.

Going ahead, these new specifications will be leveraged to tackle some of the more outstanding issues in fundamental physics.

CERN listed a few – presumably primary – focus areas. They include investigating if the Higgs boson could betray the existence of undiscovered particles, the particles dark matter could be made of, why the universe today has much more matter than antimatter, and if gravity is so much weaker than other forces because it is leaking into other dimensions.

Stride forward in three frontiers

Physicists are also hopeful for the prospects of discovering a class of particles called supersymmetric partners. The theory that predicts their existence is called supersymmetry. It builds on some of the conclusions of the Standard Model, and offers predictions that plug its holes as well with such mathematical elegance that it has many of the world’s leading physicists enamored. These predictions involve the existence of new particles called partners.

In a neat infographic in Nature, Elizabeth Gibney explains that the partner that will be easiest to detect is the ‘stop squark’, as it is the lightest and can show itself in lower-energy collisions.

In all, the LHC’s new avatar marks a big stride forward not just in the energy frontier but also in the intensity and cosmic frontiers. With its ability to produce and track more collisions per second as well as chart the least explored territories of the ancient cosmos, it’d be foolish to think this gigantic machine’s domain is confined to particle physics and couldn’t extend to fuel cells, medical diagnostics or achieving systems-reliability in IT.

Here’s a fitting video released by CERN to mark this momentous occasion in the history of high-energy physics.

Featured image: A view of the LHC. Credit: CERN

Update: After engineers spotted a short-circuit glitch in a cooled part of the LHC on March 21, its restart was postponed from March 23 by a few weeks. However, CERN has assured that it’s a fully understood problem and that it won’t detract from the experiment’s goals for the year.

Fabiola Gianotti, the first woman Director-General of CERN

The CERN Council has elected a new Director-General to succeed the incumbent Rolf-Dieter Heuer. Fabiola Gianotti, who served as the ATLAS collaboration’s spokesperson from 2009 to 2013 – a period that included the discovery of the long-sought Higgs boson by the ATLAS and CMS experiments – will be the first woman to hold the position. Her mandate begins in January 2016.

A CERN press release announcing the appointment said the “Council converged rapidly in favor of Dr. Gianotti”, implying it was a quick and unanimous decision.

The Large Hadron Collider (LHC), the mammoth particle smasher that produces the collisions that ATLAS, CMS and two other similar collaborations study, is set to restart in January 2015 after a series of upgrades to increase its energy and luminosity. And so Dr. Gianotti’s term will coincide with a distinct phase of science, this one eager for evidence to help answer deeper questions in particle physics – such as the Higgs boson’s mass, the strong force’s strength and dark matter.

Dr. Gianotti will succeed 15 men who, as Directors-General, have been responsible for not simply coordinating the scientific efforts stemming from CERN but also guiding research priorities and practices. They have effectively set the various agendas that the world’s preeminent nuclear physics lab has chosen to pursue since its establishment in 1954.

In fact, the title of ‘spokesperson’, which Dr. Gianotti held for the ATLAS collaboration for four years until 2013, is itself deceptively uncomplicated. The spokesperson not only speaks for the collaboration but is also the effective project manager who plays an important role when decisions are made about what measurements to focus on and what questions to answer. When on July 4, 2012, the discovery of a Higgs-boson-like particle was announced, results from the ATLAS particle-detector – and therefore Dr. Gianotti’s affable leadership – were instrumental in getting that far, and in getting Peter Higgs and Francois Englert their 2013 Nobel Prize in physics.

Earlier this year, speaking to CNN, she had likened her job to “a great scientific adventure” but “also a great human adventure”. To guide the aspirations and creativity of 3,000 engineers and physicists without attenuation[1] of productivity or will must indeed have been so.

That she will be the first woman to become the DG of CERN can’t escape attention either, especially at a time when women’s participation in STEM research seems to be on the decline and sexism in science is being recognized as a prevalent issue. Dr. Gianotti will no doubt make a strong role model for a field that is only 25% women. There will also be much to learn from her past, from the time she chose to become a physicist after learning about Albert Einstein’s use of quantum ideas to explain the photoelectric effect. She joined CERN while working toward her PhD from the University of Milan. She was 25, it was 1987 and the W/Z bosons had recently been discovered at the facility’s UA1 and UA2 collaborations. Dr. Gianotti would join the latter.

It was an exciting, as well as exacting, time to be a physicist. Planning for the LHC would begin in that decade and launch one of the world’s largest scientific collaborations with it. The success of a scientist would start to demand not just research excellence but also a flair for public relations, bureaucratic diplomacy and the acuity necessary to manage public funds in the billions from different countries. Dr. Gianotti would go on to wear all these hats even as she started work in calorimetry at the LHC in 1990, on the ATLAS detector in 1992, and on the search for supersymmetric (‘new physics’) particles in 1996.

Her admiration for the humanities has been known to play its part in shaping her thoughts about the universe at its most granular. She has a professional music diploma from the Milan Conservatory and often unwinds at the end of a long day with a session on the piano. Her fifth-floor home in Geneva sometimes affords her a view of Mont Blanc, and she often enjoys long walks in the mountains. In the same interview, given to Financial Times in 2013, she adds,

There are many links between physics and art. For me, physics and nature have very nice foundations from an aesthetic point of view, and at the same time art is based on physics and mathematical principle. If you build a nice building, you have to build it with some criteria because otherwise it collapses.[2]

Her success in leading the ATLAS collaboration, and becoming the veritable face of the hunt for the Higgs boson, have catapulted her to being the next DG of CERN. At the same time, it must feel reassuring[3] that as physicists embark on a new era of research that requires just as much ingenuity in formulating new ideas as in testing them – an era “where logic based on past theories does not guide us”[4] – Fabiola Gianotti’s research excellence, administrative astuteness and creative intuition are now there to guide them.

Good luck, Dr. Gianotti!


[1] Recommended read: Who really found the Higgs boson? The real genius in the Nobel Prize-winning discovery is not who you think it is. Nautilus, Issue 18.

[2] I must mention that it’s weird that someone with such strong aesthetic foundations used Comic Sans MS as the font of choice for her presentation at the CERN seminar in 2012 that announced the discovery of a Higgs-like boson. It was probably the beginning of Comic Sans’s comeback.

[3] Though I am no physicist.

[4] In the words of Academy Award-winning film editor Walter S. Murch.

Featured image credit: Claudia Marcelloni/CERN

Restarting the LHC: A timeline

CERN has announced the restart schedule of its flagship science “project”, the Large Hadron Collider, which will see the giant machine return online in early 2015. I’d written about the upgrades that could be expected shortly before it shut down in 2012. They range from new pixel sensors and safety systems to facilities that will double the collider’s energy and the detectors’ eyes for tracking collisions. Here’s a little timeline I made with Timeline.js, check it out.

(It’s at times like this that I really wish WP.com would let bloggers embed iframes in posts.)

The hunt for supersymmetry: Reviewing the first run – 2

I’d linked to a preprint paper [PDF] on arXiv a couple of days ago that summarized the search for supersymmetry (Susy) from the first run of the Large Hadron Collider (LHC). I’d written to one of the paper’s authors, Pascal Pralavorio at CERN, seeking some insights into his summary, but unfortunately he couldn’t reply by the time I’d published the post. He replied this morning and I’ve summed up his points below.

Pascal says physicists trained their detectors for “the simplest extension of the Standard Model” using supersymmetric principles, called the Minimal Supersymmetric Standard Model (MSSM), formulated in the early 1980s. This meant they were looking for a total of 35 particles. In the first run, the LHC operated at two different energies: first at 7 TeV (collecting 5 fb⁻¹ of data), then at 8 TeV (20 fb⁻¹; explainer here). The data was garnered from both the ATLAS and CMS detectors.

In all, they found nothing. As a result, as Pascal says, “When you find nothing, you don’t know if you are close or far from it!”

His paper has an interesting chart that summarized the results for the search for Susy from Run 1. It is actually a superimposition of two charts. One shows the different Standard Model processes (particle productions, particle decays, etc.) at different energies (200-1,600 GeV). The second shows the Susy processes that are thought to occur at these energies.

Cross-sections of several SUSY production channels, superimposed with Standard Model processes at √s = 8 TeV. The right-hand axis indicates the number of events for 20 fb⁻¹.

The cross-section in the chart is a measure of the probability of an event-type appearing during a proton-proton collision. What you can see from this plot is the ratio of probabilities. For example, stop-stop* production (the top quark’s Susy partner particle and anti-particle, respectively) with a mass of 400 GeV is 10^10 (10 billion) times less probable than inclusive di-jet events (a Standard Model process). “In other words,” Pascal says, it is “very hard to find” a Susy process while Standard Model processes are on, but it is “possible for highly trained particle physicists” to get there.
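
The right-hand axis of that chart follows from the standard relation N = cross-section × integrated luminosity. A small sketch with assumed, illustrative cross-sections (not numbers from the paper) shows why a rare Susy signal is so easily swamped:

```python
integrated_luminosity_fb = 20.0    # fb^-1, Run 1 at 8 TeV as quoted above

def expected_events(cross_section_fb):
    """Expected number of events: cross-section (fb) times integrated luminosity (fb^-1)."""
    return cross_section_fb * integrated_luminosity_fb

susy_xsec_fb = 1.0                   # assumed: a rare SUSY process of 1 femtobarn
sm_xsec_fb = susy_xsec_fb * 1e10     # a Standard Model process 10^10 times more probable

print(f"SUSY-like events: {expected_events(susy_xsec_fb):.0f}")      # ~20
print(f"Standard Model events: {expected_events(sm_xsec_fb):.1e}")   # ~2e11
```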

Of course, none of this means physicists aren’t open to the possibility of there being a theory (and corresponding particles out there) that even Susy mightn’t be able to explain. The most popular among such theories posits the presence of a “possible extra spatial dimension” on top of the three that we already know. “We will of course continue to look for it and for supersymmetry in the second run.”

Which way does antimatter swing?

In our universe, matter is king: it makes up everything. Its constituents are incredibly tiny particles – smaller than even the protons and neutrons they constitute – and they work together with nature’s forces to make up… everything.

There was also another form of particle once, called antimatter. It is extinct today, but when the universe was born 13.82 billion years ago, there were equal amounts of both kinds.

Nobody really knows where all the antimatter disappeared to or how, but they are looking. Some others, however, are asking another question: did antimatter, while it lasted, fall downward or upward in response to gravity?

Joel Fajans, a professor at the University of California, Berkeley, is one of the physicists doing the asking. “It is the general consensus that the interaction of matter with antimatter is the same as gravitational interaction of matter,” he told this correspondent.

But he wants to be sure, because what he finds could revolutionize the world of physics. Over the years, studying particles and their antimatter counterparts has revealed most of what we know today about the universe. In the future, physicists will explore their minuscule world, called the quantum world, further to see if answers to some unsolved problems are found. If, somewhere, an anomaly is spotted, it could pave the way for new explanations to take over.

“Much of our basic understanding of the evolution of the early universe might change. Concepts like dark energy and dark matter might have to be revised,” Fajans said.

Along with his colleague Jonathan Wurtele, Fajans will work with the ALPHA experiment at CERN to run an elegant experiment that could directly reveal gravity’s effect on antimatter. ALPHA stands for Anti-hydrogen Laser Physics Apparatus.

We know gravity acts on a ball by watching it fall when dropped. On Earth, the ball will fall toward the source of the gravitational pull, a direction called ‘down’. Fajans and Wurtele will study if down is in the same place for antimatter as for matter.

An instrument at CERN called the anti-proton decelerator (AD) synthesizes the antimatter counterpart of protons for study in the lab at a low energy. Fajans and co. will then use the ALPHA experiment’s setup to guide them into the presence of anti-electrons derived from another source using carefully directed magnetic fields.

When an anti-proton and an anti-electron come close enough, their opposite charges will bind them together into an anti-hydrogen atom.

Because antimatter and matter annihilate each other in a flash of energy, they couldn’t be let near each other during the experiment. Instead, the team used strong magnetic fields to form a force-field around the antimatter, “bottling” it in space.

Once this was done, the experiment was ready to go. Like fingers holding a ball unclench, the magnetic fields were turned off – but not instantaneously. They were allowed to go from ‘on’ to ‘off’ over 30 milliseconds. In this period, the magnetic force wears off and lets gravitational force take its place.

And in this state, Fajans and his team studied which way the little things moved: up or down.

The results

The first set of results from the experiment have allowed no firm conclusions to be drawn. Why? Fajans answered, “Relatively speaking, gravity has little effect on the energetic anti-atoms. They are already moving so fast that they are barely affected by the gravitational forces.” According to Wurtele, about 411 out of 434 anti-atoms in the trap were so energetic that the way they escaped from the trap couldn’t be attributed to gravity’s pull or push on them.

Among them, they observed roughly equal numbers of anti-atoms falling out at the bottom of the trap as at the top (and sides, for that matter).

They shared this data with their ALPHA colleagues and two people from the University of California, lecturer Andrew Charman and postdoc Andre Zhmoginov. They ran statistical tests to separate results due to gravity from results due to the magnetic field. Again, much statistical uncertainty remained.

The team has no reason to give up, though. For now, they know that gravity would have to be 100 times stronger than it is for them to see any of its effects on anti-hydrogen atoms. They have a lower limit.

Moreover, the ALPHA experiment is also undergoing upgrades to become ALPHA-2. With this avatar, Fajans’s team also hopes to incorporate laser-cooling, a method of further slowing the anti-atoms, so that the effects of gravity are enhanced. Michael Doser, however, is cautious.

The future

As a physicist working with antimatter at CERN, Doser says, “I would be surprised if laser cooling of antihydrogen atoms, something that hasn’t been attempted to date, would turn out to be straightforward.” The challenge lies in bringing the systematics down to the point at which one can trust that any observation would be due to gravity, rather than due to the magnetic trap or the detectors being used.

Fajans and co. also plan to turn off the magnets more slowly in the future to enhance the effects of gravity on the anti-atom trajectories. “We hope to be able to definitively answer the question of whether or not antimatter falls down or up with these improvements,” Fajans concluded.

Like its larger sibling, the Large Hadron Collider, the AD is also undergoing maintenance and repair in 2013, so until the next batch of anti-protons are available in mid-2014, Fajans and Wurtele will be running tests at their university, checking if their experiment can be improved in any way.

They will also be taking heart from there being two other experiments at CERN, both working with antimatter and gravity, that can verify their results if they come up with something anomalous. They are the Anti-matter Experiment: Gravity, Interferometry, Spectroscopy (AEGIS), for which Doser is the spokesperson, and the Gravitational Behaviour of Anti-hydrogen at Rest (GBAR).

Together, they carry the potential benefit of an independent cross-check between techniques and results. “This is less important in case no difference to the behaviour of normal matter is found,” Doser said, “but would be crucial in the contrary case. With three experiments chasing this up, the coming years look to be interesting!”

This post, as written by me, originally appeared in The Copernican science blog at The Hindu on May 1, 2013.