A new dawn for particle accelerators in the wake

During a lecture in 2012, G. Rajasekaran, professor emeritus at the Institute of Mathematical Sciences, Chennai, said that the future of high-energy physics lay with engineers being able to design smaller particle accelerators. The theories of particle physics have long been exploring energy levels that we might never be able to reach with accelerators built on Earth. At the same time, the onus will still be on physicists to reach the energies we can reach in ways that are cheaper, more efficient and smaller – because reach them we must if our theories are to be tested. According to Rajasekaran, the answer is, or will soon be, the tabletop particle accelerator.

In the last decade, tabletop accelerators have inched closer to commercial viability thanks to a method called plasma wakefield acceleration. Recently, an experiment demonstrating this method was performed at the University of Maryland (UMD) and the results published in the peer-reviewed journal Physical Review Letters. A team member said in a statement: “We have accelerated high-charge electron beams to more than 10 million electron volts using only millijoules of laser pulse energy. This is the energy consumed by a typical household lightbulb in one-thousandth of a second.” Ten MeV pales in comparison to what the world’s most powerful particle accelerator, the Large Hadron Collider (LHC), achieves – a dozen million MeV – but the UMD researchers’ device isn’t meant to compete with the LHC; it is meant to compete with the room-sized accelerators typically used for medical imaging.
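
The lightbulb comparison is easy to check with a line of arithmetic (the 60 W rating below is an assumed figure; the statement doesn't specify one):

```python
# Energy drawn by a household lightbulb in one-thousandth of a second.
# The 60 W rating is an assumption; the UMD statement doesn't specify one.
bulb_power_w = 60.0         # watts, i.e. joules per second
duration_s = 1e-3           # one-thousandth of a second

energy_j = bulb_power_w * duration_s
energy_mj = energy_j * 1e3  # joules -> millijoules

print(f"{energy_mj:.0f} mJ")  # prints "60 mJ"
```

Tens of millijoules – the same order as the laser pulses the team used.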

In a particle accelerator like the LHC, a string of radiofrequency (RF) cavities is used to accelerate charged particles around a ring (in a linear machine like the Stanford linac, along a straight line). Energy is delivered to the particles by powerful electromagnetic fields in the cavities, which switch polarity at 400 MHz – 400 million times a second – and the particles’ arrival at each cavity is timed accordingly. Over the course of 15 minutes, the particle bunches are accelerated from 450 GeV to 4 TeV (the beam energies before the LHC was upgraded over 2014), with the bunches going around the ring 11,000 times per second. As the RF fields are ramped up in energy, the particles swing faster and faster around – until computers bring two such beams into each other’s paths at a designated point inside the ring and BANG.
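
These figures imply a surprisingly modest energy gain per revolution. A back-of-the-envelope sketch, using only the numbers quoted above:

```python
# Back-of-the-envelope: average energy gained per revolution during the
# 15-minute ramp from 450 GeV to 4 TeV, at 11,000 turns per second.
ramp_time_s = 15 * 60
turns_per_s = 11_000
start_gev, end_gev = 450.0, 4000.0

total_turns = ramp_time_s * turns_per_s
gain_per_turn_mev = (end_gev - start_gev) * 1000 / total_turns

print(f"{total_turns:,} turns, ~{gain_per_turn_mev:.2f} MeV gained per turn")
```

Nearly ten million laps, at well under 1 MeV per lap – which is why the ramp takes a quarter of an hour.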

A wakefield accelerator also uses an electromagnetic field to deliver energy to the particles, but instead of ramping and switching over time, it delivers the energy in one big tug.

First, scientists create a plasma: a fluidic state of matter consisting of free-floating ions (positively charged) and electrons (negatively charged). Then they shoot two bunches of electrons into it, separated by 15-20 micrometres (millionths of a metre). As the leading bunch moves through the plasma, it pushes away the plasma’s electrons and so creates a distinct electric field around itself called the wakefield. The wakefield envelops the trailing bunch of electrons as well, and exerts two forces on them: one along the direction of motion, which accelerates the trailing bunch, and one in the transverse direction, which makes the bunch either more or less focused. As the two bunches shoot through the plasma, the leading bunch transfers its energy to the trailing bunch via the longitudinal component of the wakefield, and the trailing bunch accelerates.

A plasma wakefield accelerator scores over a bigger machine in two key ways:

  • The wakefield is a very efficient medium for transferring energy – in effect, a transformer. Experiments at the Stanford Linear Accelerator Centre (SLAC) have recorded 30% efficiency, which is considered high.
  • Wakefield accelerators have been able to push the energy gained per unit distance travelled by the particle to 100 GV/m (an electric field of 1 GV/m imparts an energy of 1 GeV to one electron over 1 metre). Assuming a realistic peak accelerating gradient of 100 MV/m, a similar gain (of 100 GeV) at SLAC would take over a kilometre.
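
The distance comparison in the second point is just the target energy divided by the accelerating gradient. A quick sketch of that arithmetic:

```python
# Distance needed to reach 100 GeV at two accelerating gradients.
# An electron gains 1 eV of energy per volt of potential it falls through.
target_ev = 100e9               # 100 GeV, expressed in electronvolts

rf_gradient = 100e6             # 100 MV/m, conventional RF cavities
wakefield_gradient = 100e9      # 100 GV/m, plasma wakefield

rf_length_m = target_ev / rf_gradient               # 1,000 m
wakefield_length_m = target_ev / wakefield_gradient  # 1 m

print(f"RF cavities: {rf_length_m:.0f} m; wakefield: {wakefield_length_m:.0f} m")
```

A kilometre of tunnel versus a metre of plasma – the whole case for the tabletop accelerator in one division.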

There are many ways to push these limits – and history suggests it is almost imperative that we do. Could the leap in accelerating gradient, by a factor of 100 to 1,000, break the slope of the Livingston plot?

Could the leap in accelerating gradient from RF cavities to plasma wakefields break the Livingston plot? Source: AIP

In the UMD experiment, scientists shot a laser pulse into a hydrogen plasma. The pulse induced a wakefield, which trailing electrons surfed and were accelerated by. To generate the same wakefield while transferring less energy from the laser, the team made the plasma denser instead, capitalising on an effect called self-focusing.

As a laser’s electromagnetic field travels through the plasma, it makes nearby electrons wiggle back and forth as its waves pass. The more intense waves near the pulse’s centre make the electrons there wiggle harder. Since Einstein’s theory of relativity has faster-moving objects gaining mass, the harder-wiggling electrons become heavier and respond more sluggishly – which changes the plasma’s refractive index around the pulse’s centre and focuses the light, as a lens would. The denser the plasma, the stronger the self-focusing – a principle that lets weaker laser pulses in a denser plasma sustain a wakefield as strong as stronger pulses would in a thinner one.

The UMD team increased the density of the hydrogen gas from which the plasma is made by some 20x and found that electrons could be accelerated to 2-12 MeV using 10-50 millijoule laser pulses. The scientists also found that at high densities, the amplitude of the plasma wave driven by the laser pulse grows to the point where it traps some electrons from the plasma itself and continuously accelerates them to relativistic energies. This obviates the need to inject trailing electrons separately and increases the efficiency of acceleration.

But as with all accelerators, there are limitations. Two specific to the UMD experiment are:

  • If the plasma density goes beyond a critical threshold (1.19 × 10²⁰ electrons/cm³) and the laser pulse is too powerful (>50 mJ), the electrons are accelerated more by the laser shot directly than by the plasma wakefield. These numbers define an upper limit to the advantage of relativistic self-focusing.
  • The accelerated electrons slowly drift apart (in the UMD case, by at most 250 milliradians) and so require separate structures to keep the beam focused – especially if it is to be used for biomedical purposes. (In 2014, physicists from the Lawrence Berkeley National Lab resolved this problem by channelling the plasma through a 9-cm long capillary waveguide.)

There is another way lasers can be used to build an accelerator. In 2013, physicists from Stanford University devised a small glass channel, 0.075-0.1 micrometres high, with nanoscale ridges etched into its floor. When they shone infrared light of a wavelength twice the channel’s height across it, the EM field of the light wiggled the electrons back and forth – but the ridges were cut such that electrons passing over the crests would accelerate more than they would decelerate when passing over the troughs. Like this, the team achieved an accelerating gradient of 300 MeV/m. An accelerator built this way is only a few millimetres long and devoid of any plasma, which is difficult to handle.

At the same time, this method shares a shortcoming with the (non-laser driven) plasma wakefield accelerator: both require the electrons to be pre-accelerated before injection, which means room-sized pre-accelerators are still in the picture.

Physical size is an important aspect of particle accelerators because, the way we’re building them, the higher-energy ones are massive. The LHC currently collides particles at 13 TeV (1 TeV = 1 million MeV) in a 27-km long underground tunnel running beneath the France-Switzerland border. The planned Circular Electron-Positron Collider (CEPC) in China envisages a 100-TeV accelerator in a 54.7-km long ring (both the LHC and the CEPC involve pre-accelerators that are themselves quite big – though not as big as the final-stage ring). The International Linear Collider (ILC) will comprise a straight tube over 30 km long, instead of a ring, to achieve collision energies of 500 GeV to 1 TeV. In contrast, Georg Korn suggested in APS Physics in December 2014 that a hundred 10-GeV electron-acceleration modules could be lined up facing a hundred 10-GeV positron-acceleration modules to make a collider that could compete with the ILC – from atop a table.

In all these cases, the net energy gain per unit distance travelled by the accelerated particle is low compared to wakefield accelerators: around 250 MV/m versus 10-100 GV/m. This is the physical difference that translates into a great reduction in cost (from billions of dollars to thousands), which in turn stands to make particle accelerators accessible to a wider range of people. As of 2014, there were at least 30,000 particle accelerators around the world – up from 26,000 in 2010, according to a Physics Today census. More importantly, the census estimated that almost half of these accelerators were being used in medicine, such as for radiotherapy and imaging research, while the really high-energy machines (>1 GeV) used for physics research numbered a little over 100.

These are encouraging numbers for India, which imports 75% of its medical imaging equipment at a cost of more than Rs. 30,000 crore a year (as of 2015). They are also encouraging for developing nations in general that want to get in on experimental high-energy physics – innovations in which power a variety of applications, from cleaning coal to detecting WMDs – and to expand their medical imaging capabilities as well.

Featured image credit: digital cat/Flickr, CC BY 2.0.

Is the universe as we know it stable?

The anthropic principle has been a cornerstone of fundamental physics, used by some physicists to console themselves about why the universe is the way it is: tightly sandwiched between two dangerous states. If the laws and equations that define it had slipped just one way or the other during its formation, humans wouldn’t have existed to observe the universe, much less conceive the anthropic principle. At least, this is the weak anthropic principle – we’re talking about the anthropic principle because the universe allowed humans to exist; otherwise we wouldn’t be here at all. The strong anthropic principle holds that the universe is duty-bound to conceive life, and that if another universe were created along the same lines as ours, it too would conceive intelligent life, give or take a few billion years.

The principle has been repeatedly resorted to because physicists are at that juncture in history where they’re not able to tell why some things are the way they are and – worse – why some things aren’t the way they should be. The latest significant addition to this list, and an illustrative example, is the Higgs boson, whose discovery was announced on July 4, 2012, at CERN’s supercollider, the LHC. The Higgs boson’s existence was predicted in 1964 by three independently working groups of physicists. In the intervening decades, from hypothesis to discovery, physicists spent a long time trying to pin down its mass. The now-shut American particle accelerator Tevatron helped speed up this process, using repeated measurements to steadily narrow the range of masses in which the boson could lie. It was eventually found at the LHC with a mass of 125.6 GeV (a proton weighs about 0.94 GeV).

It was a great moment: the discovery of a particle that completed the Standard Model, the group of theories and equations that governs the behaviour of fundamental particles. It was also a problematic moment for some, who had expected the Higgs boson to weigh much, much more. The mass of the Higgs boson is connected to the energy of the universe (because the Higgs field that generates the boson pervades the whole universe), so by some calculations 125.6 GeV implied that the universe should be the size of a football. Clearly it isn’t, so physicists got the sense that something was missing from the Standard Model – something that would explain the discrepancy. (In another example, physicists have used the discovery of the Higgs boson to explain why there is more matter than antimatter in the universe though both were created in equal amounts.)

The energy of the Higgs field also contributes to the scalar potential of the universe. A good analogy lies with the electrons in an atom. Sometimes, an energised electron sees fit to shed its extra energy as a photon and jump to a lower-energy state. At other times, a lower-energy electron can absorb energy and jump to a higher state, a phenomenon commonly observed in metals (where the higher-energy electrons conduct electricity). Just as the electrons can have different energies, the scalar potential defines a sort of energy that the universe as a whole can have. It is calculated from the properties of all the fundamental fields of nature: strong nuclear, weak nuclear, electromagnetic, gravitational and Higgs.

For the last 13.8 billion years, the universe has existed in a particular, unchanged way, so we know that it sits at a minimum of the scalar potential. The apt image is of a mountain range, like so:

[Image: a mountain range of peaks and valleys, standing in for the maxima and minima of the scalar potential]

The point is to figure out whether the universe is lying at the deepest point of the potential – the global minimum – or at a point that’s the deepest in a given range but not the deepest overall – a local minimum. This matters for two reasons. First: the universe will always, always try to get to the lowest energy state. Second: quantum mechanics. Under the principles of classical mechanics, for the universe to get from a local minimum to the global minimum, its energy would first have to be increased so it could surmount the intervening peaks. But under the principles of quantum mechanics, the universe can tunnel through the intervening peaks and sink into the global minimum. Such tunnelling can occur only if the universe currently sits in a local minimum.
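
The local-versus-global distinction can be made concrete with a toy one-dimensional potential. The polynomial below is invented purely for illustration – the real scalar potential is a far more complicated, multi-dimensional function – but scanning it for valleys shows the two kinds of minimum:

```python
# A toy one-dimensional "scalar potential" with two valleys of different
# depths. The polynomial is invented for illustration only; it is not the
# Standard Model's actual potential.
def potential(x):
    return x**4 - 4 * x**3 + 2 * x**2 + 3 * x

# Scan a range of field values and flag every local minimum: a sampled
# point lower than both of its neighbours.
xs = [i / 1000 for i in range(-1500, 4001)]
vs = [potential(x) for x in xs]
minima = [
    (xs[i], vs[i])
    for i in range(1, len(xs) - 1)
    if vs[i] < vs[i - 1] and vs[i] < vs[i + 1]
]

global_minimum = min(minima, key=lambda m: m[1])
for x, v in minima:
    kind = "global" if (x, v) == global_minimum else "local"
    print(f"{kind} minimum near x = {x:.2f} (V = {v:.2f})")
```

A universe sitting in the shallower valley is classically stuck – it would have to climb over the peak between the two – but quantum mechanically it can tunnel straight through to the deeper one.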

To find out, physicists try to calculate the shape of the scalar potential in its entirety. This is an intensely complicated mathematical process that takes a lot of computing power, but that’s beside the point. The biggest problem is that we don’t know enough about the fundamental forces, and we don’t know anything about what else could be out there at higher energies. For example, it took an accelerator capable of boosting particles to 3,500 GeV and smashing them head-on to discover a particle weighing 125 GeV. Discovering anything heavier – i.e. more energetic – will take ever more powerful colliders, costing many billions of dollars to build.

Almost sadistically, theoretical physicists have predicted that there exists an energy at which the gravitational force unifies with the strong nuclear, weak nuclear and electromagnetic forces into one indistinct force: the Planck scale, 12,200,000,000,000,000,000 GeV. We don’t know the mechanism of this unification, and its rules are among the most sought-after in high-energy physics. Last week, Chinese physicists announced that they were planning to build a supercollider bigger than the LHC, the Circular Electron-Positron Collider (CEPC), starting 2020. The CEPC is slated to collide particles at 100,000 GeV, more than 7x the energy at which the LHC collides particles now, in a ring 54.7 km long. Given the way we build our most powerful particle accelerators, one able to smash particles together at the Planck scale would have to be as large as the Milky Way.

(Note: 12,200,000,000,000,000,000 GeV is the energy produced when about 57.2 litres of gasoline are burnt – not a lot of energy at all. The trick is to contain that much energy in a particle as big as the proton, whose diameter is 0.000000000000001 m. That is, the energy density is of the order of 10⁶⁴ GeV/m³.)
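
These figures can be sanity-checked in a few lines (the 34.2 MJ/litre energy content of gasoline is an assumed typical value, not taken from the article):

```python
import math

# Sanity-check the Planck-scale comparisons in the note above.
planck_gev = 1.22e19           # Planck energy, in GeV
j_per_gev = 1.602e-10          # joules per GeV

planck_j = planck_gev * j_per_gev              # ~1.95e9 J

# Equivalent litres of gasoline. 34.2 MJ/litre is an assumed typical
# energy content; the article doesn't state the value it used.
litres = planck_j / 34.2e6                     # ~57 litres

# Energy density if that energy sat inside a proton-sized sphere.
diameter_m = 1e-15
volume_m3 = (4 / 3) * math.pi * (diameter_m / 2) ** 3
density_gev_per_m3 = planck_gev / volume_m3    # ~1e64 GeV/m^3

print(f"{planck_j:.2e} J, {litres:.1f} litres, ~1e{math.log10(density_gev_per_m3):.0f} GeV/m^3")
```

About two gigajoules, a car's tankful of fuel, and an energy density some sixty-four orders of magnitude beyond anything familiar.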

We also don’t know how the Standard Model scales from the energy levels it currently inhabits up to the Planck scale. If it changes significantly along the way, then the forces’ contributions to the scalar potential will change too. Physicists think that if any new bosons – essentially new forces – appear along the way, the equations defining the scalar potential – our picture of the peaks and valleys – will themselves have to be changed. This is why physicists want to arrive at more precise values of, say, the mass of the Higgs boson.

Or the mass of the top quark. While force-carrying particles are called bosons, matter-forming particles are called fermions. Quarks are a type of fermion; together with force-carriers called gluons, they make up protons and neutrons. There are six kinds, or flavours, of quark, and the heaviest is the top quark – in fact, the heaviest known fundamental particle. The top quark’s mass is particularly important. All fundamental particles get their mass by interacting with the Higgs field – the stronger the interaction, the higher the mass generated. So a precise measurement of the top quark’s mass pins down the Higgs field’s strongest level of interaction, or “loudest conversation”, with a fundamental particle, which in turn contributes to the scalar potential.
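
In the Standard Model, that level of interaction has a number attached to it: the Yukawa coupling, y = √2·m/v, where v ≈ 246 GeV is the Higgs field's vacuum expectation value. A quick calculation with standard mass values (not figures quoted in the article) shows why the top quark stands out:

```python
import math

# Yukawa coupling y = sqrt(2) * m / v: a measure of how strongly the
# Higgs field "converses" with each fermion. Masses in GeV; v is the
# Higgs field's vacuum expectation value. These are standard values,
# not figures quoted in the article.
higgs_vev = 246.0
masses = {"top quark": 173.0, "bottom quark": 4.18, "electron": 0.000511}

couplings = {name: math.sqrt(2) * m / higgs_vev for name, m in masses.items()}
for name, y in couplings.items():
    print(f"{name:13s} y = {y:.6f}")
```

The top quark's coupling comes out close to 1 – orders of magnitude above every other fermion's – which is why its mass weighs so heavily on the shape of the scalar potential.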

On November 9, a group of physicists from Russia published the results of an advanced scalar-potential calculation to find out where the universe really lies: in a local minimum or in the stable global minimum. They found that it is in a local minimum. The calculations were “advanced” because they used the best estimates available for the properties of the various fundamental forces, as well as of the Higgs boson and the top quark, but they’re still not final because those estimates could shift. Hearteningly enough, the physicists also found that if the real values in the universe differed from our best estimates by just 1.3 standard deviations, our universe would turn out to be in the global minimum and truly stable. In other words, the universe is situated in a shallow valley on one side of a peak of the scalar potential, and right on the other side lies the deepest valley of all, in which it could sit for ever.

If the Russian group’s calculations are right (though there’s no quick way for us to know if they aren’t), then there could be a distant future – in human terms – where the universe tunnels through from the local to the global minimum and enters a new state. If we’ve assumed that the laws and forces of nature haven’t changed in the last 13.8 billion years, then we can also assume that in the fully stable state, these laws and forces could change in ways we can’t predict now. The changes would sweep over from one part of the universe into others at the speed of light, like a shockwave, redefining all the laws that let us exist. One moment we’d be around and gone the next. For all we know, that breadth of 1.3 standard deviations between our measurements of particles’ and forces’ properties and their true values could be the breath of our lives.

The Wire
November 11, 2015