The strange beauty of Planck units

What does it mean to say that the speed of light is 1?

We know the speed of light in the vacuum of space to be 299,792,458 m/s – or about 300,000 km/s. It’s a speed that’s very hard for the human brain to visualise. In fact, it’s so fast as to be practically instantaneous for the human experience. In some contexts it might be reassuring to remember the 300,000 km/s figure, such as when you’re a theoretical physicist working on quantum physics problems and you need to remember that reality is often (but not always) local, meaning that when a force appears to transmit its effects on its surroundings really rapidly, the transmission is still limited by the speed of light. (‘Not always’ because quantum entanglement appears to break this rule.)

Another way to understand the speed of light is as an expression of proportionality. If another entity, which we’ll call X, can move at best at 150,000 km/s in the vacuum of space, we can say the speed of light is 2x the speed of X in this medium. Let’s say that instead of km/s we adopt a unit of speed called kb/s, where b stands for bloop: 1 bloop = 79 km. So the speed of light in vacuum becomes about 3,797 kb/s and the speed of X in vacuum becomes about 1,899 kb/s. The proportionality between the two entities – the speeds of light and X in vacuum – you’ll notice is still 2x.

Let’s change things up a bit more and express each speed as the nth power of 2. n = 18 comes closest for light and n = 17 for X. (The exact answer in each case would be log s/log 2, where s is the speed of each entity.) The ratio between the two is now nowhere near 2. The reason is that we switched from linear units to logarithmic units.
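To make the arithmetic concrete, here’s a minimal Python sketch of the two cases (the bloop and all the figures are just the made-up numbers from above):

```python
import math

C_KM_S = 300_000.0   # speed of light, rounded, in km/s
X_KM_S = 150_000.0   # the hypothetical entity X, in km/s
KM_PER_BLOOP = 79.0  # 1 bloop = 79 km

# Linear change of units: divide both speeds by the same factor.
c_bloop = C_KM_S / KM_PER_BLOOP  # ~3,797 kb/s
x_bloop = X_KM_S / KM_PER_BLOOP  # ~1,899 kb/s
print(c_bloop / x_bloop)         # 2.0 - the proportionality survives

# Logarithmic units: express each speed as the nearest power of 2.
n_c = round(math.log(C_KM_S, 2))  # 18
n_x = round(math.log(X_KM_S, 2))  # 17
print(n_c / n_x)                  # ~1.06 - no longer 2
```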

This example shows how even our SI units – which allow us to make sense of how much a mile is relative to a kilometre and how much a solar year is in terms of seconds, and thus standardise our sense of various dimensions – aren’t universally standard. The SI units have been defined keeping the human experience of reality in mind – as opposed to, say, those of tardigrades or blue whales.

As it happens, when you’re a theoretical physicist, the human experience isn’t very helpful as you’re trying to understand the vast scales on which gravity operates and the infinitesimal realm of quantum phenomena. Instead, physicists set aside their physical experiences and turn to the universal physical constants: numbers whose values are constant in space and time, and which together control the physical properties of our universe.

By combining only four universal physical constants, the German physicist Max Planck found in 1899 that he could express certain values of length, mass, time and temperature in units unrelated to the human experience. Put another way, these are characteristic values of distance, mass, duration and temperature that can be expressed using nothing but the constants of our universe. The four constants are:

  • G, the gravitational constant (roughly speaking, defines the strength of the gravitational force between two massive bodies)
  • c, the speed of light in vacuum
  • h, the Planck constant (the constant of proportionality between a photon’s energy and frequency)
  • kB, the Boltzmann constant (the constant of proportionality between the average kinetic energy of a group of particles and the temperature of the group)

Based on Planck’s idea and calculations, physicists have been able to determine the following:

[Table: the Planck units of length, mass, time and temperature, expressed in terms of these constants. Credit: Planck units/Wikipedia]

(Note here that the Planck constant, h, has been replaced with the reduced Planck constant ħ, which is h divided by 2π.)
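For concreteness, here’s a small Python sketch that derives these quantities from the four constants; the constant values are CODATA figures typed in to a few digits, so the outputs are approximate:

```python
import math

G    = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8     # speed of light in vacuum, m/s
hbar = 1.054571817e-34  # reduced Planck constant, J s
kB   = 1.380649e-23     # Boltzmann constant, J/K

l_P = math.sqrt(hbar * G / c**3)            # Planck length: ~1.616e-35 m
t_P = math.sqrt(hbar * G / c**5)            # Planck time: ~5.391e-44 s
m_P = math.sqrt(hbar * c / G)               # Planck mass: ~2.176e-8 kg
T_P = math.sqrt(hbar * c**5 / (G * kB**2))  # Planck temperature: ~1.417e32 K

# The speed of light in Planck lengths per Planck time is 1 (up to rounding).
print((l_P / t_P) / c)
```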

When the speed of light is expressed in these Planck units, it comes out to a value of exactly 1 – i.e. one Planck length (1.616255×10⁻³⁵ m) per Planck time (5.391247×10⁻⁴⁴ s). The same goes for the values of the gravitational constant, the Boltzmann constant and the reduced Planck constant.

Remember that units are expressions of proportionality. Because the Planck units are all expressed in terms of universal physical constants, they give us a better sense of what is and isn’t proportionate. To borrow Frank Wilczek’s example, we know that the binding energy due to gravity contributes only ~0.000000000000000000000000000000000000003% of a proton’s mass; the rest comes from its constituent particles and their energy fields. Why this enormous disparity? We don’t know. More importantly, which entity has the onus of providing an explanation for why it’s so out of proportion: gravity or the proton’s mass?

The answer is in the Planck units, in which the value of the gravitational constant G is the desired 1, whereas the proton’s mass is the one out of proportion – a ridiculously small ~10⁻¹⁹ in Planck masses. So the onus is on the proton to explain why it’s so light, rather than on gravity to explain why it acts so feebly on the proton.
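A quick back-of-the-envelope check of that figure, with the proton mass typed in from standard tables:

```python
import math

G    = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8     # speed of light in vacuum, m/s
hbar = 1.054571817e-34  # reduced Planck constant, J s

m_planck = math.sqrt(hbar * c / G)  # ~2.176e-8 kg
m_proton = 1.67262192e-27           # proton mass in kg

# The proton's mass in Planck units: ~7.7e-20, i.e. roughly 10^-19.
print(m_proton / m_planck)
```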

More broadly, the Planck units are our universe’s “truly fundamental” units. All other units – of length, mass, time, temperature, etc. – ought to be expressible in terms of the Planck units. If they can’t be, physicists will take that as a sign that their calculations are incomplete or wrong, or that there’s a part of physics they haven’t discovered yet. The use of Planck units can reveal such sources of tension.

For example, since our current theories of physics are founded on the universal physical constants, the theories can’t describe reality beyond the scale described by the Planck units. This is why we don’t really know what happened in the first 10⁻⁴³ seconds after the Big Bang (or, for that matter, during any event that lasts for a shorter duration), how matter behaves beyond the Planck temperature or what gravity feels like at distances shorter than 10⁻³⁵ m.

In fact, just as gravity dominates the human experience of reality while quantum physics dominates the microscopic, physicists expect that theories of quantum gravity (like string theory) will dominate the experience of reality at the Planck length. What will this reality look like? We don’t know, but we know that it’s a good question.

Other helpful sources:

A gear-train for particle physics

It has come under scrutiny at various times from multiple prominent physicists and thinkers, but it’s not hard to see why, when the idea of ‘grand unification’ was first set out, it seemed plausible to so many. It was first seriously considered about four decades ago, shortly after physicists had realised that two of the four fundamental forces of nature were in fact a single unified force if you ramped up the energy at which they acted (electromagnetic + weak = electroweak). The thought that followed was simply logical: what if, at some extremely high energy (like that of the Big Bang), all four forces unified into one? This was 1974.

There has been no direct evidence of such grand unification yet. Physicists don’t know how the electroweak force will unify with the strong nuclear force – let alone gravity, a problem that actually birthed one of the most powerful mathematical tools yet devised in an attempt to solve it. Nonetheless, they think they know the energy at which such grand unification should occur, if it does: the Planck scale, around 10¹⁹ GeV. This is about as much energy as is contained in a full tank of petrol, but it’s stupefyingly large when you have to pack all of it into a particle that’s 10⁻¹⁵ metres wide.

This is where particle accelerators come in. The most powerful of them, the Large Hadron Collider (LHC), uses powerful magnetic fields to accelerate protons to close to light-speed, at which point their energy approaches about 7,000 GeV. But the Planck energy is still about a million billion times higher – some 15 orders of magnitude – which means it’s not something we might ever be able to attain on Earth. Nonetheless, physicists’ theories suggest that that’s where all of our physical laws should be created, where the commandments that govern all that exists should be written.
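Here’s the same comparison in plain numbers – a rough sketch, using a Planck energy of about 1.22×10¹⁹ GeV and an LHC proton energy of about 7,000 GeV:

```python
E_PLANCK_GEV = 1.22e19          # Planck energy, ~1.22 x 10^19 GeV
E_LHC_GEV    = 7.0e3            # energy of one LHC proton, ~7,000 GeV
GEV_TO_J     = 1.602176634e-10  # joules per GeV

# The shortfall: ~1.7e15, i.e. about 15 orders of magnitude.
print(E_PLANCK_GEV / E_LHC_GEV)

# The Planck energy in everyday terms: ~2e9 J, roughly a full tank of petrol
# (petrol stores ~34 MJ per litre, so ~2 GJ is a few tens of litres).
print(E_PLANCK_GEV * GEV_TO_J)
```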

… Or is it?

There are many outstanding problems in particle physics, and physicists are desperate for a solution: they have to find something wrong with what they’ve already done, something new, or a way to reinterpret what they already know. The clockwork theory is of the third kind – and its reinterpretation begins by asking physicists to drop the idea that new physics is born only at the Planck scale. For example, it suggests that the effects of quantum gravity (a quantum-mechanical description of gravity) needn’t become apparent only at the Planck scale but could show up at lower energies. But even if it then goes on to solve some problems, the theory threatens to present a new one. Consider: if it’s true that new physics isn’t born at the highest energy possible, then wouldn’t the choice of any lower energy be arbitrary? And if nothing else, nature is not arbitrary.

To its credit, clockwork sidesteps this issue by simply not trying to find ‘special’ energies at which ‘important’ things happen. Its basic premise is that the forces of nature are like a set of interlocking gears moving against each other, transmitting energy – or rather potential – from one wheel to the next, magnifying or diminishing the way fundamental particles behave in different contexts. Its supporters at CERN and elsewhere think it can be used to explain some annoying gaps between theory and experiment in particle physics, particularly the naturalness problem.

Before the Higgs boson was discovered, physicists had predicted, based on the properties of other particles and forces, that its mass would be very high – so high that the universe would have to be “the size of a football”, which is clearly not the case. And indeed, when the boson’s discovery was confirmed at CERN in March 2013, its measured mass turned out to be far lower. So why is the Higgs boson’s mass so low, so unnaturally low? Scientists have put forward many new theories that try to solve this problem, but their solutions often require the existence of other, hitherto undiscovered particles.

Clockwork’s solution is a way in which the Higgs boson’s interaction with gravity – or rather with gravity’s associated energy – is mediated by a chain of effects described in quantum field theory that tamp down the boson’s mass. In technical parlance, the boson’s mass becomes ‘screened’. An explanation that is both physical and accurate is hard to draw up because of the various abstractions involved. So, as Université Libre de Bruxelles physicist Daniele Teresi suggests, imagine this series: Χ = 0.5 × 0.5 × 0.5 × 0.5 × … × 0.5. Even if each step reduces Χ’s value by only a half, it is already an eighth after three steps; after four, a sixteenth. The effect quickly becomes drastic because it is exponential.
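A toy loop showing how fast that step-by-step suppression bites – this is only the arithmetic of Teresi’s series, not the actual field-theory construction:

```python
# Multiplying by 0.5 at each step suppresses a quantity exponentially:
# after n steps it is 2^-n of what it started as.
x = 1.0
for step in range(1, 11):
    x *= 0.5
    print(step, x)
# 3 steps -> 1/8, 4 steps -> 1/16, 10 steps -> ~0.001
```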

And the theory provides a mathematical toolbox that allows all this to be achieved without the addition of new particles. This is advantageous because it makes clockwork relatively more elegant than another theory that seeks to solve the naturalness problem: supersymmetry, or SUSY for short. Physicists also like SUSY because it allows for a large energy hierarchy: a distribution of particles and processes at energies between electroweak unification and grand unification, instead of leaving the region bizarrely devoid of action the way the Standard Model does. But then SUSY predicts the existence of 17 new particles, none of which have been detected yet.

What’s more, as Matthew McCullough, one of clockwork’s developers, showed at an ongoing conference in Italy, its solutions for a stationary particle in four dimensions exhibit conceptual similarities to Maxwell’s equations for an electromagnetic wave in a conductor. The existence of such analogues is reassuring because it recalls nature’s tendency to be guided by common principles in diverse contexts.

This isn’t to say clockwork theory is the answer. As physicist Ben Allanach has written, it is a “new toy” and physicists are still playing with it, applying it to different problems. It’s just that if it does have an answer to the naturalness problem – as well as to the question of why dark matter doesn’t decay, for example – it will be notable. But is this enough: to say that clockwork theory mops up the maths cleanly in a bunch of problems? How do we make sure that this is how nature actually works?

McCullough thinks there’s one way, using the LHC. Very simplistically: clockwork theory induces fluctuations in the probabilities with which pairs of high-energy photons are created at certain energies at the LHC. These should be visible as wavy squiggles in a plot with energy on the x-axis and events on the y-axis. If these plots can be obtained and analysed, and the results agree with clockwork’s predictions, then we will have confirmed what McCullough calls an “irreducible prediction of clockwork gravity” – in this case, of using the theory to solve the naturalness problem.

To recap: no free parameters (i.e. no new particles), conceptual elegance and familiarity, and finally a concrete and unique prediction. No wonder Allanach thinks clockwork theory inhabits fertile ground. On the other hand, SUSY’s prospects have been bleak since at least 2013 (if not earlier) – and it is one of the more favoured theories among physicists for explaining physics beyond the Standard Model, physics we haven’t observed yet but generally believe exists. At the same time, and it bears reiterating, clockwork theory will also have to face down a host of challenges before it can be declared a definitive success. Tick tock tick tock tick tock