The not-so-obvious obvious

If your job requires you to pore through a dozen or two scientific papers every month – as mine does – you'll start to notice a few every now and then couching a somewhat well-known fact in study-speak. I don't mean scientific-speak: there's nothing wrong with trying to understand natural phenomena in the formalised language of science. However, there seems to be something iffy – often with humorous effect – about a statement like the following: "cutting emissions of ozone-forming gases offers a 'unique opportunity' to create a 'natural climate solution'"1 (source). Well… d'uh. This is study-speak: rephrasing mostly self-evident knowledge or truisms in unnecessarily formalised language, not infrequently in the style employed in research papers, without adding any new information but often including an element of doubt where there is likely to be none.

1. Caveat: These words were copied from a press release, so this could have been a case of the person composing the release being unaware of the study's real significance. However, the words within single quotes are copied from the corresponding paper itself. That said, there have been some truly hilarious efforts to make sense of the obvious. For examples, consider many of the winners of the Ig Nobel Prizes.

Of course, it always pays to be cautious, but where do you draw the line between a result that tells us something new and one that exists simply because it is required to initiate a new course of action? For example, the Univ. of Exeter study whose press release discussed the effect of "ozone-forming gases" on the climate recommends cutting emissions of substances that combine in the lower atmosphere to form ozone, a compound form of oxygen that is harmful to both humans and plants. But this is as non-"unique" an idea as the corresponding solution that arises (of letting plants live better) is "natural".

However, it’s possible the study’s authors needed to quantify these emissions to understand the extent to which ambient ozone concentration interferes with our climatic goals, and to use their data to inform the design and implementation of corresponding interventions. Such outcomes aren’t always obvious but they are there – often because the necessarily incremental nature of most scientific research can cut both ways. The pursuit of the obvious isn’t always as straightforward as one might believe.

The Univ. of Exeter group may have accumulated sufficient and sufficiently significant evidence to support their conclusion, allowing themselves as well as others to build towards newer, and hopefully more novel, ideas. A ladder must have rungs at the bottom irrespective of how tall it is. But when the incremental sword cuts the other way, often due to perverse incentives that require scientists to publish as many papers as possible to secure professional success, things can get pretty nasty.

For example, the Cornell University consumer behaviour researcher Brian Wansink was known to advise his students to “slice” the data obtained from a few experiments in as many different ways as possible in search of interesting patterns. Many of the papers he published were later found to contain numerous irreproducible conclusions – i.e. Wansink had searched so hard for patterns that he’d found quite a few even when they really weren’t there. As the British economist Ronald Coase said, “If you torture the data long enough, it will confess to anything.”

The dark side of incremental research, and the virtue of incremental research done right, both stem from the fact that it is deceptively difficult to ascertain the truth of a finding when the strength of that finding is expected to be either so small that it tests the very notion of significance, or so large – so pronounced – that it transcends intuitive comprehension.

For an example of the former: among particle physicists, a result qualifies as 'fact' only if the chances of it being a fluke are 1 in 3.5 million or lower. So the Large Hadron Collider (LHC), which was built to discover the Higgs boson, had to perform at least 3.5 million proton-proton collisions capable of producing a Higgs boson – collisions its detectors could observe and its computers could analyse – to attain this significance.
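For a sense of where that 1-in-3.5-million figure comes from: it is the one-sided tail probability of a five-standard-deviation ('5 sigma') fluctuation of a normal distribution, the conventional discovery threshold in particle physics. A minimal sketch of the arithmetic in Python (added here as an illustration; the variable names are mine):

```python
import math

# One-sided tail probability of a 5-sigma fluctuation
# in a standard normal distribution.
sigma = 5.0
p = 0.5 * math.erfc(sigma / math.sqrt(2))

print(f"p-value at {sigma} sigma: {p:.3g}")   # ~2.87e-07
print(f"i.e. roughly 1 in {1 / p:,.0f}")      # ~1 in 3.5 million
```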

But while protons are available abundantly and the LHC can produce collisions at a rate of roughly a billion per second, imagine undertaking an experiment that requires human participants to perform actions according to certain protocols. It's never going to be possible to enrol billions of them for millions of hours to arrive at a rock-solid result. In such cases, researchers design experiments based on very specific questions, and such that the experimental protocols suppress, or even eliminate, interference, sources of doubt and confounding variables, and accentuate the effects of whatever action, decision or influence is being evaluated.

Such experiments often also require the use of sophisticated – but nonetheless well-understood – statistical methods to further eliminate the effects of undesirable phenomena from the data and, to the extent possible, leave behind information of good-enough quality to support or reject the hypotheses. In the course of navigating this winding path from observation to discovery, researchers are susceptible to, say, misapplying a technique, overlooking a confounder or – like Wansink – overanalysing the data so much that a weak effect masquerades as a strong one but only because it’s been submerged in a sea of even weaker effects.

Similar problems arise in experiments that require the use of models based on very large datasets, where researchers need to determine the relative contribution of each of thousands of causes on a given effect. The Univ. of Exeter study that determined ozone concentration in the lower atmosphere due to surface sources of different gases contains an example. The authors write in their paper (emphasis added):

We have provided the first assessment of the quantitative benefits to global and regional land ecosystem health from halving air pollutant emissions in the major source sectors. … Future large-scale changes in land cover [such as] conversion of forests to crops and/or afforestation, would alter the results. While we provide an evaluation of uncertainty based on the low and high ozone sensitivity parameters, there are several other uncertainties in the ozone damage model when applied at large-scale. More observations across a wider range of ozone concentrations and plant species are needed to improve the robustness of the results.

In effect, their data could be modified in future to reflect new information and/or methods, but in the meantime, and far from being a silly attempt at translating a claim into jargon-laden language, the study eliminates doubt to the extent possible with existing data and modelling techniques to ascertain something. And even in cases where this something is well known or already well understood, the validation of its existence could also serve to validate the methods the researchers employed to (re)discover it and – as mentioned before – generate data that is more likely to motivate political action than, say, demands from non-experts.

In fact, the American mathematician Marc Abrahams, known much more for founding and awarding the Ig Nobel Prizes, identified this purpose of research as one of three possible reasons why people might try to "quantify the obvious" (source). The other two are being unaware of the obvious and, of course, wanting to disprove the obvious.

Science v. tech, à la Cixin Liu

A fascinating observation by Cixin Liu in an interview in Public Books, given to John Plotz and translated by Pu Wang (numbers added):

… technology precedes science. (1) Way before the rise of modern science, there were so many technologies, so many technological innovations. But today technology is deeply embedded in the development of science. Basically, in our contemporary world, science sets a glass ceiling for technology. The degree of technological development is predetermined by the advances of science. (2) … What is remarkably interesting is how technology becomes so interconnected with science. In the ancient Greek world, science develops out of logic and reason. There is no reliance on technology. The big game changer is Galileo’s method of doing experiments in order to prove a theory and then putting theory back into experimentation. After Galileo, science had to rely on technology. … Today, the frontiers of physics are totally conditioned on the developments of technology. This is unprecedented. (3)

Perhaps an archaeology or palaeontology enthusiast might have regular chances to see the word ‘technology’ used to refer to Stone Age tools, Bronze Age pots and pans, etc. but I have almost always encountered these objects only as ‘relics’ or such in the popular literature. It’s easy to forget (1) because we have become so accustomed to thinking of technology as pieces of machines with complex electrical, electronic, hydraulic, motive, etc. components. I’m unsure of the extent to which this is an expression of my own ignorance but I’m convinced that our contemporary view of and use of technology, together with the fetishisation of science and engineering education over the humanities and social sciences, also plays a hand in maintaining this ignorance.

The expression of (2) is also quite uncommon, especially in India, where the government’s overbearing preference for applied research has undermined blue-sky studies in favour of already-translated technologies with obvious commercial and developmental advantages. So when I think of ‘science and technology’ as a body of knowledge about various features of the natural universe, I immediately think of science as the long-ranging, exploratory exercise that lays the railway tracks into the future that the train of technology can later ride. Ergo, less glass ceiling and predetermination, and more springboard and liberation. Cixin’s next words offer the requisite elucidatory context: advances in particle physics are currently limited by the size of the particle collider we can build.

(3) However, he may not be able to justify his view beyond specific examples simply because – to borrow the words of a theoretical physicist from many years ago, that they "require only a pen and paper to work" – it is possible to predict the world at a much lower cost than one would incur to build and study the future.

Plotz subsequently, but thankfully briefly, loses the plot when he asks Cixin whether he thinks mathematics belongs in science, to which Cixin provides a circuitous non-answer that somehow misses the obvious: science's historical preeminence began when natural philosophers began to encode their observations in a build-as-you-go, yet largely self-consistent, mathematical language (my favourite instance is the invention of non-Euclidean geometry, which enabled the theories of relativity). So instead of belonging within one of the two, mathematics is – among other things – better viewed as a bridge.

The symmetry incarnations

This post was originally published on October 6, 2012. I recently rediscovered it and decided to republish it with a few updates.

Geometric symmetry in nature is often a sign of unperturbedness, as if nothing has interfered with a natural process and its effects at each step are simply scaled-up or scaled-down versions of each other. For this reason, symmetry is aesthetically pleasing, and often beautiful. Consider, for instance, faces. Symmetry of facial features about the central vertical axis is often read as beauty – not just by humans but also by monkeys.

This is only one of the many forms in which symmetry manifests. When it involves geometric features, it's a case of geometrical symmetry. When a process occurs similarly both forward and backward in time, it is temporal symmetry. If two entities that don't seem geometrically congruent at first sight rotate, move or scale with similar effects on their forms, it is transformational symmetry. Similar definitions apply to theoretical models, musical progressions, knowledge and many other fields besides.

Symmetry-breaking

One of the first (postulated) instances of symmetry-breaking is said to have occurred during the Big Bang, when the universe was born. A sea of particles was perturbed 13.75 billion years ago by a high-temperature event, setting up ripples in their system, eventually breaking their distribution in such a way that some particles got mass, some charge, some spin, some all of them, and some none of them. This event is known as electroweak symmetry-breaking. Because of the asymmetric properties of the resultant particles, matter as we know it was conceived.

Many invented scientific systems exhibit symmetry in that they allow for the conception of symmetry in the things they make possible. A good example is mathematics. On the real-number line, 0 marks the median. On either side of 0, 1 and -1 are equidistant from 0; 5,000 and -5,000 are equidistant from 0; possibly, ∞ and -∞ are equidistant from 0. Numerically speaking, 1 marks the same amount of something that -1 marks on the other side of 0. Functions built on this system can also behave symmetrically on either side of 0.
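To make "behave symmetrically on either side of 0" concrete, here is the textbook definition of an even function with a worked instance (added purely as an illustration):

```latex
% An even function is mirror-symmetric about 0:
\[
f(-x) = f(x) \quad \text{for all } x,
\qquad \text{e.g. } f(x) = x^{2}:\; f(-3) = 9 = f(3).
\]
```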

To many people, symmetry evokes the image of an object that, when cut in half along a specific axis, results in two objects that are mirror-images of each other. Cover one side of your face and place the other side against a mirror, and what you hope to see is the other half of the face – despite it being a reflection. Interestingly, this technique was used by the neuroscientist V.S. Ramachandran to "cure" the pain amputees felt when they tried to move a limb that wasn't there.

An illustration of V.S. Ramachandran’s mirror-box technique: Lynn Boulanger, an occupational therapy assistant and certified hand therapist, uses mirror therapy to help address phantom pain for Marine Cpl. Anthony McDaniel. Caption and credit: US Navy

Natural symmetry

Symmetry at its best, however, is observed in nature. Consider germination: when a seed grows into a small plant and then into a tree, the seed doesn't experiment with designs. The plant is not designed differently from the small tree, and the small tree is not designed differently from the big tree. If a leaf is given to sprout from the node richest in minerals on the stem, then it will. If a branch is given to sprout from the node richest in minerals on the trunk, then it will. So is mineral-deposition in the arbor symmetric? It should be if the transport of minerals out of the soil and into the tree is radially symmetric. And so forth…

At times, repeated gusts of wind may push the tree to lean one way or another, shielding the leaves against the force and keeping them from shedding. The symmetry is then broken, but no matter. The sprouting of branches from branches, and branches from those branches, and leaves from those branches, all follow the same pattern. This tendency to display an internal symmetry is characterised as fractalisation. A well-known example of a fractal geometry is the Mandelbrot set, shown below.

An illustration of recursive self-similarity in Mandelbrot set. Credit: Cuddlyable3/Wikimedia Commons

If you want to interact with a Mandelbrot set, check out this magnificent visualisation by Paul Neave (defunct now 🙁 ). You can keep zooming in, but at each step, you’ll only see more and more Mandelbrot sets. This set is one of a few exceptional sets that are geometric fractals.
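Since Neave's explorer is gone, here is a minimal sketch of the escape-time test such visualisations are built on (Python, added as an illustration; the iteration cap and the plotting window are arbitrary choices of mine):

```python
# Escape-time test for the Mandelbrot set: c belongs to the set if
# z -> z*z + c, starting from z = 0, never escapes to infinity.
def escapes_after(c, max_iter=100):
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # once |z| > 2 the orbit is guaranteed to diverge
            return n
    return None                  # treated as 'inside the set'

# Crude ASCII rendering of the region [-2, 0.95] x [-1.1, 1.2]
for i in range(24):
    y = 1.2 - i * 0.1
    row = ""
    for j in range(60):
        x = -2 + j * 0.05
        row += "#" if escapes_after(complex(x, y)) is None else " "
    print(row)
```

Zooming in amounts to shrinking the window around a boundary point and rerunning the same loop – the self-similar copies keep appearing.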

Meta-geometry and Mulliken symbols

It seems like geometric symmetry is the most ubiquitous and accessible example to us. Let’s take it one step further and look at the meta-geometry at play when one symmetrical shape is given an extra dimension. For instance, a circle exists in two dimensions; its three-dimensional correspondent is the sphere. Through such an up-scaling, we are ensuring that all the properties of a circle in two dimensions stay intact in three dimensions, and then we are observing what the three-dimensional shape is.

A circle, thus, becomes a sphere. A square becomes a cube. A triangle becomes a tetrahedron. In each case, the 3D shape is said to have been generated by a 2D shape, and each 2D shape is said to be the degenerate of the 3D shape. Further, such a relationship holds between corresponding shapes across many dimensions, with doubly and triply degenerate surfaces also having been defined.

The three-dimensional cube generates the four-dimensional hypercube, a.k.a. a tesseract. Credit: Vitaly Ostrosablin/Wikimedia Commons, CC BY-SA 3.0

There are different kinds of degeneracy, which the physicist Robert S. Mulliken identified and laid out in a set of symbols. These symbols are important because each one defines a degree of freedom that nature possesses while creating entities – including symmetrical ones. So if a natural phenomenon is symmetrical in n dimensions, then the only way it can be symmetrical in n+1 dimensions as well is by transforming through one or more of the degrees of freedom defined by Mulliken.


Symbol – Property
A – symmetric with respect to rotation around the principal rotational axis (one-dimensional representations)
B – anti-symmetric with respect to rotation around the principal rotational axis (one-dimensional representations)
E – degenerate
subscript 1 – symmetric with respect to a vertical mirror plane perpendicular to the principal axis
subscript 2 – anti-symmetric with respect to a vertical mirror plane perpendicular to the principal axis
subscript g – symmetric with respect to a center of symmetry
subscript u – anti-symmetric with respect to a center of symmetry
prime (‘) – symmetric with respect to a mirror plane horizontal to the principal rotational axis
double prime (”) – anti-symmetric with respect to a mirror plane horizontal to the principal rotational axis

Source: LMU Munich


Apart from regulating the perpetuation of symmetry across dimensions, the Mulliken symbols also hint at nature wanting to keep things simple and straightforward. The symbols don’t behave differently for processes moving in different directions, through different dimensions, in different time-periods or in the presence of other objects, etc. The preservation of symmetry by nature is not coincidental. Rather, it is very well-defined.

Anastomosis

Now, if nature desires symmetry, if it is not a haphazard occurrence but one that is well orchestrated if given a chance to be, why don't we see symmetry everywhere? Why is natural symmetry broken? One answer is that it is broken only insofar as it attempts to preserve other symmetries that we cannot observe with the naked eye.

For example, symmetry in the natural order is exemplified by a geological process called anastomosis. This property, commonly seen in quartz crystals in metamorphic regions of Earth's crust, allows mineral veins to form where shearing stresses between layers of rock have resulted in fracturing and faulting. In other terms, geological anastomosis allows materials to be displaced from one location and deposited in another, offsetting large-scale symmetry in favour of the prosperity of microstructures.

More generally, anastomosis is defined as the splitting of a stream of anything only to reunify sometime later. It sounds simple but it is an exceedingly versatile phenomenon, if only because it happens in a variety of environments and for a variety of purposes. For example, consider Gilbreath's conjecture. It states that every row produced by repeatedly applying the forward difference operator – i.e. taking the absolute difference between successive numbers – to the sequence of prime numbers starts with 1. To illustrate:

2 3 5 7 11 13 17 19 23 29 … (prime numbers)

Applying the operator once: 1 2 2 4 2 4 2 4 6 …
Applying the operator twice: 1 0 2 2 2 2 2 2 …
Applying the operator thrice: 1 2 0 0 0 0 0 …
Applying the operator for the fourth time: 1 2 0 0 0 0 …

And so forth.
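A quick way to check the conjecture for the first several primes is to compute the rows directly (a minimal sketch in Python; the list of primes and the number of rows are arbitrary choices of mine):

```python
# Gilbreath's conjecture: repeatedly taking absolute differences of
# consecutive primes gives rows that always begin with 1.
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

row = primes
for step in range(10):
    row = [abs(b - a) for a, b in zip(row, row[1:])]
    print(f"after {step + 1} application(s):", row)
    assert row[0] == 1, "a counterexample to Gilbreath's conjecture!"
```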

If each line of numbers were to be plotted on a graph, moving upwards each time the operator is applied, then a pattern for the zeros emerges, shown below.

The forest of stunted trees, used to gain more insights into Gilbreath’s conjecture. Credit: David Eppstein/Wikimedia Commons

This pattern is called the forest of stunted trees, as if it were an area populated by growing trees with clearings that are always well-bounded triangles. The numbers from one sequence to the next are anastomosing, parting ways only to come close together after every five lines.

Another example is the vein skeleton on a hydrangea leaf. Both the stunted-trees and the hydrangea-vein patterns can be simulated using rule 90, a simple cellular automaton built on the exclusive-or (XOR) function, as sketched below.
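Rule 90 itself is a one-line update: each cell in the next row is the XOR of its two current neighbours. A minimal sketch (Python, added as an illustration; the grid width, the number of steps and the single-seed starting row are my choices):

```python
# Rule 90: the next state of a cell is the XOR of its left and right neighbours.
width, steps = 63, 32
row = [0] * width
row[width // 2] = 1            # start from a single 'on' cell

for _ in range(steps):
    print("".join("#" if c else " " for c in row))
    row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]
```

Started from a single 'on' cell, this prints the familiar Sierpinski-triangle pattern that the forest of stunted trees resembles.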

Bud and leaves of Hydrangea macrophylla. Credit: Alvesgaspar/Wikimedia Commons, CC BY-SA 3.0

Nambu-Goldstone bosons

While anastomosis may not have a direct relation with symmetry and only a tenuous one with fractals, its presence indicates a source of perturbation in the system. Why else would the streamlined flow of something split off and then have the tributaries unify, unless possibly to reach out to richer lands? Anastomosis is a sign of the system acquiring a new degree of freedom. Splitting a stream with x degrees of freedom into two new streams, each with x degrees of freedom of its own, opens up more avenues through which change can occur.

Particle physics simplifies this scenario by assigning all forces and amounts of energy a particle. Thus, a force is said to be acting when a force-carrying particle is being exchanged between two bodies. Since each degree of freedom also implies a new force acting on the system, it wins itself a particle from a class of particles called the Nambu-Goldstone (NG) bosons. Named for Yoichiro Nambu and Jeffrey Goldstone, the presence of n NG bosons in a system means that, broadly speaking, the system has n degrees of freedom.

How and when an NG boson is introduced into a system is not yet well understood. In fact, it was only recently that a theoretical physicist named Haruki Watanabe developed a mathematical model that could predict the number of NG bosons a complex system would have, given the symmetries it breaks. At the most fundamental level, it is understood that when a symmetry breaks, an NG boson is born.

The asymmetry of symmetry

That is, when asymmetry is introduced in a system, so is a degree of freedom. This seems intuitive. But at the same time, you would think the reverse is also true: that when an asymmetric system is made symmetric, it loses a degree of freedom. However, this isn’t always the case because it could violate the third law of thermodynamics (specifically, the Lewis-Randall version of its statement).

Therefore, there is an inherent irreversibility, an asymmetry of the system itself: it works fine one way, it doesn’t work fine another. This is just like the split-off streams, but this time, they are unable to reunify properly. Of course, there is the possibility of partial unification: in the case of the hydrangea leaf, symmetry is not restored upon anastomosis but there is, evidently, an asymptotic attempt.

However, it is possible that in some special frames, such as in outer space, where the influence of gravitational forces is very weak, the restoration of symmetry may be complete. Even though the third law of thermodynamics is still applicable here, it comes into effect only with the transfer of energy into or out of the system. In the absence of gravity and other retarding factors, such as distribution of minerals in the soil for acquisition, etc., it is theoretically possible for symmetry to be broken and reestablished without any transfer of energy.

The simplest example of this is a water droplet floating around. If a small globule of water breaks away from a bigger one, the bigger one quickly becomes spherical again. When the seditious droplet joins another globule, that globule also quickly re-establishes its spherical shape.

Thermodynamically speaking, there is mass transfer, but at (almost) 100% efficiency, resulting in no additional degrees of freedom. Also, the force at play that establishes sphericality is surface tension, through which a body of water seeks to occupy the shape with the smallest surface area for the volume it contains. Notice how this shape – the sphere – is incidentally also the one with the most axes of symmetry and the fewest redundant degrees of freedom? Manufacturing such spheres is very hard.

An omnipotent impetus

Perhaps the explanation of the roles symmetry assumes seems regressive: every consequence of it is no consequence but itself all over again (i.e., self-symmetry – and it happened again). Indeed, why would nature deviate from itself? And as it recreates itself with different resources, it lends itself and its characteristics to different forms.

A mountain will be a mountain to its smallest constituents, and an electron will be an electron no matter how many of them you bring together at a location (except when quasiparticles show up). But put together mountains and you have ranges, sub-surface tectonic consequences, a reshaping of volcanic activity because of changes in the crust’s thickness, and a long-lasting alteration of wind and irrigation patterns. Bring together an unusual number of electrons to make up a high-density charge, and you have a high-temperature, high-voltage volume from which violent, permeating discharges of particles could occur – i.e., lightning.

Why should stars, music, light, radioactivity, politics, engineering or knowledge be any different?

Exploring what it means to be big

Reading a Nature report titled ‘Step aside CERN: There’s a cheaper way to break open physics‘ (January 10, 2018) brought to mind something G. Rajasekaran, former head of the Institute of Mathematical Sciences, Chennai, told me once: that the future – as the Nature report also touts – belongs to tabletop particle accelerators.

Rajaji (as he is known) said he believed so because of the simple realisation that particle accelerators could only get so big before they’d have to get much, much bigger to tell us anything more. On the other hand, tabletop setups based on laser wakefield acceleration, which could accelerate electrons to higher energies across just a few centimetres, would allow us to perform slightly different experiments such that their outcomes will guide future research.

The question of size is an interesting one (and almost personal: I'm 6'4” tall and somewhat heavy, which means in almost all new relationships I have to start by trying not to seem intimidating). For most of history, humans' idea of better has included something becoming bigger. From what I can see – which isn't really much – the impetus for this is founded in five things:

1. The laws of classical physics: They are, and were, multiplicative. To do more or to do better (which for a long time meant doing more), the laws had to be summoned in larger magnitudes and in more locations. This has been true from the machines of industrialisation to scientific instruments to various modes of construction and transportation. Some laws also foster inverse relationships that straightforwardly encourage devices to be bigger to be better.

2. Capitalism, rather commerce in general: Notwithstanding social necessities, bigger often implied better, the same way a single sphere of volume 4 units has a smaller surface area than four spheres of volume 1 unit each (see the quick check after this list). So if your expenditure is pegged to the surface area – and it often is – then it's better to pack 400 people into one airplane instead of flying four airplanes with 100 people in each.

3. Sense of self: A sense of our own size and place in the universe, as seemingly diminutive creatures living their lives out under the perennial gaze of the vast heavens. From such a point of view, a show of power and authority would obviously have meant transcending the limitations of our dimensions and demonstrating to others that we’re capable of devising ‘ultrastructures’ that magnify our will, to take us places we only thought the gods could go and achieve simultaneity of effect only the gods could achieve. (And, of course, for heads of state to swing longer dicks at each other.)

4. Politics: Engineers building a tabletop detector and engineers building a detector weighing 50,000 tonnes will obviously run into different kinds of obstacles. Moreover, big things are easier to stake claims over, to discuss, dispute or dislodge. It affects more people even before it has produced its first results.

5. Natural advantages: An example that comes immediately to mind is social networks – not Facebook or Twitter but the offline ones that define cultures and civilisations. Such networks afford people an extra degree of adaptability and improve chances of survival by allowing people to access resources (including information/knowledge) that originated elsewhere. This can be as simple as a barter system where people exchange food for gold, or as complex as a bashful Tamilian staving off alienation in California by relying on the support of the Tamil community there.

(The inevitable sixth impetus is tradition. For example, its equation with growth has given bigness pride of place in business culture, so much so that many managers I’ve met wanted to set up bigger media houses even when it might have been more appropriate to go smaller.)
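The surface-area claim in point 2 above is easy to verify (a quick check in Python, added as an illustration):

```python
import math

def sphere_area_for_volume(v):
    """Surface area of a sphere holding volume v."""
    r = (3 * v / (4 * math.pi)) ** (1 / 3)
    return 4 * math.pi * r ** 2

one_big = sphere_area_for_volume(4)          # one sphere of volume 4
four_small = 4 * sphere_area_for_volume(1)   # four spheres of volume 1 each

print(f"one sphere of volume 4: {one_big:.2f}")       # ~12.19
print(f"four spheres of volume 1: {four_small:.2f}")  # ~19.34
```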

Against this backdrop of impetuses working together, Ed Yong’s I Contain Multitudes – a book about how our biological experience of reality is mediated by microbes – becomes a saga of reconciliation with a world much smaller, not bigger, yet more consequential. To me, that’s an idea as unintuitive as, say, being able to engineer materials with fantastical properties by sporadically introducing contaminants into their atomic lattice. It’s the sort of smallness whose individual parts amount to very close to nothing, whose sum amounts to something, but the human experience of which is simply monumental.

And when we find that such smallness is able to move mountains, so to speak, it disrupts our conception of what it means to be big. This is as true of microbes as it is of quantum mechanics, as true of elementary particles as it is of nano-electromechanical systems. This is one of the more understated revolutions that happened in the 20th century: the decoupling of bigger and better, a sort of virtualisation of betterment that separated it from additive scale and led to the proliferation of ‘trons’.

I like to imagine what gave us tabletop accelerators also gave us containerised software and a pan-industrial trend towards personalisation – although this would be philosophy, not history, because it’s a trend we compose in hindsight. But in the same vein, both hardware (to run software) and accelerators first became big, riding on the back of the classical and additive laws of physics, then hit some sort of technological upper limit (imposed by finite funds and logistical limitations) and then bounced back down when humankind developed tools to manipulate nature at the mesoscopic scale.

Of course, some would also argue that tabletop particle accelerators wouldn’t be possible, or deemed necessary, if the city-sized ones didn’t exist first, that it was the failure of the big ones that drove the development of the small ones. And they would argue right. But as I said, that’d be history; it’s the philosophy that seems more interesting here.

All the science in ‘The Cloverfield Paradox’

I watched The Cloverfield Paradox last night, the horror film that Paramount Pictures had dumped on Netflix, which then released it on February 4. It's a dumb production: unlike H.R. Giger's existential, visceral horrors that I so admire, The Cloverfield Paradox is all about things going bump in the dark. But what sets these things off in the film is quite interesting: a particle accelerator. However, given how bad the film was, the screenwriter seems to have used the accelerator simply as a plot device, nothing more.

The particle accelerator is called Shepard. We don’t know what particles it’s accelerating or up to what centre-of-mass collision energy. However, the film’s premise rests on the possibility that a particle accelerator can open up windows into other dimensions. The Cloverfield Paradox needs this because, according to its story, Earth has run out of energy sources in 2028 and countries are threatening ground invasions for the last of the oil, so scientists assemble a giant particle accelerator in space to tap into energy sources in other dimensions.

Considering 2028 is only a decade from now – when the Sun will still be shining bright as ever in the sky – and renewable sources of energy aren’t even being discussed, the movie segues from sci-fi into fantasy right there.

Anyway, the idea that a particle accelerator can open up 'portals' into other dimensions is neither new nor entirely silly. Broadly, an accelerator's purpose is founded on three concepts: the special theory of relativity (SR), particle decay and the wavefunction of quantum mechanics.

According to SR, mass and energy can transform into each other; moreover, objects moving closer to the speed of light become more massive, and thus more energetic. Particle decay is what happens when a heavier subatomic particle decomposes into groups of lighter particles because it's unstable. Put these two ideas together and you have a part of the answer: accelerators accelerate particles to extremely high velocities, the particles become more massive, ergo more energetic, and the excess energy condenses out at some point as other particles.

Next, in quantum mechanics, the wavefunction is a mathematical function: when you solve it based on what information you have available, its squared magnitude gives the probability that a particular particle exists at some point in the spacetime continuum. It's called a wavefunction because the function describes a wave, and like all waves, this one also has a wavelength and an amplitude. However, the wavelength here describes the distance across which the particle will manifest. Because energy is directly proportional to frequency (E = h × ν, where h is Planck's constant) and frequency is inversely proportional to wavelength, energy is inversely proportional to wavelength. So the more energy a particle accelerator achieves, the smaller the part of spacetime its particles will have a chance of probing.
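To put rough numbers on that inverse relationship: a standard order-of-magnitude estimate of the length scale a collision can probe is λ ≈ ħc/E. A minimal sketch, assuming we only care about orders of magnitude (Python; the energies are examples I picked):

```python
# Order-of-magnitude estimate of the length scale probed at a given energy,
# using lambda ~ hbar*c / E  (hbar*c ~ 197 MeV*fm).
HBAR_C_MEV_FM = 197.327          # MeV * femtometre

def probed_length_m(energy_tev):
    energy_mev = energy_tev * 1e6           # 1 TeV = 1e6 MeV
    length_fm = HBAR_C_MEV_FM / energy_mev  # in femtometres
    return length_fm * 1e-15                # 1 fm = 1e-15 m

for e in (0.001, 1, 10, 14):                # from GeV-scale to LHC-scale energies, in TeV
    print(f"{e:>6} TeV  ->  ~{probed_length_m(e):.1e} m")
```

At 10 TeV this crude estimate gives a couple of 10^-20 m, within an order of magnitude of the ~10^-19 m figure quoted further down; factors of 2π and how much of the beam energy actually participates in a given collision account for the difference.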

Spoilers ahead

SR, particle decay and the properties of the wavefunction together imply that if the Shepard is able to achieve a suitably high energy of acceleration, it will be able to touch upon an exceedingly small part of spacetime. But why, as it happens in The Cloverfield Paradox, would this open a window into another universe?

Spoilers end

Instead of directly offering a peek into alternate universes, a very-high-energy particle accelerator could offer a peek into higher dimensions. According to some theories of physics, there are many higher dimensions even though humankind may have access only to four (three of space and one of time). The reason they should even exist is to be able to solve some conundrums that have evaded explanation. For example, according to Kaluza-Klein theory (one of the precursors of string theory), the force of gravity is so much weaker than the other three fundamental forces (strong nuclear, weak nuclear and electromagnetic) because it exists in five dimensions. So when you experience it in just four dimensions, its effects are subdued.

Where are these dimensions? Per string theory, for example, they are extremely compactified, i.e. accessible only over incredibly short distances, because they are thought to be curled up on themselves. According to Oskar Klein (one half of 'Kaluza-Klein', the other half being Theodor Kaluza), this region of space could be a circle of radius 10^-32 m. That's 0.00000000000000000000000000000001 m – over five quadrillion times smaller than a proton. According to CERN, which hosts the Large Hadron Collider (LHC), a particle accelerated to 10 TeV can probe a distance of 10^-19 m. That's still about ten trillion times larger than the scale at which the Kaluza-Klein fifth dimension is supposed to be curled up. The LHC has so far accelerated protons to 6.5 TeV each, for a collision energy of 13 TeV.

The likelihood of a particle accelerator tossing us into an alternate universe entirely is a different kind of problem. For one, we have no clue where the connections between alternate universes are, nor how they can be accessed. In Nolan's Interstellar (2014), the protagonist finds a passage through spacetime inside a black hole – a scenario we currently don't have any way of verifying. Moreover, though the LHC is supposed, in some scenarios, to be able to create microscopic black holes, they have a 0% chance of growing to possess the size or potential of Interstellar's Gargantua.

In all, The Cloverfield Paradox is a waste of time. In the 2016 film Spectral – also released by Netflix – the science is overwrought, stretched beyond its possibilities, but it still stays close to the basic principles. For example, the antagonists in Spectral are creatures made entirely of Bose-Einstein condensates. How this was even achieved boggles the mind, but the creatures have the same physical properties that the condensates do. In The Cloverfield Paradox, however, the accelerator is a convenient insertion into a bland story, an abuse of the opportunities that physics of this complexity offers. The writers might as well have said all the characters blinked and found themselves in a different universe.

Chromodynamics: Gluons are just gonzo

One of the more fascinating bits of high-energy physics is the branch of physics called quantum chromodynamics (QCD). Don’t let the big name throw you off: it deals with a bunch of elementary particles that have a property called colour charge. And one of these particles creates a mess of this branch of physics because of its colour charge – so much so that it participates in the story that it is trying to shape. What could be more gonzo than this? Hunter S. Thompson would have been proud.

Just as electrons have electric charge, the particles studied by QCD have a colour charge. It doesn't correspond to a colour of any kind; it's just a funky name.

(Richard Feynman wrote about this naming convention in his book, QED: The Strange Theory of Light and Matter (pp. 163, 1985): “The idiot physicists, unable to come up with any wonderful Greek words anymore, call this type of polarization by the unfortunate name of ‘color,’ which has nothing to do with color in the normal sense.”)

The fascinating thing about these QCD particles is that they exhibit a property called colour confinement. It means that particles with colour charge can't ever be isolated. They're always to be found only in pairs or bigger clumps. They can be isolated in theory if the clumps are heated to the Hagedorn temperature: 1,000 billion billion billion K. But the bigness of this number has ensured that this temperature has remained theoretical. They can also be isolated in a quark-gluon plasma, a superhot, superdense state of matter that has been created fleetingly in particle physics experiments like the Large Hadron Collider. The particles in this plasma quickly collapse to form bigger particles, restoring colour confinement.

There are two kinds of particles that are colour-confined: quarks and gluons. Quarks come together to form bigger particles called mesons and baryons. The aptly named gluons are the particles that ‘glue’ the quarks together.

The force that acts between quarks is called the strong nuclear force – though even this phrasing can mislead: the gluons mediate the strong nuclear force. A physicist would say that when two quarks exchange gluons, the quarks are being acted on by the strong nuclear force.

Because protons and neutrons are also made up of quarks and gluons, the strong nuclear force holds the nucleus together in all the atoms in the universe. Breaking this force releases enormous amounts of energy – like in the nuclear fission that powers atomic bombs and the nuclear fusion that powers the Sun. In fact, 99% of a proton’s mass comes from the energy of the strong nuclear force. The quarks contribute the remaining 1%; gluons are massless.

When you pull two quarks apart, you'd think the force between them would weaken. It doesn't; it actually increases. This is very counterintuitive. The gravitational force exerted by Earth drops off the farther you get from it. The electromagnetic force between an electron and a proton decreases as they move apart. But the strong nuclear force between two quarks actually increases as they move apart. Frank Wilczek called this a "self-reinforcing, runaway process". This behaviour of the force is what makes colour confinement possible.

However, in 1973, Wilczek, David Gross and David Politzer found that the behaviour flips at very short range: when quarks are much closer together than about 1 fermi (0.000000000000001 metres, roughly the radius of a proton), the force between them becomes progressively weaker, dropping off towards zero as the separation shrinks. This is called asymptotic freedom: at vanishingly small separations, quarks behave almost as if they were free. Gross, Politzer and Wilczek won the Nobel Prize for physics in 2004 for their work.
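Both regimes – weak at short range, confining at long range – are often summarised in a single phenomenological formula, the Cornell potential for a quark-antiquark pair (not mentioned in the original post; added here as a hedged illustration, with a string tension of roughly 1 GeV per fermi):

```latex
% Coulomb-like term dominates at short distances (asymptotic freedom regime);
% the linear term dominates at large distances (confinement).
\[
V(r) \;\approx\; -\frac{4}{3}\,\frac{\alpha_s \hbar c}{r} \;+\; \sigma r,
\qquad \sigma \sim 1\ \text{GeV/fm}.
\]
```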

In the parlance of particle physics, what makes asymptotic freedom possible is the fact that gluons emit other gluons. How else would you explain the strong nuclear force becoming stronger as the quarks move apart – if not for the gluons that the quarks are exchanging becoming more numerous as the distance increases?

This is the crazy phenomenon that you’re fighting against when you’re trying to set off a nuclear bomb. This is also the crazy phenomenon that will one day lead to the Sun’s death.

The first question anyone would ask now is – doesn't this build-up of force and energy violate the law of conservation of energy?

The answer lies in the nothingness all around us.

The vacuum of deep space in the universe is not really a vacuum. It's got some energy of its own, which physicists call vacuum energy (and which has been linked to 'dark energy'). This energy manifests itself in the form of virtual particles: particles that pop in and out of existence, living for far less than a second before dissipating into energy. When a charged particle pops into being, its charge attracts particles of opposite charge towards itself and repels particles of the same charge. This is high-school physics.

But when a charged gluon pops into being, something strange happens. An electron has one kind of charge, the positive/negative electric charge. But a gluon contains a ‘colour’ charge and an ‘anti-colour’ charge, each of which can take one of three values. So the virtual gluon will attract other virtual gluons depending on their colour charges and intensify the colour charge field around it, and also change its colour according to whichever particles are present. If this had been an electron, its electric charge and the opposite charge of the particle it attracted would cancel the field out.

This multiplication is what leads to the build-up of energy as the quarks are pulled apart.

Physicists refer to the three values of the colour charge as blue, green and red. (This is more idiocy – you might as well call them ‘baboon’, ‘lion’ and ‘giraffe’.) If a blue quark, a green quark and a red quark come together to form a hadron (a class of particles that includes protons and neutrons), then the hadron will have a colour charge of ‘white’, becoming colour-neutral. Anti-quarks have anti-colour charges: antiblue, antigreen, antired. When a red quark and an antired anti-quark meet, they will annihilate each other – but not so when a red quark and an antiblue anti-quark meet.

Gluons complicate this picture further because, in experiments, physicists have found that gluons behave as if they have both a colour and an anti-colour. In physical terms, this doesn't make much sense, but it does in mathematical terms (which we won't get into). Let's say a proton is made of one red quark, one blue quark and one green quark. The quarks are held together by gluons, which also carry colour charge. So when two quarks exchange a gluon, the colours of the quarks change. If a blue quark emits a blue-antigreen gluon, it turns green, whereas the quark that receives the gluon (which must have been green) turns blue. Ultimately, if the proton is 'white' overall, the three quarks inside are responsible for maintaining that whiteness. This is the law of conservation of colour charge.
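A toy way to see the bookkeeping in that last paragraph: treat each quark's colour, and each gluon's colour-anticolour pair, as entries in a ledger and check that the ledger's totals never change during an exchange. This is purely illustrative accounting in Python (my own toy, not how QCD is actually calculated):

```python
from collections import Counter

def colour_content(*labels):
    """Net colour per channel: 'blue' counts +1 for blue, 'antigreen' counts -1 for green."""
    total = Counter()
    for label in labels:
        if label.startswith("anti"):
            total[label[4:]] -= 1
        else:
            total[label] += 1
    return Counter({k: v for k, v in total.items() if v != 0})  # drop cancelled channels

# A proton with one red, one blue and one green quark.
before = colour_content("red", "blue", "green")

# The blue quark emits a blue-antigreen gluon and turns green. While the gluon
# is in flight, the system is: quarks (red, green, green) + gluon (blue, antigreen).
in_flight = colour_content("red", "green", "green", "blue", "antigreen")

# The formerly green quark absorbs the gluon and turns blue.
after = colour_content("red", "green", "blue")

print(before == in_flight == after)   # True: colour charge is conserved at every step
```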

Gluons emit gluons because of their colour charges. When quarks exchange gluons, the quarks’ colour charges also change. In effect, the gluons are responsible for quarks getting their colours. And because the gluons participate in the evolution of the force that they also mediate, they’re just gonzo: they can interact with themselves to give rise to new particles.

A gluon can split up into two gluons or into a quark-antiquark pair. Say a quark and an antiquark are joined together. If you try to pull them apart by supplying some energy, the gluon between them will ‘swallow’ that energy and split up into one antiquark and one quark, giving rise to two quark-antiquark pairs (and also preserving colour-confinement). If you supply even more energy, more quark-antiquark pairs will be generated.

For these reasons, the strong nuclear force is called a ‘colour force’: it manifests in the movement of colour charge between quarks.

In an atomic nucleus, say there is one proton and one neutron. Each particle is made up of three quarks. The quarks in the proton and the quarks in the neutron interact with each other because they are close enough to be colour-confined: the proton-quarks’ gluons and the neutron-quarks’ gluons interact with each other. So the nucleus is effectively one ball of quarks and gluons. However, one nucleus doesn’t interact with that of a nearby atom in the same way because they’re too far apart for gluons to be exchanged.

Clearly, this is quite complicated – not just for you and me but also for scientists, and for the supercomputers that perform these calculations for large experiments in which billions of protons are smashed into each other to see how the particles interact. Imagine: there are six types, or 'flavours', of quarks, each of which can carry one of three colour charges. Then there are the gluons, which between them carry eight independent combinations of colour and anti-colour charges.

The Wire
September 20, 2017

Featured image credit: Alexas_Fotos/pixabay.

A gear-train for particle physics

It has come under scrutiny at various times by multiple prominent physicists and thinkers, but it's not hard to see why, when the idea of 'grand unification' first took shape, it seemed plausible to so many. The first time it was seriously considered was about four decades ago, shortly after physicists had realised that two of the four fundamental forces of nature were in fact a single unified force if you ramped up the energy at which they acted (electromagnetic + weak = electroweak). The thought that followed was simply logical: what if, at some extremely high energy (like that of the Big Bang), all four forces unified into one? This was 1974.

There has been no direct evidence of such grand unification yet. Physicists don't know how the electroweak force will unify with the strong nuclear force – let alone gravity, a problem that actually birthed one of the most powerful mathematical tools in an attempt to solve it. Nonetheless, they think they know the energy at which such grand unification should occur, if it does: the Planck scale, around 10^19 GeV. This is about as much energy as is contained in a tankful of petrol, but it's stupefyingly large when you have to pack all of it into a particle that's 10^-15 metres wide.

This is where particle accelerators come in. The most powerful of them, the Large Hadron Collider (LHC), uses powerful magnetic fields to accelerate protons to close to light-speed, when their energy approaches about 7,000 GeV. But the Planck energy is still about a million billion times higher, which means it's not something we might ever be able to attain on Earth. Nonetheless, physicists' theories suggest that that's where all of our physical laws should be created, where the commandments by which everything that exists abides should be written.

… Or is it?

There are many outstanding problems in particle physics, and physicists are desperate for a solution. They have to find something wrong with what they’ve already done, something new or a way to reinterpret what they already know. The clockwork theory is of the third kind – and its reinterpretation begins by asking physicists to dump the idea that new physics is born only at the Planck scale. So, for example, it suggests that the effects of quantum gravity (a quantum-mechanical description of gravity) needn’t necessarily become apparent only at the Planck scale but at a lower energy itself. But even if it then goes on to solve some problems, the theory threatens to present a new one. Consider: If it’s true that new physics isn’t born at the highest energy possible, then wouldn’t the choice of any energy lower than that just be arbitrary? And if nothing else, nature is not arbitrary.

To its credit, clockwork sidesteps this issue by simply not trying to find ‘special’ energies at which ‘important’ things happen. Its basic premise is that the forces of nature are like a set of interlocking gears moving against each other, transmitting energy – rather potential – from one wheel to the next, magnifying or diminishing the way fundamental particles behave in different contexts. Its supporters at CERN and elsewhere think it can be used to explain some annoying gaps between theory and experiment in particle physics, particularly the naturalness problem.

Before the Higgs boson was discovered, physicists predicted, based on the properties of other particles and forces, that its mass would be very high. But when the boson's discovery was confirmed at CERN in January 2013, its mass implied that the universe would have to be "the size of a football" – which is clearly not the case. So why is the Higgs boson's mass so low, so unnaturally low? Scientists have put forward many new theories that try to solve this problem but their solutions often require the existence of other, hitherto undiscovered particles.

Clockwork’s solution is a way in which the Higgs boson’s interaction with gravity – rather gravity’s associated energy – is mediated by a string of effects described in quantum field theory that tamp down the boson’s mass. In technical parlance, the boson’s mass becomes ‘screened’. An explanation for this that’s both physical and accurate is hard to draw up because of various abstractions. So as University of Bruxelles physicist Daniele Teresi suggests, imagine this series: Χ = 0.5 × 0.5 × 0.5 × 0.5 × … × 0.5. Even if each step reduces Χ’s value by only a half, it is already an eighth after three steps; after four, a sixteenth. So the effect can get quickly drastic because it’s exponential.

And the theory provides a mathematical toolbox that allows for all this to be achieved without the addition of new particles. This is advantageous because it makes clockwork relatively more elegant than another theory that seeks to solve the naturalness problem, called supersymmetry, or SUSY for short. Physicists also like SUSY because it allows for a large energy hierarchy: a distribution of particles and processes at energies between electroweak unification and grand unification, instead of leaving the region bizarrely devoid of action like the Standard Model does. But then SUSY predicts the existence of 17 new particles, none of which has been detected yet.

Even more, as Matthew McCullough, one of clockwork’s developers, showed at an ongoing conference in Italy, its solutions for a stationary particle in four dimensions exhibit conceptual similarities to Maxwell’s equations for an electromagnetic wave in a conductor. The existence of such analogues is reassuring because it recalls nature’s tendency to be guided by common principles in diverse contexts.

This isn’t to say clockwork theory is it. As physicist Ben Allanach has written, it is a “new toy” and physicists are still playing with it to solve different problems. Just that in the event that it has an answer to the naturalness problem – as well as to the question why dark matter doesn’t decay, e.g. – it is notable. But is this enough: to say that clockwork theory mops up the math cleanly in a bunch of problems? How do we make sure that this is how nature works?

McCullough thinks there’s one way, using the LHC. Very simplistically: clockwork theory induces fluctuations in the probabilities with which pairs of high-energy photons are created at some energies at the LHC. These should be visible as wavy squiggles in a plot with energy on the x-axis and events on the y-axis. If these plots can be obtained and analysed, and the results agree with clockwork’s predictions, then we will have confirmed what McCullough calls an “irreducible prediction of clockwork gravity”, the case of using the theory to solve the naturalness problem.

To recap: No free parameters (i.e. no new particles), conceptual elegance and familiarity, and finally a concrete and unique prediction. No wonder Allanach thinks clockwork theory inhabits fertile ground. On the other hand, SUSY’s prospects have been bleak since at least 2013 (if not earlier) – and it is one of the more favoured theories among physicists to explain physics beyond the Standard Model, physics we haven’t observed yet but generally believe exists. At the same time, and it bears reiterating, clockwork theory will also have to face down a host of challenges before it can be declared a definitive success. Tik tok tik tok tik tok

Some notes and updates

Four years of the Higgs boson

Missed this didn’t I. On July 4, 2012, physicists at CERN announced that the Large Hadron Collider had found a Higgs-boson-like particle. Though the confirmation would only come in January 2013 (that it was the Higgs boson and not any other particle), July 4 is the celebrated date. I don’t exactly mark the occasion every year except to recap on whatever’s been happening in particle physics. And this year: everyone’s still looking for supersymmetry; there was widespread excitement about a possible new fundamental particle weighing about 750 GeV when data-taking began at the LHC in late May but strong rumours from within CERN have it that such a particle probably doesn’t exist (i.e. it’s vanishing in the new data-sets). Pity. The favoured way to anticipate what might come to be well before the final announcements are made in August is to keep an eye out for conference announcements in mid-July. If they’re made, it’s a strong giveaway that something’s been found.

Live-tweeting and timezones

I’ve a shitty internet connection at home in Delhi which means I couldn’t get to see the live-stream NASA put out of its control room or whatever as Juno executed its orbital insertion manoeuvre this morning. Fortunately, Twitter came to the rescue; NASA’s social media team had done such a great job of hyping up the insertion (deservingly so) that it seemed as if all the 480 accounts I followed were tweeting about it. I don’t believe I missed anything at all, except perhaps the sounds of applause. Twitter’s awesome that way, and I’ll say that even if it means I’m stating the obvious. One thing did strike me: all times (of the various events in the timeline) were published in UTC and EDT. This makes sense because converting from UTC to a local timezone is easy (IST = UTC + 5.30) while EDT corresponds to the US east cost. However, the thing about IST being UTC + 5.30 isn’t immediately apparent to everyone (at least not to me), and every so often I wish an account tweeting from India, such as a news agency’s, uses IST. I do it every time.

New music

https://www.youtube.com/watch?v=F4IwxzU3Kv8

I don’t know why I hadn’t found Yat-kha earlier considering I listen to Huun Huur Tu so much, and Yat-kha is almost always among the recommendations (all bands specialising in throat-singing). And while Huun Huur Tu likes to keep their music traditional and true to its original compositional style, Yat-kha takes it a step further, blending its sound with rock, and that sits much better with me. With a voice like Albert Kuvezin’s, keeping things traditional can be a little disappointing – you can hear why in the song above. It’s called Kaa-khem; the same song by Huun Huur Tu is called Mezhegei. Bass evokes megalomania in me, and it’s all the more sensual when it’s rendered with the human voice, rising and falling. Another example of what I’m talking about is a song called Yenisei punk. Finally, this is where I’d suggest you stop if you’re looking for throat-singing made to sound more belligerent: I stumbled upon War horse by Tengger Cavalry, classified as nomadic folk metal. It’s terrible.

Fall of Light, a part 2

In fantasy trilogies, the first part benefits from establishing the premise and the third, from the denouement. If the second part has to benefit from anything at all, then it is the story itself, not the intensity of the stakes within its narrative. At least, that’s my takeaway from Fall of Light, the second book of Steven Erikson’s Kharkanas trilogy. Its predecessor, Forge of Darkness, established the kingdom of Kurald Galain and the various forces that shape its peoples and policies. Because the trilogy has been described as being a prequel (note: not the prequel) to Erikson’s epic Malazan Book of the Fallen series, and because of what we know about Kurald Galain in the series, the last book of the trilogy has its work cut out for it. But in the meantime, Fall of Light was an unexpectedly monotonous affair – and that was awesome. As a friend of mine has been wont to describe the Malazan series: Erikson is a master of raising the stakes. He does that in all of his books (including the Korbal Broach short-stories) and he does it really well. However, Fall of Light rode with the stakes as they were laid down at the end of the first book, through a plot that maintained the tension at all times. It’s neither eager to shed its burden nor is it eager to take on new ones. If you’ve read the Malazan series, I’d say he’s written another Deadhouse Gates, but better.

Oh, and this completes one of my bigger goals for 2016.

A universe out of sight

Two things before we begin:

  1. The first subsection of this post assumes that humankind has colonised some distant extrasolar planet(s) within the observable universe, and that humanity won’t be wiped out in 5 billion years.
  2. Both subsections assume a pessimistic outlook, and neither of the projections they dwell on may ever come to pass while humanity still exists. Nonetheless, it’s still fun to consider them and their science, and, most importantly, their potential to fuel fiction.

Cosmology

Astronomers using the Hubble Space Telescope have captured the most comprehensive picture ever assembled of the evolving universe — and one of the most colourful. The study is called the Ultraviolet Coverage of the Hubble Ultra Deep Field. Caption and credit: hubble_esa/Flickr, CC BY 2.0

Note: An edited version of this post has been published on The Wire.

A new study whose results were reported this morning made for a disconcerting read: it seems the universe is expanding 5-9% faster than we figured it was.

That the universe is expanding at all is disappointing, that it is growing in volume like a balloon and continuously birthing more emptiness within itself. Because of the suddenly larger distances between things, each passing day leaves us lonelier than we were yesterday. The universe’s expansion is accelerating, too, and that doesn’t simply mean objects getting farther away. It means some photons from those objects never reaching our telescopes despite travelling at lightspeed, doomed to yearn forever like Tantalus in Tartarus. At some point in the future, a part of the universe will become completely invisible to our telescopes, remaining that way no matter how hard we try.

And the darkness will only grow, until a day out of an Asimov story confronts us: a powerful telescope bearing witness to the last light of a star before it is stolen from us for all time. Even if such a day is far, far into the future – the effect of the universe’s expansion is perceptible only on intergalactic scales, as the Hubble constant indicates, and simply negligible within the Solar System – the day exists.

This is why we are uniquely positioned: to be able to see as much as we are able to see. At the same time, it is pointless to wonder how much more we are able to see than our successors because it calls into question what we have ever been able to see. Say the whole universe occupies a volume of X, that the part of it that remains accessible to us contains a volume Y, and what we are able to see today is Z. Then: Z < Y < X. We can dream of some future technological innovation that will engender a rapid expansion of what we are able to see, but with Y being what it is, we will likely forever play catch-up (unless we find tachyons, navigable wormholes, or the universe beginning to decelerate someday).

How fast is the universe expanding, and how is that rate changing? The change is captured by a number called the deceleration parameter:

q = – (1 + Ḣ/H²),

where H is the Hubble constant and Ḣ is its first derivative with respect to time. The Hubble constant is the speed at which an object one megaparsec from us is receding. So, if q is positive, the universe’s expansion is slowing down. If q is zero, the expansion rate is steady and 1/H gives the time since the Big Bang. And if q is negative – as scientists have found to be the case – then the universe’s expansion is accelerating.

The age and ultimate fate of the universe can be determined by measuring the Hubble constant today and extrapolating with the observed value of the deceleration parameter, uniquely characterised by values of density parameters (Ω_M for matter and Ω_Λ for dark energy). Caption and credit: Wikimedia Commons

We measure the expansion of the universe from our position: on its surface (because, no, we’re not inside the universe). We look at light coming from distant objects, like supernovae; we work out how much that light is ‘red-shifted’; and we compare that to previous measurements. Here’s a rough guide.

What kind of objects do we use to measure these distances? Cosmologists prefer type Ia supernovae. In a type Ia supernova, a white dwarf (the dense core left behind by a dead star, held up by its electrons) slowly sucks in matter from an object orbiting it until it becomes hot enough to trigger a runaway fusion reaction. In the next few seconds, the reaction expels about 10⁴⁴ joules of energy, visible as a bright fleck in the gaze of a suitable telescope. Such explosions have a unique attribute: the mass of the white dwarf that goes boom is uniform, which means type Ia supernovae across the universe are almost equally bright. This is why cosmologists refer to them as ‘standard candles’. Based on how faint these candles appear, you can tell how far away they are burning.

After a type Ia supernova occurs, photons set off from its surface toward a telescope on Earth. However, because the universe is continuously expanding, the distance between us and the supernova is continuously increasing. The effective interpretation is that the explosion appears to be moving away from us, becoming fainter. How much it has moved away is derived from the redshift. The wave nature of radiation allows us to think of light as having a frequency and a wavelength. When an object that is moving away from us emits light toward us, the waves of light appear to become stretched, i.e. the wavelength seems to become distended. If the light is in the visible part of the spectrum when starting out, then by the time it reaches Earth, the increase in its wavelength will make it seem redder. And so the name.

The redshift, z – technically known as the cosmological redshift – can be calculated as:

z = (λ_observed – λ_emitted)/λ_emitted

In English: the redshift is the factor by which the observed wavelength has changed relative to the emitted wavelength. If z = 1, then the observed wavelength is twice the emitted wavelength. If z = 5, then the observed wavelength is six times the emitted wavelength. The farthest galaxy we know (MACS0647-JD) is estimated to be at a distance from which z = 10.7 (corresponding to 13.3 billion lightyears).

Anyway, z is used to calculate the cosmological scale-factor, a(t). This is the formula:

a(t) = 1/(1 + z)

a(t) is then used to calculate the distance between two objects:

d(t) = a(t) d_0,

where d(t) is the distance between the two objects at time t and d_0 is the distance between them at some reference time t_0. Since the scale factor would be constant throughout the universe, d(t) and d_0 can be stand-ins for the ‘size’ of the universe itself.

So, let’s say a type Ia supernova lit up at a redshift of 0.6. This gives a(t) = 0.625 = 5/8. So: d(t) = 5/8 * d0. In English, this means that the universe was 5/8th its current size when the supernova went off. Using z = 10.7, we infer that the universe was one-twelfth its current size when light started its journey from MACS0647-JD to reach us.
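
If you want to play with these numbers yourself, here is a minimal Python sketch of the arithmetic above – nothing more than a(t) = 1/(1 + z) evaluated for the redshifts mentioned in this post.

```python
# A minimal sketch of the arithmetic above: the scale factor a(t) = 1/(1 + z)
# tells you how big the universe was, relative to today, when the light set out.

def scale_factor(z: float) -> float:
    """Cosmological scale factor at the time the light was emitted."""
    return 1.0 / (1.0 + z)

examples = [
    ("type Ia supernova", 0.6),
    ("MACS0647-JD", 10.7),
    ("cosmic microwave background", 1089.0),
]

for label, z in examples:
    a = scale_factor(z)
    print(f"z = {z:>6}: a(t) = {a:.4f} -> universe was ~1/{1/a:.1f} of its current size ({label})")
```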

As it happens, residual radiation from the primordial universe is still around today – as the cosmic microwave background radiation. It originated 378,000 years after the Big Bang, following a period called the recombination epoch, 13.8 billion years ago. Its redshift is 1,089. Phew.

The relation between redshift (z) and distance (in billions of light years). d_H is the comoving distance between you and the object you’re observing. Where it flattens out is the distance out to the edge of the observable universe. Credit: Redshiftimprove/Wikimedia Commons, CC BY-SA 3.0

A curious redshift is z = 1.4, corresponding to a distance of about 4,200 megaparsec (~0.13 trillion trillion km). Objects that are already this far from us are moving away faster than the speed of light. However, this isn’t faster-than-light travel because it doesn’t involve travelling. It’s just a case of the distance between us and the object increasing at such a rate that, if that distance was once covered by light in time t_0, light will now need t > t_0 to cover it*. The corresponding a(t) = 0.42. I wonder at times if this is what Douglas Adams was referring to (… and at other times I don’t, because the exact z at which this happens is 1.69, which means a(t) = 0.37. But it’s something to think about).
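
Where does that ~4,200 megaparsec figure come from? In the simplest picture, recession speed grows linearly with distance (v = H₀d), so the speed of light is reached at d = c/H₀. Here is a rough Python sketch of that arithmetic; the H₀ of 70 km/s/Mpc is an assumed, illustrative value, and the full cosmological calculation (which is what yields the exact z of 1.69 mentioned above) is more involved.

```python
# Rough sketch: the distance at which Hubble-flow recession reaches lightspeed,
# d = c/H0, assuming an illustrative H0 of 70 km/s per megaparsec.

C_KM_S = 299_792.458      # speed of light, km/s
H0 = 70.0                 # Hubble constant, km/s/Mpc (assumed value)
KM_PER_MPC = 3.086e19     # kilometres in one megaparsec

d_mpc = C_KM_S / H0       # ~4,283 Mpc
d_km = d_mpc * KM_PER_MPC # ~1.3e23 km, i.e. ~0.13 trillion trillion km

print(f"c/H0 ≈ {d_mpc:,.0f} Mpc ≈ {d_km:.2e} km")
```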

Ultimately, we will never be able to detect any electromagnetic radiation from before the recombination epoch 13.8 billion years ago; then again, the universe has since expanded, leaving the supposed edge of the observable universe 46.5 billion lightyears away in any direction. In the same vein, we can imagine there will be a distance (closing in) at which objects are moving away from us so fast that the photons from their surface never reach us. These objects will define the outermost edges of the potentially observable universe, nature’s paltry alms to our insatiable hunger.

Now, a gentle reminder that the universe is expanding a wee bit faster than we thought it was. This means that our theoretical predictions, founded on Einstein’s theories of relativity, have been wrong for some reason; perhaps we haven’t properly accounted for the effects of dark matter? This also means that, in an Asimovian tale, there could be a twist in the plot.

*When making such a measurement, Earthlings assume that Earth as seen from the object is at rest and that it’s the object that is moving. In other words: we measure the relative velocity. A third observer will notice both Earth and the object to be moving away, and her measurement of the velocity between us will be different.


Particle physics

Candidate Higgs boson event from collisions in 2012 between protons in the ATLAS detector on the LHC. Credit: ATLAS/CERN

If the news that our universe is expanding 5-9% faster than we thought portends a stellar barrenness in the future, then another piece of news foretells a fecundity of opportunities: in the opening days of its 2016 run, the Large Hadron Collider produced more data in a single day than it did in the entirety of its first run (which led to the discovery of the Higgs boson).

Now, so much about the cosmos was easy to visualise, abiding as it all did with Einstein’s conceptualisation of physics: as inherently classical, and never violating the principles of locality and causality. However, Einstein’s physics explains only one of the two infinities that modern physics has been able to comprehend – the other being the world of subatomic particles. And the kind of physics that reigns over the particles isn’t classical in any sense, and sometimes takes liberties with locality and causality as well. At the same time, it isn’t arbitrary either. How then do we reconcile these two sides of quantum physics?

Through the rules of statistics. Take the example of the Higgs boson: it is not created every time two protons smash together, no matter how energetic the protons are. It is created at a fixed rate – once every ~X collisions. Even better: we say that whenever a Higgs boson forms, it decays to a specific group of particles one-Yth of the time. The value of Y is related to a number called the coupling constant. The lower Y is, the higher the coupling constant is, and the more often the Higgs boson will decay into that group of particles. When estimating a coupling constant, theoretical physicists assess the various ways in which the decays can happen (e.g., Higgs boson → two photons).

A similar interpretation is that the coupling constant determines how strongly a particle and a force acting on that particle will interact. Between the electron and the electromagnetic force is the fine-structure constant,

α = e²/(2ε₀hc);

and between quarks and the strong nuclear force is the strong coupling constant, whose weakening at high energies is known as asymptotic freedom:

α_s(k²) = [β₀ ln(k²/Λ²)]⁻¹
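
As a quick sanity check of the first of these formulas, here is a short Python sketch that plugs standard values of the constants into α = e²/(2ε₀hc); it should return the familiar ~1/137.

```python
# Sanity check of α = e²/(2ε₀hc) using standard values of the physical constants
# (scipy.constants is assumed to be available; hard-coded CODATA values work too).
from scipy.constants import e, epsilon_0, h, c

alpha = e**2 / (2 * epsilon_0 * h * c)
print(f"fine-structure constant ≈ {alpha:.6f} ≈ 1/{1/alpha:.1f}")  # ≈ 0.007297 ≈ 1/137.0
```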

So, if the LHC’s experiments require P (number of) Higgs bosons to make their measurements, and its detectors are tuned to detect the group of particles the boson decays into, then a much larger number of collisions – roughly P divided by the relevant probabilities – ought to have happened. The LHC might be a bad example because it’s a machine on the Energy Frontier: it is tasked with attaining higher and higher energies so that, at the moment the protons collide, heavier and much shorter-lived particles can show themselves. A better example would be a machine on the Intensity Frontier: its aim would be to produce orders of magnitude more collisions to spot extremely rare processes and particles. Then again, it’s not as straightforward as just being prolific.

It’s like rolling an unbiased die. The chance that you’ll roll a four is 1/6 (i.e. the coupling constant) – but it could happen that you roll the die six times and never get a four: a 1/6 chance per roll doesn’t guarantee one four in every six rolls. Then again, you could roll the die 60 times and still never get a four (though the odds of that happening are even lower). So you decide to take it to the next level: you build a die-rolling machine that rolls the die a thousand times. You would surely have gotten some fours by then – though perhaps not exactly one-sixth of the time. So you take it up a notch: you make the machine roll the die a million times. The fraction of fours should by now start converging toward 1/6. This is how a particle accelerator-collider aims to work, and succeeds.
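
Here is a small Python sketch of that die-rolling machine, if you’d like to watch the convergence happen; the seed is fixed only so the run is repeatable.

```python
# The die-rolling machine: as the number of rolls grows, the observed fraction
# of fours converges toward the 'coupling constant' of 1/6.
import random

random.seed(42)  # fixed seed, purely so the run is repeatable

for n_rolls in (6, 60, 1_000, 1_000_000):
    fours = sum(1 for _ in range(n_rolls) if random.randint(1, 6) == 4)
    print(f"{n_rolls:>9} rolls: fraction of fours = {fours / n_rolls:.4f} (expected {1/6:.4f})")
```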

And this is why the LHC producing as much data as it already has this year is exciting news. That much data means a lot more opportunities for ‘new physics’ – phenomena beyond what our theories can currently explain – to manifest itself. Analysing all this data completely will take many years (physicists continue to publish papers based on results gleaned from data generated in the first run), and all of it will be useful in some way even if very little of it ends up contributing to new ideas.

The steady (logarithmic) rise in luminosity – the number of collision events detected – at the CMS detector on the LHC. Credit: CMS/CERN

Occasionally, an oddball will show up – like a pentaquark, a state of five quarks bound together. As particles in their own right, they might not be as exciting as the Higgs boson, but in the larger scheme of things, they have a role to call their own. For example, the existence of a pentaquark teaches physicists about what sorts of configurations of the strong nuclear force, which holds the quarks together, are really possible, and what sorts are not. However, let’s say the LHC data throws up nothing. What then?

Tumult is what. In the first run, the LHC used to smash two beams of billions of protons, each beam accelerated to 4 TeV and separated into 2,000+ bunches, head on at the rate of two opposing bunches every 50 nanoseconds. In the second run, after upgrades through early 2015, the LHC smashes bunches accelerated to 6.5 TeV once every 25 nanoseconds. In the process, the number of collisions per sq. cm per second increased tenfold, to 1 × 10³⁴. These heightened numbers mean new physics has fewer places to hide; we are on the verge of desperation to tease it out, to plumb the weakest coupling constants, because existing theories have not been able to answer all of our questions about fundamental physics (why things are the way they are, etc.). And even the barest hint of something new, something we haven’t seen before, will:

  • Tell us that we haven’t seen all that there is to see**, that there is yet more, and
  • Validate this or that speculative theory over a host of others, and point us down a new path to tread

Axiomatically, these are the desiderata at stake should the LHC find nothing – all the more so now that it has yielded a massive dataset. Of course, not all will be lost: larger, more powerful, more innovative colliders will be built – even as a disappointment lingers. Let’s imagine for a moment that all of them continue to find nothing, and that persistent day comes to be when the cosmos falls out of our reach, too. Wouldn’t that be maddening?
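
To get a feel for what Run 2’s numbers mean in practice, here is a back-of-the-envelope Python sketch: multiply the instantaneous luminosity quoted above by a production cross-section to get an event rate. The ~50 picobarn Higgs production cross-section used below is an assumed, illustrative figure, not an official one – and bear in mind that only a small fraction of the bosons produced decay into channels the detectors can cleanly pick out.

```python
# Back-of-the-envelope: event rate = instantaneous luminosity × cross-section.
# The luminosity is the Run 2 figure quoted above; the ~50 picobarn Higgs
# production cross-section is an assumed, illustrative value.

LUMINOSITY = 1e34          # collisions per sq. cm per second
PICOBARN_TO_CM2 = 1e-36    # 1 picobarn = 1e-36 sq. cm
higgs_xsec_pb = 50.0       # assumed production cross-section, in picobarn

rate_per_second = LUMINOSITY * higgs_xsec_pb * PICOBARN_TO_CM2
print(f"~{rate_per_second:.1f} Higgs bosons produced per second")
print(f"~{rate_per_second * 86_400:,.0f} per day of steady running")
```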

**I’m not sure of what an expanding universe’s effects on gravitational waves will be, but I presume it will be the same as its effect on electromagnetic radiation. Both are energy transmissions travelling on the universe’s surface at the speed of light, right? Do correct me if I’m wrong.

Prospects for suspected new fundamental particle improve marginally

This image shows a collision event with a photon pair observed by the CMS detector in proton-collision data collected in 2015 with no magnetic field present. The energy deposits of the two photons are represented by the two large green towers. The mass of the di-photon system is between 700 and 800 GeV. The candidates are consistent with what is expected for prompt isolated photons. Caption & credit © 2016 CERN

On December 15 last year, scientists working with the Large Hadron Collider experiment announced that they had found slight whispers of a possible new fundamental particle, and got the entire particle physics community excited. There was good reason: should such a particle’s existence become verified, it would provide physicists some crucial headway in answering questions about the universe that our current knowledge of physics has been remarkably unable to cope with. And on March 17, members of the teams that made the detection presented more details as well as some preliminary analyses at a conference, held every year, in La Thuile, Italy.

The verdict: the case for the hypothesised particle’s existence has got a tad bit stronger. Physicists still don’t know what it could be or if it won’t reveal itself to have been a fluke measurement once more data trickles in by summer this year. At the same time, the bump in the data persists in two sets of measurements logged by two detectors and at different times. In December, the ATLAS detector had presented a stronger case – i.e., a more reliable measurement – than the CMS detector; at La Thuile on March 17, the CMS team also came through with promising numbers.

Because of the stochastic nature of particle physics, the reliability of results is encapsulated by their statistical significance, denoted by σ (sigma). So 3σ would mean the measurements possess a 1-in-350 chance of being a fluke and marks the threshold for considering the readings as evidence. And 5σ would mean the measurements possess a 1-in-3.5 million chance of being a fluke and marks the threshold for claiming a discovery. Additionally, tags called ‘local’ and ‘global’ refer to whether the significance is for a bump exactly at 750 GeV or anywhere in the plot at all.
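
For the curious, here is a quick Python sketch of how a significance in σ maps to a fluke probability. Whether you count one tail of the bell curve or both changes the answer by a factor of two, which is why slightly different numbers – roughly 1-in-350 or 1-in-740 for 3σ – get quoted in different places.

```python
# Converting a significance in σ into the probability of a statistical fluke,
# using the one-sided tail of a normal distribution.
from math import erfc, sqrt

def one_sided_p(sigma: float) -> float:
    """Probability of a background fluctuation at least `sigma` deviations high."""
    return 0.5 * erfc(sigma / sqrt(2))

for sigma in (3.0, 5.0):
    p = one_sided_p(sigma)
    print(f"{sigma}σ: p ≈ {p:.2e} (about 1 in {1/p:,.0f})")
# 3σ -> about 1 in 740 one-sided (about 1 in 370 if both tails are counted);
# 5σ -> about 1 in 3.5 million.
```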

And right now, particle physicists have this scoreboard, as compiled by Alessandro Strumia, an associate professor of physics at Pisa University, who presented it at the conference:

[Table: local and global significances for the 750 GeV excess at ATLAS and CMS, as compiled by Alessandro Strumia]

Pauline Gagnon, a senior research scientist at CERN, explained on her blog, “Two hypotheses were tested, assuming different characteristics for the hypothetical new particle: the ‘spin 0’ case corresponds to a new type of Higgs boson, while ‘spin 2’ denotes a graviton.” A graviton is a speculative particle carrying the force of gravity. The – rather, a – Higgs boson was discovered at the LHC in July 2012 and verified in January 2013. This was during the collider’s first run, when it accelerated two beams of protons to 4 TeV (1,000 GeV = 1 TeV) each and then smashed them together. The second run kicked off, following upgrades to the collider and detectors during 2014, with a beam energy of 6.5 TeV.

Although none of the significances are as good as they’d have to be for there to be a new ‘champagne bottle boson’ moment (alternatively: another summertime hit), it’s encouraging that the data behind them has shown up over multiple data-taking periods and isn’t failing repeated scrutiny. More presentations by physicists from ATLAS and CMS at the conference, which concludes on March 19, are expected to provide clues about other anomalous bumps in the data that could be related to the one at 750 GeV. If theoretical physicists can make such connections, they will be much better placed to zero in on what could be producing the excess photons.

But even more than new analyses gleaned from old data, physicists will be looking forward to the LHC waking up from its siesta in the first week of May, and producing results that could become available as early as June. Should the data still continue to hold up – and the 5σ local significance barrier be breached – then physicists will have just what they need to start a new chapter in the study of fundamental physics just as the previous one was closed by the Higgs boson’s discovery in 2012.

For reasons both technical and otherwise, such a chapter has its work already cut out for it. The Standard Model of particle physics – a theory unifying the behaviours of different species of particles, and one that requires the Higgs boson’s existence – is flawed despite its many successes. Therefore, physicists have been, and are, looking for ways to ‘break’ the model by finding something it doesn’t have room for. Both the graviton and another Higgs boson are such things, although there are other contenders as well.

The Wire
March 19, 2016