The Symmetry Incarnations – Part I

Symmetry in nature is a sign of unperturbedness. It means nothing has interfered with a natural process, and that its effects at each step are simply scaled-up or scaled-down versions of each other. For this reason, symmetry is aesthetically pleasing, and often beautiful. Consider, for instance, faces. Symmetry of facial features about the central vertical axis is often read as beauty – not just by humans but also by monkeys.

However, this is just an example of one of the many forms of symmetry’s manifestation. When it involves geometric features, it’s a case of geometrical symmetry. When a process occurs similarly both forward and backward in time, it is temporal symmetry. If two entities that don’t seem geometrically congruent at first sight rotate, move or scale with similar effects on their forms, it is transformational symmetry. A similar definition applies to all theoretical models, musical progressions, knowledge, and many other fields besides.

Symmetry-breaking

One of the first (postulated) instances of symmetry is said to have occurred during the Big Bang, when the observable universe was born. A sea of particles was perturbed 13.75 billion years ago by a high-temperature event, setting up ripples in their system, eventually breaking their distribution in such a way that some particles got mass, some charge, some a spin, some all of them, and some none of them. In physics, this event is called spontaneous, or electroweak, symmetry-breaking. Because of the asymmetric properties of the resultant particles, matter as we know it was conceived.

Many invented scientific systems exhibit symmetry in that they allow for the conception of symmetry in the things they make possible. A good example is mathematics – yes, mathematics! On the real-number line, 0 marks the median. On either side of 0, 1 and -1 are equidistant from 0, 5,000 and -5,000 are equidistant from 0; possibly, ∞ and -∞ are equidistant from 0. Numerically speaking, 1 marks the same amount of something that -1 marks on the other side of 0. Not just that: functions built on this system – the even ones, at least – also behave symmetrically on either side of 0.
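To make that concrete, here's a minimal sketch in Python (the particular function is my own choice, picked only because it is even) confirming that such a function takes identical values on either side of 0:

```python
import numpy as np

# An even function satisfies f(x) == f(-x), i.e. it is mirror-symmetric about 0.
def f(x):
    return x**2 + np.cos(x)   # a sum of even functions, itself even

x = np.linspace(0.1, 10, 100)
assert np.allclose(f(x), f(-x))   # identical values on either side of 0
print("f behaves symmetrically about 0 over the sampled range")
```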

To many people, symmetry evokes an image of an object that, when cut in half along a specific axis, results in two objects that are mirror-images of each other. Cover one side of your face and place the other side against a mirror, and what a person hopes to see is the other side of the face – despite it being a reflection (interestingly, this technique was used by neuroscientist V.S. Ramachandran to “cure” the pain of amputees when they tried to move a limb that wasn’t there). Like this, there are symmetric tables, chairs, bottles, houses, trees (although uncommon), basic geometric shapes, etc.

A demonstration of V.S. Ramachandran’s mirror-technique

Natural symmetry

Symmetry at its best, however, is observed in nature. Consider germination: when a seed grows into a small plant and then into a tree, the seed doesn’t experiment with designs. The plant is not designed differently from the small tree, and the small tree is not designed differently from the big tree. If a leaf is given to sprout from the mineral-richest node on the stem then it will; if a branch is given to sprout from the mineral-richest node on the trunk then it will. So, is mineral-deposition in the arbor symmetric? It should be if its transportation out of the soil and into the tree is radially symmetric. And so forth…

At times, repeated gusts of wind may push the tree to lean one way or another, shielding the leaves from the force and keeping them from shedding off. The symmetry is then broken, but no matter. The sprouting of branches from branches, and branches from those branches, and leaves from those branches all follow the same pattern. This tendency to display an internal symmetry is characterized as fractalization. A well-known example of a fractal geometry is the Mandelbrot set, shown below.

If you want to interact with a Mandelbrot set, check out this magnificent visualization by Paul Neave. You can keep zooming in, but at each step, you’ll only see more and more Mandelbrot sets. Unfortunately, this set is one of a few exceptional sets that are geometric fractals.
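If the visualization ever goes offline, a bare-bones escape-time rendering of the set takes only a few lines of Python (the grid size and iteration cap below are arbitrary choices):

```python
import numpy as np

def mandelbrot(width=80, height=40, max_iter=50):
    # Sample the region of the complex plane that contains the set.
    xs = np.linspace(-2.0, 1.0, width)
    ys = np.linspace(-1.5, 1.5, height)
    rows = []
    for y in ys:
        row = ""
        for x in xs:
            c = complex(x, y)
            z = 0j
            for _ in range(max_iter):
                z = z * z + c          # the defining iteration
                if abs(z) > 2:         # escaped: c is not in the set
                    row += " "
                    break
            else:
                row += "*"             # never escaped: c is (probably) in the set
        rows.append(row)
    return "\n".join(rows)

print(mandelbrot())
```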

Meta-geometry & Mulliken symbols

Now, it seems like geometric symmetry is the most ubiquitous and accessible example to us. Let’s take it one step further and look at the “meta-geometry” at play when one symmetrical shape is given an extra dimension. For instance, a circle exists in two dimensions; its three-dimensional correspondent is the sphere. Through such an up-scaling, we’re ensuring that all the properties of a circle in two dimensions stay intact in three dimensions, and then we’re observing what the three-dimensional shape is.

A circle, thus, becomes a sphere; a square becomes a cube; a triangle becomes a tetrahedron (For those interested in higher-order geometry, the tesseract, or hypercube, may be of special interest!). In each case, the 3D shape is said to have been generated by a 2D shape, and each 2D shape is said to be the degenerate of the 3D shape. Further, such a relationship holds between corresponding shapes across many dimensions, with doubly and triply degenerate surfaces also having been defined.

The tesseract (a.k.a. hypercube)
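To make the dimension-raising above a little more concrete, here's a small Python sketch (purely illustrative) that lists the vertices of the unit square, cube and tesseract by adding one coordinate at a time:

```python
from itertools import product

def hypercube_vertices(dim):
    """Vertices of the unit hypercube in `dim` dimensions."""
    return list(product((0, 1), repeat=dim))

for dim, name in [(2, "square"), (3, "cube"), (4, "tesseract")]:
    verts = hypercube_vertices(dim)
    print(f"{name}: {len(verts)} vertices, e.g. {verts[:2]} ...")
```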

Obviously, there are different kinds of degeneracy, 10 of which the physicist Robert S. Mulliken identified and laid out. These symbols are important because each one defines a degree of freedom that nature possesses while creating entities, and this includes symmetrical entities as well. In other words, if a natural phenomenon is symmetrical in n dimensions, then the only way it can be symmetrical in n+1 dimensions also is by transforming through one or many of the degrees of freedom defined by Mulliken.

Robert S. Mulliken (1896-1986)

Apart from regulating the perpetuation of symmetry across dimensions, the Mulliken symbols also hint at nature wanting to keep things simple and straightforward. The symbols don’t behave differently for processes moving in different directions, through different dimensions, in different time-periods or in the presence of other objects, etc. The preservation of symmetry by nature is not a coincidental design; rather, it’s very well-defined.

Anastomosis

Now, if that’s the case – if symmetry is held desirable by nature, if it is not a haphazard occurrence but one that is well orchestrated if given a chance to be – why don’t we see symmetry everywhere? Why is natural symmetry broken? Is all of the asymmetry that we’re seeing today the consequence of that electroweak symmetry-breaking phenomenon? It can’t be because natural symmetry is still prevalent. Is it then implied that what symmetry we’re observing today exists in the “loopholes” of that symmetry-breaking? Or is it all part of the natural order of things, a restoration of past capabilities?

One of the earliest symptoms of symmetry-breaking was the appearance of the Higgs mechanism, which gave mass to some particles but not to others. The hunt for its residual particle, the Higgs boson, was spearheaded by the Large Hadron Collider (LHC) at CERN.

The last point – of natural order – is allegorical with, as well as is exemplified by, a geological process called anastomosis. This property, commonly of quartz crystals in metamorphic regions of Earth’s crust, allows for mineral veins to form that lead to shearing stresses between layers of rock, resulting in fracturing and faulting. Philosophically speaking, geological anastomosis allows for the displacement of materials from one location and their deposition in another, thereby offsetting large-scale symmetry in favor of the prosperity of microstructures.

Anastomosis, in a general context, is defined as the splitting of a stream of anything only to rejoin sometime later. It sounds really simple but it is a phenomenon exceedingly versatile, if only because it happens in a variety of environments and for an equally large variety of purposes. For example, consider Gilbreath’s conjecture. It states that if the forward difference operator (taking absolute values) is applied repeatedly to the sequence of prime numbers, every resulting sequence starts with 1. To illustrate:

2 3 5 7 11 13 17 19 23 29 … (prime numbers)

Applying the operator once: 1 2 2 4 2 4 2 4 6 … (absolute differences between successive numbers)
Applying the operator twice: 1 0 2 2 2 2 2 2 …
Applying the operator thrice: 1 2 0 0 0 0 0 …
Applying the operator for the fourth time: 1 2 0 0 0 0 0 …

And so forth.
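A few lines of Python reproduce these rows exactly (note that it is the absolute values of the successive differences that are taken each time):

```python
def abs_differences(seq):
    """Forward difference operator with absolute values, as in Gilbreath's conjecture."""
    return [abs(b - a) for a, b in zip(seq, seq[1:])]

row = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]   # the first few primes
for i in range(1, 5):
    row = abs_differences(row)
    print(f"Applying the operator {i} time(s): {row}")
    assert row[0] == 1   # the conjecture: every row starts with 1
```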

If each line of numbers were to be plotted on a graph, moving upwards each time the operator is applied, then a pattern for the zeros emerges, shown below.

This pattern is called that of the stunted trees, as if it were a forest populated by growing trees with clearings that are always well-bounded triangles. The numbers from one sequence to the next are anastomosing, only to come close together after every five lines! Another example is the vein skeleton on a hydrangea leaf. Both the stunted-trees and the hydrangea-vein patterns can be simulated using the simple rule-90 cellular automaton, which uses the exclusive-or (XOR) function.
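Here's a minimal Python sketch of the rule-90 automaton (the width and number of steps are arbitrary): starting from a single live cell, each cell in the next row is the XOR of its two neighbours, and the triangular clearings emerge on their own.

```python
def rule90(width=65, steps=32):
    # Start with a single live cell in the middle of the first row.
    row = [0] * width
    row[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if cell else " " for cell in row))
        # Rule 90: each new cell is the XOR of its left and right neighbours
        # (indices wrap around at the edges).
        row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]

rule90()
```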

Nambu-Goldstone bosons

Now, what does this have to do with symmetry, you ask? While anastomosis may not have a direct relation with symmetry and only a tenuous one with fractals, its presence indicates a source of perturbation in the system. Why else would the streamlined flow of something split off and then have the tributaries unify, unless possibly to reach out to richer lands? Either way, anastomosis is a sign of the system acquiring a new degree of freedom. By splitting a stream with x degrees of freedom into two new streams each with x degrees of freedom, there are now more avenues through which change can occur.

Water entrainment in an estuary is an example of a natural asymptote or, in other words, a system’s “yearning” for symmetry

Particle physics simplifies this scenario by assigning a particle to all forces and amounts of energy. Thus, a force is said to be acting when a force-carrying particle is being exchanged between two bodies. Since each degree of freedom also implies a new force acting on the system, it wins itself a particle – actually a class of particles called the Nambu-Goldstone (NG) bosons. Named for Yoichiro Nambu and Jeffrey Goldstone, who hypothesized their existence, the presence of n NG bosons in a system means that, broadly speaking, the system has n degrees of freedom.

Jeffrey Goldstone (L) & Yoichiro Nambu

How and when an NG boson is introduced into a system is not yet a well-understood phenomenon theoretically, let alone experimentally! In fact, it was only recently that a mathematical model was developed by a theoretical physicist at UC Berkeley, Haruki Watanabe, capable of predicting how many degrees of freedom a complex system could have given the presence of a certain number of NG bosons. However, at the most basic level, it is understood that when symmetry breaks, an NG boson is born!

The asymmetry of symmetry

In other words, when asymmetry is introduced in a system, so is a degree of freedom. This seems only intuitive. At the same time, you’d think the converse is also true: that when an asymmetric system is made symmetric, it loses a degree of freedom – but is this always true? I don’t think so because, then, it would violate the third law of thermodynamics (specifically, the Lewis-Randall version of its statement). Therefore, there is an inherent irreversibility, an asymmetry of the system itself: it works fine one way, it doesn’t work fine another – just like the split-off streams, but this time, being unable to reunify properly. Of course, there is the possibility of partial unification: in the case of the hydrangea leaf, symmetry is not restored upon anastomosis but there is, evidently, an asymptotic attempt.

Each piece of a broken mirror-glass reflects an object entirely, shedding all pretensions of continuity. The most intriguing mathematical analogue of this phenomenon is the Banach-Tarski paradox, which, simply put, takes symmetry to another level.

However, it is possible that in some special frames, such as in outer space, where the influence of gravitational forces is weak if not entirely absent, the restoration of symmetry may be complete. Even though the third law of thermodynamics is still applicable here, it comes into effect only with the transfer of energy into or out of the system. In the absence of gravity (and, thus, friction), and other retarding factors, such as distribution of minerals in the soil for acquisition, etc., symmetry may be broken and reestablished without any transfer of energy.

The simplest example of this is of a water droplet floating around. If a small globule of water breaks away from a bigger one, the bigger one becomes spherical quickly; when the seditious droplet joins with another globule, that globule also reestablishes its spherical shape. Thermodynamically speaking, there is mass transfer, but at (almost) 100% efficiency, resulting in no additional degrees of freedom. Also, the force at play that establishes sphericality is surface tension, through which a water body seeks to occupy the shape with the lowest surface area for its volume (notice how the shape is incidentally also the one with the most axes of symmetry, or, put another way, no redundant degrees of freedom? Creating such spheres is hard!).
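A quick numerical check in Python (comparing a sphere and a cube of the same, arbitrary volume) shows why surface tension favours the sphere:

```python
import math

volume = 1.0  # arbitrary fixed volume, in any consistent units

# Sphere of that volume: V = (4/3)*pi*r^3  =>  r = (3V / 4pi)^(1/3)
r = (3 * volume / (4 * math.pi)) ** (1 / 3)
sphere_area = 4 * math.pi * r ** 2

# Cube of the same volume: side = V^(1/3), area = 6*side^2
side = volume ** (1 / 3)
cube_area = 6 * side ** 2

print(f"sphere area: {sphere_area:.3f}, cube area: {cube_area:.3f}")
# The sphere's surface area (~4.836) beats the cube's (6.0),
# which is why a free-floating droplet relaxes into a sphere.
```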

A godless, omnipotent impetus

Perhaps the explanation of the roles symmetry assumes seems regressive: every consequence of it is no consequence but itself all over again (self-symmetry – there, it happened again). This only seems like a natural consequence of anything that is… well, naturally conceived. Why would nature deviate from itself? Nature, it seems, isn’t a deity in that it doesn’t create. It only recreates itself with different resources, lending itself and its characteristics to different forms.

A mountain will be a mountain to its smallest constituents, and an electron will be an electron no matter how many of them you bring together at a location. But put together mountains and you have ranges, sub-surface tectonic consequences, a reshaping of volcanic activity because of changes in the crust’s thickness, and a long-lasting alteration of wind and irrigation patterns. Bring together an unusual number of electrons to make up a high-density charge, and you have a high-temperature, high-voltage volume from which violent, permeating discharges of particles could occur – i.e., lightning. Why should stars, music, light, radioactivity, politics, manufacturing or knowledge be any different?

With this concludes the introduction to symmetry. Yes, there is more, much more…

xkcd #849

When must science give way to religion?

When I saw an article titled ‘Sometimes science must give way to religion‘ in Nature on August 22, 2012, by Daniel Sarewitz, I had to read it. I am agnostic, and I try as much as I can to keep from attempting to proselytize anyone – through argument or reason (although I often fail at controlling myself). However, titled as it was, I had to read the piece, especially since it’d appeared in a publication I subscribe to for their hard-hitting science news, which I’ve always approached as Dawkins might: godlessly.

First mistake.

Dr. Daniel Sarewitz

At first, if anything, I hoped the article would treat the entity known as God as simply an encapsulation of the unknown rather than in the form of an icon or elemental to be worshiped. However, the lead paragraph was itself a disappointment – the article was going to be about something else, I understood.

Visitors to the Angkor temples in Cambodia can find themselves overwhelmed with awe. When I visited the temples last month, I found myself pondering the Higgs boson — and the similarities between religion and science.

The awe is architectural. When pilgrims visit a temple built like the Angkor, the same quantum of awe hits them as it does an architect who has entered a Pritzker-prize winning building. But then, this sort of “reasoning”, upon closer observation or just an extra second of clear thought, is simply nitpicking. It implies that I’m just pissed that Nature decided to publish an article and disappoint ME. So, I continued to read on.

Until I stumbled upon this:

If you find the idea of a cosmic molasses that imparts mass to invisible elementary particles more convincing than a sea of milk that imparts immortality to the Hindu gods, then surely it’s not because one image is inherently more credible and more ‘scientific’ than the other. Both images sound a bit ridiculous. But people raised to believe that physicists are more reliable than Hindu priests will prefer molasses to milk. For those who cannot follow the mathematics, belief in the Higgs is an act of faith, not of rationality.

For a long time, I have understood that science and religion have a lot in common: they’re both frameworks that are understood through some supposedly indisputable facts, the nuclear constituents of the experience born from believing in a world reality that we think is subject to the framework. Yes, circular logic, but how are we to escape it? The presence of only one sentient species on the planet means a uniform biology beyond whose involvement any experience is meaningless.

So how are we to judge which framework is more relevant, more meaningful? To me, subjectively, the answer is to be able to predict what will come, what will happen, what will transpire. For religion, these are eschatological and soteriological considerations. As Hinduism has it: “What goes around comes around!” For science, these are statistical and empirical considerations. Most commonly, scientists will try to spot patterns. If one is found, they will go about pinning the pattern’s geometric whims down to mathematical dictations to yield a parametric function. And then, parameters will be pulled out of the future and plugged into the function to deliver a prediction.

Earlier, I would have been dismissive of religion’s “ability” to predict the future. Let’s face it, some of those predictions and prophecies are too far into the future to be of any use whatsoever, and some other claims are so ad hoc that they sound too convenient to be true… but I digress. Earlier, I would’ve been dismissive, but after Sarewitz’s elucidation of the difference between rationality and faith, I am prompted to explain why, to me, it is more science than religion that makes the cut. Granted, both have their shortcomings: empiricism was smashed by Popper, while statistics and unpredictability are conjugate variables.

(One last point on this matter: If Sarewitz seems to suggest that the metaphorical stands in the way of faith evolving into becoming a conclusion of rationalism, then he also suggests lack of knowledge in one field of science merits a rejection of scientific rationality in that field. Consequently, are we to stand in eternal fear of the incomprehensible, blaming its incomprehensibility on its complexity? He seems to have failed to realize that a submission to the simpler must always be a struggle, never a surrender.)

Sarewitz ploughed on, and drew a comparison more germane and, unfortunately, more personal than logical.

By contrast, the Angkor temples demonstrate how religion can offer an authentic personal encounter with the unknown. At Angkor, the genius of a long-vanished civilization, expressed across the centuries through its monuments, allows visitors to connect with things that lie beyond their knowing in a way that no journalistic or popular scientific account of the Higgs boson can. Put another way, if, in a thousand years, someone visited the ruins of the Large Hadron Collider, where the Higgs experiment was conducted, it is doubtful that they would get from the relics of the detectors and super­conducting magnets a sense of the subatomic world that its scientists say it revealed.

Granted, if a physicist were to visit the ruins of the LHC, he may be able to put two and two together at the sight of the large superconducting magnets, striated with the shadows of brittle wires and their cryostatic sleeves, and guess the nature of the prey. At the same time, an engagement with the unknown at the Angkor Wat (since I haven’t been there, I’ll extrapolate my experience at the Thillai Nataraja Temple, Chidambaram, South India, from a few years back) requires a need to engage with the unknown. A pilgrim visiting millennia-old temples will feel the same way a physicist does when he enters the chamber that houses the Tevatron! Are they not both pleasurable?

I think now that what Sarewitz is essentially arguing against is the incomparability of pleasures, of sensations, of entire worlds constructed on the basis of two very different ideologies – requirements, rather – and not against the impracticality of a world ruled by one faith, one science. This aspect came in earlier in this post, too, when I thought I was nitpicking when I surmised Sarewitz’s awe upon entering a massive temple was unique: it may have been unique, but only in sensation, not in subject, I realize now.

(Also, I’m sure we have enough of those unknowns scattered around science; that said, Sarewitz seems to suggest that the memorability of his personal experiences in Cambodia is a basis for the foundation of every reader’s objective truth. It isn’t.)

The author finishes with a mention that he is an atheist. That doesn’t give any value to or take away any value from the article. It could have been so were Sarewitz to pit the two worlds against each other, but in his highlighting their unification – their genesis in the human mind, an entity that continues to evade full explicability – he has left much to be desired, much to be yearned for in the form of clarification in the conflict of science with religion. If someday, we were able to fully explain the working and origin of the human mind, and if we find it has a fully scientific basis, then where will that put religion? And vice versa, too.

Until then, science will not give way for religion, nor religion for science, as both seem equipped to explain.

Getting started on superconductivity

After the hoopla surrounding and attention on particle physics subsided, I realized that I’d been riding a speeding wagon all the time. All I’d done was use the lead-up (the search for the Higgs boson) and the climax itself to teach myself something. Now, it’s left me really excited! Learning about particle physics, I’ve come to understand, is not a single-track course: all the way from making theoretical predictions to having them experimentally verified, particle physics is an amalgamation of far-reaching advancements in a host of other subjects.

One such is superconductivity. Philosophically, it’s a state of existence so far removed from its naturally occurring one that it’s a veritable “freak”. It is common knowledge that everything that’s naturally occurring is equipped to resist change that energizes, to return whenever possible to a state of lower energy. Symmetry and surface tension are great examples of this tendency. Superconductivity, on the other hand, is a system ceasing to resist the passage of an electric current through it. As a phenomenon that as yet doesn’t manifest in naturally occurring substances, I can’t really opine on its phenomenological “naturalness”.

In particle physics, superconductivity plays a significant role in building powerful particle accelerators. In the presence of a magnetic field, a charged particle moves in a curved trajectory through it because of the Lorentz force acting on it; this fact is used to guide the protons in the Large Hadron Collider (LHC) at CERN around a ring 27 km long. The bending magnets only steer the protons; electric fields in radio-frequency cavities give them a kick on each pass, so that every “swing” around the ring happens faster than the last, eventually bringing the particles to very nearly the speed of light.
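As a rough back-of-the-envelope sketch in Python – the 7 TeV energy and the roughly 8.3 T dipole field are assumed, commonly quoted figures, not taken from this post – the bending radius follows from balancing the Lorentz force against the centripetal force, r = p/(qB):

```python
# Bending radius of an ultra-relativistic proton in a dipole field: r = p / (q * B).
# The 7 TeV energy and ~8.3 T field are assumed values, used for illustration only.
e = 1.602e-19          # elementary charge, C
c = 3.0e8              # speed of light, m/s
E = 7e12 * e           # 7 TeV beam energy, in joules
B = 8.3                # assumed dipole field, tesla

p = E / c              # for E >> rest-mass energy, p ≈ E/c
r = p / (e * B)        # bending radius, metres
print(f"bending radius ≈ {r/1000:.1f} km")   # ≈ 2.8 km (the 27 km ring also has straight sections)
```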

A set of superconducting quadrupole-electromagnets installed at the LHC with the cryogenic cooling system visible in the background

In order to generate these extremely powerful magnetic fields – powerful because of the minuteness of each charge and the velocity required to be achieved – superconducting magnets are used; the LHC’s dipoles produce fields of about 8.3 T (to compare: the earth’s magnetic field is 25-65 μT, more than 100,000-times weaker)! Furthermore, the strength of the magnetic field is ramped up in step with the beam energy, to keep the particles from being swung off into the inner wall of the collider at any point!

To understand the role the phenomenon of superconductivity plays in building these magnets, let’s understand how electromagnets work. In a standard iron-core electromagnet, insulated wire is wound around an iron cylinder, and when a current is passed through the wire, a magnetic field is generated around the cross-section of the wire. Because of the coiling, though, the centre of the magnetic field passes through the axis of the cylinder, whose magnetic permeability magnifies the field by a factor of thousands, itself becoming magnetic.

When the current is turned off, the magnetic field instantaneously disappears. When the number of coils is increased, the strength of the magnetic field increases. When the strength of the current is increased, the strength of the magnetic field increases. However, beyond a point, the heat dissipated due to the wire’s electric resistance reduces the amount of current flowing through it, consequently resulting in a weakening of the core’s magnetic field over time.
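The scaling with the number of turns and with the current can be seen from the textbook formula for a long solenoid, B = μ0·μr·n·I; a simplified sketch in Python (the numbers below are arbitrary):

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def solenoid_field(turns_per_metre, current_amps, relative_permeability=1.0):
    """Field inside a long solenoid: B = mu_0 * mu_r * n * I (idealized model)."""
    return MU_0 * relative_permeability * turns_per_metre * current_amps

# Doubling either the winding density or the current doubles the field;
# an iron core (mu_r in the thousands) multiplies it further.
print(solenoid_field(1000, 10))           # air core: ~0.0126 T
print(solenoid_field(1000, 10, 4000))     # iron core: ~50 T on paper (ignores saturation)
```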

It is Ohm’s law that establishes proportionality between voltage (V) and electric current (I), calling the proportionality-constant the material’s electrical resistance: R = V/I. To overcome heating due to resistance, resistance itself must be brought down to zero. According to Ohm’s law, this can be done either by passing a ridiculously large current through the wire or by bringing the voltage across its ends down to zero. However, performing either of these changes on conventional conductors is impossible: how does one quickly pass a large volume of water through a pipe across which the pressure difference is minuscule?!
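The heating itself is just Joule's law, P = I²R: the dissipated power grows with the square of the current, and only a vanishing resistance removes it entirely. A minimal illustration (arbitrary numbers):

```python
def dissipated_power(current_amps, resistance_ohms):
    """Joule heating: P = I^2 * R."""
    return current_amps ** 2 * resistance_ohms

for R in (1.0, 0.1, 0.0):
    # At 10 kA, even a tenth of an ohm dissipates 10 MW; only R = 0 removes the heat entirely.
    print(f"R = {R} ohm -> P = {dissipated_power(1e4, R):.1e} W")
```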

Heike Kamerlingh Onnes

The solution to this unique problem, therefore, lay in a new class of materials that humankind had to prepare, a class of materials that could “instigate” an alternate form of electrical conduction such that an electrical current could pass through it in the absence of a voltage difference. In other words, the material should be able to carry large amounts of current without offering up any resistance to it. This class of materials came to be known as superconductors – after Heike Kamerlingh Onnes discovered the phenomenon in 1911.

In a conducting material, the electrons that essentially effect the flow of electric current could be thought of as a charged fluid flowing through and around an ionic 3D grid, an arrangement of positively charged nuclei that all together make up the crystal lattice. When a voltage-drop is established, the fluid begins to get excited and moves around, an action called conducting. However, the electrons constantly collide with the ions. The ions, then, absorb some of the energy of the current, start vibrating, and gradually dissipate it as heat. This manifests as the resistance. In a superconductor, however, the fluid exists as a superfluid, and flows such that the electrons never collide into the ions.

In (a classical understanding of) the superfluid state, each electron repels every other electron because of their charge likeness, and attracts the positively charged nuclei. As a result, the nucleus moves very slightly toward the electron, causing an equally slight distortion of the crystal lattice. Because of the newly increased positive-charge density in the vicinity, some more electrons are attracted by the nucleus.

This attraction, which, across the entirety of the lattice, can cause a long-range but weak “draw” of electrons, results in pairs of electrons overcoming their mutual hatred of each other and tending toward one nucleus (or the resultant charge-centre of some nuclei). Effectively, this is a pairing of electrons whose total energy was shown by Leon Cooper in 1956 to be less than the energy of the most energetic electron if it had existed unpaired in the material. Subsequently, these pairs came to be called Cooper pairs, and a fluid composed of Cooper pairs, a superfluid (thermodynamically, a superfluid is defined as a fluid that can flow without dissipating any energy).

Although the sea of electrons in the new superconducting class of materials could condense into a superfluid, the fluid itself can’t be expected to flow naturally. Earlier, the application of an electric current imparted enough energy to all the electrons in the metal (via a voltage difference) to move around and to scatter against nuclei to yield resistance. Now, however, upon Cooper-pairing, the superfluid had to be given an environment in which there’d be no vibrating nuclei. And so: enter cryogenics.

The International Linear Collider – Test Area’s (ILCTA) cryogenic refrigerator room

The thermal energy of a crystal lattice is given by E = kT, where ‘k’ is Boltzmann’s constant and T, the temperature. Demonstrably, to reduce the kinetic energy of all nuclei in the lattice to zero, the crystal itself would have to be cooled to absolute zero (0 kelvin); in practice, this is approached using cryogenic cooling techniques. For instance, at the LHC, the superconducting magnets are electromagnets wherein the coiled wire is made of a superconducting material. When cooled to a really low temperature using a two-stage heat-exchanger composed of liquid helium jacketed with liquid nitrogen, the wires can carry extremely large amounts of current to generate very intense magnetic fields.
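A quick sketch of E = kT at a few relevant temperatures shows how sharply cooling suppresses the lattice's thermal energy (the 1.9 K figure is the commonly quoted operating temperature of the LHC's superfluid-helium-cooled magnets, not something stated in this post):

```python
K_B = 1.380649e-23   # Boltzmann's constant, J/K
EV = 1.602177e-19    # joules per electron-volt

# E = k*T at a few temperatures, expressed in milli-electron-volts.
for label, T in [("room temperature", 300), ("liquid nitrogen", 77),
                 ("liquid helium", 4.2), ("LHC magnets (assumed)", 1.9)]:
    print(f"{label:22s} T = {T:6.1f} K  ->  kT ≈ {K_B * T / EV * 1000:.2f} meV")
```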

At the same time, however, if the energy of the superfluid itself surpassed the thermal energy of the lattice, then it could flow without the lattice having to be cooled down. Because the thermal energy is different for different crystals at different ambient temperatures, the challenge now lies in identifying materials that could permit superconductivity at temperatures approaching room-temperature. Now that would be (even more) exciting!

P.S. A lot of the related topics have not been covered in this post, such as the Meissner effect, electron-phonon interactions, properties of cuprates and lanthanides, and Mott insulators. They will be taken up in the future as they’re topics that require in-depth detailing, quite unlike this post, which has been constructed as a superficial introduction only.

Ramblings on partons

When matter and anti-matter meet, they annihilate each other in a “flash” of energy. Usually, this release of energy is in the form of high-energy photons, or gamma rays, which are then detected, analysed, and interpreted to understand more of the collision’s other properties. In nature, however, matter/anti-matter collisions are ultra-rare if not altogether non-existent because of the unavailability of anti-matter.

Such annihilation processes are important not just to supplement our understanding of particle physics but also because they play a central role in the design of hadron colliders. Such colliders use strongly interacting particles (a superficial definition of hadrons), such as protons and neutrons, and smash them into each other. The target particles, depending on experimental necessities, may be stationary – in which case the collider is said to employ a fixed target – or moving. The Large Hadron Collider (LHC) is the world’s largest and most powerful hadron collider, and it uses moving targets, i.e., both the incident and target hadrons are moving toward each other.

Currently, it is known that a hadronic collision is explicable in terms of their constituent particles, quarks and gluons. Quarks are the snowcloned fundamental building blocks of all matter, and gluons are particles that allow two quarks to “stick” together, behaving like glue. More specifically, gluons mediate the residual strong force (where the strong force itself is one of the four fundamental forces of nature): in other words, quarks interact by exchanging gluons.

Parton distribution functions

Earlier, before the quark-gluon model was known, a hadronic collision was broken down in terms of hypothetical particles called partons. The idea was suggested by Richard Feynman in 1969. At very high energies – such as the ones at which collisions occur at the LHC – equations governing the parton model, which approximates the hadrons as presenting point-targets, evolve into parton-distribution functions (PDFs). PDFs, in turn, allow for the prediction of the composition of the debris resulting from the collisions. Theoretical calculations pertaining to different collision environments and outcomes are used to derive different PDFs for each process, which are then used by technicians to design hadron-colliders accordingly.

(If you can work on FORTRAN, here are some PDFs to work with.)

Once the quark-gluon model was in place, there were no significant deviations from the parton model. At the same time, because quarks have a corresponding anti-matter “form”, anti-quarks, a model had to be developed that could study quark/anti-quark collisions during the course of a hadronic collision, especially one that could factor in the production of pairs of leptons during such collisions. Such a model was developed by Sidney Drell and Tung-Mow Yan in 1970, and was called the Drell-Yan (DY) process, and further complemented by a phenomenon called Bjorken scaling (Bsc).

(In Bsc, when the energy of an incoming lepton is sufficiently high during a collision process, the cross-section available for collision becomes independent of the lepton’s momentum. In other words, the lepton – say, an electron – at very high energies interacts with a hadron not as if the latter were a single particle but as if it were composed of point-like targets called partons.)

In a DY process, a quark from one hadron collides with an anti-quark from another hadron; they annihilate each other to produce a virtual photon (γ*). The γ* then decays to form a dilepton pair, which, if we were to treat it as one entity instead of as a pair, could be said to have a mass M.

Now, if M is large, then Heisenberg’s uncertainty principle tells us that the time of interaction between the quark/anti-quark pair should have been small, essentially limiting its interaction with any other partons in the colliding hadrons. Similarly, in a timeframe that is long in comparison to the timescale of the annihilation, the other spectator-partons would rearrange themselves into resultant hadrons. However, in most cases, the dilepton is detected and momentum-analysed, not the properties of the outgoing hadrons. The DY process results in the production of dilepton pairs at finite energies, but these energies are very closely spaced, resulting in an energy-band, or continuum, being defined in the ambit of which a dilepton-pair might be produced.
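The mass M here is the invariant mass of the dilepton pair, M² = (E₁+E₂)² − |p₁+p₂|² in natural units; a minimal Python sketch with made-up four-momenta:

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass of two particles from four-momenta (E, px, py, pz) in GeV (c = 1)."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in range(1, 4))
    return math.sqrt(max(E**2 - (px**2 + py**2 + pz**2), 0.0))

# Two illustrative (made-up) lepton four-momenta, in GeV:
lepton1 = (45.0, 20.0, 30.0, 25.0)
lepton2 = (50.0, -22.0, -28.0, -27.0)
print(f"M ≈ {invariant_mass(lepton1, lepton2):.1f} GeV")
```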

In quantum chromodynamics and quark-parton transitions

Quark/anti-quark annihilation is of special significance in quantum chromodynamics (QCD), which studies the colour-force, the force between gluons and quarks and anti-quarks, inside hadrons. The strong field that gluons mediate is, in quantum mechanical terms, called the colour field. Unlike in QED (quantum electrodynamics) or classical mechanics, QCD allows for two strange kinds of behaviour from quarks and gluons. The first kind, called confinement, holds that the force between two interacting quarks does not diminish as they are separated. This doesn’t mean that quarks are strongly interacting at large distances! No, it means that once two quarks have come together, no amount of energy can take them apart. The second kind, called asymptotic freedom (AF), holds that quarks and gluons interact weakly at high energies.

(If you think about it, colour-confinement implies that gluons can emit gluons, and as the separation between two quarks increases, so does the rate of gluon emission. Axiomatically, as the separation decreases – or, equivalently, as the relative four-momentum squared increases – the force holding quarks together decreases monotonically in strength, leading to asymptotic freedom.)

The definitions for both properties are deeply rooted in experimental ontology: colour-confinement was chosen to explain the consistent failure of free-quark searches, while asymptotic freedom doesn’t yield any phase-transition line between high- and low-energy scales while still describing a property transition between the two scales. Therefore, the DY process seemed well-poised to provide some indirect proof for the experimental validity of QCD if some relation could be found between the collision cross-section and the particles’ colour-charge, and this is just what was done.

The QCD factorization theorem can be read as:

$$F_i(x, \mu) \;=\; \sum_a \int_x^1 \frac{d\xi}{\xi}\, f_a(\xi, \mu)\, \hat{\sigma}_i^a\!\left(\frac{x}{\xi}, \alpha_s(\mu)\right)$$

Here, α_s(μ) is the effective chromodynamic (quark-gluon-quark) coupling at a factorization scale μ. Further, f_a(x, μ) defines the probability of finding a parton a within a nucleon with the Bjorken scaling variable x at the scale μ. Also, $\hat{\sigma}_i^a$ is the hard-scattering cross-section of the electroweak vector boson on the parton. The physical implication is that the nucleonic structure function is derived by the area of overlap between the function describing the probability of finding a parton inside a nucleon and the summa of all functions describing the probabilities of finding all partons within the nucleon.
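To show only the structure of the theorem – a parton distribution convolved with a hard-scattering cross-section – here's a toy numerical sketch in Python; both functional forms are invented purely for illustration and carry no real QCD content:

```python
import numpy as np

# Toy ingredients -- the shapes below are invented for illustration only.
def pdf(xi):
    """A made-up parton distribution f_a(xi), falling off toward xi = 1."""
    return 6.0 * xi * (1.0 - xi) ** 2

def sigma_hat(z):
    """A made-up hard-scattering cross-section, as a function of z = x / xi."""
    return 1.0 / (1.0 + z)

def structure_function(x, n=10000):
    """F(x): convolution integral over xi from x to 1 of (1/xi) * f(xi) * sigma_hat(x/xi)."""
    xi = np.linspace(x, 1.0, n)
    integrand = pdf(xi) * sigma_hat(x / xi) / xi
    # Trapezoid rule, written out by hand.
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(xi)))

for x in (0.01, 0.1, 0.3):
    print(f"x = {x:<4}  F(x) ≈ {structure_function(x):.3f}")
```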

This scaling behaviour enabled by QCD makes possible predictions about future particle phenomenology.

Putting particle physics research to work

In the whole gamut of comments regarding the Higgs boson, there is a depressingly large number decrying the efforts of the ATLAS and CMS collaborations. Why? Because a lot of people think the Large Hadron Collider (LHC) is a yawning waste of time and money, an investment that serves mankind no practical purpose.

Well, here and here are some cases in point that demonstrate the practical good that the LHC has made possible in the material sciences. Another big area of application is medical diagnostics: making the point are one article about hunting for the origin of Alzheimer’s and another about the very similar technologies used in particle accelerators and in medical imaging devices, meteorology, VLSI, large-scale networking, cryogenics, and X-ray spectroscopy.

Moving on to more germane applications: arXiv has reams of papers that discuss the deployment of

… amongst others.

The LHC, above all else, is the brainchild of the European Organization for Nuclear Research, popularly known as CERN. These guys invented the World Wide Web, developed some of the earliest touch-screen devices, and pioneered early high-energy medical imaging techniques.

With experiments like those being conducted at the LHC, it’s easy to forget every other development in such laboratories apart from the discovery of much-celebrated particles. All the applications I’ve linked to in this post were conceived by scientists working with the LHC, if only to argue that everyone – from the man whose tax money pays for these giant labs to the man who uses that money to work in them – is mindful of practical concerns.

Gunning for the goddamned: ATLAS results explained

Here are some of the photos from the CERN webcast yesterday (July 4, Wednesday), with an adjoining explanation of the data presented in each one and what it signifies.

This first image shows the data accumulated post-analysis of the diphoton decay mode of the Higgs boson. In simpler terms, physicists first put together all the data they had that resulted from previously known processes. This constituted what’s called the background. Then, they looked for signs of any particle that seemed to decay into two energetic photons, or gamma rays, in a specific energy window; in this case, 100-160 GeV.

Finally, knowing how the number of events would vary in a scenario without the Higgs boson, a curve was plotted that fit the data perfectly: the number of events at each energy level v. the energy level at which it was tracked. This way, a bump in the curve during measurement would mean there was a particle previously unaccounted for that was causing an excess of diphoton decay events at a particular energy.

This is the plot of the mass of the particle being looked for (x-axis) versus the confidence level with which it has (or has not, depending on how you look at it) been excluded as an event to focus on. The dotted horizontal line, corresponding to μ = 1, marks off a 95% exclusion limit: any events registered above the line can be claimed as having been observed with “more than 95% confidence” (colloquial usage).

Toward the top-right corner of the image are some numbers. 7 TeV and 8 TeV are the values of the total energy going into each collision before and after March, 2012, respectively. The beam energy was driven up to increase the incidence of decay events corresponding to Higgs-boson-like particles, which, given the extremely high energy at which they exist, are viciously short-lived. In experiments that were run between March and July, physicists at CERN reported an increase of almost 25-30% of such events.

The two other numbers indicate the particle accelerator’s integrated luminosity. In particle physics, luminosity is measured as the number of particles passing through a unit of area per second. The integrated luminosity is the same value but measured over a period of time. In the case of the LHC, after the collision energy was ramped up, the luminosity, too, had to be increased: from about 4.7 fb-1 to 5.8 fb-1. You’ll want to Wiki the unit of area called barn. Some lighthearted physics talk there.
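The practical meaning of integrated luminosity is that, multiplied by a process's cross-section, it gives the expected number of events; a one-liner sketch (the 50 fb cross-section is an invented figure):

```python
# Expected event count: N = sigma * integrated_luminosity.
# 1 barn = 1e-24 cm^2, so 1 fb^-1 = 1e39 cm^-2; the 50 fb cross-section is invented.
sigma_fb = 50.0            # hypothetical cross-section, femtobarns
luminosity_fb_inv = 5.8    # integrated luminosity quoted in the post, fb^-1

expected_events = sigma_fb * luminosity_fb_inv
print(f"expected events ≈ {expected_events:.0f}")   # ≈ 290
```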

In this plot, the y-axis on the left shows the chances of error, and the corresponding statistical significance on the right. When the chances of an error stand at 1, the results are not statistically significant at all because every observation is an error! But wait a minute, does that make sense? How can all results be errors? Well, when looking for one particular type of event, any event that is not this event is an error.

Thus, as we move toward the ~125 GeV mark, the number of statistically significant results shoots up drastically. Looking closer, we see two results registered just beyond the 5-sigma mark, where the chances of error are 1 in 3.5 million. This means that if the physicists created just those conditions that resulted in this >5σ (five-sigma) observation 3.5 million times, only once will a random fluctuation play impostor.

Also, notice how much more each additional sigma is worth as the significance increases? Each step – from 1σ to 2σ, from 2σ to 3σ, and so on up to 5σ – shrinks the chance of error by a bigger factor than the step before it. This means that the closer physicists get to a discovery, the exponentially more precise they must be!
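The sigma levels map to chances of error through the tail of the normal distribution; a quick check in Python (using the one-sided tail, which is the convention behind the "1 in 3.5 million" figure):

```python
from scipy.stats import norm

# One-sided tail probability of the standard normal distribution for each sigma level.
for sigma in (1, 2, 3, 4, 4.9, 5):
    p = norm.sf(sigma)            # survival function: P(Z > sigma)
    print(f"{sigma} sigma -> p ≈ {p:.2e}  (about 1 in {1/p:,.0f})")
```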

OK, this is a graph showing the mass-distribution for the four-lepton decay mode, referred to as a channel by those working on the ATLAS and CMS collaborations (because there are separate channels of data-taking for each decay-mode). The plotting parameters are the same as in the first plot in this post except for the scale of the x-axis, which goes all the way from 0 to 250 GeV. Now, between 120 GeV and 130 GeV, there is an excess of events (light blue). Physicists know it is an excess and not on par with expectations because theoretical calculations made after discounting a Higgs-boson-like decay event show that, in that 10 GeV window, only around 5.3 events are to be expected, as opposed to the 13 that turned up.

After the Higgs-boson-like particle, what’s next?

This article, as written by me, appeared in print in The Hindu on July 5, 2012.

The ATLAS (A Toroidal LHC Apparatus) collaboration at CERN has announced the sighting of a Higgs boson-like particle in the energy window of 125.3 ± 0.6 GeV. The observation has been made with a statistical significance of 5 sigma. This means the chances of error in their measurements are 1 in 3.5 million, sufficient to claim a discovery and publish papers detailing the efforts in the hunt.

Rolf-Dieter Heuer, Director General of CERN since 2009, said at the special conference called by CERN in Geneva, “It was a global effort, it is a global effort. It is a global success.” He expressed great optimism and concluded the conference saying this was “only the beginning.”

With this result, collaborations at the Large Hadron Collider (LHC), the atom-smashing machine, have vastly improved on their previous announcement on December 13, 2011, where the chance of an error was 1-in-50 for similar sightings.

A screenshot from the Dec 13, 2011, presentation by Fabiola Gianotti, leader of the ATLAS collaboration, that shows a global statistical significance of 2.3 sigma, which translates to a 1-in-50 chance of the result being erroneous.

Another collaboration, called CMS (Compact Muon Solenoid), announced the mass of the Higgs-like particle with a 4.9 sigma result. While insufficient to claim a discovery, it does indicate only a one-in-two-million chance of error.

Joe Incandela, CMS spokesman, added, “We’re reaching into the fabric of the universe at a level we’ve never done before.”

The LHC will continue to run its experiments so that results revealed on Wednesday can be revalidated before it shuts down at the end of the year for maintenance. Even so, by 2013, scientists, such as Dr. Rahul Sinha, a participant of the Belle Collaboration in Japan, are confident that a conclusive result will be out.

“The LHC has the highest beam energy in the world now. The experiment was designed to yield quick results. With its high luminosity, it quickly narrowed down the energy-ranges. I’m sure that by the end of the year, we will have a definite word on the Higgs boson’s properties,” he said.

However, even though the Standard Model, the framework of all fundamental particles and the dominating explanatory model in physics today, predicted the particle’s existence, slight deviations have been observed in terms of the particle’s predicted mass. Even more: zeroing in on the mass of the Higgs-like particle doesn’t mean the model is complete when, in fact, it is far from complete.

While an answer to the question of mass formation took 50 years to be reached, physicists are yet to understand many phenomena. For instance, why aren’t the four fundamental forces of nature equally strong?

The weak, nuclear, electromagnetic, and gravitational forces were born in the first few moments succeeding the Big Bang 13.75 billion years ago. Of these, the weak force is, for some reason, almost 1 billion, trillion, trillion times stronger than the gravitational force! Called the hierarchy problem, it evades a Standard Model explanation.

In response, many theories were proposed. One, called supersymmetry (SUSY), proposed that all fermions, which are particles with half-integer spin, were paired with a corresponding boson, or particles with integer spin. Particle spin is the term quantum mechanics attributes to the particle’s rotation around an axis.

Technicolor was the second framework. It rejects the Higgs mechanism, a process through which the Higgs field couples more strongly with some particles and more weakly with others, making them heavier and lighter, respectively.

Instead, it proposes a new form of interaction with initially-massless fermions. The short-lived particles required to certify this framework are accessible at the LHC. Now, with a Higgs-like particle having been spotted with a significant confidence level, the future of Technicolor seems uncertain.

However, “significant constraints” have been imposed on the validity of these and such theories, labeled New Physics, according to Prof. M.V.N. Murthy of the Institute of Mathematical Sciences (IMS), whose current research focuses on high-energy physics.

Some other important questions include why there is more matter than antimatter in this universe, why fundamental particles manifest in three generations and not more or fewer, and what the masses of the weakly-interacting neutrinos are. State-of-the-art technology worldwide has helped physicists design experiments to study each of these problems better.

For example, the India-based Neutrino Observatory (INO), under construction in Theni, will house the world’s largest static particle detector to study atmospheric neutrinos. Equipped with its giant iron-calorimeter (ICAL) detector, physicists aim to discover which neutrinos are heavier and which lighter.

The LHC currently operates at the Energy Frontier, with high energy being the defining constraint on experiments. Two other frontiers, Intensity and Cosmic, are also seeing progress. Project X, a proposed proton accelerator at Fermilab in Chicago, Illinois, will push the boundaries of the Intensity Frontier by trying to look for ultra-rare processes. On the Cosmic Frontier, dark matter holds the greatest focus.

Hunt for the Higgs boson: A quick update

And it was good news after all! In an announcement made earlier today at the special conference called by CERN near Geneva, the discovery of a Higgs-boson-like particle was announced by physicists from the ATLAS and CMS collaborations that spearheaded the hunt. I say discovery because the ATLAS team spotted an excess of events near the 125-GeV mark with a statistical significance of 5 sigma. This puts the chances of the observation being a random fluctuation at 1 in 3.5 million, a precision that asserts (almost) certainty.

Fabiola Gianotti announced the preliminary results of the ATLAS detector, as she did in December, while Joe Incandela was her CMS counterpart. The CMS results showed an excess of events around 125 GeV (give or take 0.6 GeV) at 4.9 sigma. While the chances of error in this case are 1 in 2 million, it can’t be claimed a discovery. Even so, physicists from both detectors will be presenting their efforts in the hunt as papers in the coming weeks. I’ll keep an eye out for their appearance on arXiv, and will post links to them.

After the beam energy in the Large Hadron Collider (LHC) was increased from 3.5 TeV/beam to 4 TeV/beam in March, only so many collisions could be conducted until July. As a result, the sample set available for detailed analysis was lower than could be considered sufficient. This is the reason some stress is placed on saying “boson-like” instead of attributing the observations to the boson itself. Before the end of the year, when the LHC will shut down for routine maintenance, however, scientists expect a definite word on the particle being the Higgs boson itself.

(While we’re on the subject: too many crass comments have been posted on the web claiming a religious element in the naming of the particle as the “God particle”. To those for whom this moniker makes sense: know that it doesn’t. When it was first suggested by a physicist, it stood as the “goddamn particle”, which a sensitive publisher corrected to the “God particle”.)

The mass of the boson-like particle seems to deviate slightly from Standard Model (SM) predictions. This does not mean that SM stands invalidated. In point of fact, SM still holds strong because it has been incredibly successful in being able to predict the existence and properties of a host of other particles. One deviation cannot and will not bring it down. At the same time, it’s far from complete, too. What the spotting of a Higgs-boson-like particle in said energy window has done is assure physicists and others worldwide that the predicted mechanism of mass-generation is valid and within the SM ambit.

Last: the CERN announcement was fixed for today not without another reason. The International Conference on High Energy Physics (ICHEP) is scheduled to commence tomorrow in Melbourne. One can definitely expect discussions on the subject of the Higgs mechanism to be held there. Further, other topics await dissection and will have their futures laid out – in terms vague or concrete. So, the excitement in the scientific community is set to continue until July 11, when ICHEP is scheduled to close.

Be sure to stay updated. These are exciting times!

So, is it going to be good news tomorrow?

As the much-anticipated lead-up to the CERN announcement on Wednesday unfolds, the scientific community is rife with many speculations and few rumours. In spite of this deluge, it may be that we could expect a confirmation of the God particle’s existence in the seminar called by physicists working on the Large Hadron Collider (LHC).

The most prominent indication of good news is that five of the six physicists who theorized the Higgs mechanism in a seminal paper in 1964 have been invited to the meeting. The sixth physicist, Robert Brout, passed away in May 2011. Peter Higgs, the man for whom the mass-giving particle is named, has also agreed to attend.

The other indication is much more subtle but just as effective. Dr. Rahul Sinha, a professor of high-energy physics and a participant in the Japanese Belle collaboration, said, “Hints of the Higgs boson have already been spotted in the energy range in which LHC is looking. If it has to be ruled out, four-times as much statistical data should have been gathered to back it up, but this has not been done.”

The energy window which the LHC has been combing through was based on previous searches for the particle at the detector during 2010 and at the Fermilab’s Tevatron before that. While the CERN-based machine is looking for signs of two-photon decay of the notoriously unstable boson, the American legend looked for signs of the boson’s decay into two bottom quarks.

Last year, on December 13, CERN announced in a press conference that the particle had been glimpsed in the vicinity of 127 GeV (GeV, or giga-electron-volt, is used as a measure of particle energy and, by extension of the mass-energy equivalence, its mass).

However, scientists working on the ATLAS detector, which is heading the search, could establish only a statistical significance of 2.3 sigma then, or a 1-in-50 chance of error. To claim a discovery, a 5-sigma result is required, where the chances of errors are one in 3.5 million.

Scientists, including Dr. Sinha and his colleagues, are hoping for a 4-sigma result announcement on Wednesday. If they get it, the foundation stone will have been set for physicists to explore further into the nature of fundamental particles.

Dr. M.V.N. Murthy, who is currently conducting research in high-energy physics at the Institute of Mathematical Sciences (IMS), said, “Knowing the mass of the Higgs boson is the final step in cementing the Standard Model.” The model is a framework of all the fundamental particles and dictates their behaviour. “Once we know the mass of the particle, we can move on and explore the nature of New Physics. It is just around the corner,” he added.