The literature of metaphysics (or, ‘Losing your marbles’)

For a while now, I’ve been intent on explaining stuff from particle physics.

A lot of it is intuitive if you go beyond the mathematics and are ready to look at packets of energy as extremely small marbles. And then, you’ll find that some marbles carry a charge, some the opposite charge, and some no charge at all, and so forth. And then, it’s just a matter of time before you figure out how these properties work with each other (“Like charges repel, unlike charges attract”, etc).

These things are easy to explain. In fact, they’re relatively easy to demonstrate, too, and that’s why there aren’t many people out there wanting to read and understand this kind of stuff. They already get it.

Where particle physics gets really messed up is in the math. Why the math, you might ask, and I wouldn’t say that’s a good question. Given how particle physics is studied experimentally – by smashing those little marbles together at almost the speed of light and then combing through the resulting debris for exotic fallout – math is necessary to explain why a lot of what happens happens the way it does.

This is because the marbles, a.k.a. the particles, also differ in ways that cannot be physically perceived in many circumstances but whose consequences are physical enough. These unobservable differences are pretty neatly encapsulated by mathematics.

It’s like a magician’s sleight of hand. He’ll stick a coin into a pocket in his pants and then pull the same coin out from his mouth. If you’re sitting right there, you’re going to wonder “How did he do that?!” Until you figure it out, it’s magic to you.

Theoretical particle physics, which deals with a lot of particulate math, is like that. Weird particles are going to show up in the experiments. The experimental physicists are going to be at a loss to explain why. The theoretician, in the meantime, is going to work out how the “observable” coin that went into the pocket came out of the mouth.

The math just makes this process easy because it helps put down on paper information about something that may or may not exist. And if it really doesn’t exist, then the math’s going to come up awry.

Math is good… if you get it. There’s definitely going to be a problem learning math the way it’s generally taught in schools: as a subject. We’re brought up to study math, not really to use it to solve problems. There’s not much to study once you go beyond the basic laws, some set theory, geometry, and the fundamentals of calculus. After that, math becomes a tool and a very powerful one at that.

Math becomes a globally recognised way to put down the most abstract of your thoughts, fiddle around with them, see if they make sense logically, and then “learn” them back into your mind whence they came. When you can use math like this, you’ll be ready to tackle complex equations, too, because you’ll know they’re not complex at all. They’re just somebody else’s thoughts in this alpha-numerical language that’s being reinvented continuously.

Consider, for instance, the quantum chromodynamic (QCD) factorisation theorem from theoretical particle physics:
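In one schematic form (the exact notation varies from textbook to textbook, so take this as an illustrative rendering rather than the definitive one):

F_2(x, \mu) = \sum_i \int_x^1 \frac{d\xi}{\xi}\, f_i(\xi, \mu)\, C_i\!\left(\frac{x}{\xi}, \mu\right)

Here, f_i(ξ, µ) is the parton distribution function – the probability-like object for finding a parton of species i carrying a momentum fraction ξ of the nucleon – and C_i is the calculable short-distance piece it gets folded together with.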

This hulking beast of an equation implies that *deep breath*, at a given scale (µ) and a value of the Bjorken scaling variable (x), the nucleonic structure function is obtained from the overlap – in effect, a convolution – between the function describing the probability of finding a parton inside a nucleon (f(x, µ)) and its hard-scattering counterpart, summed (Σ) over all the partons within the nucleon *phew*.

In other words, it only describes how a fast incoming particle collides with a target particle based on how probable certain outcomes are!

The way I see it, math is the literature of metaphysics.

For instance, when we’re tackling particle physics and the many unobservables that come with it, there’s going to be a lot of creativity, imagination and thinking involved. There’s no way we’d have had as much order as we do in the “zoo of particles” today without some ingenious ideas from some great physicists – or, the way I see it, great philosophers.

For instance, the American philosopher Murray Gell-Mann and the Israeli philosopher Yuval Ne’eman independently observed in the 1960s that their peers were overlooking an inherent symmetry among particles. Gell-Mann’s solution, called the Eightfold Way, demonstrated how different kinds of mesons, a type of particle, were related to each other in simple ways if you laid them out in a hexagon.

With a little creativity and some geometry, Gell-Mann and Ne’eman did away with a complex mechanism of interaction and substituted a simpler one. The meson octet is well known today because it brought to light a natural symmetry in the universe: looking at the hexagonal arrangement, we can see it’s symmetric across the three diagonals that connect directly opposite vertices.

The study of these symmetries, and of the physics that could lie behind them, gave birth to the quark model and won Gell-Mann the 1969 Nobel Prize in physics.

What we perceive as philosophy, mathematics and science today were all simply subsumed under natural philosophy earlier. Before the advent of instruments with which to interact with the world, it was easier, and much more logical, for humans to observe what was happening around them and find patterns. This involved the use of our senses, and this school of philosophy is called empiricism.

At the time, as today, the best way to tell if one process was related to another was to find common patterns. As more natural phenomena were observed and more patterns came to light, classifications became more organised. And as these classifications grew in size and variety, something had to be done so that philosophers could communicate their observations easily.

And so, numbers and shapes were used first – they’re the simplest level of abstraction; let’s call it “0”. Then, where they knew numbers were involved but not what their values were, variables were brought in: “1”. When many variables were involved, and some relationships between variables came to light, equations were used: “2”. When a group of equations was observed to be able to explain many different phenomena, they became classifiable into fields: “3”. When a larger field could be broken down into smaller, simpler ones, derivatives were born: “4”. When a lot of smaller fields could be grouped in such a way that they could work together, we got systems: “5”. And so on…

Today, we know that there are multitudes of systems – an ecosystem of systems! The construction of a building is a system, the working of a telescope is a system, the breaking of a chair is a system, and the constipation of bowels is a system. All of them are governed by a unifying natural philosophy, what we facilely know today as the laws of nature.

Because of the immense diversification born of centuries of study along the same principles, different philosophers like to focus on different systems so that, in one lifetime, they can learn one, then work with it, and then use it to craft contributions. This trend of specialising gave birth to mathematicians, physicists, chemists, engineers, etc.*

But the logical framework we use to think about our chosen field, the set of tools we use to communicate our thoughts to others within and without the field, is one: mathematics. And as the body of all that thought-literature expands, we get different mathematical tools to work with.

Seen this way – and I do see it this way – I’m not reluctant to use equations in what I write. There is no surer way than math to explain what someone was really thinking when they came up with something. Looking at an equation, you can tell which fields it addresses, and by extension “where the author is coming from”.

Unfortunately, the more popular perception of equations is way uglier, leading many a reader to simply shut the browser-tab if it’s thrown up an equation as part of an answer. Didn’t Hawking, after all, famously conclude that each equation in a book halved the book’s sales?

That belief has to change, and I’m going to do my bit one equation at a time… It could take a while.

(*Here, an instigatory statement by philosopher Paul Feyerabend comes to mind:

“The withdrawal of philosophy into a “professional” shell of its own has had disastrous consequences. The younger generation of physicists, the Feynmans, the Schwingers, etc., may be very bright; they may be more intelligent than their predecessors, than Bohr, Einstein, Schrodinger, Boltzmann, Mach and so on. But they are uncivilized savages, they lack in philosophical depth — and this is the fault of the very same idea of professionalism which you are now defending.”)

(This blog post first appeared at The Copernican on December 27, 2013.)

Another window on ‘new physics’ closes

This reconstructed image of two high-energy protons colliding at the LHC shows a B_s meson (blue) being produced and then decaying into two muons (pink), about 50 mm from the collision point. Image: LHCb/CERN

The Standard Model of particle physics is a theory that has been pieced together over the last 40 years after careful experiments. It accurately predicts the behaviour of various subatomic particles across a range of situations. Even so, it’s not complete: it can explain neither gravity nor anything about the so-called dark universe.

Physicists searching for a theory that can fill these gaps have to pierce through the Standard Model. That can mean finding some inconsistent mathematics or detecting something that can’t be explained by the Model – like particles ‘breaking down’, i.e. decaying, into smaller ones at a rate greater than the Model allows.

The Large Hadron Collider (LHC) at CERN, on the France-Switzerland border, produces the particles, and particle detectors straddling the collider are programmed to look for aberrations in their decay, among other things. One detector in particular, called LHCb, looks for signs of a particle called the B_s (read “B sub s”) meson decaying into two smaller particles called muons.

On July 19, physicists from the LHCb experiment confirmed at an ongoing conference in Stockholm that the B_s meson decays to two muons at a rate consistent with the Model’s predictions (full paper here). The implication is that one more window through which physicists could have had a peek at the physics beyond the Model is now shut.

The B_s meson

This meson has been studied for around 25 years, and its decay rate to two muons has been predicted to be about thrice in every billion decays – 3.56 ± 0.29 per billion, to be exact. The physicists’ measurements from the LHCb showed that it was happening about 2.9 times per billion. A team working with another detector, the CMS, reported it happens about thrice every billion decays. These are numbers pretty consistent with the Model’s prediction. In fact, scientists think the chance that the LHCb reading is a fluke is 1 in 3.5 million, low enough to claim a discovery.
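To see what “pretty consistent” means arithmetically, here is a minimal back-of-the-envelope sketch; the measurement uncertainty in it is a placeholder I’ve assumed for illustration, not the experiment’s published error bar.

```python
# Rough consistency check between the predicted and measured
# branching fractions for B_s -> mu+ mu- (all numbers in units of 1e-9).
predicted = 3.56         # SM prediction quoted above
predicted_err = 0.29     # its quoted uncertainty
measured = 2.9           # LHCb central value quoted above
measured_err = 1.0       # placeholder uncertainty, assumed here for illustration only

# Combine the uncertainties in quadrature and compute the "pull":
# how many combined standard deviations apart the two values are.
combined_err = (predicted_err**2 + measured_err**2) ** 0.5
pull = (measured - predicted) / combined_err
print(f"difference = {measured - predicted:+.2f} per billion, pull = {pull:+.2f} sigma")
```

With that assumed uncertainty, the measurement sits well under one standard deviation away from the prediction – which is the kind of agreement the physicists are talking about.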

However, this doesn’t mean the search for ‘new physics’ is over. There are many other windows, such as the search for dark matter, observations of neutrino oscillations, studies of antimatter and exotic theories like Supersymmetry, to keep scientists going.

The ultimate goal is to find one theory that can explain all phenomena observed in the universe – from subatomic particles to supermassive black holes to dark matter – because they are all part of one nature.

In fact, physicists are fond of Supersymmetry, a theory that posits that there is one as-yet undetected partner particle for every particle we have detected, because it promises to preserve naturalness. In contrast, the Standard Model has many perplexing, yet accurate, explanations that keep physicists from piecing together the known universe in a smooth way. However, in order to find any evidence for Supersymmetry, we’ll have to wait until at least 2015, when the CERN collider reopens, upgraded for higher-energy experiments.

And as one window has closed after an arduous 25-year journey, the focus on all the other windows will intensify, too.

(This blog post first appeared at The Copernican on July 19, 2013.)

A different kind of experiment at CERN

This article, as written by me, appeared in The Hindu on January 24, 2012.

At the Large Hadron Collider (LHC) at CERN, near Geneva, Switzerland, experiments are conducted by many scientists who don’t quite know what they will see, but know how to conduct the experiments that will yield answers to their questions. They accelerate beams of particles called protons to smash into each other, and study the fallout.

There are some other scientists at CERN who know approximately what they will see in experiments, but don’t know how to do the experiment itself. These scientists work with beams of antiparticles. According to the Standard Model, the dominant theoretical framework in particle physics, every particle has a corresponding particle with the same mass and opposite charge, called an anti-particle.

In fact, at the little-known AEgIS experiment, physicists will attempt to produce an entire beam composed of not just anti-particles but anti-atoms by mid-2014.

AEgIS is one of six antimatter experiments at CERN that create antiparticles and anti-atoms in the lab and then study their properties using special techniques. The hope, as Dr. Jeffrey Hangst, the spokesperson for the ALPHA experiment, stated in an email, is “to find out the truth: Do matter and antimatter obey the same laws of physics?”

Spectroscopic and gravitational techniques will be used to make these measurements. They will improve upon “precision measurements of antiprotons and anti-electrons” that “have been carried out in the past without seeing any difference between the particles and their antiparticles at very high sensitivity,” as Dr. Michael Doser, AEgIS spokesperson, told this Correspondent via email.

The ALPHA and ATRAP experiments will achieve this by trapping anti-atoms and studying them, while the ASACUSA and AEgIS will form an atomic beam of anti-atoms. All of them, anyway, will continue testing and upgrading through 2013.

Working principle

More precisely, AEgIS will attempt to measure the interaction between gravity and antimatter by shooting an anti-hydrogen beam horizontally through a vacuum tube and then measuring how much it sags due to the gravitational pull of the Earth, to a precision of 1 per cent.
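To get a feel for why a 1 per cent measurement is so demanding, here is a minimal sketch of the geometry; the beam speed and flight length are numbers I’ve assumed purely for illustration, not AEgIS’s actual parameters.

```python
# How far does a horizontal anti-hydrogen beam fall under gravity?
# sag = 0.5 * g * t^2, where t = flight_length / beam_speed.
g = 9.81             # m/s^2
beam_speed = 400.0   # m/s -- assumed, for illustration only
flight_length = 1.0  # m   -- assumed, for illustration only

t = flight_length / beam_speed
sag = 0.5 * g * t**2
print(f"flight time ~ {t * 1e3:.2f} ms, gravitational sag ~ {sag * 1e6:.1f} micrometres")
```

With numbers like these, the droop to be measured is a few tens of micrometres – and it has to be pinned down to 1 per cent.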

The experiment is not so simple because preparing anti-hydrogen atoms is difficult. As Dr. Doser explained, “The experiments concentrate on anti-hydrogen because that should be the most sensitive system, as it is not much affected by magnetic or electric fields, contrary to charged anti-particles.”

First, antiprotons are derived from the Antiproton Decelerator (AD), a particle storage ring which “manufactures” the antiparticles at a low energy. At another location, a nanoporous plate is bombarded with anti-electrons, resulting in a highly unstable pairing of electrons and anti-electrons called positronium (Ps).

The Ps is then excited to a specific energy state by exposure to a 205-nanometre laser, and then to an even higher energy state called a Rydberg level using a 1,670-nanometre laser. Last, the excited Ps traverses a special chamber called a recombination trap, where it mixes with antiprotons that are controlled by precisely tuned magnetic fields. With some probability, an antiproton will “trap” an anti-electron to form an anti-hydrogen atom.
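The two laser wavelengths mentioned above translate directly into photon energies via E = hc/λ; a quick sketch:

```python
# Photon energies of the two excitation lasers mentioned above.
# E [eV] = hc / wavelength, with hc ~ 1239.84 eV*nm.
HC_EV_NM = 1239.84

for wavelength_nm in (205.0, 1670.0):
    energy_ev = HC_EV_NM / wavelength_nm
    print(f"{wavelength_nm:.0f} nm photon -> {energy_ev:.2f} eV")
```

The more energetic 205-nm photon does the initial excitation out of the ground state; the much gentler 1,670-nm photon supplies the final nudge up to the weakly bound Rydberg level.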

Applications

Before a beam of such anti-hydrogen atoms is generated, however, there are problems to be solved. They involve large electric and magnetic fields to control the speed of and collimate the beams, respectively, and powerful cryogenic systems and ultra-cold vacuums. Thus, Dr. Doser and his colleagues will spend many months making careful changes to the apparatus to ensure these requirements work in tandem by 2014.

While antiparticles were first discovered in the 1930s, “until recently, it was impossible to measure anything about anti-hydrogen,” Dr. Hangst wrote. Thus, the ALPHA and AEgIS experiments at CERN provide a seminal setting for exploring the world of antimatter.

Anti-particles have been used effectively in many diagnostic devices such as PET scanners. Consequently, improvements in our understanding of them feed immediately into medicine. To name an application: Antiprotons hold out the potential of treating tumors more effectively.

In fact, the feasibility of this application is being investigated by the ACE experiment at CERN.

In the words of Dr. Doser: “Without the motivation of attempting this experiment, the experts in the corresponding fields would most likely never have collaborated and might well never have been pushed to solve the related interdisciplinary problems.”

The strong CP problem: We’re just lost

Unsolved problems in particle physics are just mind-boggling. They usually concern nature at either the smallest or the largest scales, and the smaller the particle whose properties you’re trying to decipher, the closer you are to nature’s most fundamental principles, principles that, in their multitudes, father civilisations, galaxies, and all other kinds of things.

One of the most intriguing such problems is called the ‘strong CP problem’. It has to do with the strong force, one of nature’s four fundamental forces, and what’s called the CP-violation phenomenon.

The strong force is responsible for most of the mass of the human body, most of the mass of the chair you’re sitting on, even most of the mass of our Sun and the moon.

Yes, the Higgs mechanism is the mass-giving mechanism, but it gives mass only to the fundamental particles, and if we were to be weighed by that alone, we’d weigh orders of magnitude less. More than 90 per cent of our mass actually comes from the strong nuclear force.

The relationship between the strong nuclear force and our mass is unclear (this isn’t the problem I’m talking about). It’s the force that holds together quarks, a brand of fundamental particles, to form protons and neutrons. As with all other forces in particle physics, its push-and-pull is understood in terms of a force-carrier particle – a messenger of the force’s will, as it were.

This messenger is called a gluon, and the behaviour of all gluons is governed by a set of laws that fall under the subject of quantum chromodynamics (QCD).


Dr. Murray Gell-Mann is an American scientist who contributed significantly to the development of theories of fundamental particles, including QCD

According to QCD, the farther two colour-charged particles – quarks, or gluons themselves – get from each other, the stronger the force between them gets. This is counterintuitive to those who’ve grown up working with Newton’s inverse-square laws, etc. An extension of this principle is that gluons can emit gluons, which is also counterintuitive and sort of like the weird Banach-Tarski paradox.
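One common phenomenological way of writing this down – the so-called Cornell potential used in quark-model calculations, offered here only as an illustration of the behaviour, not as something derived in this post – is:

V(r) \approx -\frac{4}{3}\frac{\alpha_s}{r} + \kappa\, r

The first term looks like a familiar inverse-distance attraction, but the second grows linearly with the separation r, so pulling two colour charges apart costs ever more energy – the opposite of what inverse-square intuition suggests.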

Protons and neutrons belong to a category called hadrons: composite particles made of quarks. Those made of three quarks, like the proton and neutron, are called baryons. When, instead, a quark and an antiquark are held together, another type of hadron called a meson comes into existence. You’d think the particle and its antiparticle would immediately annihilate each other. However, it doesn’t happen so quickly if the quark and antiquark are of different types (also called flavours).

One kind of meson is the kaon. A kaon comprises one strange quark (or antiquark) and one up antiquark (or quark). Among kaons, there are two kinds, K-short and K-long, whose properties were studied by Oreste Piccioni in 1964. They’re called so because the K-long lasts longer than the K-short before it decays into a shower of lighter particles, as shown:

Strange antiquark –> up antiquark + W-plus boson (1)

W-plus boson –> down antiquark + up quark

Up quark –> gluon + down quark + down antiquark (2)

The original other up quark remains as an up quark.

Whenever a decay results in the formation of a W-plus/W-minus/Z boson, the weak force is said to be involved. Whenever a gluon is seen mediating, the strong nuclear force is said to be involved.

In the decay shown above, there is one weak-decay (1) and one strong-decay (2). And whenever a weak-decay happens, a strange attitude of nature is revealed: bias.


Handed spin (the up-down arrows indicate the particle’s momentum)

The universe may not have a top or a bottom, but it definitely has a left and a right. At the smallest level, these directions are characterised by spinning particles. If a particle is spinning one way, then another particle with the same properties but spinning the other way is said to be the original’s mirror-image. This way, a right and a left orientation are chosen.

As a conglomeration of such spinning particles, some toward the right and some toward the left, comes together to birth stuff, the stuff will also acquire a handedness with respect to the rest of the universe.

And where the weak-decay is involved, left and right become swapped; parity gets violated.

Consider the K-long decay depicted above (1). Because of conservation laws, there must be a way to account for all the properties going into and coming out of the decay. This means if something went in left-handed, it must come out left-handed, too. However, the strange antiquark emerges as an up antiquark with its spin mirrored.


Physicists Tsung-Dao Lee and Chen Ning Yang (Image from the University of Chicago archive)

When Chen Ning Yang and Tsung-Dao Lee investigated this in the 1950s, they found that the weak-decay results in particles whose summed-up properties were exactly the same as those of the decaying particle, but in a universe in which left and right had been swapped! In addition, the weak-decay also forced any intervening quarks to change their flavour.


In the Feynman diagram shown above, a neutron decays into a proton because a down quark is turned into an up quark (The mediating W-minus decays into an electron and an electron antineutrino).

This is curious behaviour, especially for a force that is considered fundamental, an innate attribute of nature itself. Whatever happened to symmetry, why couldn’t nature maintain the order of things without putting in a twist? Sure, we’re now able to explain how the weak-interaction swaps orientations, but there’s no clue about why it has to happen like that. I mean… why?!

And now, we come to the strong CP problem(!): The laws governing the weak-interaction, brought under electroweak theory (EWT), are very, very similar to QCD. Why then doesn’t the strong nuclear force violate parity?

This is also fascinating because of the similarities it bears to nature’s increasing degree of prejudices. Why an asymmetric force like the weak-interaction was born in an otherwise symmetric universe, no one knows, and why only the weak-interaction gets to violate parity, no one knows. Pfft.

More so, even on the road leading up to this problem, we chanced upon three other problems, and altogether this gives a good idea of how lost humans are when it comes to particle physics. It’s evident that we’re only playing catch-up, building simulations and then comparing the results to real-life occurrences to prove ourselves right. And just when you ask “Why?”, we’re lost for words.

Even the Large Hadron Collider (LHC), a multi-billion-dollar particle sledgehammer on the France-Switzerland border, is mostly a “How” machine. It smashes together billions of particles and then, using seven detectors positioned along its length, analyses the debris spewed out.


An indicative diagram of the layout of detectors on the LHC

Incidentally, one of the detectors, the LHCb, sifts through the particulate mess to find out how the weak-interaction really affects particle-decay. Specifically, it studies the properties of the B-meson, a kind of meson that has a bottom quark/antiquark (b-quark) as one of its two constituents.

The b-quark has a tendency to weak-decay into its antiparticle, the b*-quark, in the process getting its left and right switched. Moreover, it has been observed that the b*-quark is more likely to decay into the b-quark than the b-quark is to decay into the b*-quark. This phenomenon, involved in a process called baryogenesis, was responsible for today’s universe being composed of matter and not antimatter, and the LHCb is tasked with finding out… well, why?

(This blog post first appeared at The Copernican on December 14, 2012.)

Window for an advanced theory of particles closes further

A version of this article, as written by me, appeared in The Hindu on November 22, 2012.

On November 12, on the first day of the Hadron Collider Physics Symposium at Kyoto, Japan, researchers presented a handful of results that constrained the number of hiding places for a new theory of physics long believed to be promising.

Members of the team from the LHCb detector on the Large Hadron Collider (LHC) experiment located on the border of France and Switzerland provided evidence of a very rare particle-decay. The rate of the decay process was in fair agreement with an older theory of particles’ properties, called the Standard Model (SM), and deviated from the new theory, called Supersymmetry.

“Theorists have calculated that, in the Standard Model, this decay should occur about 3 times in every billion total decays of the particle,” announced Pierluigi Campana, LHCb spokesperson. “This first measurement gives a value of around 3.2 per billion, which is in very good agreement with the prediction.”

The result was presented at the 3.5-sigma confidence level, which corresponds to an error rate of 1-in-2,000. While not strong enough to claim discovery, it is valid as evidence.

The particle, called a B_s meson, decayed from a bottom antiquark and strange quark pair into two muons. According to the SM, this is a complex and indirect decay process: the quarks exchange a W boson particle, turn into a top-antitop quark pair, which then decays into a Z boson or a Higgs boson. The boson then decays to two muons.

This indirect decay is called a quantum loop, and advanced theories like Supersymmetry predict new, short-lived particles to appear in such loops. The LHCb, which detected the decays, reported no such new particles.

The solid blue line shows post-decay muons from all events, and the red dotted line shows the muon-decay event from the B(s)0 meson. Because of a strong agreement with the SM, SUSY may as well abandon this bastion.

At the same time, in June 2011, the LHCb had announced that it had spotted hints of supersymmetric particles at 3.9-sigma. Thus, scientists will continue to conduct tests until they can stack 3.5 million-to-1 odds for or against Supersymmetry to close the case.

As Prof. Chris Parkes, spokesperson for the UK participation in the LHCb experiment, told BBC News: “Supersymmetry may not be dead but these latest results have certainly put it into hospital.”

The symposium, which concluded on November 16, also saw the release of the first batch of data generated in search of the Higgs boson since the initial announcement on July 4 this year.

The LHC can’t observe the Higgs boson directly because it quickly decays into lighter particles. So, physicists count up the lighter particles and try to see if some of those could have come from a momentarily existent Higgs.

These are still early days, but the data seems consistent with the predicted properties of the elusive particle, giving further strength to the validity of the SM.

Dr. Rahul Sinha, a physicist at the Institute of Mathematical Sciences, Chennai, said, “So far there is nothing in the Higgs data that indicates that it is not the Higgs of Standard Model, but a conclusive statement cannot be made as yet.”

The scientific community, however, is disappointed as there are fewer channels for new physics to occur. While the SM is fairly consistent with experimental findings, it is still unable to explain some fundamental problems.

One, called the hierarchy problem, asks why some particles are much heavier than others. Supersymmetry is theoretically equipped to provide the answer, but experimental findings are only thinning down its chances.

Commenting on the results, Dr. G. Rajasekaran, scientific adviser to the India-based Neutrino Observatory being built at Theni, asked for patience. “Supersymmetry implies the existence of a whole new world of particles equaling our known world. Remember, we took a hundred years to discover the known particles starting with the electron.”

With each such tightening of the leash, physicists return to the drawing board and consider new possibilities from scratch. At the same time, they also hope that the initial results are wrong. “We now plan to continue analysing data to improve the accuracy of this measurement and others which could show effects of new physics,” said Campana.

So, while the area where a chink might be found in the SM armour is getting smaller, there is hope that there is a chink somewhere nonetheless.

Signs of a slowdown

The way ahead for particle physics seems dully lit after CERN’s fourth-of-July firecracker. The Higgs announcement got everyone in the physics community excited – and spurred a frenzied submission of pre-prints all rushing to explain the particle’s properties. However, that excitement quickly died out after ICHEP ’12 was presented with nothing significant – nothing even a fraction as significant as the ATLAS/CMS results.

(L-R) Gianotti, Heuer & Incandela

Even so, I suppose we must wait at least another three months before a conclusive Higgs-centric theory emerges that completely integrates the Higgs mechanism with the extant Standard Model.

The spotting of the elusive boson – or an impostor – closes a decades-old chapter in particle physics, but does almost nothing to point the way ahead apart from verifying the process of mass-formation. Even theoretically, the quadratic divergences in the mass of the Higgs boson within the SM prove a resilient barrier to correct. How the Higgs field will be used as a tool in detecting other particles and the properties of other entities is altogether unclear.

The tricky part lies in working out the intricacies of the hypotheses that promise to point the way ahead. The most dominant amongst them is supersymmetry (SUSY). In fact, hints of the existence of supersymmetric partners were recorded when the LHCb detector at the LHC spotted evidence of CP-violation in muon-decay events (the latter at 3.9σ). At the same time, the physicists I’m in touch with at IMS point out that rigid restrictions have been placed on the discovery of sfermions and bosinos.

The energies at which these partners could be found are beyond those achievable by the LHC, let alone the luminosity. What’s more, any favourable-looking ATLAS/CMS SUSY-results – which are simply interpretations of strange events – are applicable only in narrow and very special scenarios. Such a condition is inadmissible when we’re actually hunting for frameworks that could explain grander phenomena. As the link itself says,

“The searches leave little room for SUSY inside the reach of the existing data.”

Despite this bleak outlook, there is still a possibility that SUSY may stand verified in the future. Right now: “Could SUSY be masked behind general gauge mediation, R-parity violation or gauge-mediated SUSY-breaking” is the question (gauge-mediated SUSY-breaking (GMSB) is when some hidden sector breaks SUSY and communicates the products to the SM via messenger fields). Also, ZEUS/DESY results (generated by e-p DIS studies) are currently being interpreted.

However, everyone knows that between now and a future that contains a verified SUSY, hundreds of financial appeals stand in the way. 😀 This is a typical time of slowdown – a time we must use for open-minded hypothesizing, discussion, careful verification and, importantly, honest correction.

A dilemma of the auto-didact

If publishers never imagined that there are people who could teach themselves particle physics, why conceive of cheaper preliminary textbooks and ridiculously expensive advanced textbooks? Learning vector physics for classical mechanics costs Rs. 245, while progressing from there to analytical mechanics involves an outlay of Rs. 4,520. Does the cost barrier exist because the knowledge is more specialized? If this were the case, then such books should have become cheaper over time. They have not: Analytical Mechanics, which a good friend recommended, has stayed in the vicinity of $75 for the last three years (now, it’s $78.67 for the original paperback and $43 for a used one). This is just a handy example. There are a host of textbooks that detail concepts in advanced physics and cost a fortune: all you have to do is look for those that contain “hadron”, “accelerator”, “QCD”, etc., in their titles.

Getting to a place in time where a student is capable of understanding these subjects is cheap. In other words, the cost of aspirations is low while the price of execution is prohibitive.

Sure, alternatives exist, such as libraries and university archives. However, that misses the point: it seems the costs of the books are kept high to prevent their ubiquitous consumption. No other reason seems evident, although I am loth to reach this conclusion. If you, the publisher, want me to read such books only in universities, then you are effectively requiring me either to abstain from reading these books, irrespective of my interests, if my professional interests reside elsewhere, or to depend on universities and university-publisher relationships – not myself – for my progress in advanced physics. The resulting gap between the layman and the specialist eventually evades spanning, leading to ridiculous results ranging from not understanding the “God” in “God particle” to questioning the necessity of the LHC without quite understanding what it does and how that helps mankind.

The Indian Bose in the universal boson

Read this article.

Do you think Indians are harping too much about the lack of mention of Satyendra Nath Bose’s name in the media coverage of the CERN announcement last week? The articles in Hindustan Times and Economic Times seemed to be taking things too far with anthropological analyses that have nothing to do with Bose’s work. The boson was named so around 1945 by the great Paul Dirac as a commemoration of Bose’s work with Einstein. Much has happened since; why would we want to celebrate the Bose in the boson again and again?

Dr. Satyendra Nath Bose

The stage now belongs to the ATLAS and CMS collaborations, and to Higgs, Kibble, Englert, Brout, Guralnik and Hagen, and to physics itself as a triumph of worldwide cooperation in the face of many problems. Smarting because an Indian’s mention was forgotten is jejune. Then again, this is mostly the layman and the media, because the physicists I met last week seemed to fully understand Bose’s contribution to the field itself rather than counting the frequency of his name’s mention.

Priyamvada Natarajan, as she writes in the Hindustan Times, is wrong (and the Economic Times article’s heading is just irritating). That Bose is not a household name the way Einstein is has nothing to do with post-colonialism – the exceptions are abundant enough to warrant inclusion – but everything to do with the fact that we place too much faith in a name instead of remembering what the man behind the name did for physics.

Gunning for the goddamned: ATLAS results explained

Here are some of the photos from the CERN webcast yesterday (July 4, Wednesday), with an adjoining explanation of the data presented in each one and what it signifies.

This first image shows the data accumulated post-analysis of the diphoton decay mode of the Higgs boson. In simpler terms, physicists first put together all the data they had that resulted from previously known processes. This constituted what’s called the background. Then, they looked for signs of any particle that seemed to decay into two energetic photons, or gamma rays, in a specific energy window; in this case, 100-160 GeV.

Finally, knowing how the number of events would vary in a scenario without the Higgs boson, a curve was plotted that fit the data perfectly: the number of events at each energy level v. the energy level at which it was tracked. This way, a bump in the curve during measurement would mean there was a particle previously unaccounted for that was causing an excess of diphoton decay events at a particular energy.

This is the plot of the mass of the particle being looked for (x-axis) versus the confidence level with which it has (or has not, depending on how you look at it) been excluded as an event to focus on. The dotted horizontal line, corresponding to µ = 1, marks off a 95% exclusion limit: any events registered above the line can be claimed as having been observed with “more than 95% confidence” (colloquial usage).

Toward the top-right corner of the image are some numbers. 7 TeV and 8 TeV are the values of the total energy going into each collision before and after March, 2012, respectively. The beam energy was driven up to increase the incidence of decay events corresponding to Higgs-boson-like particles, which, given the extremely high energy at which they exist, are viciously short-lived. In experiments that were run between March and July, physicists at CERN reported an increase of almost 25-30% of such events.

The two other numbers indicate the particle accelerator’s integrated luminosity. In particle physics, luminosity is measured as the number of particles passing through a unit of area per second. The integrated luminosity is the same value but measured over a period of time. In the case of the LHC, after the collision energy was ramped up, the luminosity, too, had to be increased: from about 4.7 fb-1 to 5.8 fb-1. You’ll want to Wiki the unit of area called the barn. Some lighthearted physics talk there.
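A rough sense of what integrated luminosity buys you: the expected number of events of a given kind is the production cross-section multiplied by the integrated luminosity. The cross-section below is a ballpark figure I’ve assumed for illustration, not an official ATLAS/CMS number.

```python
# Expected event count = cross-section x integrated luminosity.
# 1 pb = 1,000 fb, so a cross-section in pb needs a factor of 1,000
# before multiplying by a luminosity quoted in fb^-1.
higgs_xsec_pb = 20.0      # assumed ballpark Higgs production cross-section at 8 TeV
integrated_lumi_fb = 5.8  # the figure quoted above

expected_higgs = higgs_xsec_pb * 1000 * integrated_lumi_fb
print(f"~{expected_higgs:,.0f} Higgs bosons produced in {integrated_lumi_fb} fb^-1 of data")
```

Only a tiny fraction of those decay through the clean channels – two photons, four leptons – that the detectors can actually pick out of the background.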

In this plot, the y-axis on the left shows the chances of error, and the corresponding statistical significance on the right. When the chances of an error stand at 1, the results are not statistically significant at all because every observation is an error! But wait a minute, does that make sense? How can all results be errors? Well, when looking for one particular type of event, any event that is not this event is an error.

Thus, as we move toward the ~125 GeV mark, the number of statistically significant results shoots up drastically. Looking closer, we see two results registered just beyond the 5-sigma mark, where the chance of error is 1 in 3.5 million. This means that if the physicists created just those conditions that resulted in this >5σ (five-sigma) observation 3.5 million times, only once would a random fluctuation play impostor.
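The 1-in-3.5-million figure is just the 5σ threshold translated into a probability. Here is a quick sketch of the conversion, using the one-sided tail of a normal distribution, which is the convention particle physicists use for discovery claims:

```python
import math

def one_sided_p_value(n_sigma: float) -> float:
    """Probability that a background-only fluctuation reaches n_sigma or more."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

for n in (3, 4, 5):
    p = one_sided_p_value(n)
    print(f"{n} sigma -> p = {p:.2e} (about 1 in {1 / p:,.0f})")
```

The same sketch also illustrates the point made next: each extra sigma buys a far bigger jump in improbability than the one before it did.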

Also, notice how the difference between successive levels of statistical significance increases with increasing significance? In terms of the chance of error: (5σ – 4σ) > (4σ – 3σ) > … > (1σ – 0σ). This means that the closer physicists get to a discovery, the exponentially more precise they must be!

OK, this is a graph showing the mass-distribution for the four-lepton decay mode, referred to as a channel by those working on the ATLAS and CMS collaborations (because there are separate channels of data-taking for each decay mode). The plotting parameters are the same as in the first plot in this post except for the scale of the x-axis, which goes all the way from 0 to 250 GeV. Now, between 120 GeV and 130 GeV, there is an excess of events (light blue). Physicists know it is an excess, and not on par with expectations, because theoretical calculations made after discounting a Higgs-boson-like decay event show that, in that 10 GeV window, only around 5.3 events are to be expected, as opposed to the 13 that turned up.
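How unlikely is it to see 13 events when only about 5.3 are expected? A minimal Poisson estimate – ignoring systematic uncertainties on the background, so treat it only as an illustration of the arithmetic:

```python
import math

def poisson_tail(expected: float, observed: int) -> float:
    """Probability of seeing `observed` or more events when `expected` are predicted."""
    below = sum(math.exp(-expected) * expected**k / math.factorial(k)
                for k in range(observed))
    return 1.0 - below

p = poisson_tail(5.3, 13)
print(f"P(N >= 13 | mean 5.3) ~ {p:.4f} (about 1 in {1 / p:.0f})")
```

A background-only fluctuation of that size is possible but uncomfortable, which is why the four-lepton excess, taken together with the diphoton channel, carried so much of the weight of the announcement.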