Some notes on empiricism, etc.

The Wire published a story about the ‘atoms of Acharya Kanad’ (background here; tl;dr: folks at a university in Gujarat claimed an ancient Indian sage had put forth the theory of atoms centuries before John Dalton showed up). The story in question was by a professor of philosophy at IISER, Mohali, and he makes a solid case (not unfamiliar to many of us) that Kanad, the sage, wasn’t talking about atoms specifically: he was making a speculative statement under the Vaisheshika school of Hindu philosophy that he founded. What got me thinking were the last few lines of his piece, where he insists that empiricism is the foundation of modern science, and that something that doesn’t cater to it can’t be scientific. And you probably know what I’m going to say next. “String theory”, right?

No. Well, maybe. While string theory has become something of a fashionable example of non-empirical science, it isn’t the only example. It’s in fact a subset of a larger group of systems that don’t rely on empirical evidence to progress. These systems are called formal systems, or formal sciences, and they include logic, mathematics, information theory and linguistics. (String theory’s reliance on advanced mathematics makes it more formal than natural – as in the natural sciences.) And the dichotomous characterisation of formal versus natural sciences (the latter including the social sciences) is subsumed by a larger, more authoritative dichotomy*: between rationalism and empiricism. Rationalism prefers knowledge that has been deduced through logic and reasoning; empiricism prioritises knowledge that has been experienced. As a result, it shouldn’t be a surprise at all that debates about which side is right (insofar as it’s possible to be absolutely right – which I don’t think will ever happen) play out in the realm of science. And squarely within the realm of science, I’d like to use a recent example to provide some perspective.

Last week, scientists discovered that time crystals exist. I wrote a longish piece here tracing the origins and evolution of this exotic form of matter, and what it is that scientists have really discovered. Again, a tl;dr version: in 2012, Frank Wilczek and Alfred Shapere posited that a certain arrangement of atoms (a so-called ‘time crystal’) could be in motion even in its ground state. This might not sound strange if you’re unfamiliar with what the ground state entails: absolute zero, the thermodynamic condition wherein an object has no energy whatsoever to do anything but simply exist. So how could such a thing be in motion? The interesting thing here is that though Shapere and Wilczek’s original paper did not identify a natural scenario in which this could be made to happen, they were able to prove that it could happen formally. That is, they found that the mathematics of the physics underlying the phenomenon did not disallow the existence of time crystals (as they’d posited them).

It’s pertinent that Shapere and Wilczek turned out to be wrong. By late 2013, rigorous proofs had shown up in the scientific literature demonstrating that ground-state, or equilibrium, time crystals could not exist – but that non-equilibrium time crystals with their own unique properties could. The discovery made last week was of the latter kind. Shapere and Wilczek have both acknowledged that their math was wrong. But what I’m pointing at here is the conviction behind the claim that forms of matter called time crystals could exist, motivated by the fact that the mathematics did not prohibit them. Yes, Shapere and Wilczek did have to modify their theory based on empirical evidence (indirectly, as it contributed to the rise of the first counter-arguments), but it’s undeniable that the original idea was born, and persisted with, simply through a process of discovery that did not involve sense-experience.

In the same vein, much of the disappointment experienced by many particle physicists today stems from a grating mismatch between formalism – in the form of theories of physics that predict as-yet undiscovered particles – and empiricism – the inability of the LHC to find these particles despite looking repeatedly and hard in the areas where the math says they should be. The physicists wouldn’t be disappointed if they thought empiricism was the be-all of modern science; they’d in fact have been rebuffed much earlier. Another example: the idea of naturalness, an aesthetically (and more formally) enshrined notion that the constants of nature should take certain values, whereas in reality they don’t. As a result, physicists think something about their reality is broken instead of thinking something about their way of reasoning is broken. And so they’re sitting at an impasse, as if at the threshold of a higher-dimensional universe they may never be allowed to enter.

I think this is important in the study of the philosophy of science because if we’re able to keep in mind that humans are emotional and that our emotions have significant real-world consequences, we’d not only be better at understanding where knowledge comes from. We’d also become more sensitive to the various sources of knowledge (whether scientific, social, cultural or religious) and their unique domains of applicability, even if we’re pretty picky, and often silly, at the moment about how each of them ought to be treated (Related/recommended: Hilary Putnam’s way of thinking).

*I don’t like dichotomies. They’re too cut-and-dried a conceptualisation.

Where’s all the antimatter? New CERN results show the way.

If you look outside your window at the clouds, the stars, the planets, all that you will see is made of matter. However, when the universe was born, there were equal amounts of matter and antimatter. So where has all the antimatter gone?

The answer, if there is one, will be found at the Large Hadron Collider (LHC), the world’s most powerful particle physics experiment, now taking a breather while engineers refit it to make it even more powerful by 2015. Then, it will be able to spot tinier, much more short-lived particles than the Higgs boson, which is itself notoriously short-lived.

While it ran from 2008 to early 2013, the LHC was incredibly prolific. It smashed together billions of protons in each experiment at speeds close to light’s, breaking them open. Physicists hoped the things that’d tumble out might show why the universe has come to prefer matter over antimatter.

In fact, from 2013 to 2015, physicists will be occupied with gleaning meaningful results from each of these experiments, because they simply didn’t have enough time to sift through all the data while the machine was running.

They will present their results as papers in scientific journals. Each paper will be the product of analysis conducted on the data from some experiment, each run at a particular energy and luminosity, among other parameters central to experimental physics.

One such paper was submitted to a journal on April 23, 2013, titled ‘First observation of CP violation in the decays of B_s mesons’. According to this paper, its corresponding experiment was conducted in 2011, when the LHC was smashing away at 7 TeV centre-of-mass (c.o.m.) collision energy. This is the energy at the point inside the LHC circuit where two bunches of protons collide.

The paper also notes that the LHCb detector was used to track the results of the collisions. The LHCb is one of seven detectors situated on the LHC’s ring. It has been engineered to study a particle known as the beauty quark, which is more than 4.2 times heavier than a proton and lasts for about a trillionth of a second before breaking down into lighter particles, a process mediated by some of nature’s four fundamental forces.

The beauty quark is one of six kinds of quarks; together with other equally minuscule particles called leptons and bosons, quarks make up everything in the universe: from entire galaxies to individual atoms.

For example, for as long as it lives, the beauty quark can team up with an antiquark – a quark’s antimatter counterpart – to form particles called mesons. Generally, mesons are particles composed of one quark and one antiquark.

Why don’t the quark and the antiquark meet and annihilate each other in a flash of energy? Because they’re not of the same type: a quark of one type and an antiquark of another type don’t annihilate when they meet.

The B_s meson that the April 23 paper talks about is a meson composed of one beauty antiquark and one strange quark. Thus the notation ‘B_s’: a B-meson with an s component. This meson violates a law of the universe physicists long thought unbreakable, called charge-conjugation/parity (CP) invariance. It states that if you took a particle, inverted its charge (‘+’ to ‘-’ or ‘-’ to ‘+’), and then interchanged its left and right, its behaviour shouldn’t change in a universe that conserves charge and parity.
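To make that operation concrete, here’s a toy sketch in Python (my own illustration, not anything from the LHCb analysis), treating a particle as nothing more than a charge and a handedness:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Particle:
    charge: int      # +1 or -1
    handedness: str  # 'left' or 'right'

def charge_conjugate(p: Particle) -> Particle:
    """C: invert the charge ('+' to '-' or '-' to '+')."""
    return replace(p, charge=-p.charge)

def parity_flip(p: Particle) -> Particle:
    """P: interchange left and right."""
    return replace(p, handedness='right' if p.handedness == 'left' else 'left')

# CP invariance is the statement that physics treats a particle and its
# charge-flipped, mirror-imaged partner identically.
muon = Particle(charge=-1, handedness='left')
print(parity_flip(charge_conjugate(muon)))  # Particle(charge=1, handedness='right')
```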

Physicists, however, found in the 2011 LHCb data that the B_s meson was flouting CP invariance. Because of the beauty antiquark’s and the strange quark’s short lifetimes, the B_s meson lasted only so long before breaking down into lighter particles, in this case kaons and pions.

When physicists calculated the kaons’ and pions’ charges and compared them to the B_s meson’s, they added up. However, when they calculated the kaons’ and pions’ left- and right-handednesses, i.e. parities, in terms of which direction they were spinning in, they found an imbalance.

A force, called the weak force, was pushing a particle to spin one way instead of the other about 27 per cent of the time. According to the physicists’ paper, this result has a statistical significance of more than 5 sigma. This means the odds that a random fluctuation in the data would mimic their conclusion are less than about 0.00003 per cent – sufficient to claim direct evidence.
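To unpack those two numbers, here’s a minimal sketch in Python (my own illustration; the event counts are made up, only the formulas are standard): the asymmetry is a normalised difference between how often a decay goes one way versus the other, and the 5-sigma figure converts to a probability via the normal distribution’s tail.

```python
import math

# Hypothetical yields for the two decay outcomes; a raw asymmetry is
# their normalised difference.
n_one_way, n_other_way = 7300, 4200
asymmetry = (n_one_way - n_other_way) / (n_one_way + n_other_way)
print(f"asymmetry ~ {asymmetry:.0%}")  # ~27%

def one_sided_p_value(sigma: float) -> float:
    """Probability that a pure statistical fluctuation reaches >= sigma."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

p = one_sided_p_value(5.0)
print(f"5 sigma -> p ~ {p:.1e}, i.e. {p * 100:.5f} per cent")  # ~0.00003%
```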

Of course, this wouldn’t be the first time evidence of CP violation in B-mesons had been spotted. On 17 May, 2010, B-mesons composed of a beauty antiquark and a down quark were shown to decay at a much slower rate than B-antimesons of the same composition, in the process outlasting them. However, this is the first time evidence of this violation has been found in B_s mesons, a particle that has been called “bizarre”.

While this flies in the face of a natural, intuitive understanding of our universe, it is a happy conclusion because it could explain the aberration that is antimatter’s absence, one that isn’t explained by a theory in physics called the Standard Model.

Here was something in the universe that was showing some sort of a preference, ready to break the symmetry and uniformity of laws that pervade the space-time continuum.

Physicists know that the weak force, one of the fundamental forces of nature like gravity, is the culprit. It has a preference for acting on left-handed particles and right-handed antiparticles. When such a particle shows itself, the weak force offers to mediate its breakdown into lighter particles, in the process resulting in a preference for one set of products over another.

But in order to fully establish the link between matter’s domination and the weak force’s role in it, physicists have to first figure out why the weak force has such biased preferences.

This post originally appeared in The Copernican science blog at The Hindu on April 25, 2013.

There’s something wrong with this universe.

I’ve gone on about natural philosophy, the philosophy of representation, science history, and the importance of interdisciplinary perspectives when studying modern science. There’s something that unifies all these ideas, and I wouldn’t have thought of it at all had I not spoken to the renowned physicist Dr. George Sterman on January 3.

I was attending the Institute of Mathematical Sciences’ golden jubilee celebrations. A lot of my heroes were there, and believe me when I say my heroes are different from your heroes. I look up to people who are capable of thinking metaphysically, and physicists more than anyone I’ve come to meet are very insightful in that area.

One such physicist is Dr. Ashoke Sen, whose contributions to the controversial area of string theory are nothing short of seminal – if only for how differently it says we can think about our universe and what the math of that would look like. Especially, Sen’s research into tachyon condensation and the phases of string theory is something I’ve been interested in for a while now.

Knowing that George Sterman was around came as a pleasant surprise. Sterman was Sen’s doctoral guide; while Sen’s a string theorist now, his doctoral thesis was in quantum chromodynamics, a field in which the name of Sterman is quite well-known.


– Dr. George Sterman (image: UC Davis)

When I finally got a chance to speak with Sterman, it was about 5 pm and there were a lot of mosquitoes around. We sat down in the middle of the lawn on a couple of old chairs, and with a perpetual smile on his face that made one of the greatest thinkers of our time look like a kid in a candy store, Sterman jumped right into answering my first question on what he felt about the discovery of a Higgs-like boson.

Where Sheldon Stone was obstinately practical, Sterman was courageously aesthetic. After the (now usual) bit about how the discovery of the boson was a tribute to mathematics, and to its ability to remain so consistent through 50 years of staggering theoretical advancement, he said, “But let’s think about naturalness for a second…”

The moment he said “naturalness”, I knew what he was getting at, but more than anything else, I was glad. Here was a physicist who was still looking at things aesthetically, especially in an era where a lack of money – and, by extension, a loss of practicality – could really put the brakes on scientific discovery. I mean, it’s easy to jump up and down and be excited about having spotted the Higgs, but there are very few who feel free to still not be happy.

In Sterman’s words, uttered while waving his arms about to swat away the swarming mosquitoes while discussing supersymmetry:

There’s a reason why so many people felt so confident about supersymmetry. It wasn’t just that it’s a beautiful theory – which it is – or that it engages and challenges the most mathematically oriented among physicists, but in another sense in which it appeared to be necessary. There’s this subtle concept that goes by the name of naturalness. Naturalness as it appears in the Standard Model says that if we gave any reasonable estimate of what the mass of the Higgs particle should be, it should by all rights be huge! It should be as heavy as what we call the Planck mass [~10^19 GeV].

Or, as Martinus Veltman put it in an interview to Matthew Chalmers for Nature,

Since the energy of the Higgs is distributed all over the universe, it should contribute to the curvature of space; if you do the calculation, the universe would have to curve to the size of a football.

Naturalness is the idea, in particle physics specifically and in nature generally, that things don’t desire to stand out in any way unless something’s really messed up. For instance, consider the mass hierarchy problem in physics: why is the gravitational force so much weaker than the electroweak force? If both are fundamental forces of nature, then where is the massive imbalance coming from?
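As a back-of-the-envelope illustration of that hierarchy (mine, not from the original post), compare the electrostatic and gravitational pulls between two protons; the electromagnetic force stands in here for the electroweak interaction’s low-energy strength:

```python
# Standard physical constants, SI units.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
k = 8.988e9      # Coulomb constant, N m^2 C^-2
m_p = 1.673e-27  # proton mass, kg
e = 1.602e-19    # elementary charge, C

# Both forces fall off as 1/r^2, so their ratio is independent of distance.
ratio = (k * e**2) / (G * m_p**2)
print(f"F_electric / F_gravity ~ {ratio:.1e}")  # ~1.2e36
```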

Formulaically speaking, naturalness is represented by this equation:

h = cΛ^(4 − d)

Here, Λ (the mountain-shaped lambda) is the cut-off scale, an energy scale at which the theory breaks down. Its influence over the naturalness of an entity h is determined by d, the number of dimensions involved – with a maximum of 4. Last, c is a scaling constant, expected to be of order one, that keeps Λ from contributing too weakly or too strongly in a given setting.

In other words, a natural constant h must be comparable to other natural constants like it if they’re all acting in the same setting.
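To see how violently this estimate clashes with observation, here’s a quick numerical sketch (my own illustration), plugging a Planck-scale cut-off into h = cΛ^(4 − d) for the Higgs mass-squared term, which has dimension d = 2:

```python
PLANCK_MASS_GEV = 1.22e19   # the cut-off Λ, in GeV
OBSERVED_HIGGS_GEV = 125.0  # the measured Higgs mass, in GeV

c = 1.0  # naturalness expects c to be of order one
d = 2    # dimension of the Higgs mass-squared term

natural_m2 = c * PLANCK_MASS_GEV ** (4 - d)  # 'natural' value of the mass-squared
natural_m = natural_m2 ** 0.5                # the corresponding mass scale

print(f"natural Higgs mass ~ {natural_m:.2e} GeV")   # ~1.22e+19 GeV
print(f"observed Higgs mass = {OBSERVED_HIGGS_GEV} GeV")
# How small c must really be for the observed mass to come out:
print(f"required c ~ {(OBSERVED_HIGGS_GEV ** 2) / natural_m2:.1e}")  # ~1.0e-34
```

That factor of roughly 10^-34 is the fine-tuning this post goes on to talk about.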

However, given how the electroweak and gravitational forces – which do act in the same setting (also known as our universe) – differ so tremendously in strength, the values of these constants are, to put it bluntly, coincidental.

Problems such as this “violate” naturalness in a way that defies the phenomenological aesthetic of physics. Yes, I’m aware this sounds like hot air but bear with me. In a universe that contains one stupendously weak force and one stupendously strong force, one theory that’s capable of describing both forces would possess two disturbing characteristics:

1. It would be capable of angering one William of Ockham

2. It would require a dirty trick called fine-tuning

I’ll let you tackle the theories of that William of Ockham and go right on to fine-tuning. In an episode of ‘The Big Bang Theory’, Dr. Sheldon Cooper drinks coffee for what seems like the first time in his life and goes berserk. One of the things he touches upon in a caffeine-induced rant is a notion related to the anthropic principle.

The anthropic principle states that it’s not odd that the values of the fundamental constants seem to engender the evolution of life and physical consciousness, because if those values weren’t what they are, no consciousness would be around to observe them. Starting with the development of the Standard Model of particle physics in the 1960s, it’s become clear that these constants are very finely balanced in their values.

So, with the anthropic principle providing a philosophical cushioning – some intellectual fodder to fall back on when thoughts run low – physicists set about trying to find out why the values are what they are. As the Standard Model predicted more particles, with annoying precision, physicists also realised that the universe would’ve been drastically different even if those values were slightly off.

Now, as discoveries poured in and it became clear that the universe housed two drastically different forces in terms of their strength, researchers felt the need to fine-tune the values of the constants to fit experimental observations. This sometimes necessitated tweaking the constants in such a way that they’d support the coexistence of the gravitational and electroweak forces!

Scientifically speaking, this just sounds pragmatic. But just think aesthetically and you start to see why this practice smells bad: The universe is explicable only if you make extremely small changes to certain numbers, changes you wouldn’t have made if the universe wasn’t concealing something about why there was one malnourished kid and one obese kid.


Doesn’t the asymmetry bother you?

Put another way, as physicist Paul Davies did,

There is now broad agreement among physicists and cosmologists that the Universe is in several respects ‘fine-tuned’ for life. The conclusion is not so much that the Universe is fine-tuned for life; rather it is fine-tuned for the building blocks and environments that life requires.

(On a lighter note: If the universe includes both a plausible anthropic principle and a Paul Davies who is a physicist and is right, then multiple universes are a possibility. I’ll let you work this one out.)

Compare all of this to the desirable idea of naturalness and what Sterman was getting at and you’d see that the world around us isn’t natural in any sense. It’s made up of particles whose properties we’re sure of, of objects whose behaviour we’re sure of, but also of forces whose origins indicate an amount of unnaturalness… as if something outside this universe poked a finger in, stirred up the particulate pea-soup, and left before anyone evolved enough to get a look.

(This blog post first appeared at The Copernican on January 6, 2013.)