Science journalism, expertise and common sense

On March 27, the Johns Hopkins University said an article published on the website of the Center for Disease Dynamics, Economics and Policy (CDDEP), a Washington-based think tank, had used its logo without permission, and distanced itself from the study, which had concluded that the number of people in India who could test positive for the new coronavirus could swell into the millions by May 2020. Soon after, a basement of trolls latched onto CDDEP founder-director Ramanan Laxminarayan’s credentials as an economist to dismiss his work as a public-health researcher, including by denying the study’s conclusions without discussing its scientific merits and demerits.

A lot of issues are wound up in this little controversy. One of them is our seemingly naïve relationship with expertise.

Expertise is supposed to be a straightforward thing: you either have it or you don’t. But just as specialised knowledge is complicated, so too is expertise.

Many of us have heard stories of someone who’s great at something “even though he didn’t go to college” and another someone who’s a bit of a tubelight “despite having been to Oxbridge”. Irrespective of whether they’re exceptions or the rule, there’s a lot of expertise in the world that a deference to degrees would miss.

More importantly, by conflating academic qualifications with expertise, we risk flattening a three-dimensional picture into a single dimension. For example, there are more scientists who can speak confidently about statistical regression and the features of exponential growth than there are who can comment on the false vacua of string theory or discuss why protein folding is such a hard problem to solve. These hierarchies arise because of differences in complexity. We don’t have to insist only a virologist or an epidemiologist is allowed to answer questions about whether a clinical trial was done right.

But when we insist someone is not good enough because they have a degree in a different subject, we could be reinforcing the implicit assumption that we don’t want to look beyond expertise, and are content with being told the answers. Granted, this argument is better directed at individuals privileged enough to learn something new every day, but maintaining this chasm – between who in the public consciousness is allowed to provide answers and who isn’t – also continues to keep power in fewer hands.

Of course, many questions that have arisen during the coronavirus pandemic have often stood between life and death, and it is important to stay safe. However, there is a penalty to thinking that the closer we drift towards expertise, the safer we become – because then we may be drifting away from common sense and accruing a different kind of burden, especially when we insist only specialised experts can comment on a far less specialist topic. Such convictions have already created a class of people that believes ad hominem is a legitimate argumentative ploy, and won’t back down from an increasingly acrimonious quarrel until they find the cherry-picked data they have been looking for.

Most people occupy a less radical but still problematic position: even when neither life nor fortune is at stake, they claim to wait for expertise to change their behaviour and/or beliefs. Most of them are really waiting for something that arrived long ago and are only trying to find new ways to persist with the status quo. The all-or-nothing attitude of the rest – assuming they exist – is, simply put, epistemologically inefficient.

Our deference to the views of experts should be a function of how complex the topic really is, and therefore the extent to which it can be interrogated. So when the topic at hand is whether a clinical trial was done right or whether the Indian Council of Medical Research is testing enough, the net we cast to find independent scientists to speak to can include those who aren’t medical researchers but whose academic or vocational trajectories have familiarised them with some parts of these issues, as well as those who are transparent about their reasoning, methods and opinions. (The CDDEP study is yet to reveal its methods, so I don’t want to comment specifically on it.)

If we can’t be sure whether the scientist we’re speaking to is making sense, it would obviously be better to go with someone whose words we can simply trust. And if we’re not comfortable having such a negotiated relationship with an expert – well, sadly, it’s always going to be this way. The only way to make matters simpler is to deliberately shut ourselves off: to take what we’re hearing and, instead of questioning it further, run with it.

This said, we all shut ourselves off at one time or another. It’s only important that we do it knowing we do it, instead of harbouring pretensions of superiority. At no point does it become reasonable to dismiss anyone based on their academic qualifications alone the way, say, Times of India and OpIndia have done (see below).

What’s more, Dr Giridhar Gyani is neither a medical practitioner nor epidemiologist. He is academically an electrical engineer, who later did a PhD in quality management. He is currently director general at Association of Healthcare Providers (India).

Times of India, March 28

Ramanan Laxminarayanan, who was pitched up as an expert on diseases and epidemics by the media outlets of the country, however, in reality, is not an epidemiologist. Dr Ramanan Laxminarayanan is not even a doctor but has a PhD in economics.

OpIndia, March 22

Expertise has been humankind’s way to quickly make sense of a world that has only been becoming more confusing. But historically, expertise has also been a reason of state, used to suppress dissenting voices and concentrate political, industrial and military power in the hands of a few. The former is in many ways a useful feature of society for its liberating potential while the latter is undesirable because it enslaves. People frequently straddle both tendencies together – especially now, with the government in charge of the national anti-coronavirus response.

An immediately viable way to break this tension is to negotiate our relationship with experts themselves.

Some notes on empiricism, etc.

The Wire published a story about the ‘atoms of Acharya Kanad’ (background here; tl;dr: Folks at a university in Gujarat claimed an ancient Indian sage had put forth the theory of atoms centuries before John Dalton showed up). The story in question was by a professor of philosophy at IISER, Mohali, and he makes a solid case (not unfamiliar to many of us) as to why Kanad, the sage, wasn’t talking about atoms in the modern sense: he was making a speculative statement under the Vaisheshika school of Hindu philosophy that he founded. What got me thinking were the last few lines of his piece, where he insists that empiricism is the foundation of modern science, and that something that doesn’t cater to it can’t be scientific. And you probably know what I’m going to say next. “String theory”, right?

No. Well, maybe. While string theory has become something of a fashionable example of non-empirical science, it isn’t the only example. It’s in fact a subset of a larger group of systems that don’t rely on empirical evidence to progress. These systems are called formal systems, or formal sciences, and they include logic, mathematics, information theory and linguistics. (String theory’s reliance on advanced mathematics makes it more formal than natural – as in the natural sciences.) And the dichotomous characterisation of formal and natural sciences (the latter including the social sciences) is superseded by a larger, more authoritative dichotomy*: between rationalism and empiricism. Rationalism prefers knowledge that has been deduced through logic and reasoning; empiricism prioritises knowledge that has been experienced. As a result, it shouldn’t be a surprise at all that debates about which side is right (insofar as it’s possible to be absolutely right – which I don’t think will ever happen) play out in the realm of science. And squarely within the realm of science, I’d like to use a recent example to provide some perspective.

Last week, scientists discovered that time crystals exist. I wrote a longish piece here tracing the origins and evolution of this exotic form of matter, and what it is that scientists have really discovered. Again, a tl;dr version: in 2012, Frank Wilczek and Alfred Shapere posited that a certain arrangement of atoms (a so-called ‘time crystal’) in their ground state could be in motion. This might not sound remarkable if you’re unfamiliar with what the ground state means: absolute zero, the thermodynamic condition wherein an object has no energy whatsoever to do anything else but simply exist. So how could such a thing be in motion? The interesting thing here is that though Shapere-Wilczek’s original paper did not identify a natural scenario in which this could be made to happen, they were able to prove that it could happen formally. That is, they found that the mathematics of the physics underlying the phenomenon did not disallow the existence of time crystals (as they’d posited it).

It’s pertinent that Shapere and Wilczek turned out to be wrong. By late 2013, rigorous proofs had shown up in the scientific literature demonstrating that ground-state, or equilibrium, time crystals could not exist – but that non-equilibrium time crystals with their own unique properties could. The discovery made last week was of the latter kind. Shapere and Wilczek have both acknowledged that their math was wrong. But what I’m pointing at here is the conviction behind the claim that forms of matter called time crystals could exist, motivated by the fact that mathematics did not prohibit it. Yes, Shapere and Wilczek did have to modify their theory based on empirical evidence (indirectly, as it contributed to the rise of the first counter-arguments), but it’s undeniable that the original idea was born, and persisted with, simply through a process of discovery that did not involve sense-experience.

In the same vein, much of the disappointment experienced by many particle physicists today is because of a grating mismatch between formalism – in the form of theories of physics that predict as-yet undiscovered particles – and empiricism – the inability of the LHC to find these particles despite looking repeatedly and hard in the areas where the math says they should be. The physicists wouldn’t be disappointed if they thought empiricism was the be-all of modern science; they’d in fact have been rebuffed much earlier. For another example, this also applies to the idea of naturalness, an aesthetically (and more formally) enshrined expectation that the fundamental parameters of nature should have certain values, whereas in reality they don’t. As a result, physicists think something about their reality is broken instead of thinking something about their way of reasoning is broken. And so they’re sitting at an impasse, as if at the threshold of a higher-dimensional universe they may never be allowed to enter.
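To make the naturalness complaint concrete, here is the standard textbook illustration (not something spelled out in the pieces discussed here, so treat it as an aside): the Higgs boson’s mass picks up quantum corrections that grow with the energy scale Λ up to which the theory is assumed to hold,

\[ m_H^2 = m_{H,\mathrm{bare}}^2 + \delta m_H^2, \qquad \delta m_H^2 \sim \frac{g^2}{16\pi^2}\,\Lambda^2 \]

If Λ is anywhere near the Planck scale (about 10^19 GeV) while the observed m_H is about 125 GeV, the bare term has to cancel the correction to better than one part in 10^30 – a precision physicists find ‘unnatural’, which is the sense in which the parameters don’t have the values the aesthetic says they should.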

I think this is important in the study of the philosophy of science because if we’re able to keep in mind that humans are emotional and that our emotions have significant real-world consequences, we’d not only be better at understanding where knowledge comes from. We’d also become more sensitive to the various sources of knowledge (whether scientific, social, cultural or religious) and their unique domains of applicability, even if we’re pretty picky, and often silly, at the moment about how each of them ought to be treated (Related/recommended: Hilary Putnam’s way of thinking).

*I don’t like dichotomies. They’re too cut-and-dried a conceptualisation.

Has ‘false balance’ become self-evidently wrong?

Featured image credit: mistermoss/Flickr, CC BY 2.0.

Journalism’s engagement with a convergent body of knowledge is an interesting thing in two ways. From the PoV of the body, journalism is typically seen as an enabler, an instrument for furthering goals, adjacent at best until it begins to have an adverse effect on the dominant forces of convergence. From the PoV of journalism, the body of knowledge isn’t adjacent but more visceral – the flesh with which the narratives of journalistic expression manifest themselves. Both perspectives are borne out in the interaction between anthropogenic global warming (AGW) and its presence in the news. Especially from the PoV of journalism, covering AGW has been something of a slow burn because the assembly of its facts can’t be catalysed even as it maintains a high propensity to be derailed, requiring journalists to maintain a constant intensity over a longer span of time than would typically be accorded to other news items.

When I call AGW a convergent body of knowledge, I mean that it is trying to achieve consensus on some hypotheses – and the moment that consensus is achieved will be the point of convergence. IIRC, the latest report from the Intergovernmental Panel on Climate Change says with 95% certainty that the ongoing spate of global warming is a result of human activities – a level of certainty that we’ll take to be just past the point of convergence. Until this point, the coverage of AGW was straightforward: there were two sides, and both deserved to be represented equally. When the convergence eliminated one side, it was a technical elimination, a group of fact-seekers getting together and agreeing that what they had on their hands was indeed a fact even if they weren’t 100% certain.

What this meant for journalism was that its traditional mode of creating balance was no longer valid. The principal narrative had shifted from being a conflict between AGW-adherents and AGW-deniers (“yes/no”) to becoming a conflict between some AGW-adherents and other AGW-adherents (“less/more”). And if we’re moving in the right direction, less/more is naturally the more important conflict to talk about. But post-convergence, any story that reverted to the yes/no conflict was accused of having succumbed to a sense of false balance, and calling out instances of false balance has since become a thing. Now, to the point of my piece: have we finally entered a period wherein calling out instances of false balance has become redundant, wherein awareness of the fallacies of AGW-denial has matured enough for false-balance to have become either deliberate or the result of mindlessness?

Yes. I think so – that false-balance has finally become self-evidently wrong, and to not acknowledge this is to concede that AGW-denial might still retain some vestiges of potency.

I was prompted to write this post after I received a pitch for an article to be published on The Wire, about using the conclusions of a recently published report to ascertain that AGW-denial was flawed. In other words: new data, old conclusions. And the pitch gave me the impression that the author may have been taking the threat of AGW-deniers too seriously. Had you been the editor reading this, would you have okayed the piece?

Discussing some motivations behind a particle physics FAQ

First, there is information. From information, people distill knowledge, and from knowledge, wisdom. Information is available on a lot of topics and in varying levels of detail. Knowledge is harder to find – and wisdom harder still. This is because knowledge and wisdom require work (to fact-check and interpret) on information and knowledge, respectively. And people can be selective about what they choose to work on. One popular consequence of such choices is that most people are more aware of business information, business knowledge and business wisdom than they are of scientific information, scientific knowledge and scientific wisdom. This graduated topical awareness is reflected in how we produce and consume the news.


News articles written on business issues rarely see fit to delve into historical motivations or explainer-style elucidations because the audience is understood to be better aware of what business is about. Business information and knowledge are widespread and so is, to some extent, business wisdom, and articles can take advantage of conclusions made in each sphere, jumping between them to tease out more information, knowledge and wisdom. On the other hand, articles written on some topics of science – such as particle physics – have to start from the informational level before wisdom can be presented. This places strong limits on how the article can be structured or even styled.

There are numerous reasons for why this is so, especially for topics like particle physics, which I regularly (try to) write on. I’m drawn toward three of them in particular: legacy, complexity and pacing. Legacy is the size of the body of work that is directly related to the latest developments in that work. So, the legacy of the LHC stretches back to include the invention of the cyclotron in 1932 – and the legacy of the Higgs boson stretches back to 1961. Complexity is just that – how complicated the subject itself is – but becomes more meaningful in the context of pacing.

A consequence of business developments being reported on fervently is that there is at least some (understandable) information in the public domain about all stages of the epistemological evolution. In other words, the news reports keep pace with new information, new knowledge, new wisdom. With particle physics, they don’t – they can’t. The reports are separated by some time, according to when the bigger developments occurred, and in the intervening span of time, new information/knowledge/wisdom would’ve arisen that the reports will have to accommodate. And how much has to be accommodated can be exacerbated by the complexity of what has come before.


But there is a catch here – at least as far as particle physics is concerned because it is in a quandary these days. The field is wide open because physicists have realised two things: first, that their theoretical understanding of physics is far, far ahead of what their experiments are capable of (since the 1970s and 1980s); second, that there are inconsistencies within the theories themselves (since the late 1990s). Resolving these issues is going to take a bit of time – a decade or so at least (although we’re likely in the middle of such a decade) – and presents a fortunate upside to communicators: it’s a break. Let’s use it to catch up on all that we’ve missed.

The break (or a rupture?) can also be utilised for what it signifies: a gap in information/knowledge. All the information/knowledge/wisdom that has come before is abruptly discontinued at this point, allowing communicators to collect them in one place, compose them and disseminate them in preparation for whatever particle physics will unearth next. And this is exactly what motivated me to write a ‘particle physics FAQ’, published on The Wire, as something anyone who’s graduated from high-school can understand. I can’t say if it will equip them to read scientific papers – but it will definitely (and hopefully) set them on the road to asking more questions on the topic.

Of small steps and giant leaps of collective imagination

The Wire
July 16, 2015

Is the M5 star cluster really out there? Credit: HST/ESA/NASA

We may all harbour a gene that moves us to explore and find new realms of experience but the physical act of discovery has become far removed from the first principles of physics.

At 6.23 am on Wednesday, when a signal from the New Horizons probe near Pluto reached a giant antenna in Madrid, cheers went up around the world – with their epicentre at the Applied Physics Laboratory in Maryland, USA.

And the moment it received the signal, the antenna’s computer also relayed a message through the Internet that updated a webpage showing the world that New Horizons had phoned home. NASA TV was broadcasting a scene of celebration at the APL and Twitter was going berserk as usual. Subtract these instruments of communication and the memory of humankind’s rendezvous with Pluto on the morning of July 15 (IST) is delivered not by the bridge of logic but a leap of faith.

In a memorable article in Nature in 2012, Daniel Sarewitz made an argument that highlighted the strength and importance of good science communication in building scientific knowledge. Sarewitz contended that it was impossible for anyone but trained theoretical physicists to understand what the Higgs boson really was, how the Higgs mechanism that underpins it worked, or how any of them had been discovered at the Large Hadron Collider earlier that year. The reason, he said, was that a large part of high-energy physics is entirely mathematical, devoid of any physical counterparts, and explores nature in states the human condition could never physically encounter.

As a result, without the full knowledge of the mathematics involved, any lay person’s conviction in the existence of the Higgs boson would be punctured here and there with gaps in knowledge – gaps the person will be continuously ignoring in favour of the faith placed in the integrity of thousands of scientists and engineers working at the LHC, and in the comprehensibility of science writing. In other words, most people on the planet won’t know the Higgs boson exists but they’ll believe it does.

Such modularisation of knowledge – into blocks of information we know exist and other blocks we believe exist – becomes more apparent the greater the interaction with sophisticated technology. And paradoxically, the more we are insulated from it, the easier it is to enjoy its findings.

Consider the example of the Hubble space telescope, rightly called one of the greatest astronomical implements to have ever been devised by humankind.

Its impressive suite of five instruments, highly polished mirrors and advanced housing all enable it to see the universe in visible-to-ultraviolet light in exquisite detail. Its opaque engineering is inaccessible to most but this gap in public knowledge has been compensated many times over by the richness of its observations. In a sense, we no longer concern ourselves with how the telescope works because we have drunk our fill with what it has seen of the universe for us – a vast, multihued space filled with the light of a trillion stars. What Hubble has seen makes us comfortable conflating belief and knowledge.

The farther our gaze strays from home, the more we will become reliant on technology that is beyond the average person’s intellect to comprehend, on rules of physics that are increasingly removed from first principles, on science communication that is able to devise cleverer abstractions. Whether we like it or not, our experience, and memory, of exploration is becoming more belief-ridden.

Like the Hubble, then, has New Horizons entered a phase of transience, too? Not yet. Its Long-Range Reconnaissance Imager has captured spectacular images of Pluto, but none yet quite so spectacular as to mask our reliance on non-human actors to obtain them. We know the probe exists because the method of broadcasting an electromagnetic signal is somewhat easily understood, but then again most of us only believe that the probe is functioning normally. And this will increasingly be the case with the smaller scales we want to explore and the larger distances we want to travel.

Space probes have always been sophisticated bits of equipment but with the Internet – especially when NASA TV, DSN Now and Twitter are the prioritised channels of worldwide information dissemination – there is a perpetual yet dissonant reminder of our reliance on technology, a reminder of the Voyager Moment of our times being a celebration of technological prowess rather than exploratory zeal.

Our moment was in fact a radio signal reaching Madrid, a barely romantic event. None of this is a lament but only a recognition of the growing discernibility of the gaps in our knowledge, of our isolation by chasms of entangled 1s and 0s from the greatest achievements of our times. To be sure, the ultimate beneficiary is science but one that is increasingly built upon a body of evidence that is far too specialised to become something that can be treasured equally by all of us.

A future for driverless cars, from a limbo between trolley problems and autopilots

By Anuj Srivas and Vasudevan Mukunth

What’s the deal with everyone getting worried about artificial intelligence? It’s all the Silicon Valley elite seem willing to be apprehensive about, and Oxford philosopher Nick Bostrom seems to be the patron saint along with his book Superintelligence: Paths, Dangers, Strategies (2014).

Even if Big Data seems like it could catalyze things, the Valley could be overestimating AI’s advent. But thanks to Google’s much-watched fleet of driverless cars, conversations on regulation are already afoot. This is the sort of subject that could benefit from its tech being better understood; the technology isn’t immediately apparent to most of us. To make matters worse, now is also the period when not enough data is available for everyone to scrutinize the issue but at the same time there are some opinion-mongers distorting the early hints of a debate with their desires.

In an effort to bypass this, let’s say things happen like they always do: Google doesn’t ask anybody and starts deploying its driverless cars, and then the law is forced to shape itself around that. True, this isn’t something Google can force on people because it’s part of no pre-existing ecosystem. It can’t force participation like it did with Hangouts. Yet, the law isn’t prohibitive.

In Silicon Valley, Google has premiered its express Shopping service – delivering purchases made online within three hours of someone placing the order, at no extra cost. No extra cost because the goods are delivered using Google’s driverless cars, and the service is a test-bed for them, where they get to ‘learn’ what they will. But when it comes to buying these cars, who will? What about insurance? What about licenses?

A better trolley problem

It’s been understood for a while that the problem here is liabilities, summarized in many ways by the trolley problem. There’s something unsettling about loss of life due to machine failure, whereas it’s relatively easier to accept when the loss is the consequence of human hands. Theoretically it should make no difference – planes, for example, are flown more by computers these days than by a living, breathing pilot. Essentially, you’re trusting your life to the computers running the plane. And when driverless cars are rolled out, there’s ample reason to believe they will have a similarly low chance of failure as aircraft run by computer-pilots. But we could be missing something through this simplification.

Even if we’re laughably bad at it at times, having a human behind the wheel makes driving predictable, sure, but more importantly it makes liability easier to figure out. The problem with a driverless car is not that we’d doubt its logic – the logic could be perfect – but that we’d doubt what that logic dictates. A failure right now is an accident: a car ramming into a wall, a pole, another car, another person, etc. Are these the only failures, though? A driverless car does seem similar to autopilot, but we must be concerned about what its logic dictates. We consciously say that human decision-making skills are inferior, that we can’t be trusted. Though that is true, we cross onto new epistemological ground when we do so.

Perhaps the trolley problem isn’t well-thought out. The problem with driverless cars is not about 5 lives versus 1 life; that’s an utterly human problem. The updated problem for driverless cars would be: should the algorithm look to save the passengers of the car or should it look to save bystanders?

And yet even this updated trolley problem is too simplistic. Computers and programmers make these kinds of decisions on a daily basis already, by choosing at what time, for instance, an airbag should deploy, especially considering that if deployed unnecessarily, the airbag can also grievously injure a human being.
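To make that comparison concrete, here is a minimal sketch of the kind of rule an airbag controller encodes (the names and the threshold are invented for illustration; no manufacturer’s actual logic is implied):

# Hypothetical illustration: an airbag fires when measured deceleration
# crosses a threshold derived from crash statistics. The "decision" is a
# fixed physical rule, not a case-by-case moral judgement.

AIRBAG_DECELERATION_THRESHOLD_G = 20.0  # assumed value, purely illustrative


def should_deploy_airbag(deceleration_g: float, occupant_seated: bool) -> bool:
    """Return True if this (hypothetical) controller should fire the airbag."""
    if not occupant_seated:
        return False  # deploying into an empty seat only adds risk
    return deceleration_g >= AIRBAG_DECELERATION_THRESHOLD_G


print(should_deploy_airbag(25.0, occupant_seated=True))  # True: hard crash
print(should_deploy_airbag(5.0, occupant_seated=True))   # False: hard braking

The point is that such a rule can be defended with physics and statistics alone, which is exactly the refuge the next section says driverless cars won’t have.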

Therefore, we shouldn’t fall into a Frankenstein complex where our technological creations are automatically assumed to be doing evil things simply because they have no human soul. It’s not a question of “it’s bad if a machine does it and good if a human does it”.

Who programs the programmers?

And yet, the scale and the moral ambiguity are pumped up to a hundred when it comes to driverless cars. Things like airbag deployment can often take refuge in physics and statistics – they are usually seen in that context. For driverless cars, though, specific programming decisions will be forced to confront morally ambiguous situations, and it is here that the problem starts. If an airbag deploys unintentionally or wrongly it can always be explained away as an unfortunate error, accident or freak situation. Or, more simply, that we can’t program airbags to deploy on a case-by-case basis. Driverless cars, however, can’t take refuge behind statistics or simple physics when confronted with their trolley problem.

There is a more interesting question here. If a driverless car has to choose between a) running over a dog, b) swerving in order to miss the dog, thereby hitting a tree, and c) freezing and doing nothing, what will it do? It will do whatever the programmer tells it to do. Earlier we had the choice, depending on our own moral compass, as to what we should do. People who like dogs wouldn’t kill the animal; people who cared more about their car would kill the dog. So, who programs the programmers?
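Here is a toy sketch of what ‘whatever the programmer tells it to do’ can look like in code (every name and weight below is invented for illustration, not drawn from any real system):

# The "moral compass" collapses into numbers somebody chose in advance.
HARM_WEIGHTS = {
    "run_over_dog": 3.0,      # assumed weighting, purely illustrative
    "swerve_into_tree": 5.0,  # risks the passengers
    "freeze": 4.0,            # may hit the dog anyway, later and harder
}


def choose_action(available_actions):
    """Pick the action with the lowest pre-programmed harm score."""
    return min(available_actions, key=lambda action: HARM_WEIGHTS[action])


print(choose_action(["run_over_dog", "swerve_into_tree", "freeze"]))
# -> "run_over_dog"

Swap the weights around and the same car swerves into the tree instead; the choice belongs to whoever filled in the table, which is the whole point of asking who programs the programmers.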

And as with the simplification to a trolley problem, comparing autonomous cars to the autopilot on board an aircraft is similarly short-sighted. In his book Normal Accidents, sociologist Charles Perrow talks about nuclear power plant technology and its implications for insurance policy. NPPs are packed with redundant safety systems. When accidents don’t happen, these systems make up a bulk of the plant’s dead weight, but when an accident does happen, their failure is often the failure worth talking about.

So, even as the aircraft is flying through the air, control towers are monitoring its progress, the flight data recorders act as a deterrent against complacency, and the sheer cost of one flight makes redundant safety systems feasible over a reasonable span of time.

Safety is a human thing

These features together make up the environment in which autopilot functions. On the other hand, an autonomous car doesn’t inspire the same sense of being in secure hands. In fact, it’s like an economy of scale working the other way. What safety systems kick in when the ghost in the machine fails? To continue the metaphor: As Maria Konnikova pointed out in The New Yorker in September 2014, maneuvering an aircraft can be increasingly automated. The problem arises when something about it fails and humans have to take over: we won’t be able to take over as effectively as we think we can because automation encourages our minds to wander, to not pay attention to the differences between normalcy and failure. As a result, a ‘redundancy of airbags’ is encouraged.

In other words, it would be too expensive to include all these foolproof safety measures in driverless cars, but at the same time they ought to be included. And this is why the first ones likely won’t be owned by individuals. The best way to introduce them would be through taxi services like Uber, effectuating communal car sharing with autonomous drivers. In a world of driverless cars, we may not own the cars themselves, so a company like Uber could internalize the costs involved in producing that ecosystem, and having the cars around in bulk makes safety-redundancies feasible as well.

And if driverless cars are being touted as the future, owning a car could probably become a thing of the past, too. The thrust of the digital economy has been to share and rent rather than to own, with pretty much most things. Only essentials like smartphones are owned. Look at music, business software, games, rides (Uber), even apartments (Airbnb). Why not autonomous vehicles?

Can science and philosophy mix constructively?

Quantum mechanics can sometimes be very hard to understand, so much so that even thinking about it becomes difficult. This could be because its foundations lay in an action-centric depiction of reality that slowly rejected its origins and assumed a thought-centric garb.

In his 1925 paper on the topic, physicist Werner Heisenberg used only observable quantities to denote physical phenomena. He also pulled up Niels Bohr in that great paper, saying, “It is well known that the formal rules which are used [in Bohr’s 1913 quantum theory] for calculating observable quantities such as the energy of the hydrogen atom may be seriously criticized on the grounds that they contain, as basic elements, relationships between quantities that are apparently unobservable in principle, e.g., position and speed of revolution of the electron.”

A true theory

Because of the uncertainty principle, and other principles like it, quantum mechanics started to develop into a set of theories that could be tested against observations, and that, to physicists, left very little to thought experiments. Put another way, there was nothing a quantum-physicist could think up that couldn’t be proved or disproved experimentally. This way of looking at the world – in philosophy – is called logical positivism.

This made quantum mechanics a true theory of reality, as opposed to a hypothetical, unverifiable one.

However, even before Heisenberg’s paper was published, positivism was starting to be rejected, especially by chemists. An important example was the advent of statistical mechanics and atomism in the early 19th century. Both inferred, without actual physical observations, that if two volumes of hydrogen and one volume of oxygen combined to form water vapor, then a water molecule would have to comprise two atoms of hydrogen and one atom of oxygen.
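The reasoning, reconstructed here in its standard textbook form (a gloss, not spelled out in the original post), goes through Avogadro’s hypothesis that equal volumes of gas hold equal numbers of molecules:

2 volumes H2 + 1 volume O2 -> 2 volumes water vapor, therefore 2 H2 + O2 -> 2 H2O

so each water molecule must carry two hydrogen atoms for every oxygen atom – a conclusion reached without anyone ever seeing a molecule.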

A logical positivist would have insisted on actually observing the molecule individually, but that was impossible at the time. This insistence on submitting physical proof, thus, played an adverse role in the progress of science by delaying/denying success its due.

As time passed, the failures of positivism started to take hold on quantum mechanics. In a 1926 conversation with Albert Einstein, Heisenberg said, “… we cannot, in fact, observe such a path [of an electron in an atom]; what we actually record are the frequencies of the light radiated by the atom, intensities and transition probabilities, but no actual path.” And since he held that any theory ought only to be a true theory, he concluded that these parameters must feature in the theory, and what it projected, as themselves instead of the unobservable electron path.

This wasn’t the case.

Gaps in our knowledge

Heisenberg’s probe of the granularity of nature led to his distancing from the theory of logical positivism. And Steven Weinberg, physicist and Nobel Laureate, uses just this distancing to harshly argue, in a 1994 essay titled Against Philosophy, that physics has never benefited from the advice of philosophers – and that when it has, it has only been to negate the advice of another philosopher – almost suggesting that ‘science is all there is’ by dismissing the aesthetic in favor of the rational.

In doing so, Weinberg doesn’t acknowledge the fact that science and philosophy go hand in hand; what he has done is simply to outline the failure of logical positivism in the advancement of science.

At the simplest, philosophy in various forms guides human thought toward ideals like objective truth and is able to establish their superiority over subjective truths. Philosophy also provides the framework within which we can conceptualize unobservables and contextualize them in observable space-time.

In fact, Weinberg’s conclusion brings to mind an article in Nature News & Comment by Daniel Sarewitz. In the piece, Sarewitz argued that for someone who didn’t really know the physics supporting the Higgs boson, its existence would have to be a matter of faith rather than one of knowledge. Similarly, for someone who couldn’t translate electronic radiation to ‘mean’ the electron’s path, the latter would have to be a matter of faith or hope, not a bit of knowledge.

Efficient descriptions

A more well-defined example is the theory of quarks and gluons, both of which are particles that haven’t been spotted yet but are believed to exist by the scientific community. The equipment to spot them is yet to be built and will cost hundreds of billions of dollars, and be orders of magnitude more sophisticated than the LHC.

In the meantime, unlike what Weinberg and like what Sarewitz would have you believe, we do rely on philosophical principles, like that of sufficient reason (Spinoza 1663; Leibniz 1686), to fill up space-time at levels we can’t yet probe, to guide us toward a direction that we ought to probe after investing money in it.

This is actually no different from a layman going from understanding electric fields to supposedly understanding the Higgs field. At the end of the day, efficient descriptions make the difference.

Exchange of knowledge

This sort of dependence also implies that philosophy draws a lot from science, and uses it to define its own prophecies and shortcomings. We must remember that, while the rise of logical positivism may have shielded physicists from atomism, scientific verification through its hallowed method also did push positivism toward its eventual rejection. There was human agency in both these timelines, both motivated by either the support for or the rejection of scientific and philosophical ideas.

The moral is that scientists must not reject philosophy for its passage through crests and troughs of credence because science also suffers the same passage. What more proof of this do we need than Popper’s and Kuhn’s arguments – irrespective of either of them being true?

Yes, we can’t figure things out with pure thought, and yes, the laws of physics underlying the experiences of our everyday lives are completely known. However, in the search for objective truth – whatever that is – we can’t neglect pure thought until, as Weinberg’s Heisenberg-example itself seems to suggest, we know everything there is to know, until science and philosophy, rather verification-by-observation and conceptualization-by-ideation, have completely and absolutely converged toward the same reality.

Until, in short, we can describe nature continuously instead of discretely.

Liberation of philosophical reasoning

By separating scientific advance from contributions from philosophical knowledge, we are advocating for the ‘professionalization’ of scientific investigation, that it must decidedly lack the attitude-born depth of intuition, which is aesthetic and not rational.

It is against such advocacy that the philosopher Paul Feyerabend protested vehemently: “The withdrawal of philosophy into a ‘professional’ shell of its own has had disastrous consequences.” He means, in other words, that scientists have become too specialized and are rejecting the useful bits of philosophy.

In his seminal work Against Method (1975), Feyerabend suggested that scientists occasionally subject themselves to methodological anarchism so that they may come up with new ideas, unrestricted by the constraints imposed by the scientific method, freed in fact by the liberation of philosophical reasoning. These new ideas, he suggests, can then be reformulated again and again according to where and how observations fit into them.

In the meantime, the ideas are not born from observations but pure thought that is aided by scientific knowledge from the past. As Wikipedia puts it neatly: “Feyerabend was critical of any guideline that aimed to judge the quality of scientific theories by comparing them to known facts.” These ‘known facts’ are akin to Weinberg’s observables.

So, until the day we can fully resolve nature’s granularity, and assume the objective truth of no reality before that, Pierre-Simon Laplace’s two-century-old words should show the way: “We may regard the present state of the universe as the effect of its past and the cause of its future” (A Philosophical Essay on Probabilities, 1814).

This article, as written by me, originally appeared in The Hindu’s science blog, The Copernican, on June 6, 2013.


Bohr and the breakaway from classical mechanics

One hundred years ago, Niels Bohr developed the Bohr model of the atom, where electrons go around a nucleus at the center like planets in the Solar System. The model and its implications brought a lot of clarity to the field of physics at a time when physicists didn’t know what was inside an atom, and how that influenced the things around it. For his work, Bohr was awarded the physics Nobel Prize in 1922.

The Bohr model marked a transition from the world of Isaac Newton’s classical mechanics, where gravity was the dominant force and values like mass and velocity were accurately measurable, to that of quantum mechanics, where objects were too small to be seen even with powerful instruments and their exact position didn’t matter.

Even though modern quantum mechanics is still under development, its origins can be traced to humanity’s first thinking of energy as being quantized and not randomly strewn about in nature, and the Bohr model was an important part of this thinking.

The Bohr model

According to the Dane, electrons orbiting the nucleus at different distances were at different energies, and an electron inside an atom – any atom – could only have specific energies. Thus, electrons could ascend or descend through these orbits by gaining or losing a certain quantum of energy, respectively. By allowing for such transitions, the model acknowledged a more discrete energy conservation policy in physics, and used it to explain many aspects of chemistry and chemical reactions.
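As a concrete illustration (the standard textbook numbers, not spelled out in the original post), the energies Bohr’s model permits in the hydrogen atom, and the energy of the light emitted when an electron drops from orbit n_2 to orbit n_1, are

\[ E_n = -\frac{13.6\ \text{eV}}{n^2}, \qquad h\nu = E_{n_2} - E_{n_1} = 13.6\ \text{eV}\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right) \]

so only certain frequencies ν can ever be emitted, which is what “a certain quantum of energy” amounts to in practice.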

Unfortunately, this model couldn’t evolve continuously to become its modern equivalent because it could properly explain only the hydrogen atom, and it couldn’t account for the Zeeman effect.

What is the Zeeman effect? When an electron jumps from a higher to a lower energy-level, it loses some energy. This can be charted using a “map” of energies like the electromagnetic spectrum, showing if the energy has been lost as infrared, UV, visible, radio, etc., radiation. In 1896, Dutch physicist Pieter Zeeman found that this map could be distorted when the energy was emitted in the presence of a magnetic field, leading to the effect named after him.
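For the simplest (‘normal’) case, the size of this distortion has a compact form, given here only as a reference point rather than as part of the original post: in a magnetic field B, each energy level is shifted by

\[ \Delta E = m_l\, \mu_B\, B, \qquad \mu_B = \frac{e\hbar}{2m_e} \approx 5.8 \times 10^{-5}\ \text{eV/T} \]

where m_l is the electron’s magnetic quantum number, so a single spectral line splits into several closely spaced ones.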

It was only in 1925 that the cause of this behavior was found (by Wolfgang Pauli, George Uhlenbeck and Samuel Goudsmit), attributed to a property of electrons called spin.

The Bohr model couldn’t explain spin or its effects. It wasn’t discarded for this shortcoming, however, because it had succeeded in explaining a lot more, such as the emission of light in lasers, an application developed on the basis of Bohr’s theories and still in use today.

The model was also important for being a tangible breakaway from the principles of classical mechanics, which were useless at explaining quantum mechanical effects in atoms. Physicists recognized this and insisted on building on what they had.

A way ahead

To this end, a German named Arnold Sommerfeld provided a generalization of Bohr’s model – a correction – to let it explain the Zeeman effect in ionized helium (which, like hydrogen, has a single electron orbiting its nucleus).

In 1924, Louis de Broglie introduced particle-wave duality into quantum mechanics, proposing that matter at its simplest could be both particulate and wave-like. As such, he was able to verify Bohr’s model mathematically from a wave perspective. Before him, in 1905, Albert Einstein had postulated the existence of light-particles called photons but couldn’t explain how they could be related to heat waves emanating from a gas, a problem he later solved using de Broglie’s logic.
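That wave-based verification can be written down in two lines (a standard textbook sketch, not reproduced from the original article): give the electron de Broglie’s wavelength and demand that a whole number of wavelengths fit around its orbit,

\[ \lambda = \frac{h}{p}, \qquad 2\pi r = n\lambda \;\Rightarrow\; L = pr = \frac{nh}{2\pi} = n\hbar \]

which is exactly the quantization of angular momentum that Bohr had assumed for his orbits.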

All these developments reinforced the apparent validity of Bohr’s model. Simultaneously, new discoveries were emerging that continuously challenged its authority (and classical mechanics’, too): molecular rotation, ground-state energy, Heisenberg’s uncertainty principle, Bose-Einstein statistics, etc. One option was to fall back to classical mechanics and rework quantum theory thereon. Another was to keep moving ahead in search of a solution.

However, this decision didn’t have to be taken because the field of physics itself had started to move ahead in different ways, ways which would become ultimately unified.

Leaps of faith

Between 1900 and 1925, there were a handful of people responsible for opening this floodgate to tide over the centuries old Newtonian laws. Perhaps the last among them was Niels Bohr; the first was Max Planck, who originated quantum theory when he was working on making light bulbs glow brighter. He found that the smallest bits of energy to be found in nature weren’t random, but actually came in specific amounts that he called quanta.
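Planck’s “specific amounts” have a one-line form, added here only as a reference point: the energy of each quantum is tied to the frequency ν of the radiation by

\[ E = h\nu, \qquad h \approx 6.626 \times 10^{-34}\ \text{J s} \]

so energy can be exchanged only in whole multiples of hν, never in arbitrary fractions.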

It is notable that when either of these men began working on their respective contributions to quantum mechanics, they took a leap of faith that couldn’t be spanned by purely scientific reasoning, as is the dominant process today, but by faith in philosophical reasoning and, simply, hope.

For example, Planck wasn’t fond of a class of mechanics he used to establish quantum mechanics. When asked about it, he said it was an “act of despair”, that he was “ready to sacrifice any of [his] previous convictions about physics”. Bohr, on the other hand, had relied on the intuitive philosophy of correspondence to conceive of his model. In fact, even before he had received his Nobel in 1922, Bohr had begun to deviate from his most eminent finding because it disagreed with what he thought were more important, and to be preserved, foundational ideas.

It was also through this philosophy of correspondence that the many theories were able to be unified over the course of time. According to it, a new theory should replicate the results of an older, well-established one in the domain where it worked.

Coming a full circle

Since humankind’s investigation into the nature of physics has proceeded from the large to the small, new attempts to investigate from the small to the large were likely to run into old theories. And when multiple new quantum theories were found to replicate the results of one classical theory, they could be translated between each other by corresponding through the old theory (thus the name).

Because the Bohr model could successfully explain how and why energy was emitted by electrons jumping orbits in the hydrogen atom, it had a domain of applicability. So, it couldn’t be entirely wrong and would have to correspond in some way with another, possibly more successful, theory.

Earlier, in 1924, de Broglie’s formulation was suffering from its own inability to explain certain wave-like phenomena in particulate matter. Then, in 1926, Erwin Schrodinger built on it and, like Sommerfeld did with Bohr’s ideas, generalized them so that they could apply in experimental quantum mechanics. The end result was the famous Schrodinger’s equation.
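For reference – the post names the equation but doesn’t write it out – the time-independent form Schrodinger arrived at is

\[ -\frac{\hbar^2}{2m}\nabla^2 \psi + V(\mathbf{r})\,\psi = E\,\psi \]

and solving it with the Coulomb potential of the hydrogen atom reproduces the same energy levels E_n that the Bohr model had predicted, which is what allows the older model to be read as an approximation of the newer theory.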

The Sommerfeld-Bohr theory corresponds with the equation, and this is where it comes “full circle”. After the equation became well known, the Bohr model was finally understood as being a semi-classical approximation of the Schrodinger equation. In other words, the model represented some of the simplest corrections to be made to classical mechanics for it to become quantum in any way.

An ingenious span

After this, the Bohr model became a fully integrated part of the foundational ancestry of modern quantum mechanics. While it is today one of many models of comparable significance in the field, it holds a special place in history: a bridge between the older classical thinking and the newer quantum thinking.

Even philosophically speaking, Niels Bohr and his pathbreaking work were important because they planted the seeds of ingenuity in our minds, and led us to think outside of convention.

This article, as written by me, originally appeared in The Copernican science blog on May 19, 2013.

Bohr and the breakaway from classical mechanics

Niels Bohr, 1950. Photo: Blogspot

One hundred years ago, Niels Bohr developed the Bohr model of the atom, where electrons go around a nucleus at the centre like planets in the Solar System. The model and its implications brought a lot of clarity to the field of physics at a time when physicists didn’t know what was inside an atom, and how that influenced the things around it. For his work, Bohr was awarded the physics Nobel Prize in 1922.

The Bohr model marked a transition from the world of Isaac Newton’s classical mechanics, where gravity was the dominant force and quantities like mass and velocity could be measured precisely, to that of quantum mechanics, where objects were too small to be seen even with powerful instruments and their exact positions could no longer be pinned down.

Even though modern quantum mechanics is still being developed, its origins can be traced to the moment humanity first began thinking of energy as quantised rather than randomly strewn about in nature, and the Bohr model was an important part of this thinking.

The Bohr model

According to the Dane, electrons orbiting the nucleus at different distances were at different energies, and an electron inside an atom – any atom – could only have specific energies. Thus, electrons could ascend or descend through these orbits by gaining or losing a certain quantum of energy, respectively. By allowing for such transitions, the model brought a discrete, quantised picture of energy into physics, and used it to explain many aspects of chemistry and chemical reactions.
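
To make this concrete, here is a minimal sketch – my own illustration, not part of the original article – of the hydrogen energy levels the Bohr model predicts, using the textbook value of 13.6 eV for the ground state. The jump from the third orbit to the second reproduces the familiar red line of the Balmer series.

```python
# Bohr-model energy levels of hydrogen and the photon emitted in a jump,
# using standard textbook constants (an illustrative sketch, not from the article).

PLANCK_EV_S = 4.135667696e-15  # Planck's constant, eV*s
SPEED_OF_LIGHT = 2.998e8       # m/s

def bohr_energy(n: int) -> float:
    """Energy (eV) of the n-th orbit of hydrogen in the Bohr model."""
    return -13.6 / n**2

def transition_wavelength(n_high: int, n_low: int) -> float:
    """Wavelength (nm) of the photon emitted when an electron drops
    from the n_high orbit to the n_low orbit."""
    delta_e = bohr_energy(n_high) - bohr_energy(n_low)  # energy released, eV
    frequency = delta_e / PLANCK_EV_S                    # Hz
    return SPEED_OF_LIGHT / frequency * 1e9              # metres -> nanometres

print(transition_wavelength(3, 2))  # ~656 nm: the red Balmer line
```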

Unfortunately, this model couldn’t evolve continuously to become its modern equivalent because it could properly explain only the hydrogen atom, and it couldn’t account for the Zeeman effect.

What is the Zeeman effect? When an electron jumps from a higher to a lower energy level, it loses some energy. This loss can be charted on a “map” of energies like the electromagnetic spectrum, showing whether the energy has been emitted as infrared, ultraviolet, visible, radio or other radiation. In 1896, the Dutch physicist Pieter Zeeman found that this map could be distorted – with single spectral lines splitting into several – when the energy was emitted in the presence of a magnetic field, leading to the effect named after him.
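
For a sense of scale, here is a rough sketch of my own (using the standard value of the Bohr magneton, not a figure from the article): the shift Zeeman observed is minuscule compared with the energy of the transition itself.

```python
# Approximate size of the (normal) Zeeman splitting: adjacent components of a
# spectral line separate by roughly the Bohr magneton times the field strength.
# An illustrative sketch with textbook constants, not from the article.

BOHR_MAGNETON = 5.788e-5  # eV per tesla

def zeeman_shift(b_field_tesla: float) -> float:
    """Approximate energy separation (eV) between adjacent Zeeman components."""
    return BOHR_MAGNETON * b_field_tesla

# A 1-tesla field shifts a line by only ~6e-5 eV, against the ~1.9 eV of the
# red Balmer transition itself -- a tiny but measurable distortion.
print(zeeman_shift(1.0))
```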

It was only in 1925 that the cause of this behaviour was found (by Wolfgang Pauli, George Uhlenbeck and Samuel Goudsmit), attributed to a property of electrons called spin.

The Bohr model couldn’t explain spin or its effects. It wasn’t discarded for this shortcoming, however, because it had succeeded in explaining a lot more, such as the emission of light in lasers, an application developed on the basis of Bohr’s theories and still in use today.

The model was also important for being a tangible breakaway from the principles of classical mechanics, which were useless at explaining quantum mechanical effects in atoms. Physicists recognised this and insisted on building on what they had.

A way ahead

To this end, a German named Arnold Sommerfeld provided a generalisation of Bohr’s model – a correction – to let it explain the Zeeman effect in singly ionised helium (which, like hydrogen, has just one electron orbiting its nucleus).

In 1924, Louis de Broglie introduced particle-wave duality into quantum mechanics, proposing that matter at its simplest could be both particulate and wave-like. As such, he was able to verify Bohr’s model mathematically from a wave perspective. Before him, in 1905, Albert Einstein had postulated the existence of light particles called photons, but he couldn’t explain how they might relate to the heat waves emanating from a gas – a problem he later solved using de Broglie’s logic.
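
One way to see how the wave picture “verifies” the Bohr model – again a sketch of my own with standard constants, not a calculation from the article – is to check that a whole number of de Broglie wavelengths fits around each Bohr orbit.

```python
# Check that the n-th Bohr orbit accommodates exactly n de Broglie wavelengths.
# Illustrative sketch using standard physical constants.

import math

PLANCK = 6.62607015e-34        # J*s
ELECTRON_MASS = 9.1093837e-31  # kg
BOHR_RADIUS = 5.29177e-11      # m
FINE_STRUCTURE = 1 / 137.036
SPEED_OF_LIGHT = 2.998e8       # m/s

for n in range(1, 5):
    radius = n**2 * BOHR_RADIUS                     # radius of the n-th orbit
    speed = FINE_STRUCTURE * SPEED_OF_LIGHT / n     # electron speed in that orbit
    wavelength = PLANCK / (ELECTRON_MASS * speed)   # de Broglie wavelength
    print(n, round(2 * math.pi * radius / wavelength, 3))  # ratio comes out ~n
```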

All these developments reinforced the apparent validity of Bohr’s model. Simultaneously, new discoveries were emerging that continuously challenged its authority (and classical mechanics’, too): molecular rotation, ground-state energy, Heisenberg’s uncertainty principle, Bose-Einstein statistics, etc. One option was to fall back to classical mechanics and rework quantum theory thereon. Another was to keep moving ahead in search of a solution.

However, this decision didn’t have to be taken, because the field of physics itself had started to move ahead in different ways – ways that would ultimately become unified.

Leaps of faith

Between 1900 and 1925, a handful of people were responsible for opening this floodgate and moving past the centuries-old Newtonian laws. Perhaps the last among them was Niels Bohr; the first was Max Planck, who originated quantum theory while working on making light bulbs glow brighter. He found that the smallest bits of energy in nature weren’t random, but came in specific amounts that he called quanta.

It is notable that when these two men began working on their respective contributions to quantum mechanics, they took a leap of faith that couldn’t be made with purely scientific reasoning – the dominant process today – but only with philosophical reasoning and, simply, hope.

For example, Planck wasn’t fond of the statistical mechanics he had to borrow to establish quantum theory. When asked about it, he said it was an “act of despair”, and that he had been “ready to sacrifice any of [his] previous convictions about physics”. Bohr, on the other hand, relied on the intuitive philosophy of correspondence to conceive of his model. In fact, only a few years after receiving his Nobel Prize in 1922, Bohr began to deviate from his most eminent finding because it disagreed with foundational ideas he considered more important and worth preserving.

It was also through this philosophy of correspondence that the many theories were able to be unified over the course of time. According to it, a new theory should replicate the results of an older, well-established one in the domain where it worked.
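
The canonical example of correspondence – sketched below with textbook values, my own illustration rather than the article’s – is that for very large orbits, the frequency of light emitted in a Bohr-model jump approaches the electron’s classical orbital frequency.

```python
# Correspondence principle, numerically: the n -> n-1 transition frequency in
# the Bohr model approaches the classical orbital frequency as n grows.
# Illustrative sketch using the Rydberg frequency (R times c).

RYDBERG_FREQ = 3.28984e15  # Hz

def quantum_jump_frequency(n: int) -> float:
    """Frequency (Hz) of the photon emitted in an n -> n-1 Bohr-model jump."""
    return RYDBERG_FREQ * (1 / (n - 1)**2 - 1 / n**2)

def classical_orbit_frequency(n: int) -> float:
    """Classical revolution frequency (Hz) of the electron in the n-th orbit."""
    return 2 * RYDBERG_FREQ / n**3

for n in (2, 10, 100, 1000):
    # The ratio tends to 1 as n grows: the quantum result "corresponds"
    # with the classical one in the old theory's domain.
    print(n, quantum_jump_frequency(n) / classical_orbit_frequency(n))
```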

Coming a full circle

Since humankind’s investigation into the nature of physics has proceeded from the large to the small, new attempts to investigate from the small to the large were likely to run into old theories. And when multiple new quantum theories were found to replicate the results of one classical theory, they could be translated between each other by corresponding through the old theory (thus the name).

Because the Bohr model could successfully explain how and why energy was emitted by electrons jumping orbits in the hydrogen atom, it had a domain of applicability. So, it couldn’t be entirely wrong and would have to correspond in some way with another, possibly more successful, theory.

Earlier, in 1924, de Broglie’s formulation had suffered from its own inability to explain certain wave-like phenomena in particulate matter. Then, in 1926, Erwin Schrodinger built on it and, as Sommerfeld had done with Bohr’s ideas, generalised it so that it could be applied in experimental quantum mechanics. The end result was the famous Schrodinger equation.

The Sommerfeld-Bohr theory corresponds with the equation, and this is where it comes “full circle”. After the equation became well known, the Bohr model was finally understood as being a semi-classical approximation of the Schrodinger equation. In other words, the model represented some of the simplest corrections to be made to classical mechanics for it to become quantum in any way.

An ingenious span

After this, the Bohr model became a fully integrated part of the foundational ancestry of modern quantum mechanics. While it is today only one of many ideas of comparable significance in the field, it holds a special place in history: a bridge between the older classical thinking and the newer quantum thinking.

Even philosophically speaking, Niels Bohr and his path-breaking work were important because they planted the seeds of ingenuity in our minds and led us to think outside convention.