Getting ahead of theory, experiment, ourselves

Science journalist Laura Spinney wrote an article in The Guardian on January 9, 2022, entitled ‘Are we witnessing the dawn of post-theory science?’. This excerpt from the article captures its points well, I thought:

Or take protein structures. A protein’s function is largely determined by its structure, so if you want to design a drug that blocks or enhances a given protein’s action, you need to know its structure. AlphaFold was trained on structures that were derived experimentally, using techniques such as X-ray crystallography and at the moment its predictions are considered more reliable for proteins where there is some experimental data available than for those where there is none. But its reliability is improving all the time, says Janet Thornton, former director of the EMBL European Bioinformatics Institute (EMBL-EBI) near Cambridge, and it isn’t the lack of a theory that will stop drug designers using it. “What AlphaFold does is also discovery,” she says, “and it will only improve our understanding of life and therapeutics.”

Essentially, the article is concerned with machine learning’s ability to parse large amounts of data, find patterns in them and use them to generate theories – taking over an important realm of human endeavour. In keeping with tradition, it doesn’t answer the question in its headline with a definitive ‘yes’ but with something between a hard ‘maybe’ and a soft ‘no’. Spinney herself ends by quoting Picasso: “Computers are useless. They can only give you answers” – although the paragraph right before belies the painter’s confidence with a hope that the human way of thinking about theories is still meaningful and useful:

The final objection to post-theory science is that there is likely to be useful old-style theory – that is, generalisations extracted from discrete examples – that remains to be discovered and only humans can do that because it requires intuition. In other words, it requires a kind of instinctive homing in on those properties of the examples that are relevant to the general rule. One reason we consider Newton brilliant is that in order to come up with his second law he had to ignore some data. He had to imagine, for example, that things were falling in a vacuum, free of the interfering effects of air resistance.

I’m personally cynical about such claims. If we think we are going to be obsolete, there must be a part of the picture we’re missing.

An idea partly similar to this ‘post-theory hypothesis’, but pointing the other way, appeared a few years ago. In 2013, philosopher Richard Dawid wrote a 190-page essay attempting to make the case that string theory shouldn’t be held back by the lack of experimental evidence, i.e. that it was post-empirical. Of course, Spinney is writing about machines taking over the responsibility of, but not precluding the need for, theorising – whereas Dawid and others have argued that string theory doesn’t need experimental data to be considered true.

The idea of falsifiability is important here. A theory is falsifiable if you can design an experiment whose outcome could, in principle, reveal the theory to be flawed. A theory can be flawless in its domain and still falsifiable: Newton’s theory of gravity, for example, is complete and useful in a limited context but can’t explain the precession of the perihelion of Mercury’s orbit. The theory underlying astrology, on the other hand, is unfalsifiable. In science, falsifiable theories are said to be better than unfalsifiable ones.

I don’t know what impact Dawid’s book-length effort had, although others before and after him – Sean Carroll for one – have supported the view that scientific theories should no longer need to be falsifiable in order to be legitimate. While I’m not familiar enough with criticisms of the philosophy of falsifiability, I found a better reason to consider trusting the validity of string theory sans experimental evidence in a June 2017 preprint paper written by Eva Silverstein:

It is sometimes said that theory has strayed too far from experiment/observation. Historically, there are classic cases with long time delays between theory and experiment – Maxwell’s and Einstein’s waves being prime examples, at 25 and 100 years respectively. These are also good examples of how theory is constrained by serious mathematical and thought-experimental consistency conditions.

Of course electromagnetism and general relativity are not representative of most theoretical ideas, but the point remains valid. When it comes to the vast theory space being explored now, most testable ideas will be constrained or falsified. Even there I believe there is substantial scientific value to this: we learn something significant by ruling out a valid theoretical possibility, as long as it is internally consistent and interesting. We also learn important lessons in excluding potential alternative theories based on theoretical consistency criteria.

This said, Dawid’s book, entitled String Theory and the Scientific Method, was perhaps the most popular pronouncement of his views in recent years (at least in terms of coverage in the non-technical press), even if by then he’d been propounding them for nine years and his supporters included a bevy of influential physicists. Very simply put, an important part of Dawid’s argument was that string theory has certain characteristics that make it the only possible theory for all the epistemic niches it fills – so as long as we expect all those niches to be filled by a single theory, string theory may be true by virtue of being the sole possible option.

It’s not hard to see the holes in this line of reasoning, but again, I’ve considerably simplified his idea. This said, physicist Peter Woit has been (from what little I’ve seen) the most vocal critic of string theorists’ appeals to ‘post-empirical realism’, and has often directed his ire against the uniqueness hypothesis – not least because accepting it would endanger, for the sake of just one theory’s survival, the foundation upon which almost every other valid scientific theory stands. You must admit this is a powerful argument, and to my mind more persuasive than Silverstein’s.

In the words of another physicist, Carlo Rovelli, from September 2016:

String theory is a proof of the dangers of relying excessively on non-empirical arguments. It raised great expectations thirty years ago, when it promised to [solve a bunch of difficult problems in physics]. Nothing of this has come true. String theorists, instead, have [made a bunch of other predictions to explain why it couldn’t solve what it set out to solve]. All this was false.

From a Popperian point of view, these failures do not falsify the theory, because the theory is so flexible that it can be adjusted to escape failed predictions. But from a Bayesian point of view, each of these failures decreases the credibility in the theory, because a positive result would have increased it. The recent failure of the prediction of supersymmetric particles at LHC is the most flagrant example. By Bayesian standards, it lowers the degree of belief in string theory dramatically. This is an empirical argument. Still, Joe Polchinski, prominent string theorist, writes that he evaluates the probability of string theory to be correct at 98.5% (!).

Scientists that devoted their life to a theory have difficulty to let it go, hanging on non-empirical arguments to save their beliefs, in the face of empirical results that Bayes confirmation theory counts as negative. This is human. A philosophy that takes this as an exemplar scientific attitude is a bad philosophy of science.
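Rovelli’s Bayesian point can be made concrete with a toy calculation. The sketch below uses entirely made-up numbers – the priors and likelihoods are illustrative assumptions, not anyone’s actual credences – but it shows the mechanism: if a theory assigned a higher probability to seeing supersymmetric particles at the LHC than its alternatives did, then a null result must lower our credence in that theory.

```python
# Toy Bayesian update on a failed prediction. All numbers are
# illustrative assumptions, not real credences in string theory.

def update_on_failure(prior, p_pred_given_theory, p_pred_given_not_theory):
    """Posterior probability of the theory after its prediction fails.

    prior: credence in the theory before the experiment
    p_pred_given_theory: probability the theory assigned to the prediction
    p_pred_given_not_theory: probability alternatives assigned to it
    """
    p_fail_given_theory = 1 - p_pred_given_theory
    p_fail_given_not = 1 - p_pred_given_not_theory
    # Total probability of the observed failure (the evidence)
    evidence = prior * p_fail_given_theory + (1 - prior) * p_fail_given_not
    return prior * p_fail_given_theory / evidence

# Hypothetical numbers: credence 0.5 in the theory; the theory says the
# predicted particles appear with probability 0.6, alternatives say 0.2.
posterior = update_on_failure(0.5, 0.6, 0.2)
print(round(posterior, 3))  # 0.333 — the failed prediction lowers credence
```

The asymmetry Rovelli describes is visible here: the more strongly the theory bets on a prediction relative to its rivals, the more a failure costs it, whereas a theory flexible enough to predict almost nothing distinctive loses almost nothing – which is exactly why Popperian falsification never bites.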