A Kuhnian gap between research publishing and academic success

There is a gap between research publishing and how it relates to academic success. On the one hand, there are scientists complaining of low funding, short-staffed labs, low-quality or absent equipment, suboptimal employment and tenure terms, bureaucratic incompetence and political interference. On the other, there are scientists who describe their success within academia in terms of being published in XYZ journals (with impact factors of PQR), having high h-indices, having so many papers to their names, etc.

These two scenarios – both very real in India and, I imagine, in most countries – don’t straightforwardly lead from one to the other. They require a bridge, a systemic symptom that makes both of them possible even when they’re incompatible with each other. This bridge is those scientists’ attitudes about what it’s okay to do in order to keep the two façades in harmonious coexistence.

What is it okay to do? For starters, keep the research-publishing machinery running in a way that allows them to evaluate other scientists on matters other than their scientific work. This way, lack of resources for research can be decoupled from scientists’ output in journals. Clever, right?

According to a study published a month ago, manuscripts that include a Nobel laureate’s name among the coauthors are six times more likely to be accepted for publication than those without a laureate’s name in the byline. This finding piles onto other known problems with peer-review, including gender-related ones: women’s papers are accepted less often, and men dominate the community of peer-reviewers. Nature News reported:

Knowledge of the high status of a paper’s author might justifiably influence a reviewer’s opinion: it could boost their willingness to accept a counterintuitive result, for example, on the basis of the author’s track record of rigour. But Palan’s study found that reviewers’ opinions changed across all six of the measures they were asked about, including the subject’s worthiness, the novelty of the information and whether the conclusions were supported. These things should not all be affected by knowledge of authorship, [Palan, one of the paper’s coauthors, said].

Palan also said the solution to this problem is for journals to adopt double-anonymised peer-review: the authors don’t know who the reviewers are and the reviewers don’t know who the authors are. The most common form of peer-review is the single-blind variety, where the reviewers know who the authors are but the authors don’t know who the reviewers are. FWIW, I prefer double-anonymised peer-review plus the journal publishing the peer-reviewers’ anonymised reports along with the paper.

Then again, modifying peer-review would still be localised to journals that are willing to adopt newer mechanisms, and thus be a stop-gap solution that doesn’t address the use of faulty peer-review mechanisms both inside journals and in academic settings. For example, given the resource-minimal context in which many Indian research institutes and universities function, hiring and promotion committees often decide whom to hire or promote based on which journals their papers have been published in and/or the number of times those papers have been cited.

Instead, what we need is systemic change that responds to all the problems with peer-review, instead of one problem at a time in piecemeal fashion, by improving transparency, resources and incentives. Specifically: a) make peer-review more transparent, b) give scientists the resources – including time and freedom – to evaluate each other’s work on factors localised to the context of their research (including the quality of their work and the challenges in their way), and c) incentivise scientists to do so in order to accelerate change and ensure compliance.

The scientometric numbers, originally invented to facilitate the large-scale computational analysis of the scientific literature, have come to subsume the purpose of the scientific enterprise itself: that is, scientists often want to have good numbers instead of wanting to do good science. As a result, there is often an unusual delay – akin to magnetic hysteresis – between the resources for research being cut back and the resulting drop in productivity and quality showing up in researchers’ output. Perhaps more fittingly, it’s a Kuhnian response to paradigm change.

Ayurveda is not a science – but what does that mean?

This post has benefited immensely from inputs by Om Prasad.

Calling something ‘not a science’ has become a pejorative, an insult. You say Ayurveda is not a science and suddenly, its loudest supporters demand to know what the problem is, what your problem is, and tell you that you can go fuck yourself.

But Ayurveda is not a science.

First, science itself didn’t exist when Ayurveda was first born (whenever that was, but I’m assuming at least a millennium ago), and they were both outcomes of different perceived needs. So claiming ‘Ayurveda is a science’ makes little sense. You could counter that 5 didn’t stop being a number just because the number line came much later – but that wouldn’t make sense either, because the relationship between 5 and the number line is nothing like the relationship between science and Ayurveda.

It’s more like claiming Carl Linnaeus’s choice of topics to study was normal: it wouldn’t be at all normal today, but in his time and his particular circumstances it was considered acceptable. Similarly, Ayurveda was the product of a different time, different technologies and different social needs. Transplanting it without ‘updating’ it in any way is obviously going to make it seem inchoate, stunted. At the same time, ‘updating’ it may not be so productive either.

Claiming ‘Ayurveda is a science’ is to assert two things: that science is a qualifier of systems, and that Ayurveda, once qualified by science’s methods, becomes a science. But neither is true for the same reason: if you want one of them to be like the other, it becomes the other. They are two distinct ways of organising knowledge and making predictions about natural processes, which grew to assume their most mature forms along different historical trajectories. Part of science’s vaunted stature in society today is that it is an important qualifier of knowledge, but it isn’t a qualifier of knowledge systems. This is ultimately why Ayurveda and science are simply incompatible.

One of them has become less effective and less popular over time – which should be expected because human technologies and geopolitical and social boundaries have changed dramatically – while the other is relatively more adolescent, more multidisciplinary (with the right opportunities) and more resource-intensive – which should be expected because science, engineering, capitalism and industrialism rapidly co-evolved in the last 150 years.

Second, ‘Ayurveda is a science’ is a curious statement because those who utter it typically wish to elevate it to the status science enjoys and at the same time wish to supplant answers that modern science has provided to some questions with answers by Ayurveda. Of course, I’m speaking about the average bhakt here – more specifically a Bharatiya Janata Party supporter seemingly sick of non-Indian, especially Western, influences on Indian industry, politics, culture (loosely defined) and the Indian identity itself, and who may be actively seeking homegrown substitutes. However, their desire to validate Ayurveda according to the practices of modern science is really an admission that modern science is superior to Ayurveda despite all their objections to it.

The bhakt’s indignation when confronted with the line that ‘Ayurveda is not a science’ is possibly rooted in the impression that ‘science’ is a status signal – a label attached to a collection of precepts capable of together solving particular problems, irrespective of more fundamental philosophical requirements. However, the only science we know of is the modern one – to the bhakt, the ‘Western’ one, both in provenance and in its ongoing administration – and the label and the thing to which it applies, the name as well as the thing itself, are convergent.

There is no other way of doing science; there is no science with a different set of methods that claims to arrive at the same or ‘better’ scientific truths. (I’m curious at this point whether, assuming a Kuhnian view, science itself is unfalsifiable, since it attributes inconsistencies in its constituent claims to extra-scientific causes rather than to flaws in its methods themselves – so science as a system can reach wrong conclusions from time to time but still be valid at all times.)

It wouldn’t be remiss to say modern science, thus science itself, is to the nationalistic bhakt as Ayurveda is to the nationalistic far-right American: a foreign way of doing things that must be resisted, and substituted with the ‘native’ way, however that nativity is defined. It’s just that science, specifically allopathy, is more in favour today because, aside from its own efficacy (a necessary but not sufficient condition), all the things it needs to work – drug discovery processes, manufacturing, logistics and distribution, well-trained health workers, medical research, a profitable publishing industry, etc. – are modelled on institutions and political economies exported by the West and embedded around the world through colonial and imperial conquests.

Third: I suspect a part of why saying ‘Ayurveda is not a science’ is hurtful is that Indian society at large has come to privilege science over other disciplines, especially the social sciences. I know too many people who associate the work of many of India’s scientists with objectivity, a moral or political nowhereness*, intellectual prominence, pride and, perhaps most importantly, a willingness to play along with the state’s plans for economic growth. To be denied the ‘science’ tag is to be denied these attributes, desirable for their implicit value as much as for the opportunities they are seen to present in the state’s nationalist (and even authoritarian) project.

On the other hand, social scientists are regularly cast in opposition to these attributes – and more broadly by the BJP in opposition to normative – i.e. pro-Hindu, pro-rich – views of economic and cultural development, and dismissed as such. This ‘science v. fairness’ dichotomy is only a proxy battle in the contest between respecting and denying human rights – which in turn is also represented in the differences between allopathy and Ayurveda, especially when they are addressed as scientific as well as social systems.

Compared to allopathy and allopathy’s intended outcomes, Ayurveda is considerably flawed and very minimally desirable as an alternative. But on the flip side, uptake of alternative traditions is motivated not just by their desirability but also by the undesirable characteristics of allopathy itself. Modern allopathic methods are isolating (requiring care at a designated facility and time away from other tasks, irrespective of the extent to which that is epidemiologically warranted), care is disempowering and fraught with difficult contradictions (“We expect family members to make decisions about their loved ones after a ten-minute briefing that we’re agonising over even with years of medical experience”**), quality of care is cost-stratified, and treatments are condition-specific and so require repeated hospital visits in the course of a lifetime.

Many of those who seek alternatives in the first place do so for these reasons – and these reasons are not problems with the underlying science itself. They’re problems with how medical care is delivered, how medical knowledge is shared, how medical research is funded, how medical workers are trained – all subjects that social scientists deal with, not scientists. As such, any alternative to allopathy will become automatically preferred if it can solve these economic, political, social, welfare, etc. problems while delivering the same standard of care.

Such a system won’t be an entirely scientific enterprise, considering it would combine the suggestions of the sciences as well as the social sciences into a unified whole such that it treated individual ailments without incurring societal ones. Now, say you’ve developed such an alternative system, called PXQY. The care model at its heart isn’t allopathy but something else – and its efficacy is highest when it is practised and administered as part of the PXQY setup, instead of through standalone procedures. Would you still call this paradigm of medical care a science?

* Akin to the ‘view from nowhere’.
** House, S. 2, E 18.

Featured image credit: hue 12 photography/Unsplash.

The calculus of creative discipline

Every moment of a science fiction story must represent the triumph of writing over world-building. World-building is dull. World-building literalises the urge to invent. World-building gives an unnecessary permission for acts of writing (indeed, for acts of reading). World-building numbs the reader’s ability to fulfil their part of the bargain, because it believes that it has to do everything around here if anything is going to get done. Above all, world-building is not technically necessary. It is the great clomping foot of nerdism.

Once I’m awake and have had my mug of tea, and once I’m done checking Twitter, I can quote these words of M. John Harrison from memory: not because they’re true – I don’t believe they are – but because they rankle. I haven’t read any of Harrison’s writing; I can’t remember the names of any of his books. Sometimes I don’t remember even his name, only that there was this man who uttered these words. Perhaps it is to Harrison’s credit that he’s clearly touched a nerve, but I’m reluctant to concede any more than this.

His (partial) quote reflects a narrow view of a wider world, and it bothers me because I remain unable to extend the conviction that he’s seeing only a part of the picture to the conclusion that he lacks imagination; as he is a writer of not inconsiderable repute, at least according to Wikipedia, I doubt he has any trouble imagining things.

I’ve written about the virtues of world-building before (notably here), and I intend to make another attempt in this post; I should mention that what both attempts, both defences, have in common is that they’re not prescriptive. They’re not recommendations to others; they’re non-generalisable. They’re my personal reasons to champion the act, even art, of world-building; my specific loci of resistance to Harrison’s contention. But at the same time, I don’t view them – and neither should you – as inviolable or as immune to criticism, although I suspect this display of a willingness to reason may not go far in terms of eliminating subjective positions from this exercise, so make of it what you will.

There’s an idea in mathematical analysis called smoothness. Let’s say you’ve got a curve drawn on a graph, between the x- and y-axes, shaped like the letter ‘S’. Let’s say you’ve got another curve drawn on a second graph, shaped like the letter ‘Z’. According to one definition, the S-curve is smoother than the Z-curve because it has fewer sharp edges. A diligent high-schooler might take recourse to differential calculus to explain the idea. Say the Z-curve on the graph is the result of a function Z(x) = y. If you differentiate Z(x) at the point ‘x’ where the Z-curve makes its sharp turn, the derivative Z'(x) doesn’t exist there: the slopes on either side of the turn disagree. (At a smooth peak or trough, the derivative would instead be zero.) Points where the derivative is zero or undefined are called critical points. The S-curve doesn’t have any critical points (except at the ends, but let’s ignore them); L- and T-curves have one critical point each; P- and D-curves have two critical points each; and an E-curve has three critical points.
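
For the curious, here’s a minimal sketch of the distinction in Python (assuming SymPy is installed; the two functions are my own stand-ins for the letter-shaped curves, not anything from the original post):

```python
# Critical points are where a curve's derivative is zero or undefined.
import sympy as sp

x = sp.symbols('x', real=True)

# A smooth, S-like curve: the logistic function. Its derivative is never
# zero, so it has no interior critical points.
s_curve = 1 / (1 + sp.exp(-x))
print(sp.solve(sp.Eq(sp.diff(s_curve, x), 0), x))  # [] -- no critical points

# A curve with a sharp, Z-like corner: |x| turns abruptly at x = 0, where
# the one-sided slopes disagree, so the derivative is undefined there --
# making x = 0 a critical point.
z_like = sp.Abs(x)
slope = sp.diff(z_like, x)  # sign(x)
print(sp.limit(slope, x, 0, '-'), sp.limit(slope, x, 0, '+'))  # -1, 1
```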

With the help of a loose analogy, you could say a well-written story is smooth à la an S-curve (excluding the terminal points): it has an unambiguous beginning and an ending, and it flows smoothly in between the two. While I admire Steven Erikson’s Malazan Book of the Fallen series for many reasons, its first instalment is like a T-curve, where three broad plot-lines abruptly end at a point in the climax that the reader has been given no reason to expect. The curves of the first three books of J.K. Rowling’s Harry Potter series resemble the tangent function (from trigonometry: tan(x) = sin(x)/cos(x)): they’re individually somewhat self-consistent but the reader is resigned to the hope that their beginnings and endings must be connected at infinity.

You could even say Donald Trump’s presidency hasn’t been smooth at all because there have been so many critical points.

Where world-building “literalises the urge to invent” for Harrison, it spatialises the narrative for me, and automatically spotlights the importance of the narrative smoothness it harbours. World-building can be just as susceptible to non-sequiturs and deus ex machina as writing itself, all the way up to the hubris Harrison noticed: of assuming it has to do everything around here, leaving the reader nothing to do, not even enjoy themselves. Where he sees the “clomping foot of nerdism”, I see critical points in a curve some clumsy world-builder invented as they went along. World-building can be “dull” – or it can choose to reveal the hand-prints of a cave-dwelling people preserved for thousands of years, and the now-dry channels of once-heaving rivers that nurtured an ancient civilisation.

My principal objection is directed at the false dichotomy of writing and world-building, which Harrison seems to want to impose in place of the more fundamental and more consequential need for creative discipline. Let me borrow here from philosophy of science 101, specifically the particular importance of contending with contradictory experimental results. You’ve probably heard of the replication crisis: when researchers tried to reproduce the results of older psychology studies, their efforts came a cropper. Many – if not most – studies didn’t replicate, and scientists are currently grappling with the consequences of overturning decades’ worth of research and research practices.

This is on the face of it an important reality check, but to a philosopher with a deeper view of the history of science, the replication crisis also recalls the different ways in which the practitioners of science have responded to evidence their theories aren’t prepared to accommodate. The stories of Niels Bohr v. classical mechanics, Dan Shechtman v. Linus Pauling, and the EPR paradox come first to mind. Heck, the philosophers Karl Popper, Thomas Kuhn, Imre Lakatos and Paul Feyerabend are known for their criticisms of each other’s ideas on different ways to rationalise the transition from one moment containing multiple answers to the moment where one emerges as the favourite.

In much the same way, the disciplined writer should challenge themself instead of presuming the liberty to totter over the landscape of possibilities, zig-zagging between one critical point and the next until they topple over the edge. And if they can’t, they should – like the practitioners of good science – ask for help from others, pressing the conflict between competing results into the service of scouring the rust away to expose the metal.

For example, since June this year I’ve been participating, on my friend Thomas Manuel’s initiative, in his effort to compose an underwater ‘monsters’ manual’. It’s effectively a collaborative world-building exercise where we take turns to populate different parts of a large planet – with sizeable oceans, seas, lakes and numerous rivers – with creatures, habitats and ecosystems. We broadly follow the same laws of physics and harbour substantially overlapping views of magic, but we enjoy the things we invent because they’re forced through the grinding wheels of each other’s doubts and curiosities, and the implicit expectation of one creator to make adequate room for the creations of the other.

I see it as the intersection of two functions: at first, their curves will criss-cross at a point, and the writers must then fashion a blending curve so a particle moving along one can switch to the other without any abruptness, without any of the tired melodrama often used to mask criticality. So the Kularu people are reminded by their oral traditions to fight for their rivers, so the archaeologists see through the invading Gezmin’s benevolence and into the heart of their imperialist ambitions.
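
To make the blending-curve image concrete, here’s a small sketch in Python (using NumPy; the two functions standing in for the ‘storylines’, and the transition window, are my own inventions, purely illustrative):

```python
import numpy as np

def smoothstep(x, x0, x1):
    """Ramp from 0 to 1 over [x0, x1], with zero slope at both ends."""
    t = np.clip((x - x0) / (x1 - x0), 0.0, 1.0)
    return t * t * (3 - 2 * t)

f = lambda x: np.sin(x)      # storyline one
g = lambda x: 0.5 * x - 1.0  # storyline two, crossing f near x ~ 2.7

x = np.linspace(0, 5, 501)
w = smoothstep(x, 2.0, 3.5)  # hand the narrative over across [2.0, 3.5]
blend = (1 - w) * f(x) + w * g(x)

# The blended curve follows f before the window and g after it, and the
# handover is continuous in both value and slope -- no abrupt switch,
# no melodrama masking a critical point.
```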

Can science and philosophy mix constructively?

Quantum mechanics can sometimes be very hard to understand, so much so that even thinking about it becomes difficult. This could be because its foundations lay in an action-centric depiction of reality that slowly rejected its origins and assumed a thought-centric garb.

In his 1925 paper on the topic, physicist Werner Heisenberg used only observable quantities to denote physical phenomena. He also pulled up Niels Bohr in that great paper, saying, “It is well known that the formal rules which are used [in Bohr’s 1913 quantum theory] for calculating observable quantities such as the energy of the hydrogen atom may be seriously criticized on the grounds that they contain, as basic elements, relationships between quantities that are apparently unobservable in principle, e.g., position and speed of revolution of the electron.”

A true theory

Because of the uncertainty principle, and other principles like it, quantum mechanics started to develop into a set of theories that could be tested against observations, and that, to physicists, left very little to thought experiments. Put another way, there was nothing a quantum physicist could think up that couldn’t be proved or disproved experimentally. This way of looking at the world is called, in philosophy, logical positivism.

This made quantum mechanics a true theory of reality, as opposed to a hypothetical, unverifiable one.

However, even before Heisenberg’s paper was published, positivism was starting to be rejected, especially by chemists. An important example was the advent of statistical mechanics and atomism in the early 19th century. Both inferred, without direct physical observation, that if two volumes of hydrogen and one volume of oxygen combined to form water vapor, then a water molecule would have to comprise two atoms of hydrogen and one atom of oxygen.
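
In modern notation, that inference runs roughly as follows (a reconstruction for clarity, not the 19th-century chemists’ own formalism):

```latex
% Gay-Lussac's combining volumes, read through Avogadro's hypothesis
% (equal volumes of gas contain equal numbers of molecules):
%   2 volumes hydrogen + 1 volume oxygen -> 2 volumes water vapor
\[
2\,\mathrm{H_2} + \mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{H_2O}
\]
% For the 2:1:2 volume ratio to hold molecule for molecule, each water
% molecule must contain two hydrogen atoms and one oxygen atom -- a
% conclusion reached without ever observing a single molecule.
```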

A logical positivist would have insisted on actually observing individual molecules, but that was impossible at the time. This insistence on direct physical proof thus played an adverse role in the progress of science, delaying or denying success its due.

As time passed, the failures of positivism started to take hold in quantum mechanics. In a 1926 conversation with Albert Einstein, Heisenberg said, “… we cannot, in fact, observe such a path [of an electron in an atom]; what we actually record are the frequencies of the light radiated by the atom, intensities and transition probabilities, but no actual path.” And since he held that a theory ought only to be a true theory, he concluded that these parameters must feature in the theory, and in what it projected, as themselves, instead of the unobservable electron path.

This wasn’t the case.

Gaps in our knowledge

Heisenberg’s probing of the granularity of nature led to his distancing from logical positivism. And Steven Weinberg, physicist and Nobel laureate, uses just this distancing to argue harshly, in a 1994 essay titled ‘Against Philosophy’, that physics has never benefited from the advice of philosophers – and when it has, it has only been to negate the advice of another philosopher – almost suggesting that ‘science is all there is’ by dismissing the aesthetic in favor of the rational.

In doing so, Weinberg doesn’t acknowledge the fact that science and philosophy go hand in hand; what he has done is simply to outline the failure of logical positivism in the advancement of science.

At the simplest, philosophy in various forms guides human thought toward ideals like objective truth and is able to establish their superiority over subjective truths. Philosophy also provides the framework within which we can conceptualize unobservables and contextualize them in observable space-time.

In fact, Weinberg’s conclusion brings to mind an article in Nature News & Comment by Daniel Sarewitz. In the piece, Sarewitz, a science-policy scholar, argued that for someone who didn’t really know the physics supporting the Higgs boson, its existence would have to be a matter of faith rather than one of knowledge. Similarly, for someone who couldn’t translate electronic radiation to ‘mean’ the electron’s path, the latter would have to be a matter of faith or hope, not a bit of knowledge.

Efficient descriptions

A more well-defined example is the theory of quarks and gluons, particles that have never been observed in isolation but are believed by the scientific community to exist. The equipment to spot them directly is yet to be built; it would cost hundreds of billions of dollars and be orders of magnitude more sophisticated than the LHC.

In the meantime, unlike what Weinberg – and like what Sarewitz – would have you believe, we do rely on philosophical principles, like the principle of sufficient reason (Spinoza 1663; Leibniz 1686), to fill up space-time at levels we can’t yet probe, and to guide us toward the directions we ought to probe by investing money in them.

This is actually no different from a layman going from understanding electric fields to supposedly understanding the Higgs field. At the end of the day, efficient descriptions make the difference.

Exchange of knowledge

This sort of dependence also implies that philosophy draws a lot from science, and uses it to define its own prophecies and shortcomings. We must remember that, while the rise of logical positivism may have shielded physicists from atomism, scientific verification through its hallowed method also pushed positivism toward its eventual rejection. There was human agency in both these timelines, each motivated by either the support for or the rejection of scientific and philosophical ideas.

The moral is that scientists must not reject philosophy for its passage through crests and troughs of credence, because science suffers the same passage. What more proof of this do we need than Popper’s and Kuhn’s arguments – irrespective of whether either of them is true?

Yes, we can’t figure things out with pure thought, and yes, the laws of physics underlying the experiences of our everyday lives are completely known. However, in the search for objective truth – whatever that is – we can’t neglect pure thought until, as Weinberg’s Heisenberg example itself seems to suggest, we know everything there is to know; until science and philosophy, or rather verification-by-observation and conceptualization-by-ideation, have completely and absolutely converged toward the same reality.

Until, in short, we can describe nature continuously instead of discretely.

Liberation of philosophical reasoning

By separating scientific advance from contributions from philosophical knowledge, we are advocating for the ‘professionalization’ of scientific investigation – insisting that it must decidedly lack the attitude-born depth of intuition, which is aesthetic and not rational.

It is against such advocacy that the Austrian-born philosopher Paul Feyerabend protested vehemently: “The withdrawal of philosophy into a ‘professional’ shell of its own has had disastrous consequences.” He means, in other words, that scientists have become too specialized and are rejecting the useful bits of philosophy.

In his seminal work Against Method (1975), Feyerabend suggested that scientists occasionally subject themselves to methodological anarchism so that they may come up with new ideas, unrestricted by the constraints imposed by the scientific method, freed in fact by the liberation of philosophical reasoning. These new ideas, he suggests, can then be reformulated again and again according to where and how observations fit into them.

In the meantime, the ideas are not born from observations but pure thought that is aided by scientific knowledge from the past. As Wikipedia puts it neatly: “Feyerabend was critical of any guideline that aimed to judge the quality of scientific theories by comparing them to known facts.” These ‘known facts’ are akin to Weinberg’s observables.

So, until the day we can fully resolve nature’s granularity – assuming no version of reality to be the objective truth before then – Pierre-Simon Laplace’s two-century-old words should show the way: “We may regard the present state of the universe as the effect of its past and the cause of its future” (A Philosophical Essay on Probabilities, 1814).

This article, as written by me, originally appeared in The Hindu’s science blog, The Copernican, on June 6, 2013.

A case of Kuhn, quasicrystals & communication – Part IV

Dan Shechtman won the Nobel Prize for chemistry in 2011. This led to an explosion of interest in the subject of QCs and in Shechtman’s travails in getting his discovery validated.

Numerous publications, from Reuters to The Hindu, published articles and reports. In fact, The Guardian ran an online article giving a blow-by-blow account of how the author, Ian Sample, attempted to contact Shechtman while the events succeeding the announcement of the prize unfolded.

All this attention served as a consummation of the events that started to avalanche in 1982. Today, QCs are synonymous with the interesting possibilities of materials science as much as with perseverance, dedication, humility, and an open mind.

Since the acceptance of the fact of QCs, the Israeli chemist has gone on to win the Physics Award of the Friedenberg Fund (1986), the Rothschild Prize in engineering (1990), the Weizmann Science Award (1993), the 1998 Israel Prize for Physics, the prestigious Wolf Prize in Physics (1999), and the EMET Prize in chemistry (2002).

Pauling’s influence on the scientific community faded as Shechtman’s recognition grew, and his death in 1994 marked the end of the last notable opposition to an idea that had long since gained mainstream acceptance. The swing in Shechtman’s favour, unsurprisingly, began with the observation of QCs and the icosahedral phase in other laboratories around the world.

Interestingly, Indian scientists were among the forerunners in confirming the existence of QCs. As early as 1985, when the paper published by Shechtman and others in Physical Review Letters was just a year old, S. Ranganathan and Kamanio Chattopadhyay (amongst others), two of India’s preeminent crystallographers, published a paper in Current Science announcing the discovery of materials that exhibited decagonal symmetry. Such materials are two-dimensional QCs: quasiperiodic in two dimensions and periodic in the third.

The story of QCs is most important as a post-Second-World-War instance of a paradigm shift occurring in a field of science easily a few centuries old.

No other discovery has rattled scientists as much in these years, and since the Shechtman-Pauling episode, academic peers have been more receptive to dissonant findings. At the same time, credit must be given to the rapid advancements in technology and in our knowledge of statistical techniques: without them, the startling quickness with which each hypothesis can be tested today wouldn’t have been possible.

The analysis of the media representation of the discovery of quasicrystals with respect to Thomas Kuhn’s epistemological contentions in his The Structure of Scientific Revolutions was an attempt to understand his standpoints by exploring more of what went on in the physical chemistry circles of the 1980s.

While there remains the unresolved discrepancy – whether knowledge is non-accumulative simply because the information founding it has not been available before – Kuhn’s propositions hold in terms of the identification of the anomaly, the mounting of the crisis period, the communication breakdown within scientific circles, the shift from normal science to cutting-edge science, and the eventual acceptance of a new paradigm and the discarding of the old one.

It appears, then, that science journalists have indeed taken note of these developments in the terms of The Structure. The book’s influence on science journalism can thus be held to be persistent, and is definitely evident.

A case of Kuhn, quasicrystals & communication – Part I

Dan Shechtman’s discovery of quasicrystals, henceforth abbreviated as QCs, in 1982 was a landmark achievement that provoked a paradigm shift in the field of physical chemistry.

However, at the time, the discovery faced stiff resistance from the broader scientific community, including from an eminent chemist of the day. This made it harder for Shechtman to establish his findings as credible, but he persisted and succeeded in doing so.

We know his story today through its fairly limited coverage in the media, and especially through the comments of his peers, students and friends; its revolutionary character was well reflected in many reports and essays.

Because such publications indicated the onset of a new kind of knowledge, what merits consideration is whether the media internalized Thomas Kuhn’s philosophy of science in the way it approached the incident.

Broadly, the question is: Did the media reports reflect Kuhn’s paradigm-shifting hypothesis? Specifically, in the 1980s,

  1. Did science journalists find QCs anomalous?
  2. Did science journalists identify the crisis period when it happened or was it reported as an isolated incident?
  3. Does the media’s portrayal of the crisis period reflect any incommensurability (be it in terms of knowledge or communication)?

Finally: How did science journalism behave when reporting stories from the cutting edge?

The Structure of Scientific Revolutions

Thomas S. Kuhn’s (July 18, 1922 – June 17, 1996) book The Structure of Scientific Revolutions, published in 1962, was significantly influential in academic circles as well as in the scientific community. It introduced the notion of a paradigm shift, which has since become a central concept in describing the evolution of scientific knowledge.

Thomas Kuhn, Harvard University, 1949

Kuhn defined a paradigm based on two properties:

  1. The paradigm must be sufficiently unprecedented to attract researchers to study it, and
  2. It must be sufficiently open-ended to allow for growth and debate

By this definition, most of the seminal findings of the greatest thinkers and scientists of the past are paradigmatic. Nicolaus Copernicus’s De Revolutionibus Orbium Coelestium (1543) and Isaac Newton’s Philosophiae Naturalis Principia Mathematica (1687) are both prime examples that illustrate what paradigms can be and how they shift perceptions and interests in a subject.

Such paradigms, Kuhn said (p. 25), work with three attributes that are inherent to their conception. The first of the three attributes is the determination of significant fact, whereby facts accrued through observation and experimentation are measured and recorded more accurately.

Even though such facts are the “pegs” of any literature concerning the paradigm, activities like their measurement and recording are independent of the dictates of the paradigm. Instead they are, in a colloquial sense, conducted anyway.

Why this is so becomes evident in the second of the three foci: matches of fact with theory. Kuhn claims (p. 26) that this class of activity is rarer in reality, where predictions of the reigning theory are compared to the (significant) facts measured in nature.

Consequently, good agreement between the two would establish the paradigm’s robustness, whereas disagreement would indicate the need for further refinement. In fact, on the same page, Kuhn illustrates the rarity of such agreement by stating

… no more than three such areas are even yet accessible to Einstein’s general theory of relativity.

The third and last focus is on the articulation of theory. In this section, Kuhn posits that the academician conducts experiments to

  1. Determine physical constants associated with the paradigm
  2. Determine quantitative laws (so as to provide a physical quantification of the paradigm)
  3. Determine the applications of the paradigm in various fields

In The Structure, one paradigm replaces another through a process of contention. At first, a reigning paradigm exists that, to an acceptable degree of reasonableness, explains empirical observations. However, in time, as technology improves and researchers find results that don’t quite agree with the reigning paradigm, the results are listed as anomalies.

This refusal to immediately induct the findings and modify the paradigm is presented by Kuhn as proof of how our expectations cloud our perception of the world.

Instead, researchers hold the position of the paradigm as fixed and immovable, and attempt to check for errors in the experimental/observed data. An example of this is the superluminal neutrinos that were “discovered”, or rather stumbled upon, at the OPERA experiment in Italy, which studies a beam of neutrinos sent from CERN.

When the experiment logs from that fateful day, September 23, 2011, were examined, nothing suspicious was found with the experimental setup. However, despite this assurance of the instruments’ stability, the theory (of relativity) that prohibits this result was held superior.

On October 18, then, experimental confirmation was received that the neutrinos could not have traveled faster than light: the theoretically predicted energy signature of a superluminal neutrino did not match the observed signatures.

As Kuhn says (p. 77):

Though they [scientists] may begin to lose faith and then to consider alternatives, they do not renounce the paradigm that has led them into crisis. They do not, that is, treat anomalies as counterinstances, though in the vocabulary of philosophy of science that is what they are.

However, this state of disagreement is not perpetual because, as Kuhn concedes above, an accumulation of anomalies forces a crisis in the scientific community. During a period of crisis, the paradigm still reigns, yes, but it is also now and then challenged by alternatively conceived paradigms that

  1. Are sufficiently unprecedented
  2. Are open-ended to provide opportunities for growth
  3. Are able to explain the anomalies that threaten the reign of the extant paradigm

The new paradigm imposes a new framework of ideals to contain the same knowledge that dethroned the old paradigm, and because of a new framework, new relations between different bits of information become possible. Therefore, paradigm shifts are periods encompassing rejection and re-adoption as well as restructuring and discovery.

Kuhn here ties together three postulates: incommensurability, scientific communication, and knowledge being non-accumulative. When a new paradigm takes over, there is often a reshuffling of subjects – some are relegated to a different department, some departments are broadened to include more subjects than they previously held, while yet others are consigned to illogicality.

During this phase, some areas of knowledge may no longer be measured with the same standards that have gone before them.

Because of this incommensurability, scientific communication within the community breaks down, but only for the period of the crisis. For one, because of the new framework, some scientific terms change their meaning; and because multiple revolutions have happened in the past, Kuhn takes the liberty here to conclude that scientific knowledge is non-accumulative. This facet of evolution was first considered by Herbert Butterfield in his The Origins of Modern Science, 1300-1800. Kuhn, in his work, then drew a comparison to the visual gestalt (p. 85).

The Gestalt principles of visual perception seek to explain why the human mind sees two faces before it can identify the vase in the picture.

Just as in politics, where during a time of instability people turn to conservative ideals to recreate a state of calm, scientists go back to a debate over the fundamentals of science to choose a successor paradigm. This is a gradual process, Kuhn says, that may or may not yield a new paradigm that is completely successful in explaining all the anomalies.

The discovery of QCs

On April 8, 1982, Dan Shechtman, a crystallographer working at the U.S. National Bureau of Standards (NBS), made a discovery that would do nothing less than shatter the centuries-old assumptions of physical chemistry. Studying the molecular structure of an alloy of aluminium and manganese using electron diffraction, Shechtman noted an impossible arrangement of the molecules.

In electron diffraction, electrons are used to study extremely small objects, such as atoms and molecules, because the wavelength of electrons – which determines the resolution of the image produced – can be tuned by accelerating them through an electric potential, a handle that their charge provides. Photons carry no such charge and are therefore unsuitable for this kind of high-precision observation at the atomic level.
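
As a rough back-of-the-envelope sketch in Python (using the non-relativistic de Broglie relation; real microscopes at 100 kV and above need a relativistic correction, which I’m ignoring for simplicity):

```python
# The de Broglie wavelength lambda = h / sqrt(2*m*e*V) of an electron
# shrinks as the accelerating voltage V grows -- which is what lets
# electron microscopes resolve atomic-scale detail.
import math

h = 6.626e-34  # Planck constant, J*s
m = 9.109e-31  # electron mass, kg
e = 1.602e-19  # elementary charge, C

def electron_wavelength(volts):
    """Non-relativistic de Broglie wavelength (metres) after acceleration."""
    return h / math.sqrt(2 * m * e * volts)

for V in (100, 10_000, 100_000):
    print(f"{V:>7} V -> {electron_wavelength(V) * 1e12:6.2f} pm")
# ~123 pm at 100 V, ~12 pm at 10 kV, ~4 pm at 100 kV: comfortably below
# typical interatomic spacings of a few hundred picometres.
```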

When accelerated electrons strike the object under study, their wave nature takes over and they form an interference pattern on the observer lens when they are scattered. The device then works backward to reproduce the surface that may have generated the recorded pattern, in the process yielding an image of the surface. On that day in April, this is what Shechtman saw (note: the brightness of each node is only an indication of how far it is from the observer lens).

The electron-diffraction pattern exposing a quasicrystal’s lattice structure (Image from Ars Technica)

The diffraction pattern shows the molecules arranged in repeating pentagonal rings. That meant the crystal exhibited 5-fold symmetry, i.e. an arrangement that looks the same after a one-fifth (72°) rotation. At the time, molecular arrangements were restricted by the then-36-year-old crystallographic restriction theorem, which held that only arrangements with 2-, 3-, 4- and 6-fold symmetries were allowed. In fact, Shechtman had passed his university exams proving that 5-fold symmetries couldn’t exist!
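
A minimal sketch of why the restriction holds, in Python: a rotation that maps a periodic lattice to itself must have an integer matrix in the lattice basis, so its trace, 2·cos(2π/n), must be an integer. Checking which rotation orders survive (the loop bound of 12 is my arbitrary choice):

```python
import math

# The trace of a 2D rotation by 2*pi/n is 2*cos(2*pi/n); for a rotation
# that preserves a periodic lattice, this trace must be an integer.
for n in range(1, 13):
    trace = 2 * math.cos(2 * math.pi / n)
    allowed = abs(trace - round(trace)) < 1e-9
    print(f"{n:>2}-fold: trace = {trace:+.3f} -> {'allowed' if allowed else 'forbidden'}")

# Only n = 1, 2, 3, 4 and 6 yield integer traces. 5-fold symmetry
# (trace ~ +0.618) is forbidden for any periodic crystal -- which is
# exactly why Shechtman's pattern looked impossible.
```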

At the time of discovery, Shechtman couldn’t believe his eyes because it was an anomaly. In keeping with tradition, in fact, he proceeded to look for experimental errors. Only after he could find none did he begin to consider reporting the discovery.

A photograph showing the pages from Shechtman’s logbook from the day he made the seemingly anomalous observation. Observe the words “10 Fold???”

In the second half of the 20th century, the field of crystallography was beginning to see some remarkable discoveries, but none of them as unprecedented as that of QCs would turn out to be. This was because of the development of spectroscopy, a subject that studied the interaction of matter and radiation.

Using devices such as X-ray spectrometers and transmission electron microscopes (TEMs), scientists could literally look at a molecule instead of having to determine its form via chemical reactions. In such a period, there was tremendous growth in physical chemistry because of the imaginative mind of one man who would later be called one of the greatest chemists of all time – and who would make life difficult for Shechtman: Linus Carl Pauling.

Pauling epitomized the aspect of Kuhn’s philosophy that refuses to let an old paradigm die, and therefore posed a significant hindrance to Shechtman’s radical new idea. While Shechtman attempted to present his discovery of QCs as an anomaly that he thought should prompt a crisis, Pauling infamously declared, “There is no such thing as quasi-crystals, only quasi-scientists.”

Media reportage

The clash between Pauling and Shechtman – or rather between the “old school” and the “new kid” – created some attrition within universities in the United States and Israel with which Shechtman was affiliated. While a select group of individuals who were convinced of the veracity of the radical claims set about studying it further, others – perhaps under the weight of Pauling’s credibility – dismissed the work as erroneous and desperate. The most important entity classifiable under the latter was the Journal of Applied Physics, which refused to publish Shechtman’s finding.

In this turmoil, there was a collapse of communication between scientists of the two factions. Unfortunately, the media’s coverage of this incident was limited: a few articles appeared in the mid-1980s in newspapers, magazines and journals; in 1988, when Pauling published his own paper on QCs; in 1999, when Shechtman won the prestigious Wolf Prize in physics; and in 2011, when he won the Nobel Prize in chemistry.

Despite the low coverage, the media managed to make the existence of such things as QCs known to a wider as well as a less specialised audience. The rift between Pauling and Shechtman was notable because, apart from reflecting Kuhn’s views, it also brought to light the mental block scientists exhibit when confronted with the falsification of their work, and how that can prevent science from progressing rapidly. Such speculations are, of course, all based on the media’s representation of the events.