A latent monadology: An extended revisitation of the mind-body problem

Image by Genis Carreras

In an earlier post, I’d spoken about a certain class of mind-body interfacing problems (as I’d identified them): evolution being a continuous process, can psychological changes effected in a group of people identified solely by their cultural practices “spill over” as modifications of evolutionary goals? There were some interesting comments on the post, too. You may read them here.

However, that doubt was only the latest in a series of others like it. My interest in the subject was born with a paper I’d read quite a while ago, which discussed two methods by which humankind could possibly recreate the human brain as a machine. The first method, rather complexly laid down, was nothing but the ubiquitous recourse called reverse-engineering: study the brain, understand what it’s made of, reverse all known cause-effect relationships associated with the organ, and then attempt to recreate the cause from the effect in a laboratory, with suitable materials replacing the original constituents.

The second method was much more interesting (a bias that could explain the choice of words in the previous paragraph). Essentially, it described the construction of a machine that could perform all the known functions of the brain. This machine would then be subjected to a learning process, through which it would acquire new skills while retaining and using the skills it had already been endowed with. After some time, if the learnt skills, chosen to reflect real human skills, are deployed by the machine to recreate human endeavor, then the machine is the brain.

The reason I like this method better than reverse-engineering is that it takes into account the ability to learn as a function of the brain, resulting in a more dynamic product. The notion of the brain as a static body is definitively meaningless as, axiomatically, conceiving of it as just a really powerful processor stops short of such Leibnizian monads as awareness and imagination. And while these two “entities” evade comprehension, subtracting the ability to somehow recreate them doesn’t yield a convincing brain either. This is where I believe the mind-body problem finds its solution. For the sake of argument, let’s discuss the issue differentially.

Spherical waves coming from a point source. The solution of the initial-value problem for the wave equation in three space dimensions can be obtained from the solution for a spherical wave through the use of partial differential equations. (Image by Oleg Alexandrov on Wikimedia, including MATLAB source code.)

Hold as constant: Awareness
Hold as variable: Imagination

The brain is aware, has been aware, must be aware in the future. It is aware of the body, of the universe, of itself. In order to be able to imagine, therefore, it must concurrently trigger, receive, and manipulate different memorial stimuli to construct different situations, analyze them, and arrive at a conclusion about different operational possibilities in each situation. Note: this process is predicated on the inability of the brain to birth entirely original ideas, an extension of the fact that a sleeping person cannot be dreaming of something he has not interacted with in some way.

Hold as constant: Imagination
Hold as variable: Awareness

At this point, I need only prove that the brain can arrive at an awareness of itself, the body, and the universe, through a series of imaginative constructs, in order to hold my axiom as such. So, I’m going to assume that awareness came before imagination did. This leaves open the possibility that with some awareness, the human mind is able to come up with new ways to parse future stimuli, thereby facilitating understanding and increasing the sort of awareness of everything that better suits one’s needs and environment.

Now, let’s talk about the process of learning and how it sits with awareness, imagination, and consciousness, too. This is where I’d like to introduce the metaphor called Leibniz’s gap. In 1714, Gottfried Leibniz’s ‘Principes de la Nature et de la Grace fondés en raison’ was published in the Netherlands. In the work, which would form the basis of modern analytic philosophy, the philosopher-mathematician argues that no physical process can be recorded or tracked in a way that would point to corresponding changes in psychological processes.

… supposing that there were a mechanism so constructed as to think, feel and have perception, we might enter it as into a mill. And this granted, we should only find on visiting it, pieces which push one against another, but never anything by which to explain a perception. This must be sought, therefore, in the simple substance, and not in the composite or in the machine.

If any technique were found that could span the distance between these two concepts – the physical and the psychological – then, Leibniz says, it would effectively bridge Leibniz’s gap: the symbolic distance between the mind and the body.

Now it must be remembered that the German was one of the three greatest, and most fundamentalist, rationalists of the 17th century: the other two were Rene Descartes and Baruch Spinoza (L-D-S). More specifically: All three believed that reality was composed fully of phenomena that could be explained by applying principles of logic to a priori, or fundamental, knowledge, subsequently discarding empirical evidence. If you think about it, this approach is flawless: if the basis of a hypothesis is logical, and if all the processes of development and experimentation on it are founded in logic, then the conclusion must also be logical.

(L to R) Gottfried Leibniz, Baruch Spinoza, and Rene Descartes

However, where this model does fall short is in describing an anomalous phenomenon that is demonstrably logical but otherwise inexplicable in terms of the dominant logical framework. This is akin to Thomas Kuhn’s philosophy of science: a revolution is necessitated when enough anomalies accumulate that defy the reign of an existing paradigm, but until then, the paradigm will deny the inclusion of any new relationships between existing bits of data that don’t conform to its principles.

When studying the brain (and when trying to recreate it in a lab), Leibniz’s gap, as understood by L-D-S, cannot be applied, for various reasons. First: the rationalist approach doesn’t work because, while we’re seeking logical conclusions that evolve from logical starts, we’re in a good position to overlook the phenomenon called emergence that is prevalent in all simple systems of high multiplicity. In fact, ironically, the L-D-S approach might be better suited for grounding empirical observations in logical formulae, because it is only then that we run no risk of missing emergent paradigms.

“Some dynamical systems are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions will lead to orbits that converge to this chaotic region.” – Wikipedia

Second: It is important not to disregard that humans do not know much about the brain. As elucidated in the less favored of the two methods I’ve described above, were we to reverse-engineer the brain, we could still only make the new brain do what we already know the old one does. The L-D-S approach takes complete knowledge of the brain for granted, and works post hoc ergo propter hoc (“after this, therefore because of this”) to explain it.

[youtube http://www.youtube.com/watch?v=MygelNl8fy4?rel=0]

Therefore, in order to understand the brain outside the ambit of rationalism (but still definitely within the ambit of empiricism), introspection need not be the only way. We don’t always have to scrutinize our thoughts to understand how we assimilated them in the first place, and then move on from there, when we can think of the brain itself as the organ bridging Leibniz’s gap. At this juncture, I’d like to reintroduce the importance of learning as a function of the brain.

To think of the brain as residing at a nexus, the most helpful logical frameworks are the computational theory of the mind (CTM) and the Copenhagen interpretation of quantum mechanics (QM).

xkcd #45 (depicting the Copenhagen interpretation)

In the CTM-framework, the brain is a processor, and the mind is the program that it’s running. Accordingly, the organ works on a set of logical inputs, each of which is necessarily deterministic and non-semantic; the output, by extension, is the consequence of an algorithm, and each step of the algorithm is a mental state. These mental states are thought to be more occurrent than dispositional, i.e., more tractable and measurable than the psychological emergence that they effect. This is the break from Leibniz’s gap that I was looking for.

That the inputs are non-semantic, i.e. interpreted with no regard for what they mean, doesn’t mean the brain in the CTM-framework is incapable of processing meaning or conceiving of it in any way. The solution is a technical notion called formalization, which the Stanford Encyclopedia of Philosophy describes thus:

… formalization shows us how semantic properties of symbols can (sometimes) be encoded in syntactically-based derivation rules, allowing for the possibility of inferences that respect semantic value to be carried out in a fashion that is sensitive only to the syntax, and bypassing the need for the reasoner to employ semantic intuitions. In short, formalization shows us how to tie semantics to syntax.
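To make this concrete, here is a minimal toy sketch (my own illustration, not drawn from the post or the SEP entry) of what it means to encode semantics in syntax: a derivation rule applied purely by pattern-matching on the shape of symbols, which nevertheless preserves truth, a semantic property.

```python
# A toy illustration (not from the post or the SEP entry) of semantics tied to syntax:
# modus ponens applied purely by matching symbol shapes. The rule never consults what
# 'rain' or 'wet' mean, yet any conclusion it derives is true whenever the premises are.

def derive(premises):
    """Closure of the premises under modus ponens: from p and ('->', p, q), add q."""
    known = set(premises)
    changed = True
    while changed:
        changed = False
        for f in list(known):
            # A conditional is represented syntactically as the tuple ('->', p, q).
            if isinstance(f, tuple) and f[0] == '->' and f[1] in known and f[2] not in known:
                known.add(f[2])
                changed = True
    return known

premises = {'rain', ('->', 'rain', 'wet'), ('->', 'wet', 'slippery')}
print(derive(premises))
# Derives 'wet' and 'slippery' without ever interpreting the symbols.
```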

A corresponding theory of networks that goes with such a philosophy of the brain is connectionism. It was developed by Walter Pitts and Warren McCulloch in 1943, and subsequently popularized by Frank Rosenblatt (with his 1957 conceptualization of the Perceptron, the simplest feedforward neural network), and by James McClelland and David Rumelhart (‘Learning the past tenses of English verbs: Implicit rules or parallel distributed processing’, in B. MacWhinney (Ed.), Mechanisms of Language Acquisition, pp. 194-248, Mahwah, NJ: Erlbaum) in 1987.

(L to R) Walter Pitts (L-top), Warren McCulloch (L-bottom), David Rumelhart, and James McClelland
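As a concrete illustration of the connectionist building block named above, here is a minimal sketch of a Rosenblatt-style perceptron; the OR-gate training data is my own toy example, not anything from the original papers.

```python
# A minimal Rosenblatt-style perceptron: a single unit that learns a linear decision
# rule from examples. The OR-gate data below is a toy example of my own choosing.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs, with targets in {0, 1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = t - y                                  # perceptron learning rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn the OR function from its four input-output pairs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
for x, t in data:
    y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    print(x, "->", y, "(target", t, ")")
```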

As described, the L-D-S rationalist contention was that fundamental entities, or monads or entelechies, couldn’t be looked for in terms of physiological changes in brain tissue but only in terms of psychological manifestations. The CTM, while it didn’t set out to contest this, does provide a framework in which the inputs and outputs are correlated consistently through an algorithm, with a neural network for an architecture and a Turing/Church machine for an algorithmic process. Moreover, this framework’s insistence on occurrent processes is not the defier of Leibniz: the occurrent is simply presented as antithetical to the dispositional.

Jerry Fodor

The defier of Leibniz is the CTM itself: if all of the brain’s workings can be elucidated in terms of an algorithm, inputs, a formalization module, and outputs, then there is no necessity to suppress any thoughts to the purely-introspectionist level (The domain of CTM, interestingly, ranges all the way from the infraconscious to the set of all modular mental processes; global mental processes, as described by Jerry Fodor in 2000, are excluded, however).

Where does quantum mechanics (QM) come in, then? Good question. The brain is a processor. The mind is a program. The architecture is a neural network. The process is that of a Turing machine. But how is the information received and transmitted? Since we were speaking of QM, more specifically the Copenhagen interpretation of it, I suppose it’s obvious that I’m talking about electronic and electrochemical signals being transmitted through sensory, motor, and interneurons. While we’re assuming that the brain is definable by a specific processual framework, we still don’t know if the interaction between the algorithm and the information is classical or quantum.

While the classical outlook is more favorable, because almost all other parts of the body are fully understood in terms of classical biology, there could be quantum mechanical forces at work in the brain: as I’ve said before, we’re in no position to confirm or deny whether its operation is purely classical or purely non-classical. However, assuming that QM is at work, associated aspects of the mind, such as awareness, consciousness, and imagination, can be described by quantum mechanical notions such as wavefunction-collapse and Heisenberg’s uncertainty principle – more specifically, by strong and weak observations on quantum systems.

The wavefunction can be understood as an avatar of the state-function in the context of QM. However, while the state-function can be continuously observed in the classical sense, the wavefunction, when subjected to an observation, collapses. When this happens, what was earlier a superposition of multiple eigenstates, each a metaphor for a physical reality, becomes resolved, in a manner of speaking, into one. This counter-intuitive principle was best summarized by Erwin Schrodinger in 1935 in a thought experiment titled…

[youtube http://www.youtube.com/watch?v=IOYyCHGWJq4?rel=0]

This aspect of observation, as is succinctly explained in the video, is what forces nature’s hand. Now, we pull in Werner Heisenberg and his notoriously annoying principle of uncertainty: if either of two conjugate parameters of a particle is measured precisely, the value of the other parameter is disturbed. However, when Heisenberg formulated the principle heuristically in 1927, he also thankfully formulated a limit of uncertainty. If a measurement extracts so little information that it stays within the minuscule leeway offered by that limit, then the conjugate parameters can be tracked simultaneously without any appreciable disturbance. Such a measurement is called a “weak” measurement.
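For reference, the “limit” being referred to is usually written as the Kennard form of the uncertainty relation, and the outcome of a weakly coupled, post-selected measurement is usually summarized by the Aharonov-Albert-Vaidman weak value; these are standard textbook expressions, not formulas from the original post.

```latex
% Standard expressions, added for reference (not from the original post).
% Kennard form of the uncertainty relation for the conjugate pair position-momentum:
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
% Weak value of an observable \hat{A}, measured weakly on a system prepared in
% |\psi\rangle and post-selected in |\phi\rangle (Aharonov-Albert-Vaidman):
A_w \;=\; \frac{\langle \phi | \hat{A} | \psi \rangle}{\langle \phi | \psi \rangle}
```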

Now, in the brain, if our ability to imagine could be ascribed – figuratively, at least – to our ability to “weakly” measure the properties of a quantum system via its wavefunction, then our brain would be able to comprehend different information-states and eventually arrive at one to act upon. By extension, I may not be implying that our brain could be capable of force-collapsing a wavefunction into a particular state… but what if I am? After all, the CTM does require inputs to be deterministic.

How hard is it to freely commit to a causal chain?

By moving upward from the infraconscious domain of applicability of the CTM to the more complex cognitive functions, we are constantly teaching ourselves how to perform different kinds of tasks. By inculcating a vast and intricately interconnected network of simple memories and meanings, we are engendering the emergence of complexity and complex systems. In this teaching process, we also inculcate the notion of free-will, which is simply a heady combination of traditionalism and rationalism.

While we could be, with the utmost conviction, dreaming up nonsensical images in our heads, those images could just as easily be the result of parsing different memories and meanings (that we already know), simulating them, “weakly” observing them, forcing successive collapses into reality according to our traditional preferences and current environmental stimuli, and then storing them as more memories accompanied by more semantic connotations.

A muffling of the monsoons

New research conducted at the Potsdam Institute for Climate Impact Research suggests that global warming could cause frequent and severe failures of the Indian summer monsoon in the next two centuries.

The study joins a growing body of work, conducted by different research groups over the last five years, that demonstrates a negative relationship between the two phenomena.

The researchers, Jacob Schewe and Anders Levermann, defined failure as a decrease in rainfall of 40 to 70 per cent below normal levels. Their findings, published on November 6 in Environmental Research Letters, show that as we move into the 22nd century, increasing temperatures contribute to a strengthening Pacific Walker circulation that brings higher pressures over eastern India, which weaken the monsoon.

The Walker circulation was first proposed by Sir Gilbert Walker over 70 years ago. It dictates that over regions such as the Indian peninsula, changes in temperature and changes in pressure and rainfall feed back into each other to bring about a cyclic variation in rainfall levels. The result of this is a seasonal high pressure over the western Indian Ocean.

Now, almost once every five years, the eastern Pacific Ocean undergoes a warm phase that leads to a high air pressure over it. This is called the El Nino Southern Oscillation.

In years when El Nino occurs, the high pressure over the western Indian Ocean shifts eastward and brings high pressure over land, suppressing the monsoon.

The researchers’ simulation showed that as temperatures increase in the future, the Walker circulation brings more high pressure over India on average, even though the strength of El Nino isn’t shown to increase.

The researchers described the changes they observed as unprecedented in the India Meteorological Department’s data, which dates back to the early 1900s. As Schewe, lead author of the study, commented to Phys Org, “Our study points to the possibility of even more severe changes to monsoon rainfall caused by climatic shifts that may take place later this century and beyond.”

A study published in 2007 by researchers at the Lawrence Livermore National Laboratory, California, and the International Pacific Research Centre, Hawaii, showed an increase in rainfall levels throughout much of the 21st century followed by a rapid decrease. This is consistent with the findings of Schewe and Levermann.

Similarly, a study published in April 2012 by the Chinese Academy of Sciences demonstrated the steadily weakening nature of the Indian summer monsoon since 1860 owing to rising global temperatures.

The Indian economy, being predominantly agrarian, depends greatly on the summer monsoon, which lasts from June to September. The country last faced a widespread drought due to insufficient rainfall in the summer of 2009, when it had to import sugar, pushing world prices for the commodity to a 30-year high.

Eschatology

The meaning of the blog is changing by the day. While some argue that long-form journalism is in, I think it’s about extreme-form journalism. By this, I only mean that long-forms and short-forms are increasingly doing better, while the in-betweens are having to constantly redefine themselves, nebulously poised as they are between one mode that takes seconds to go viral and another that engages the intellectual and the creative for periods long enough to prompt protracted introspection on all kinds of things.

Having said this, it is inevitable that this blog, trapped between an erstwhile obsessed blogger and a job that demands most of his time, will eventually cascade into becoming an archive: a repository of links, memories, stories, and a smatter of comments on a variety of subjects, as and when each one caught this blogger’s fancy. I understand I’m making a mountain out of a molehill. However, this episode concludes a four-year-old tradition of blogging at least 2,000 words a week, something that avalanched into a habit, and ultimately into a career.

Thanks for reading. Much more will come, but just not as often as it has.

Backfiring biofuels in the EU

A version of this article as written by me appeared in The Hindu on November 8, 2012.

The European Union (EU) announced on October 17 that the amount of biofuels that will be required to make up the transportation energy mix by 2020 has been halved from 10 per cent to 5 per cent. The rollback mostly affects first-generation biofuels, which are produced from food crops such as corn, sugarcane, and potato.

The new policy is in place to mitigate the backfiring of switching from less-clean fossil fuels to first-generation biofuels. An impact assessment study conducted in 2009-2012 by the EU found that greenhouse gas emissions were on the rise because of conversion of agricultural land to land for planting first-generation biofuel crops. In the process, authorities found that large quantities of carbon stock had been released into the atmosphere because of forest clearance and peatland-draining.

Moreover, because food production has now been shifted to another location, transportation and other logistical fuel costs will have been incurred, the emissions from which will also have to be factored in. These numbers fall under the causal ambit of indirect land-use change (ILUC), which also includes the conversion of previously uncultivable land to arable land. On October 17, The Guardian published an article that called the EU’s proposals “watered down” because it had chosen not to penalize fuel suppliers involved in the ILUC.

This writer believes that’s only fair – that the EU not penalize the fuel suppliers – considering the “farming” of first-generation biofuels was enabled, rather incentivized, by the EU, which would have known well that agricultural processes would be displaced and that agricultural output would drop in certain pockets. The backfiring happened only because the organization had underestimated the extent to which collateral emissions would outweigh the carbon credits saved due to biofuel use. (As for not enforcing legal consequences on those who manufacture only first-generation biofuels but go on to claim carbon credits arising from second-generation biofuel use as well: continue reading.)

Anyway, as a step toward achieving the new goals, the EU will impose an emissions threshold on the amount of carbon stock that can be released when some agricultural land is converted for the sake of biofuel crops. Essentially, this will exclude many biofuels from entering the market.

While this move reduces the acreage that “fuel-farming” eats up, it is not without critics. As Tracy Carty, a spokeswoman for the poverty action group Oxfam, said over Twitter, “The cap is lower than the current levels of biofuels use and will do nothing to reduce high food prices.” Earlier, especially in the US, farmers had resorted to first-generation biofuels such as biodiesel to cash in on their steady (and state-assured) demand, as opposed to the volatility of food prices.

The October 17 announcement effectively revises the Renewable Energy Directive (RED), 2009, which first required that biofuels constitute 10 per cent of the alternate energy mix by 2020.

The EU is now incentivising second-generation biofuels, mostly in an implied manner; these are manufactured from feedstocks such as crop residues, organic waste, algae, and woody materials, and do not interfere with food production. The RED also requires that biofuels that replace fossil fuels be at least 35 per cent more efficient. Now, the EU has revised that number to increase to 50 per cent in 2017, and to 60 per cent after 2020. This is a clear sign that first-generation biofuels, which enter the scene with a bagful of emissions, will be phased out while their second-generation counterparts take their places – at least, this ought to happen considering the profitability of first-generation alternatives is set to go down.

However, the research concerning high-performance biofuels is still nascent. As of now, it has been aimed at extracting the maximum amount of fuel from the available stock, not as much at improving the fuel’s efficiency. This is especially observed with the extraction of ethanol from wood, high-efficiency microalgae for biodiesel production, the production of butanol from biomass with help from the bacterium Clostridium acetobutylicum, etc. – where more is known about the extraction efficiency and process economics than about the performance of the fuel itself. Perhaps the new proposals will siphon some research out of the biotech community in the near future.

Like the EU, the USA also has a biofuel-consumption target set for 2022, by when it requires that 36 billion gallons of renewable fuel be mixed with transport fuel, up from the 9 billion gallons mandated by 2008. More specifically, under the Energy Independence and Security Act (EISA) of 2007,

  • the RFS program was expanded to include diesel, in addition to gasoline;
  • new categories of renewable fuel were established, with separate volume requirements set for each one;
  • the EPA was required to apply lifecycle greenhouse gas performance threshold standards to ensure that each category of renewable fuel emits fewer greenhouse gases than the petroleum fuel it replaces.

(The last requirement – the lifecycle greenhouse gas thresholds – is what the EU has now included in its policies.)

However, a US National Research Council report released on October 24 found that if algal biofuels, second-generation fuels whose energy capacity lies between petrol’s and diesel’s, were to constitute as much as 5 per cent of the country’s alternate energy mix, “unsustainable demands” would be placed on “energy, water and nutrients”.

Anyway, two major energy blocs – the USA and the EU – are leading the way to phase out first-generation biofuels and replace them completely with their second-generation counterparts. In fact, two other large-scale producers of biofuels, Indonesia and Argentina, from which the EU imports 22 per cent of its biofuels, could also be forced to ramp up investment and research in line with their buyer’s interests. As Gunther Oettinger, the EU Energy Commissioner, remarked, “This new proposal will give new incentives for best-performing biofuels.” The announcement also affirms that till 2020, no major changes will be effected in the biofuels sector, and that post-2020, only second-generation biofuels will be supported, paving the way for sustained and focused development of high-efficiency, low-emission alternatives to fossil fuels.

(Note: The next progress report of the European Commission on the environmental impact of the production and consumption of biofuels in the EU is due on December 31, 2014.)

A cultured evolution?

Can perceptions arising out of cultural needs override evolutionary goals in the long run? For example, in India, the average marriage age is now in the late 20s. Here, the (popular) tradition is to frown upon, and even ostracize, those who would engage in premarital sex. So, after 10,000 years, say, are Indians more likely to have the development of their sexual desires postponed to their late 20s (if they are not exposed to any avenues of sexual expression)? This question arose as a consequence of a short discussion with some friends on an article that appeared in SciAm: about whether (heterosexual) men and women could stay “just friends”. To paraphrase the principal question in the context of the SciAm-featured “study”:

  1. Would you agree that the statistical implications of gender-sensitive studies will vary from region to region simply because the reasons on the basis of which such relationships can be established vary from one socio-political context to another?
  2. Assuming you have agreed to the first question: Would you contend that the underlying biological imperatives can, someday, be overridden altogether in favor of holding up cultural paradigms (or vice versa)?

Is such a thing even possible? (To be clear: I’m not looking for hypotheses and conjectures; if you can link me to papers that support your point of view, that’d be great.)

Plotting a technological history of journalism

Electric telegraph

  • July 27, 1866 – SS Great Eastern completes laying of Transatlantic telegraphic cables
  • By 1852, American telegraphic wiring had grown from 40 miles in 1846 to 23,000 miles
  • Between 1849 and 1869, telegraphic mileage increased by 108,000 miles

The cost of information transmission fell with its increasing ubiquity, as well as with the near-instantaneous nature of global communication.

  • Usefulness of information was preserved through transmission-time, increasing its shelf-life, making production of information a significant task
  • Led to a boost in trade as well

The advent of war – especially political turmoil in Europe and the American Civil War – pushed rapid developments in telegraphic technology.

These last-mentioned events led to the establishment of journalism as a recognized profession

  • because it focused finally on locating and defining local information,
  • because transmission of information could now be secured through other means,
  • and because it prompted newspaper establishments to install information-transmission services of their own –
  • leading to a proliferation of competition and an emphasis on raising the quality of reportage

The advent of the electric telegraph, a harbinger of the “small world” phenomenon, did not contribute to the refinement of journalistic genres as much as it helped establish them.

In the same period – roughly 1830 to 1870 – significant political events that transpired alongside the evolution of communication, and were revolutionized by it, too, included the rapid urbanization of the USA and Great Britain (as a result of industrialization), the Belgian revolution, the first Opium War, the July revolution, the Don Pacifico affair, and the November uprising.

Other notable events include the laying of the Raleigh-Gaston railroad in North Carolina and advent of the first steam locomotives in England. Essentially, the world was ready to receive its first specialized story-tellers.

Photography

Picture on the web from mousebilenadam

Photography developed from the mid-19th century onward. While it did not have as drastic an impact as the electric telegraph did, it has instead been undergoing a slew of changes whose impetus comes from technological advancement. While black-and-white photography was prevalent for quite a while, it was color photography that refocused interest in using the technology to augment story-telling.

  • Using photography to tell a story involves a trade-off between neutrality and subjective opinions
  • A photographer, in capturing his subject, first frames it such that it encapsulates the emotions he is looking for

Photography establishes a relationship between some knowledge and some reality, and prevents interpretations from taking any other shape:

  • As such a mode of story-telling, it is a powerful tool only when the right to do so is well-exercised, and there is no given way of determining that absolutely
  • Shooting through a lens is a powerful way to capture socio-history, and thus preserve it in a columbarium of other such events, creating, in a manner of speaking, something akin to Asimov’s psycho-history
  • What is true in the case of photo-journalism is only partly true in the case of print-based story-telling

Photography led to the establishment of perspectives, of the ability of mankind to preserve events as well as their connotations, imbuing new power into large-scale movements and revolutions. Without the ability to visualize connotations, adversarial journalism – and the Fourth Estate as it were – may not have become as powerful as it currently is, much of that power resting on photography’s ability to provide often unambiguous evidence for or against arguments.

  • A good birthplace of the discussion on photography’s impact on journalism is Susan Sontag’s 1977 book, On Photography.
  • Photography also furthered interest in the arts, starting with the contributions of William Talbot.

Television

Although television sets were introduced in the USA in the 1930s, a good definition of its impact came in the famous Wasteland Speech in 1961 by Newton Minow, speaking at a convention of the National Association of Broadcasters.

When television is good, nothing — not the theater, not the magazines or newspapers — nothing is better.

But when television is bad, nothing is worse. I invite each of you to sit down in front of your own television set when your station goes on the air and stay there, for a day, without a book, without a magazine, without a newspaper, without a profit and loss sheet or a rating book to distract you. Keep your eyes glued to that set until the station signs off. I can assure you that what you will observe is a vast wasteland.

You will see a procession of game shows, formula comedies about totally unbelievable families, blood and thunder, mayhem, violence, sadism, murder, western bad men, western good men, private eyes, gangsters, more violence, and cartoons. And endlessly commercials — many screaming, cajoling, and offending. And most of all, boredom. True, you’ll see a few things you will enjoy. But they will be very, very few. And if you think I exaggerate, I only ask you to try it.

It is in occupying this space, the “vast wasteland”, that journalism and television came together to redefine news delivery.

Television is a powerful tool for the promotion of socio-political agendas: this was most effectively demonstrated during the Vietnam War, when, as Michael Mandelbaum wrote in 1982,

… regular exposure to the early realities of battle is thought to have turned the public against the war, forcing the withdrawal of American troops and leaving the way clear for the eventual Communist victory.

This opinion, as expressed by then-president Lyndon Johnson, was also defended by Mandelbaum as a truism in the same work (Daedalus: ‘Print Culture and Video Culture’, vol. 111, no. 4, pp. 157-158).

In the entertainment-versus-informative-programming debate, an important contribution was made by Neil Postman in his 1985 work Amusing Ourselves to Death, wherein he warned of the decline in humankind’s ability to communicate and share serious ideas, and of the role television played in this decline because of its ability to only transfer information, not foster interaction.

Watch here…

[youtube http://www.youtube.com/watch?v=FRabb6_Gr2Y?rel=0]

And continued here…

[youtube http://www.youtube.com/watch?v=zHd31L6XPEQ?rel=0]


Arguing in a similar vein in his landmark speech in 1990 at a computer science meeting in Germany, Postman said,

Everything from telegraphy and photography in the 19th century to the silicon chip in the twentieth has amplified the din of information, until matters have reached such proportions today that, for the average person, information no longer has any relation to the solution of problems.

In his conclusion, he blamed television for severing the tie between information and action.

The advent of the television also played a significant role in American feminism.

A case of Kuhn, quasicrystals & communication – Part IV

Dan Shechtman won the Nobel Prize for chemistry in 2011. This led to an explosion of interest in the subject of QCs and in Shechtman’s travails in getting the theory validated.

Numerous publications, from Reuters to The Hindu, published articles and reports. In fact, The Guardian ran an online article giving a blow-by-blow account of how the author, Ian Sample, attempted to contact Shechtman while the events succeeding the announcement of the prize unfolded.

All this attention served as a consummation of the events that started to avalanche in 1982. Today, QCs are synonymous with the interesting possibilities of materials science as much as with perseverance, dedication, humility, and an open mind.

Since the acceptance of the fact of QCs, the Israeli chemist has gone on to win the Physics Award of the Friedenberg Fund (1986), the Rothschild Prize in engineering (1990), the Weizmann Science Award (1993), the 1998 Israel Prize for Physics, the prestigious Wolf Prize in Physics (1998), and the EMET Prize in chemistry (2002).

Pauling’s influence on the scientific community faded as Shechtman’s recognition grew, but it was his death in 1994 that marked the end of all opposition to an idea that had long since gained mainstream acceptance. The swing in Shechtman’s favour, unsurprisingly, began with the observation of QCs and the icosahedral phase in other laboratories around the world.

Interestingly, Indian scientists were among the forerunners in confirming the existence of QCs. As early as 1985, when the paper published by Shechtman and others in Physical Review Letters was just a year old, S. Ranganathan and Kamanio Chattopadhyay (amongst others), two of India’s preeminent crystallographers, published a paper in Current Science announcing the discovery of materials that exhibited decagonal symmetry. Such materials are quasicrystalline in two dimensions and periodic along the third.

The story of QCs is most important as a post-Second-World-War instance of a paradigm shift occurring in a field of science easily a few centuries old.

No other discovery has rattled scientists as much in these years, and since the Shechtman-Pauling episode, academic peers have been more receptive to dissonant findings. At the same time, credit must be given to the rapid advancements in technology and in our knowledge of statistical techniques: without them, the startling quickness with which each hypothesis can be tested today wouldn’t have been possible.

The analysis of the media representation of the discovery of quasicrystals with respect to Thomas Kuhn’s epistemological contentions in his The Structure of Scientific Revolutions was an attempt to understand his standpoints by exploring more of what went on in the physical chemistry circles of the 1980s.

While there remains the unresolved discrepancy – whether knowledge is non-accumulative simply because the information founding it has not been available before – Kuhn’s propositions hold in terms of the identification of the anomaly, the mounting of the crisis period, the communication breakdown within scientific circles, the shift from normal science to cutting-edge science, and the eventual acceptance of a new paradigm and the discarding of the old one.

Consequently, it appears that science journalists have indeed taken note of these developments in terms of The Structure. Thus, the book’s influence on science journalism can be held to be persistent, and is definitely evident.

A case of Kuhn, quasicrystals & communication – Part III

The doctrine of incommensurability arises out of the conflict between two paradigms and the faltering of communications between the two adherent factions.

According to Kuhn, scientists are seldom inclined to abandon the paradigm at the first hint of crisis – as elucidated in the previous section – and instead denounce the necessity for a new paradigm. However, these considerations aside, the implications for a scientist who proposes the introduction of a new paradigm, as Shechtman did, are troublesome.

Such a scientist will find himself ostracized by the community of academicians he belongs to because of the anomalous nature of his discovery and, thus, his suddenly questionable credentials. At the same time, because of such ostracism, the large audience required to develop the discovery and attempt to inculcate its nature into the extant paradigm becomes inaccessible.

As a result, there is a communication breakdown between the old faction and the new faction, whereby the former rejects the finding and continues to further the extant paradigm while the latter rejects the paradigm and tries to bring in a new one.

Incommensurability exists only during the time of crisis, when a paradigm shift is foretold. A paradigm shift is called so because there is no continuous evolution from the old paradigm to the new one. As Kuhn puts it (p. 103),

… the reception of a new paradigm often necessitates a redefinition of the corresponding science.

For this reason, what is incommensurable is not only the views of warring scientists but also the new knowledge and the old one. In terms of a finding, the old knowledge could be said to be either incomplete or misguided, whereas the new one could be remedial or revolutionary.

In Shechtman’s case, because icosahedral symmetries were altogether forbidden by the old theory, the new finding was not remedial but revolutionary. Therefore, the new terms that the finding introduced were not translatable in terms of the old one, leading to a more technical form of communication breakdown and the loss of the ability of scientists to predict what could happen next.

A final corollary of the doctrine is that because of the irreconcilable nature of the new and old knowledge, its evolution cannot be held to be continuous, only contiguous. In this sense, knowledge becomes a non-cumulative entity, one that cannot have been accumulated continuously over the centuries, but one that underwent constant redefinition to become what it is today.

As for Dan Shechtman, the question is this: Does the media’s portrayal of the crisis period reflect any incommensurability (be it in terms of knowledge or communication)?

How strong was the paradigm shift?

In describing the difference between “seeing” and “seeing as”, Kuhn speaks about two kinds of incommensurability as far as scientific knowledge is concerned. Elegantly put as “A pendulum is not a falling stone, nor is oxygen dephlogisticated air,” the argument is that when a paradigm shift occurs, the empirical data will remain unchanged even as the relationship between the data changes. In Shechtman’s and Levine’s cases, the discovery of “forbidden” 3D icosahedral point symmetry does not mean that the previous structures are faulty but simply that the new structure is now one of the possibilities.

However, there is some discrepancy regarding how much the two paradigms are really incommensurable. For one, Kuhn’s argument that an old paradigm and a new paradigm will be strongly incommensurable can be disputed: he says that during a paradigm shift, there can be no reinterpretation of the old theory that can transform to being commensurable with the new one.

However, this doesn’t seem to be the case: five-fold axes of symmetry were forbidden by the old theory because they had been shown mathematically to lack translational symmetry, and because the thermodynamics of such a structure did not fall in line with the empirical data corresponding to substances that were perfectly crystalline or perfectly amorphous.

Therefore, the discovery of QCs established a new set of relationships between the parameters that influenced the formation of one crystal structure over another. At the same time, they did permit a reinterpretation of the old theory because the finding did not refute the old laws – it just introduced an addition.

For Kuhn to be correct a paradigm shift should have occurred that introduced a new relationship between different bits of data; in Shechtman’s case, the data was not available in the first place!

Here, Shechtman can be attributed with making a fantastic discovery and no more. There is no documented evidence to establish that someone observed QC before Shechtman did but interpreted it according to the older paradigm.

In this regard, what is thought to be a paradigm shift can actually be argued to be an enhancement of the old paradigm: no shift need have occurred. However, this was entirely disregarded by science journalists and commentators such as Browne and Eugene Garfield, who regarded the discovery of QCs as simply being anomalous and therefore crisis-prompting, indicating a tendency to be historicist – in keeping with the antirealism argument against scientific realism as put forth by Richard Boyd.

Thus, the comparison to The Structure that held up all this time fails.

There are many reasons why this could have been so, not the least of which is the involvement of Pauling and his influence in postponing the announcement of the discovery (Pauling’s credentials were, at the time, far less questionable than Shechtman’s were).

Linus Carl Pauling (1901-1994) (Image from Wikipedia)

As likely as oobleck

Alan I. Goldman, a professor of physics at the Iowa State University, wrote in the 84th volume of the American Scientist,

Quasicrystals … are rather like oobleck, a form of precipitation invented by Dr. Seuss. Both the quasicrystals and the oobleck are new and unexpected. Since the discovery of a new class of materials is about as likely as the occurrence of a new form of precipitation, quasicrystals, like oobleck, suffered at first from a credibility problem.

There were many accomplished chemists who thought that QCs were nothing more than as-yet not fully understood crystal structures, and some among them even believed that QCs were an anomalous form of glass.

The most celebrated among those accomplished was Linus Pauling, who died in 1994 after belatedly acknowledging the existence of QCs. It was his infamous remark in 1982 that brought a lot of trouble for Shechtman, who was subsequently asked to leave the research group because he was “bringing disgrace” on its members and the paper he sought to publish was declined by journals.

Perhaps this was because he took immense pride in his works and in his contributions to the field of physical chemistry; otherwise, his “abandonment” of the old paradigm would have come easier – and here, the paradigm that did not admit the observation of QCs is referred to as old.

In fact, Pauling was so adamant that he proposed a slew of alternate crystal structures that would explain the structure of QCs as well as remain conformant with the old paradigm, with a paper appearing in 1988, long after QCs had become staple knowledge.

Order and periodicity

Insofar as the breakdown in communication is concerned, it seems to have stemmed from the tying-in of order and periodicity: crystallography’s handling of crystalline and amorphous substances had ingrained into the chemist’s psyche the coexistence of structure and repeatability.

Because the crystal structures of QCs were ordered but not periodical, even those who could acknowledge their existence had difficulty believing that QCs “were just as ordered as” crystals were, in the process isolating Shechtman further.

John Cahn, a senior crystallographer at the National Bureau of Standards (NBS) at the time of the fortuitous discovery, was one such person. Like Pauling, Cahn also considered possible alternate explanations before he could agree with Shechtman and ultimately co-author the seminal PRL paper with him.

His contention was that forbidden diffraction patterns – like the one Shechtman had observed – could be recreated by the superposition of multiple allowed but rotated patterns (because of the presence of five-fold symmetry, the angle of rotation could have been 72°).
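A rough numerical sketch of that objection (my own illustration, not Cahn’s actual calculation): superpose the diffraction spots of an ordinary, “allowed” periodic lattice in five orientations related by 72° rotations, and the composite pattern comes out invariant under a further 72° rotation, mimicking the forbidden five-fold symmetry.

```python
# Sketch of the twinning objection (my own illustration, not Cahn's calculation):
# five copies of an allowed, periodic diffraction pattern, rotated by 72 degrees
# from one another, superpose into a composite that is invariant under a 72-degree
# rotation.

import numpy as np

# Diffraction spots of a single periodic (square-lattice) grain, as 2D wave vectors.
h, k = np.meshgrid(np.arange(-3, 4), np.arange(-3, 4))
spots = np.column_stack([h.ravel(), k.ravel()]).astype(float)

def rotate(points, degrees):
    t = np.radians(degrees)
    r = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return points @ r.T

# Superpose five twin orientations.
composite = np.vstack([rotate(spots, 72 * i) for i in range(5)])

# Check that rotating the composite by 72 degrees maps it onto itself (within float error).
rotated = rotate(composite, 72)
five_fold = all(np.linalg.norm(composite - p, axis=1).min() < 1e-9 for p in rotated)
print("composite pattern is five-fold symmetric:", five_fold)
```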

A crystal-twinning pattern in a leucite crystal

This was explained through a process called twinning, whereby the growth vector of a crystal, during its growth phase, could suddenly change direction without any explanation or prior indication. In fact, Cahn’s exact response was,

Go away, Danny. These are twins and that’s not terribly interesting.

This explanation involving twinning was soon adopted by many of Shechtman’s peers, and he was repeatedly forced to return with results from the diffraction experiment to attempt to convince those who disagreed with the finding. His attempts were all in vain, and he was eventually dismissed from the project group at NBS.

Conclusion

All these events are a reflection of the communication breakdown within the academic community and, for a time, the two sides were essentially Shechtman and all the others.

The media portrayal of this time, however, seems to be completely factual and devoid of deduction or opining because of the involvement of the likes of Pauling and Cahn, who, in a manner of speaking, popularized the incident among media circles: that there was a communication breakdown became ubiquitous fact.

Shechtman himself, after winning the Nobel Prize for chemistry in 2011 for the discovery of QCs, admitted that he was isolated for a time before acceptance came his way – after the development of a crisis became known.

At the same time, there is the persisting issue of knowledge as being non-accumulative: as stated earlier, journalists have disregarded the possibility, not unlike many scientists, unfortunately, that the old paradigm did not make way for a new one as much as it became the new one.

That this was not the focus of their interest is not surprising because it is a pedantic viewpoint, one that serves to draw attention to the “paradigm shift” not being “Kuhnian” in nature, after all. Just because journalists and other writers constantly referred to the discovery of QCs as being paradigm-shifting need not mean that a paradigm-shift did occur there.

A case of Kuhn, quasicrystals & communication – Part II

Did science journalists find QCs anomalous? Did they report the crisis period as it happened or as an isolated incident? Whether they did or did not will be indicative of Kuhn’s influence on science journalism as well as a reflection of The Structure’s influence on the scientific community.

In the early days of crystallography, when the arrangement of molecules was thought to be simpler, each molecule was thought to occupy a point in a two-dimensional (2D) plane, and these planes were then stacked one on top of another to give rise to the crystal. However, as time passed and imaginative chemists and mathematicians began to participate in the attempts to deduce the crystal lattice perfectly, the idea of a three-dimensional (3D) lattice began to catch on.

At the same time, scientists also found that there were many materials, like some powders, which did not restrict their molecules to any arrangement and instead left them to disperse themselves chaotically. The former were called crystalline, the latter amorphous (“without form”).

All substances, it was agreed, had to be either crystalline – with structure – or amorphous – without it. A more physical definition was adopted from Euclid’s Stoicheia (Elements, c. 300 BC): that the crystal lattice of all crystalline substances had to exhibit translational symmetry and rotational symmetry, and that all amorphous substances couldn’t exhibit either.

(An arrangement exhibits translational symmetry if it looks the same after being moved in any direction through a specific distance. Similarly, rotational symmetry is when the arrangement looks the same after being rotated through some angle.)
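The reason five-fold rotations were “forbidden” can be stated in one line; the following is the standard crystallographic restriction argument, added here for reference rather than taken from the original post.

```latex
% Crystallographic restriction (standard result, for reference): a rotation R(\theta)
% that maps a lattice onto itself has an integer matrix in the lattice basis, so its
% trace must be an integer:
\operatorname{tr} R(\theta) \;=\; 2\cos\theta \;\in\; \mathbb{Z},
% which restricts \theta = 2\pi/n to n \in \{1, 2, 3, 4, 6\}. Five-fold symmetry,
% with 2\cos(2\pi/5) = (\sqrt{5} - 1)/2, is therefore incompatible with
% translational periodicity.
```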

In an article titled ‘Puzzling Crystals Plunge Scientists Into Uncertainty’ published in The New York Times on July 30, 1985, Pulitzer-prize winning science journalist Malcolm W Browne wrote that “the discovery of a new type of crystal that violates some of the accepted rules has touched off an explosion of conjecture and research…” referring to QCs.

Malcolm W. Browne

Paper a day on the subject

In the article, Browne writes that Shechtman’s finding (though not explicitly credited) has “galvanized microstructure analysts, mathematicians, metallurgists and physicists in at least eight countries.”

This observation points to the discovery’s anomalous nature since, from an empirical point of view, Browne suggests that such a large number of scientists from such diverse fields had not come together to understand anything in recent times. In fact, he goes on to remark that, according to one estimate, a paper a day was being published on the subject.

Getting one’s paper published by an academic journal is important to any scientist because it formally establishes primacy. That is, once a paper has been published by a journal, the contents of the paper are attributed to the paper’s authors and no one else.

Since no two journals will accept the same paper for publication (a kind of double jeopardy), a paper a day implies that distinct solutions were presented each day. Therefore, Browne seems to claim in his article, in the framework of Kuhn’s positions, that scientists were quite excited about the discovery of a phenomenon that violated a longstanding paradigm.

Shechtman’s paper had been published in the prestigious Physical Review Letters – put out by the American Physical Society from Maryland, USA – in the 20th issue of its 53rd volume, 1984, but not without its share of problems.

Istvan Hargittai, a reputed crystallographer with the Israel Academy of Sciences and Humanities, described a first-hand account of the years 1982 to 1984 in Shechtman’s life in the April 2011 issue of Structural Chemistry. In these accounts, he says that,

Once Shechtman had completed his experiment, he became very lonely as every scientific discoverer does: the discoverer knows something nobody else does.

In Shechtman’s case, however, this loneliness was compounded by two aspects of his discovery that made it difficult for him to communicate with his peers about it. First: to him, it was such an important discovery that he wanted desperately to discuss its possibilities with those established in the field – and the latter dismissed his claims as specious.

Second: the fact that he couldn’t conclusively explain what he himself had found troubled him, and kept him from publishing his results.

At the time, Hargittai was a friend of a British crystallographer named Alan Mackay, of Birkbeck College in London. Mackay had, a few years earlier, noted the work of the mathematician Roger Penrose, who had created a non-repeating pattern in which pentagons and a handful of related shapes were used to tile a 2D space completely (Penrose had derived inspiration from the work of the 17th-century astronomer Johannes Kepler).

In other words, Penrose had theoretically produced a planar version of what Shechtman was looking for, something that would help him resolve his personal crisis. Mackay, in turn, had attempted to simulate a diffraction pattern on the Penrose tiles, assuming that what was true for 2D space could be true for 3D space as well.

An example of a Penrose tiling
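To see how “order without periodicity” can still produce sharp diffraction, here is a small numerical sketch of a 1D analogue, the Fibonacci chain; this is my own illustration, not Mackay’s actual simulation on the Penrose tiles.

```python
# A 1D analogue of what Mackay was after (my own sketch, not his simulation):
# the diffraction intensity of a Fibonacci chain -- a quasiperiodic sequence of
# long (L) and short (S) intervals -- shows sharp peaks despite never repeating.

import numpy as np

# Build the Fibonacci word by repeated substitution: L -> LS, S -> L.
word = "L"
for _ in range(12):
    word = "".join("LS" if c == "L" else "L" for c in word)

# Convert the word into atom positions: step by the golden ratio for L, by 1 for S.
phi = (1 + np.sqrt(5)) / 2
steps = np.array([phi if c == "L" else 1.0 for c in word])
positions = np.concatenate([[0.0], np.cumsum(steps)])

# Diffraction intensity |S(q)|^2 over a grid of wave numbers.
q = np.linspace(0.01, 20.0, 4000)
amplitude = np.exp(1j * np.outer(q, positions)).sum(axis=1)
intensity = np.abs(amplitude) ** 2 / len(positions)

# The strongest peaks stand far above the background, Bragg-like, even though the
# chain has no translational symmetry.
strongest = np.sort(q[np.argsort(intensity)[-5:]])
print("strong peaks near q =", strongest)
```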

By the time Mackay had communicated this development to Hargittai, Shechtman had – unaware of them – already discovered QCs.

There was another investigation ongoing at the University of Pennsylvania’s physics department: Dov Levine, pursuing his PhD under the guidance of Paul Steinhardt, had developed a 3D model of the Penrose tiles – again, unaware of Shechtman’s and Mackay’s works.

Thus, it is conspicuous how the anomalous nature of discoveries – which are unprecedented by definition because, otherwise, they would be expected – facilitates a communication-breakdown within the scientific community. In the case of Levine, who was eager to publish his findings, Steinhardt advised caution to avoid the ignominy that might arise out of publishing findings that are not fully explicable.

In the meantime, Shechtman had found an interested listener in Ilan Blech, a fellow crystallographer. They prepared a paper together to send to the Journal of Applied Physics in 1984, after deciding that it was imperative to reach as many scientists as possible in the search for an explanation for the structure of QCs.

However, since they had no explanation of their own, the paper had to be buried “under a mountain of information about alloys,” which prompted the Journal to write back saying the paper “would not interest physicists.”

Shechtman and Blech realized that, as a consequence of reporting such a result, they would have to spruce up its presentation. Shechtman invited veteran NBS crystallographer John Cahn, and Cahn in turn invited Denis Gratias, a French crystallographer, to join the team.

Even though Cahn had been sceptical of the possibility of QCs, he had changed his mind over the previous two years, and his presence lent some credibility to the contents of the paper. After Gratias restructured the mathematics in the paper, it finally appeared in Physical Review Letters on November 12, 1984.

(Clockwise from top-left corner) Danny Shechtman, Istvan Hargittai, Roger Penrose, Paul Steinhardt, and Dov Levine with Steinhardt

And by the time Browne’s article appeared a year later, it is safe to assume that at least 50-70 papers on the subject had been published. Whether this was a rush to accumulate anomalies or to discredit the finding is immaterial: the threat to the existing paradigm was perceptible and scientists felt the need to do something about it; Browne’s noting of the same is proof that science journalists registered the need, too.

In fact, how much of an anomaly is a finding that has been accepted for publication? Because after it has been carefully vetted and published, it becomes as good as fact: other scientists can now found their work upon it and, at the time of publication of their own papers, cite the parent paper as authority.

However, it must be noted that there are important exceptions, such as the infamous Fleischmann-Pons cold fusion episode of 1989-1990. For these reasons, let us say that a paradigm is considered to have entered a crisis period only after it is established that it cannot be “tweaked” after each discovery and allowed to continue.

Three years of falsifications

Browne, too, seems to conclude that despite a definite discovery having been made three years earlier,

… only recently has experimental evidence overwhelmed the initial skepticism of the scientific community that such a form of matter could exist.

For three years, the community would not let the discovery pass, subjecting it repeatedly to tests of falsification. A similar remark comes from the physicist Paul Steinhardt, Levine’s PhD mentor, who, in a paper titled ‘New perspectives on forbidden symmetries, quasicrystals and Penrose tilings’, remarked upon the need for “a new appreciation for the subtleties of crystallographically forbidden symmetries.”

Shechtman’s QCs exhibited rotational symmetry but not a translational one. In other words, they demanded to be placed squarely between crystalline and amorphous substances, sending researchers scurrying for an explanation.

In a period of such turmoil, Browne’s article states that some researchers were willing to consider the arrangement as existing in six-dimensional (6D) hyperspace rather than in 3D space-time.

A hexeract (or, a geopeton)

Now, someone within the community had considered physical hyperspace as an explanation as far back as 1985. Even though mathematical hyperspace had been studied since the days of Bernhard Riemann (Habilitationsschrift, 1854) and Ludwig Schläfli (Theorie der vielfachen Kontinuität, 1852), the notion of a physical hyperspatial theory with a correspondence to physical chemistry was still nascent at best.
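
The hyperspace idea can be illustrated in miniature with the “cut-and-project” method. The sketch below (my own illustrative construction, not anything taken from Browne’s article) slices a periodic 2D square lattice along a line of irrational, golden-ratio slope and projects the nearby lattice points onto that line. The result is the same kind of ordered-but-never-repeating chain as in the earlier sketch, obtained this time from a higher-dimensional periodic lattice – the low-dimensional analogue of describing a 3D quasicrystal as a section of a periodic 6D hyperlattice:

```python
# A minimal cut-and-project sketch (assumptions mine): a quasiperiodic 1D
# chain obtained by slicing a periodic 2D lattice along a golden-ratio slope.
import numpy as np

PHI = (1 + 5 ** 0.5) / 2
THETA = np.arctan(1 / PHI)                          # slope of the "physical" line
E_PAR = np.array([np.cos(THETA), np.sin(THETA)])    # unit vector along physical space
E_PERP = np.array([-np.sin(THETA), np.cos(THETA)])  # unit vector along internal space

def cut_and_project(extent: int) -> np.ndarray:
    """Project the points of the square lattice Z^2 that fall inside a strip
    (width = the unit square's shadow on E_PERP) onto the physical line."""
    i, j = np.meshgrid(np.arange(-extent, extent + 1), np.arange(-extent, extent + 1))
    points = np.stack([i.ravel(), j.ravel()], axis=1).astype(float)
    perp = points @ E_PERP
    window = abs(E_PERP[0]) + abs(E_PERP[1])        # acceptance window width
    accepted = points[(perp >= 0) & (perp < window)]
    return np.sort(accepted @ E_PAR)

if __name__ == "__main__":
    chain = cut_and_project(60)
    gaps = np.round(np.diff(chain), 6)
    # Only two gap lengths occur, and their ratio is the golden ratio:
    # the chain is perfectly ordered yet never periodic, just as the 6D
    # picture yields icosahedral order in 3D without translational symmetry.
    print("distinct gaps:", sorted({float(g) for g in gaps}))
    print("gap ratio    :", float(max(gaps) / min(gaps)), "vs phi =", PHI)
```

In this picture, the “forbidden” symmetry of a quasicrystal is simply an ordinary symmetry of a periodic lattice living in more dimensions, viewed through an irrational slice.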

Therefore, Browne’s suggestion only seems to reinforce his narrative of intellectual turbulence: scientists had stumbled upon a phenomenon so anomalous that it alone was prompting a crisis.

Conclusion

Did science journalists find QCs anomalous? Yes, they did. Browne, Hargittai and Steinhardt, amongst others, were quick to identify the anomalous nature of the newly discovered material and point it out through newspaper reports and articles published within the scientific community.

Thomas Kuhn’s position that scientists will attempt to denounce a paradigm-shift-inducing theory before they themselves are forced to shift is reflected in the writers’ accounts of Dan Shechtman in the days leading up to and just after his discovery.

Did they, the journalists, report the crisis period as it happened or as an isolated incident? That they could identify the onset of a crisis as it happened indicates that they did recognize it for what it was. However, it remains to be seen whether these confirmations validate Kuhn’s hypothesis in its entirety.

A case of Kuhn, quasicrystals & communication – Part I

Dan Shechtman’s discovery of quasi-crystals (henceforth abbreviated as QCs) in 1982 was a landmark achievement that brought about a paradigm-shift in the field of physical chemistry.

However, at the time, the discovery faced stiff resistance from the broader scientific community, including an eminent chemist of the day. This resistance made it harder for Shechtman to establish his findings as credible, but he persisted and eventually succeeded.

We know his story today through its fairly limited coverage in the media, and especially through the comments of his peers, students and friends; its revolutionary character was nonetheless well reflected in many reports and essays.

Because such publications indicated the onset of a new kind of knowledge, what merits consideration is whether the media internalized Thomas Kuhn’s philosophy of science in the way it approached the incident.

Broadly, the question is: Did the media reports reflect Kuhn’s paradigm-shifting hypothesis? Specifically, in the 1980s,

  1. Did science journalists find QCs anomalous?
  2. Did science journalists identify the crisis period when it happened or was it reported as an isolated incident?
  3. Does the media’s portrayal of the crisis period reflect any incommensurability (be it in terms of knowledge or communication)?

Finally: How did science journalism behave when reporting stories from the cutting edge?

The Structure of Scientific Revolutions

Thomas S. Kuhn (July 18, 1922 – June 17, 1996) published The Structure of Scientific Revolutions in 1962; the book was significantly influential in academic circles as well as in the scientific community. It introduced the notion of a paradigm-shift, which has since become a central concept in describing the evolution of scientific knowledge.

Thomas Kuhn, Harvard University, 1949

Kuhn defined a paradigm based on two properties:

  1. The paradigm must be sufficiently unprecedented to attract researchers to study it, and
  2. It must be sufficiently open-ended to allow for growth and debate

By this definition, most of the seminal findings of the greatest thinkers and scientists of the past are paradigmatic. Nicolaus Copernicus’s De Revolutionibus Orbium Coelestium (1543) and Isaac Newton’s Philosophiae Naturalis Principia Mathematica (1687) are both prime examples of what paradigms can be and how they shift perceptions and interests in a subject.

Such paradigms, Kuhn said (p. 25), work through three attributes inherent to their conception. The first is the determination of significant fact, whereby facts accrued through observation and experimentation are measured and recorded ever more accurately.

Even though these facts are the “pegs” of any literature concerning the paradigm, their measurement and recording are independent of the paradigm’s dictates; colloquially speaking, they are carried out anyway.

Why this is so becomes evident in the second of the three foci: the matching of facts with theory. Kuhn claims (p. 26) that this class of activity is rarer in reality; it is here that the predictions of the reigning theory are compared with the (significant) facts measured in nature.

Consequently, good agreement between the two would establish the paradigm’s robustness, whereas disagreement would indicate the need for further refinement. In fact, on the same page, Kuhn illustrates the rarity of such agreement by stating

… no more than three such areas are even yet accessible to Einstein’s general theory of relativity.

The third and last focus is on the articulation of theory. In this section, Kuhn posits that the academician conducts experiments to

  1. Determine physical constants associated with the paradigm
  2. Determine quantitative laws (so as to provide a physical quantification of the paradigm)
  3. Determine the applications of the paradigm in various fields

In The Structure, one paradigm replaces another through a process of contention. At first, a reigning paradigm exists that, to an acceptable degree of reasonableness, explains empirical observations. However, in time, as technology improves and researchers find results that don’t quite agree with the reigning paradigm, the results are listed as anomalies.

Kuhn presents this refusal to immediately absorb new findings and modify the paradigm as evidence that our expectations cloud our perception of the world.

Instead, researchers hold the position of the paradigm as fixed and immovable, and look for errors in the experimental or observational data. An example of this is the superluminal neutrinos that were “discovered”, or rather stumbled upon, by the OPERA experiment at the Gran Sasso laboratory in Italy, which studies neutrinos beamed to it from CERN.

When the experiment’s logs from that fateful day, September 23, 2011, were examined, nothing suspicious was found in the experimental setup. Despite this assurance of the instruments’ stability, however, the theory (of relativity) that prohibits such a result was held superior.

Then, on October 18, experimental confirmation arrived that the neutrinos could not have travelled faster than light: the theoretically predicted energy signature of a superluminal neutrino did not match the observed signatures.

As Kuhn says (p. 77):

Though they [scientists] may begin to lose faith and then to consider alternatives, they do not renounce the paradigm that has led them into crisis. They do not, that is, treat anomalies as counterinstances, though in the vocabulary of philosophy of science that is what they are.

However, this state of disagreement is not perpetual because, as Kuhn concedes above, an accumulation of anomalies eventually forces a crisis in the scientific community. During a period of crisis, the paradigm still reigns, but it is now and then challenged by alternatively conceived paradigms that

  1. Are sufficiently unprecedented
  2. Are open-ended to provide opportunities for growth
  3. Are able to explain the anomalies that threaten the reign of the extant paradigm

The new paradigm imposes a new framework of ideals to contain the same knowledge that dethroned the old paradigm, and because of this new framework, new relations between different bits of information become possible. Paradigm shifts are therefore periods of rejection and re-adoption as well as of restructuring and discovery.

Here Kuhn ties together three postulates: incommensurability, scientific communication, and the non-accumulative nature of knowledge. When a new paradigm takes over, there is often a reshuffling of subjects – some are relegated to a different department, some departments are broadened to include more subjects than they previously held, while other subjects are dismissed as illogical.

During this phase, some areas of knowledge may no longer be measured with the same standards that have gone before them.

Because of this incommensurability, scientific communication within the community breaks down, though only for the duration of the crisis. For one, the new framework changes the meaning of some scientific terms; and because multiple such revolutions have happened in the past, Kuhn takes the liberty of concluding that scientific knowledge is non-accumulative. This facet of scientific evolution was first considered by Herbert Butterfield in The Origins of Modern Science, 1300-1800; Kuhn then drew a comparison to visual gestalt (p. 85).

The Gestalt principles of visual perception seek to explain why the human mind sees two faces before it can identify the vase in the picture.

Just as in politics, where in a time of instability people turn to conservative ideals to restore a state of calm, scientists return to a debate over the fundamentals of science to choose a successor paradigm. This is a gradual process, Kuhn says, and it may or may not yield a new paradigm that succeeds in explaining all the anomalies.

The discovery of QCs

On April 8, 1982, Dan Shechtman, a crystallographer working at the U.S. National Bureau of Standards (NBS), made a discovery that would do nothing less than shatter the centuries-old assumptions of physical chemistry. Studying the structure of an alloy of aluminium and manganese using electron diffraction, Shechtman noted a seemingly impossible arrangement of its atoms.

In electron diffraction, electrons are used to study extremely small objects, such as atoms and molecules, because their de Broglie wavelength – which sets the resolution of the image produced – can be tuned by accelerating them through an electric potential, something possible only because electrons carry charge. At typical accelerating voltages this wavelength is far shorter than that of visible light, whose photons are therefore unsuitable for observation at the atomic scale.
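
As a rough back-of-the-envelope illustration (my own, not drawn from the articles discussed here), the non-relativistic de Broglie relation lambda = h / sqrt(2 * m_e * e * V) shows how short electron wavelengths become at ordinary accelerating voltages:

```python
# A minimal sketch (assumptions mine): de Broglie wavelength of electrons
# accelerated from rest through a potential V, non-relativistic formula.
import math

H = 6.626e-34      # Planck constant, J*s
M_E = 9.109e-31    # electron mass, kg
Q_E = 1.602e-19    # elementary charge, C

def electron_wavelength(volts: float) -> float:
    """Non-relativistic de Broglie wavelength (metres) for an accelerating voltage."""
    return H / math.sqrt(2 * M_E * Q_E * volts)

if __name__ == "__main__":
    for v in (100, 10_000, 100_000):
        print(f"{v:>7} V -> {electron_wavelength(v) * 1e12:6.1f} pm")
    # At 100 kV the estimate is a few picometres -- far below the ~200 pm
    # spacing between atoms and orders of magnitude below the ~500,000 pm
    # wavelength of visible light. (At such voltages a relativistic
    # correction of roughly 5 per cent applies; it is ignored here.)
```

This is why an electron microscope can resolve atomic arrangements that no optical instrument could.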

When accelerated electrons strike the object under study, their wave nature takes over: they scatter and form an interference pattern at the observer lens. The instrument then works backward to reconstruct the surface that could have generated the recorded pattern, in the process yielding an image of that surface. On that day in April, this is what Shechtman saw (note: the brightness of each node is only an indication of how far it is from the observer lens).

The electron-diffraction pattern exposing a quasicrystal’s lattice structure (Image from Ars Technica)

The diffraction pattern showed spots arranged in repeating pentagonal rings, which meant the crystal exhibited 5-fold symmetry, i.e. an arrangement that maps onto itself when rotated by one-fifth of a full turn. At the time, molecular arrangements were constrained by the then-36-year-old crystallographic restriction theorem, which held that only 2-, 3-, 4- and 6-fold rotational symmetries were possible in a periodic crystal. In fact, Shechtman had passed his university exams by proving that 5-fold symmetries couldn’t exist!
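
The restriction itself follows from a short argument: any rotation that maps a periodic lattice onto itself can be written as an integer matrix in a lattice basis, so its trace, 2cos(2π/n), must be an integer. The check below (my own illustration) runs through the possibilities:

```python
# A short check (assumptions mine) of the crystallographic restriction:
# an n-fold rotation is compatible with a periodic lattice only if its
# matrix trace, 2*cos(2*pi/n), is an integer.
import math

def lattice_compatible(n: int, tol: float = 1e-9) -> bool:
    """True if an n-fold rotation can be a symmetry of a periodic lattice."""
    trace = 2 * math.cos(2 * math.pi / n)
    return abs(trace - round(trace)) < tol

if __name__ == "__main__":
    for n in range(1, 13):
        print(f"{n:2d}-fold: {'allowed' if lattice_compatible(n) else 'forbidden'}")
    # Only n = 1, 2, 3, 4 and 6 come out as allowed; 5-, 8-, 10- and 12-fold
    # symmetries -- the very ones later seen in quasicrystals -- are forbidden
    # for any structure with translational periodicity.
```

It was this rule that Shechtman’s diffraction pattern appeared to violate.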

At the moment of discovery, Shechtman couldn’t believe his eyes, because what he was seeing was an anomaly. In keeping with tradition, in fact, he proceeded to look for experimental errors. Only after he could find none did he begin to consider reporting the discovery.

A photograph showing the pages from Shechtman’s logbook from the day he made the seemingly anomalous observation. Observe the words “10 Fold???”

In the second half of the 20th century, the field of crystallography was seeing some remarkable discoveries, though none would prove as unprecedented as that of QCs. Much of this progress was due to the development of spectroscopy, the study of the interaction between matter and radiation.

Using devices such as X-ray spectrometers and transmission electron microscopes (TEM), scientists could literally look at a molecule’s structure instead of having to infer its form from chemical reactions. In this period there was tremendous growth in physical chemistry, owing to the imaginative mind of one man who would later be called one of the greatest chemists of all time, and who would also make life difficult for Shechtman: Linus Carl Pauling.

Pauling epitomized the aspect of Kuhn’s philosophy that refuses to let an old paradigm die, and he therefore posed a significant hindrance to Shechtman’s radical new idea. While Shechtman presented his discovery of QCs as an anomaly that he believed should prompt a crisis, Pauling infamously declared, “There is no such thing as quasi-crystals, only quasi-scientists.”

Media reportage

The clash between Pauling and Shechtman – or rather between the “old school” and the “new kid” – created some attrition within universities in the United States and in Israel, where Shechtman held his affiliation. While a select group of individuals convinced of the veracity of the radical claims set about studying them further, others – perhaps under the weight of Pauling’s credibility – dismissed the work as erroneous and desperate. The most prominent entity in the latter camp was the Journal of Applied Physics, which refused to publish Shechtman’s finding.

In this turmoil, there was a collapse of communication between scientists of the two factions. Unfortunately, the media’s coverage of the incident was limited: a few articles appeared in newspapers, magazines and journals in the mid-1980s; in 1988, when Pauling published his own paper on QCs; in 1999, when Shechtman won the prestigious Wolf Prize in physics; and in 2011, when he won the Nobel Prize in chemistry.

Despite the low coverage, the media managed to make the existence of QCs known to a wider and less specialised audience. The rift between Pauling and Shechtman was notable because, apart from reflecting Kuhn’s views, it also brought to light the mental block scientists can have when it comes to the falsification of their own work, and how that block keeps science from progressing rapidly. In any case, such speculations are all based on the media’s representation of the events.