The weakening measurement

Unlike the special theory of relativity, which the superluminal-neutrino fiasco sought to defy, Heisenberg’s uncertainty principle offers very few, and equally iffy, measurement techniques by which it can be verified. While both Einstein’s and Heisenberg’s foundations lie close to fundamental truths, the uncertainty principle has guided, more than dictated, the applications that involve its consequences. Essentially, a defiance of Heisenberg is one for the statisticians.

And I’m pessimistic. Let’s face it, who wouldn’t be?

Anyway, the parameters involved in the experiment were:

  1. The particles being measured
  2. Weak measurement
  3. The apparatus

The experimenters claim that a value of the photon’s original polarization, X, was obtained through a weak measurement. Then, a “stronger” measurement was made, yielding a value A. According to Heisenberg’s principle, however, the observation should have changed the polarization from A to some fixed value A’.

Now, the conclusions they drew:

  1. Obtaining X did not change A: X = A
  2. A’ – A < Limits set by Heisenberg

The terms of the weak measurement are understood with the following formula in mind:
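
In standard notation, this is the weak value introduced by Aharonov, Albert, and Vaidman:

Aw = ⟨φ(2)| A-hat |φ(1)⟩ / ⟨φ(2)|φ(1)⟩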

(The bra-ket, or Dirac, notation signifies the dot-product between two vectors or vector-states.)

Here, φ(1) and φ(2) denote the pre- and post-selected states, A-hat the observable, and Aw the value of the weak measurement. Thus, as the pre-selected state tends toward orthogonality with the post-selected state, the value of the weak measurement grows, becoming large, or “strong”, enough to affect the measured value of A-hat.

In our case: Aw = A – X; φ(1) = A; φ(2) = A’.

As listed above, the sources of error are:

  1. φ(1,2)
  2. X

To prove that Heisenberg was miserly all along, Aw would have to be increased until φ(1) • φ(2) equaled 0 (through multiple runs of the same experiment), and then φ(2) – φ(1), or A’ – A, measured and compared against the corresponding values of X. Once the strength of the weak measurement is determined this way, A’ – X can be determined.

I am skeptical because X signifies the extent of coupling between the measuring device and the system being measured, and its standard deviation, in the case of this experiment, is dependent on the standard deviation of A’ – A, which is in turn dependent on X.

The common tragedy

I have never been able to fathom poetry. Not because it’s unensnarable—which it annoyingly is—but because it never seems to touch upon that all-encompassing nerve of human endeavour supposedly running through our blood, transcending cultures and time and space. Is there a common trouble that we all share? Is there a common tragedy that is not death that we all quietly await that so many claim is described by poetry?

I, for one, think that that thread of shared memory is lost, forever leaving the feeble grasp of our comprehension. In fact, I believe that there is more to be shared, more to be found that will speak to the mind’s innermost voices, in a lonely moment of self-doubting. Away from a larger freedom, a “shared freedom”, we now reside in a larger prison, an invisible cell that assumes various shapes and sizes.

Sometimes, it’s in your throat, blocking your words from surfacing. Sometimes, it has your skull in a death-grip, suffocating all thoughts. Sometimes, it holds your feet to the ground and keeps you from flying, or sticks your fingers in your ears and never lets you hear what you might want to hear. Sometimes, it’s a cock in a cunt, a blade against your nerves, a catch on your side, a tapeworm in your intestines, or that cold sensation that kills wet dreams.

Today, now, this moment, the smallest of freedoms, the freedoms that belong to us alone, are what everyone shares, what everyone experiences. It’s simply an individuation of an idea, rather a belief, and the truth of that admission—peppered as it is with much doubt—makes us hold on more tightly to it. And as much as we partake of that individuation, like little gluons that emit gluons, we inspire more to pop into existence.

Within the confines of each small freedom, we live in worlds of our own fashioning. Poetry is, to me, the voice of those worlds. It is the resultant voice, counter-resolved into one expression of will and intention and sensation, that cannot, in turn, be broken down into one man or one woman, but only into whole histories that have bred them. Poetry is, to me, no longer a contiguous spectrum of pandered hormones or a conflict-indulged struggle, but an admission of self-doubt.

Credibility on the web

There are a finite number of sources from which anyone receives information. The most prominent among them are media houses (incl. newspapers, news channels, radio stations, etc.) and scientific journals (at least w.r.t. the subjects I work with).

Seen one way, these establishments generate the information that we receive. Without them, stories would remain localized, centralized, away from the ears that could accord them gravity.

Seen another way, these establishments are also motors: sans their motive force, information wouldn’t move around as it does, although this is assuming that they don’t mess with the information itself.

With more such “motors” in the media mix, the second perspective is becoming the norm. Even if information isn’t picked up by one house, it can be set sailing through a blog or a citizen-journalism initiative. The means through which we learn something, or stumble upon it for that matter, overlap ever more, lines crossing each other’s paths more often.

Veritably, it’s a maze. In such a labyrinthine setup, the entity that stands to lose the most is the faith of the reader/viewer/consumer in the credibility of the information received.

In many cases, with a more interconnected web – the largest “supermotor” – the credibility of one bit of information is checked in one location, by one entity. Then, as it moves around, all following entities inherit that credibility-check.

For instance, on Wikipedia, credibility is established by citing news websites, newspaper/magazine articles, journals, etc. Jimmy Wales’ enterprise doesn’t have its own process of verification in place. Sure, there are volunteers who almost constantly police its millions of pages, but all they can do is check whether the citation is valid, and whether there are any contrary reports to the claims being staked.

One way or another, if a statement has appeared in a publication, it can be used to have the reader infer a fact.

In this case, Wikipedia has inherited the credibility established by another entity. If the verification process had failed in the first place, the error would’ve been perpetuated by different motors, each borrowing from the credibility of the first.

Moreover, the more strata that the information percolates through, the harder it will be to establish a chain of accountability.

*

My largest sources of information are:

  1. Wikipedia
  2. Journals
  3. Newspapers
  4. Blogs

(Social media is just a popular aggregator of news from these sources.)

Wikipedia cites news reports and journal articles.

News reports are compiled with the combined efforts of reporters and editors. Reporters verify the information they receive by checking if it’s repeated by different sources under (if possible) different circumstances. Editors proofread the copy and are (or must remain) sensitive to factual inconsistencies.

Journals have the notorious peer-review mechanism. Each paper is subject to a thorough verification process intended to weed out mistakes, errors, information “created” by lapses in the scientific method, and statistical manipulations and misinterpretations.

Blogs borrow from such sources and others.

Notice: Even in describing the passage of information through these ducts, I’ve vouched for reporters, editors, and peer-reviews. What if they fail me? How would I find out?

*

The point of this post was to illustrate:

  1. The onerous yet mandatory responsibility that verifiers of information must assume,
  2. That there aren’t enough of them, and
  3. That there isn’t a mechanism in place that periodically verifies the credibility of some information across its lifetime.

How would you ensure the credibility of all the information you receive?

Weekly science quiz

My weekly science quiz debuted in The Hindu today, in its In School edition. Here’s the first installment.

Questions

  1. Neil Armstrong, the first man to step on the moon on July 21, 1969, passed away on August 25 this year. Who was the second man to step on the moon?
  2. When the car-sized robotic rover Curiosity landed on Mars on August 6, 2012, it was only the fourth rover to achieve the feat. Can you name the other three rovers, two of which are considered “twins” of each other?
  3. This installation, when it went live in April 2012, reduced carbon dioxide emissions by 80 lakh tonnes, saved 9 lakh tonnes of coal and natural gas per year, and smashed a Chinese record held since October, 2011. What are we talking about?
  4. This “vehicle” was designed in Switzerland, built in Italy, owned by USA, and crewed by a Belgian on January 23, 1960, when it became the first vessel to descend into Mariana Trench, the deepest point in Earth’s crust. The Belgian’s father himself once held the world record for the highest altitude reached in a hot-air balloon. Name the vessel.
  5. Horizontal slickwater hydraulic fracturing is a technique, common in the USA since the 2000s, which releases natural gas locked under sub-surface rock formations by cracking open the rock with large quantities of pressurized water. What is the method’s common name?
  6. Last week, Michael Roukes and his team at Caltech built a highly sensitive weighing scale that uses a vibrating arm that is sensitive to small changes in its frequency. Called a nanoelectromechanical resonator, what can it measure?
  7. Netscape Navigator was the dominant web-browser of the 1990s, and its only competitor at the time was another browser named Mosaic. Since Netscape was being developed to beat Mosaic, its codename was a portmanteau of “Mosaic” and “killer”. What is the name?
  8. The ___________ lay their eggs in the months of February and March, and the hatchlings emerge after a 45-55 day incubation period, just before the hotter days of summer set in. Their nesting grounds include the coasts of Mexico, Nicaragua, Costa Rica, Orissa and Tamil Nadu, while each nesting batch is called an arribada. Fill in the blank.
  9. In the exosphere, highly energetic particles collide with atoms in the earth’s atmosphere and release a shower of less energetic particles. What are the highly energetic particles collectively called? Hint: 2012 is being celebrated as the 100th year of their discovery.
  10. The fictitious version of this contraption is a modified street-bike with a liquid-cooled V-4 engine. Its real version has a water-cooled single-cylinder engine, is made of steel, aluminium and magnesium, and is steered by the shoulders. What are we talking about?

Answers

  1. Edwin “Buzz” Aldrin
  2. Spirit & Opportunity, Sojourner
  3. The 214-MW Gujarat Solar Power Field, Patan district
  4. Trieste
  5. Fracking
  6. The weight of individual molecules
  7. Mozilla, the creator of Firefox
  8. Olive Ridley turtles
  9. Cosmic rays
  10. Batman’s Batpod

Who runs science in India?

Click on the image for the article.

Right now, Colin Macilwain could not be more on top of the problem: the role of a Chief Scientific Adviser has shifted toward leveraging science and technology to reap rewards through economic and industrial policy, away from bridging the gap between the ruling elite and the academically engaged.

A contrast with India, unfortunately, is meaningless in this regard. While New York and Berlin may face off over what it means to have one person at the top versus what it means to have several people engaged throughout, scientific policy in India is in a shambles more because the Chief Scientific Adviser, C.N.R. Rao, has professed no inclination toward either agenda.

Instead, given that the country is oriented primarily toward tackling the energy crisis, Rao’s role in influencing the government to institute decentralization policies, cross-generation power tariffs, and subsidization of alternative energy sources pales in comparison to the industrial lobby that subsumes his voice with just a lot of money.

While we leave universities to tackle their loss of autonomy – chiefly because the boards indulge public interference in order to maximise public-funding – and our engineers to bridge the infrastructural gap between low consumption, lower private-sector investment, and invitations to greater foreign direct-investment, who is really running science in India?

Superconductivity: From Feshbach to Fermi

(This post is continued from this one.)

After a bit of searching on Wikipedia, I found that the fundamental underpinnings of superconductivity lie in a quantum-mechanical scattering phenomenon called a Feshbach resonance. If I had to teach superconductivity to those who knew of the phenomenon only superficially, that’s where I’d begin. So.

Imagine a group of students who have gathered in a room to study together for a paper the next day. Usually, there is that one guy among them who will be hell-bent on gossiping more than studying, affecting the performance of the rest of the group. In fact, given sufficient time, the entire group’s interest will gradually shift in the direction of the gossip and away from its syllabus. The way to get the entire group back on track is to introduce a Feshbach resonance: cut the bond between the group’s interest and the entity causing the disruption. If done properly, the group will turn coherent in its interest and refocus on studying for the paper.

In multi-body systems, such as a conductor harboring electrons, the presence of a Feshbach resonance renders an internal degree of freedom independent of those coordinates “along” which dissociation is most likely to occur. And in a superconductor, a Feshbach resonance results in each electron pairing up with another (i.e., electron vibrations are quelled by eliminating thermal excitation), owing to both being influenced by an attractive potential that arises out of the electrons’ interaction with the vibrating lattice.

Feshbach resonance & BCS theory

For particulate considerations, the lattice vibrations are quantized in the form of quasiparticles called phonons. As for why the Feshbach resonance must occur the way it does in a superconductor: that is the conclusion, or rather the implication, of the BCS theory formulated in 1957 by John Bardeen, Leon Neil Cooper, and John Robert Schrieffer.

(Arrows describe the direction of forces acting on each entity) When a nucleus, N, pulls electrons, e, toward itself, it may be said that the two electrons are pulled toward a common target by a common force. Therefore, the electrons’ engagement with each other is influenced by N. The energy of N, in turn, is quantified as a phonon (p), and the electrons are said to interact through the phonons.

The BCS theory essentially treats electrons like rebellious, teenage kids (I must be getting old). As negatively charged electrons pass through the crystal lattice, they draw the positively charged nuclei toward themselves, creating an increase in the positive charge density in their vicinity that attracts more electrons in turn. The resulting electrostatic pull is stronger near nuclei and very weak at larger distances. The BCS theory states that two electrons that would otherwise repel each other will pair up in the face of such a unifying electrostatic potential, howsoever weak it is.

This is something like rebellious teens who, in the face of a common enemy, will unite with each other no matter what their earlier differences were.

Since electrons are fermions, they bow down to Pauli’s exclusion principle, which states that no two fermions may occupy the same quantum state. As each quantum state is defined by some specific combination of state variables called quantum numbers, at least one quantum number must differ between the two co-paired electrons.

Prof. Wolfgang Pauli (1900-1958)

In the case of superconductors, this is particle spin: the electrons in the member-pair will have opposite spins. Further, once such unions have been achieved between different pairs of electrons, each pair becomes indistinguishable from the other, even in principle. Imagine: they are all electron-pairs with two opposing spins but with the same values for all other quantum numbers. Each pair, called a Cooper pair, is just the same as the next!

Bose-Einstein condensates

This unification results in the sea of electrons displaying many properties normally associated with Bose-Einstein condensates (BECs). In a BEC, the particles that attain the state of indistinguishability are bosons (particles with integer spin), not fermions (particles with half-integer spin). The phenomenon occurs at temperatures close to absolute zero and in the presence of an external confining potential, such as a magnetic or optical trap.

In 1995, at the Joint Institute for Laboratory Astrophysics, physicists cooled rubidium atoms down to 170 billionths of a degree above absolute zero. They observed that the atoms, upon such cooling, condensed into a uniform state such that their respective velocities and distribution began to display a strong correlation (shown above, L to R with decreasing temp.). In other words, the multi-body system had condensed into a homogenous form, called a Bose-Einstein condensate (BEC), where the fluid behaved as a single, indivisible entity.

Since bosons don’t follow Pauli’s exclusion principle, a major fraction of the indistinguishable entities in the condensate may and do occupy the same quantum state. This causes quantum mechanical effects to become apparent on a macroscopic scale.

By extension, the formulation and conclusions of the BCS theory, alongside its success in explaining associated phenomena, imply that superconductivity may be a quantum phenomenon manifesting on a macroscopic scale.

Note: If even one Cooper pair is “broken”, the superconducting state is lost: the condensate dissolves into individual electrons and the passage of electric current is disrupted. This means the energy required to break one Cooper pair is the same as the energy required to break up the condensate as a whole, so the thermal vibrations of the crystal lattice, weak at these temperatures, are insufficient to interrupt the flow of Cooper pairs, which is the flow of electrons.

The Meissner effect in action: A magnet is levitated by a superconductor because of the expulsion of the magnetic field from within the material

The Meissner effect

In this context, the Meissner effect is simply an extrapolation of Lenz’s law but with zero electrical resistance.

Lenz’s law states that the electromotive force (EMF) due to a current in a conductor acts in a direction that always resists a change in the magnetic flux that causes the EMF. In the absence of resistance, the electric currents at the surface of a superconductor set up magnetic fields that cancel all magnetic fields inside the bulk of the material, effectively pushing the field lines of an external magnetic potential outward. However, the Meissner effect manifests only when the externally applied field is weaker than a certain critical threshold: if it is stronger, the superconductor returns to its normal conducting state.
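
In symbols, Lenz’s law is the minus sign in Faraday’s law of induction:

EMF = – dΦ/dt

where Φ is the magnetic flux through the circuit. With zero resistance, any change in Φ induces a persistent surface current just large enough to cancel that change, which is why no net flux can build up through the bulk of a superconductor.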

Now, there is a class of materials called Type II superconductors – as opposed to the Type I class described earlier – that push only some of the magnetic field outward, the rest remaining confined inside the material in filaments surrounded by supercurrents. This state is called the vortex state, and its occurrence means the material can withstand much stronger magnetic fields and continue to remain superconducting, while also exhibiting a hybrid Meissner effect.

Temperature & superconductivity

There are also a host of other effects that only superconductors can exhibit, including Cooper-pair tunneling, flux quantization, and the isotope effect, and it was by studying them that a strong relationship was observed between temperature and superconductivity in various forms.

(L to R) John Bardeen, Leon Cooper, and John Schrieffer

In fact, Bardeen, Cooper, and Schrieffer hit upon their eponymous theory after observing a band gap in the electronic spectra of superconductors. The electrons in any conductor can exist only at specific, well-defined energies. Electrons above a certain energy, in the conduction band, become free to pass through the entire material instead of staying in motion around the nuclei, and are responsible for conduction.

The trio observed that, upon cooling the material closer and closer to absolute zero, a curious gap appeared in the energies at which electrons could be found at a given temperature. This meant that, at that temperature, electrons were jumping from one energy to some other, lower energy. The observation indicated that some form of condensation was occurring. A BEC was ruled out because of Pauli’s exclusion principle; at the same time, a BEC-like state had to have been achieved by the electrons.

This temperature is called the transition temperature: the temperature below which a conductor transitions into its superconducting state and Cooper pairs form, leading to the drop in the energy of each electron. The differences in various properties of the material on either side of this threshold are also attributed to it, including an important notion called the Fermi energy: the energy of the highest occupied state in a system from which all thermal energy has been removed. This is a significant idea because it defines both the kind and amount of energy that a superconductor has to offer an externally applied electric current.

Enrico Fermi, along with Paul Dirac, defined the Fermi-Dirac statistics that govern the behavior of all identical particles that obey Pauli’s exclusion principle (i.e., fermions). The Fermi level and Fermi energy are concepts named for him; however, as long as we’re discussing eponymy, Fermilab overshadows them all.

In simple terms, the density of electron energy states at the Fermi energy of a given material dictates the “breadth” of the band gap, if the electron-phonon interaction energy is held fixed at some value: a direct proportionality. Moreover, the energy gap at absolute zero was found to be a fixed multiple of the transition temperature expressed in units of energy – 2Δ(0) ≈ 3.5 kB·Tc – with the factor of 3.5 holding universally, irrespective of the material.
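As a quick sketch of that gap-temperature relation (the 3.5 factor is the weak-coupling BCS value; the niobium transition temperature below is only an illustrative input):

```python
# BCS weak-coupling relation: 2 * Delta(0) ≈ 3.5 * k_B * T_c
K_B_EV = 8.617333262e-5  # Boltzmann constant, in eV/K

def bcs_gap_ev(t_c_kelvin):
    """Estimate the zero-temperature superconducting gap (in eV)
    from the transition temperature (in K)."""
    return 3.5 * K_B_EV * t_c_kelvin / 2

# Example: niobium, with T_c ≈ 9.25 K, gives a gap of about 1.4 meV.
gap_nb = bcs_gap_ev(9.25)
```

The point of the universality is exactly this: nothing material-specific enters the function except Tc.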

Similarly, because of the suppression of thermal excitation (because of the low temperature), the heat capacity of the material reduces drastically at low temperatures, and vanishes below the transition temperature. However, just before hitting zero at the threshold, the heat capacity balloons up to beyond its original value, and then pops. It was found that the ballooned value was always 2.5 times the material’s normal heat capacity value… again, universally, irrespective of the material!

The temperature-dependence of superconductors gains further importance, with respect to applications and industrial deployment, in the context of superconductivity possibly occurring at higher temperatures. The low temperatures currently necessary eliminate the thermal excitations, in the form of vibrations, of nuclei, and almost entirely counter the possibility of electrons, or Cooper pairs, colliding into them. The low temperatures also assist the flow of Cooper pairs as a superfluid, apart from allowing the energy of the superfluid to stay higher than the phononic energy of the lattice.

However, to achieve all these states, and so turn a conductor into a superconductor at a higher temperature, a more definitive theory of superconductivity is required: one that allows for a conception of superconductivity requiring only certain internal conditions to prevail while the ambient temperature soars. The 1986 discovery of high-temperature superconductivity in ceramics by Bednorz and Müller was the turning point. It started to displace the BCS theory, which, physicists realized, doesn’t contain the mechanisms necessary for superconductivity to manifest in ceramics – insulators at room temperature – at temperatures as high as 125 K.

A firmer description of superconductivity, therefore, still remains elusive. Its construction would not only pave the way for a phenomenon that hardly ever appears in nature to be fully understood, but also for superconductors to substitute for the standard conductors responsible for lossy transmission and other undesirable effects. After all, superconductors are the creation of humankind, and only by its hand will they ever be fully worked.

The personal loss of a printed book

Over the last few days, I’ve been getting the feeling that buying a Kindle is the best decision I’ve ever made. What I find unsettling, however, is that I don’t seem to miss printed books as much as I thought I would: the transition was so smooth that it might almost have been irreversible. It seems I fell in love with what the books had to say and not with the books themselves: a disengagement with the metaphysical form and a betrothal to its humble, degenerate function.

The Markovian Mind

In many ways, human engagement with information happens in such a manner that, as information accumulates over time, the dataset constructed out of the latest volume of information bears the strongest relationship with the next dataset in sequence – a Markovian trait.

At any point of time, the future state of the dataset is determined solely by its present one. In other words, in a discrete understanding, its nth state depends solely on its (n – 1)th state. Short of quantifying its (n + i)th state, for a cardinal index i, there is no certain state that we know the dataset will assume.

At the same time, given its limited historic dependency, the past’s bearing on the state of the dataset is continuous but constantly depreciating (asymptotically tending to zero): the correlation index between the (n + i)th state, for increasing i, and the (n – k)th state decreases for increasing k (for all k = i).*

Over time, if the information-dataset could be quantized through a set of state variables, S, then there will be a characteristic function, φ(n), which would describe the slope of the correlation index’s curve at (i, k). Essentially, the evolution of S will be as a Markov chain whereas φ(n) will be continuous, rendering (i, k) random and memoryless.

(*For k = i, (n – k) = (n – i). However, for a given set of state variables S, which evolve as a Markov chain, the devolution that k tracks and the evolution that i tracks will be asymmetric, necessitating two different indices to describe the two degrees of freedom.)
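A toy sketch of this depreciating influence of the past, using a two-state Markov chain with arbitrary transition probabilities: the rows of the k-step transition matrix converge as k grows, so which state the chain occupied k steps ago matters less and less.

```python
import numpy as np

# Transition matrix: P[i][j] = Pr(next state = j | current state = i)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def influence_of_past(P, k):
    """Maximum difference between the rows of P^k: how strongly the
    state occupied k steps in the past still determines the present
    distribution. Tends to zero as k grows."""
    Pk = np.linalg.matrix_power(P, k)
    return float(np.abs(Pk[0] - Pk[1]).max())

# Influence decays geometrically (here as 0.7**k, the second eigenvalue of P).
decay = [influence_of_past(P, k) for k in (1, 5, 20)]
```

Whatever the starting state, after enough steps the chain's distribution is indistinguishable: exactly the memorylessness the post describes.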

When must science give way to religion?

When I saw an article titled ‘Sometimes science must give way to religion’ in Nature on August 22, 2012, by Daniel Sarewitz, I had to read it. I am agnostic, and I try as much as I can to keep from attempting to proselytize anyone through argument or reason (although I often fail at controlling myself). However, titled as it was, I had to read the piece, especially since it’d appeared in a publication I subscribe to for its hard-hitting science news, which I’ve always approached as Dawkins might: godlessly.

First mistake.

Dr. Daniel Sarewitz

At first, if anything, I hoped the article would treat the entity known as God as simply an encapsulation of the unknown rather than in the form of an icon or elemental to be worshiped. However, the lead paragraph was itself a disappointment – the article was going to be about something else, I understood.

Visitors to the Angkor temples in Cambodia can find themselves overwhelmed with awe. When I visited the temples last month, I found myself pondering the Higgs boson — and the similarities between religion and science.

The awe is architectural. When pilgrims visit a temple like Angkor, the same quantum of awe hits them as hits an architect entering a Pritzker Prize-winning building. But then, this sort of “reasoning”, upon closer observation or just an extra second of clear thought, is simply nitpicking. It implies that I’m just pissed that Nature decided to publish an article and disappoint ME. So, I continued to read on.

Until I stumbled upon this:

If you find the idea of a cosmic molasses that imparts mass to invisible elementary particles more convincing than a sea of milk that imparts immortality to the Hindu gods, then surely it’s not because one image is inherently more credible and more ‘scientific’ than the other. Both images sound a bit ridiculous. But people raised to believe that physicists are more reliable than Hindu priests will prefer molasses to milk. For those who cannot follow the mathematics, belief in the Higgs is an act of faith, not of rationality.

For a long time, I have understood that science and religion have a lot in common: they’re both frameworks that are understood through some supposedly indisputable facts, the nuclear constituents of the experience born from believing in a world reality that we think is subject to the framework. Yes, circular logic, but how are we to escape it? The presence of only one sentient species on the planet means a uniform biology beyond whose involvement any experience is meaningless.

So how are we to judge which framework is more relevant, more meaningful? To me, subjectively, the answer is to be able to predict what will come, what will happen, what will transpire. For religion, these are eschatological and soteriological considerations. As Hinduism has it: “What goes around comes around!” For science, these are statistical and empirical considerations. Most commonly, scientists will try to spot patterns. If one is found, they will go about pinning the pattern’s geometric whims down to mathematical dictations to yield a parametric function. And then, parameters will be pulled out of the future and plugged into the function to deliver a prediction.

Earlier, I would have been dismissive of religion’s “ability” to predict the future. Let’s face it, some of those predictions and prophecies are too far into the future to be of any use whatsoever, and some other claims are so ad hoc that they sound too convenient to be true… but I digress. Earlier, I would’ve been dismissive, but after Sarewitz’s elucidation of the difference between rationality and faith, I am prompted to explain why, to me, it is more science than religion that makes the cut. Granted, both have their shortcomings: empiricism was smashed by Popper, while statistics and unpredictability are conjugate variables.

(One last point on this matter: If Sarewitz seems to suggest that the metaphorical stands in the way of faith evolving into becoming a conclusion of rationalism, then he also suggests lack of knowledge in one field of science merits a rejection of scientific rationality in that field. Consequently, are we to stand in eternal fear of the incomprehensible, blaming its incomprehensibility on its complexity? He seems to have failed to realize that a submission to the simpler must always be a struggle, never a surrender.)

Sarewitz ploughed on, and drew a comparison more germane and, unfortunately, more personal than logical.

By contrast, the Angkor temples demonstrate how religion can offer an authentic personal encounter with the unknown. At Angkor, the genius of a long-vanished civilization, expressed across the centuries through its monuments, allows visitors to connect with things that lie beyond their knowing in a way that no journalistic or popular scientific account of the Higgs boson can. Put another way, if, in a thousand years, someone visited the ruins of the Large Hadron Collider, where the Higgs experiment was conducted, it is doubtful that they would get from the relics of the detectors and super­conducting magnets a sense of the subatomic world that its scientists say it revealed.

Granted, if a physicist were to visit the ruins of the LHC, he may be able to put two and two together at the sight of the large superconducting magnets, striated with the shadows of brittle wires and their cryostatic sleeves, and guess the nature of the prey. At the same time, an engagement with the unknown at Angkor Wat (since I haven’t been there, I’ll extrapolate from my experience at the Thillai Nataraja Temple, Chidambaram, South India, a few years back) requires a willingness to engage with the unknown. A pilgrim visiting millennia-old temples will feel the same way a physicist does when he enters the chamber that houses the Tevatron! Are they not both pleasurable?

I think now that what Sarewitz is essentially arguing against is the incomparability of pleasures, of sensations, of entire worlds constructed on the basis of two very different ideologies, or rather requirements, and not against the impracticality of a world ruled by one faith, one science. This aspect came up earlier in this post, too, when I thought I was nitpicking in surmising that Sarewitz’s awe upon entering a massive temple was unique: it may have been unique, but only in sensation, not in subject, I realize now.

(Also, I’m sure we have enough of those unknowns scattered around science; that said, Sarewitz seems to suggest that the memorability of his personal experiences in Cambodia is a basis for the foundation of every reader’s objective truth. It isn’t.)

The author finishes with a mention that he is an atheist. That doesn’t give any value to or take away any value from the article. It could have been so were Sarewitz to pit the two worlds against each other, but in his highlighting their unification – their genesis in the human mind, an entity that continues to evade full explicability – he has left much to be desired, much to be yearned for in the form of clarification in the conflict of science with religion. If someday, we were able to fully explain the working and origin of the human mind, and if we find it has a fully scientific basis, then where will that put religion? And vice versa, too.

Until then, science will not give way for religion, nor religion for science, as both seem equipped to explain.