One of the hottest planets cold enough for ice

This article, as written by me, appeared in The Hindu on December 6, 2012.

Mercury, the innermost planet in the Solar System, is a small rock orbiting the Sun, continuously assaulted by the star’s heat and radiation. It would seem to be the last place to look for water.

However, observations by NASA’s MESSENGER spacecraft indicate that Mercury seems to harbour enough water-ice to fill 20 billion Olympic skating rinks.

On November 29, during a televised press conference, NASA announced that data recorded since March 2011 by MESSENGER’s on-board instruments hinted that large quantities of water ice were stowed in the shadows of craters around the planet’s North Pole.

Unlike Earth’s, Mercury’s axis of rotation is barely tilted. This means areas around the planet’s poles that never tilt sufficiently toward the Sun remain cold for long periods of time.

This characteristic allows the insides of polar craters to maintain low temperatures for millions of years, making them capable of storing water-ice. But then, where is the water coming from?

MESSENGER’s infrared laser, fired from orbit, picked out bright spots in nine craters around the North Pole. The spots lined up perfectly with a thermal model of ultra-cold spots on the planet that would never be warmer than -170 degrees Celsius.

These icy spots are surrounded by darker terrain that receives a bit more sunlight and heat. Measurements by the neutron spectrometer aboard MESSENGER suggest that this darker area is a layer of material about 10 cm thick that lies on top of more ice, insulating it.

Dr. David Paige, a planetary scientist at the University of California, Los Angeles, and lead author of one of three papers that indicate the craters might contain ice, said, “The darker material around the bright spots may be made up of complex hydrocarbons expelled from comet or asteroid impacts.” Such compounds must not be mistaken for signs of life, since they can be produced by simple chemical reactions as well.

The water-ice could also have been derived from crashing comets, the study by Paige and his team concludes.

Finding water on one of the system’s hottest planets changes the way scientists perceive the Solar System’s formation.

Indeed, in the mid-1990s, strong radar signals were fired from the US Arecibo radar dish in Puerto Rico, aimed at Mercury’s poles. Bright radar reflections were seen from crater-like regions, which were indicative of water-ice.

“However, other substances might also reflect radar in a similar manner, like sulfur or cold silicate materials,” says David J. Lawrence, a physicist from the Johns Hopkins University Applied Physics Laboratory and lead author of the neutron spectrometer study.

Lawrence and his team observed particles called neutrons bouncing off the planet via a spectrometer aboard MESSENGER. When high-energy cosmic rays from outer space slam into atoms on the planet, they knock loose a debris of particles, including neutrons.

However, hydrogen atoms in the path of neutrons can halt the speeding particles almost completely because both weigh about the same. Since water molecules contain two hydrogen atoms each, areas that could contain water-ice will show a suppressed count of neutrons in the space above them.
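
The point about matching masses can be made with a quick numerical sketch of elastic-collision kinematics (an illustration only; the function name and the comparison nuclei are mine, not from the study):

```python
# Sketch: why hydrogen is such an efficient neutron "brake".
# Assumes a simple elastic, head-on collision with a stationary nucleus;
# masses are in atomic mass units (neutron ~ 1).
def max_energy_transfer_fraction(m_neutron, m_target):
    """Maximum fraction of its kinetic energy a neutron can lose
    in one elastic head-on collision with a stationary nucleus."""
    return 4 * m_neutron * m_target / (m_neutron + m_target) ** 2

for name, mass in [("hydrogen (A=1)", 1.0), ("carbon (A=12)", 12.0), ("iron (A=56)", 56.0)]:
    print(f"{name}: up to {max_energy_transfer_fraction(1.0, mass):.0%} energy lost per collision")
```

Because the neutron and the hydrogen nucleus weigh about the same, a single collision can transfer essentially all of the neutron’s energy, which is why hydrogen-rich (i.e., water-bearing) ground suppresses the neutron count so effectively.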

Because scientists have been living with the idea of Mercury containing water for the last couple of decades, the find by MESSENGER is not likely to be revolutionary. However, it bolsters an exciting idea.

As Lawrence says, “I think this discovery reinforces the reality that water is able to find its way to many places in the Solar System, and this fact should be kept in mind when studying the system and its history.”

Reaching for the… sky?

This article, as written by me, appeared in The Hindu on December 4, 2012.

The Aakash initiative of the Indian government is an attempt to bolster the academic experience of students in the country by equipping them with purpose-built tablets at subsidised rates.

The Aakash 2 tablet was unveiled on November 11, 2012. It is the third iteration of a product first unveiled in October, 2011, and is designed and licensed by a British-Canadian-Indian company named DataWind, headed by chief executive Suneet Singh Tuli.

On November 29, the tablet received an endorsement from the United Nations, where it was presented to Secretary-General Ban Ki-moon by India’s ambassador to the UN, Hardeep Singh Puri, and Tuli.

DataWind will sell Aakash 2 to the government at Rs. 2,263, which will then be subsidised to students at Rs. 1,130. However, the question is this: is it value for money even at this low price?

When it first entered the market, Aakash was censured for being underpowered, underperforming, and just generally cheap. Version one was a flop. Its upgraded successor, released commercially in April 2012, was then remodelled into the Aakash 2 to suit the government’s subsidised rate. As a result, some critical features were swapped out for others whose benefits are either redundant or unnecessary.

Aakash 2 is more durable and slimmer than Aakash, even though both weigh 350 grams. If Aakash 2 is going to act as a substitute for textbooks, that would be a load off children’s schoolbags.

But the Ministry of Human Resource Development is yet to reveal if digitised textbooks in local languages or any rich, interactive content have been developed to be served specifically through Aakash 2. The 2 GB of storage space, if not expanded to a possible 32 GB, is likely to restrict the quantity of content further, whereas the quality will be restrained by the low 512 MB of RAM.

The new look has been achieved by replacing the two USB ports the first Aakash had with one mini-USB port. This means no internet dongles.

That is a big drawback, considering Aakash 2 can access only Wi-Fi networks. It does support tethering, which lets it act as a local Wi-Fi hotspot. But not being able to access cellular networks like 3G, especially in rural areas where mobile-phone penetration is miles ahead of internet penetration, will place the onus on local governments to lay internet cables, bring down broadband prices, etc.

If the device is being envisaged mainly as a device on which students may take notes, then Aakash 2 could pass muster. But even here, the mini-USB port rules out plugging in an external keyboard for ease of typing.

Next, Aakash 2’s battery life is a meagre 4 hours, which is well short of a full college day, and prevents serious student use. Video-conferencing, with a front-facing low-resolution camera, will only drain the battery faster. Compensatory ancillary infrastructure can only render the experience more cumbersome.

In terms of software, after the operating system was recently upgraded in Aakash 2, the device is almost twice as fast and multi-tasks without overheating. But DataWind has quoted “insufficient processing power” as the reason the tablet will not have access to Android’s digital marketplace. Perhaps in an attempt to not entirely short-change students, access to the much less prolific GetJar apps directory is being provided.

Effectively, with limited apps, no 3G, a weak battery and a mini-USB port, the success of the tablet and its contribution to Indian education seems to be hinged solely on its low price.

As always, a problem of scale could exacerbate Aakash 2’s deficiencies. Consider the South American initiative of the One Laptop Per Child program instituted in 2005. Peru, in particular, distributed 8.5 lakh laptops at a cost of US $225 million in order to enhance its dismal education system.

No appreciable gains in test scores were recorded, however. Only 13 per cent of twelve-year-olds were at the required level in mathematics and 30 per cent at the required reading level, the country’s education ministry reported in March 2012.

However, Uruguay, its smaller continent-mate, saw rapid transformations after it equipped every primary-school student in the country with a laptop.

The difference, as Sandro Marcone, a Peruvian ministry official, conceded, lay in Uruguayan students using laptops to access interactive content from the web to become faster learners than their teachers, and forming closely knit learning communities that then expanded.

Therefore, what India shouldn’t do is subsidise a tablet that could turn out to be a very costly notebook. Yes, the price is low, but given the goal of ultimately unifying 58.6 lakh students across 25,000 colleges and 400 universities, Aakash 2 could be revised to better leverage existing infrastructure instead of necessitating more.

A regulator of the press

While David Cameron is yet to accept the Leveson inquiry’s recommendations, political pressure will no doubt force his hand. Which side of the debate do you come down on, though?

I believe that no regulatory body, external or otherwise, should exist to stem such practices by suppressing or enhancing the quantum of penalties in cases relating to privacy violations; certainly not a system whose benefits in no way outweigh its hindrances.

If the recommended system enhances the solatium awarded against defendants lying outside its purview, such as The Spectator, no justice is served when the defending party has stayed out of the system purely on grounds of principle.

And a system that openly permits such inconsistencies serves not justice but only sanctions, especially when its recommendations are based on a wide-ranging yet definitely locally emergent blight. Then, of course, there is also the encouragement of self-policing: when will we ever learn?

This is poetry. This is dance.

Drop everything, cut off all sound/noise, and watch this.

[vimeo http://www.vimeo.com/53914149 w=398&h=224]

If you’ve gotten as far as this line, here’s some extra info: this video was shot with these cameras for the sake of this conversation.

To understand the biology behind such almost-improbable fluidity, this is a good place to start.

The post-reporter era II

When a print-publication decides to go online, it will face a set of problems wholly different from those it will have faced before. Keeping in mind that such an organization functions as a manager of reporters, and that those reporters will (hopefully) have already made the transition from offline-only to online publishing as well, there is bound to be friction between how individuals see their stories and how the organization sees what it can do with those stories.

The principal cause of this problem, if it can be called that, is the nature of property on the world wide web. The newspaper isn’t the only portal on the internet where news is published and consumed, so its views on “its news” cannot be monopolistic. A reporter may not be allowed to publish his story with two publications if he works for either of them, a restriction that only tightens if the two are competitors. On the web, however, who are you competing with?

On the web, news-dissemination may not be the only agenda of those locations where news is still consumed in large quantities. The Hindu or The Times of India keeping their reporters from pushing their agenda on Facebook or Twitter is just laughable: it could easily and well be considered censorship. At the same time, reporters abstain from a free exchange of ideas about the situation on the ground over the social networks because they’re afraid someone else might snap up their idea. In other words, Facebook and Twitter have become the battleground where the traditional view of information-ownership meets the emerging one.

The traditional newspaper must divest itself of the belief that news is a matter of money as well as of moral and historical considerations, and start to accept that, with the advent of information-management models for which “news” is not the most valuable commodity, news is of value only for its own sake.

Where does this leave the reporter? For example, if a print-publication has decided to host its reporters’ blogs, who owns the content on the blogs? Does the publication own the content because it has been generated with the publication’s resources? Or does the reporter own the content because it would have been created even without the publication’s resources? There are some who would say that the answers to these questions depend on what is being said.

If it’s a matter of opinion, then it may be freely shared. If it’s a news report, then it may not be freely shared. If it’s an analysis, then it may be dealt with on an ad hoc basis. No; this will not work because, simply put, it detracts from the consistency of the reporter’s rights and, by extension, his opinions. It detracts from the consistency of what the publication thinks “its” news is and isn’t. Most of all, it defies the purpose of a blog itself: a blog is not a commercial venture but an informational one. So… now what?

Flout ’em and scout ’em, and scout ’em and flout ’em;
Thought is free.

Stephano, The Tempest, Act 3: Scene II

News for news’s sake, that’s what. The deviation of the web from the commoditization of news to the commoditization of what presents that news implies a similar deviation for anyone who wants to be part of an enhanced enterprise. Don’t try to sell the content of the blogs: don’t restrict its supply and hope its value will increase; it won’t. Instead, drive traffic through the blogs themselves – pull your followers from Facebook and Twitter – and set up targeted-advertising on the blogs. Note, however, that this is only the commercial perspective.

What about things on the other side of the hypothetical paywall? Well, how much, really, has the other side mattered until now?

Window for an advanced theory of particles closes further

A version of this article, as written by me, appeared in The Hindu on November 22, 2012.

On November 12, on the first day of the Hadron Collider Physics Symposium in Kyoto, Japan, researchers presented a handful of results that constrained the number of hiding places for a new theory of physics long believed to be promising.

Members of the team working on the LHCb detector at the Large Hadron Collider (LHC), located on the border of France and Switzerland, presented evidence of a very rare particle decay. The rate of the decay process was in fair agreement with an older theory of particles’ properties, called the Standard Model (SM), and deviated from the newer theory, called Supersymmetry.

“Theorists have calculated that, in the Standard Model, this decay should occur about 3 times in every billion total decays of the particle,” announced Pierluigi Campana, LHCb spokesperson. “This first measurement gives a value of around 3.2 per billion, which is in very good agreement with the prediction.”

The result was presented at the 3.5-sigma confidence level, which corresponds to odds of about 1-in-2,000 that the signal is a statistical fluke. While not strong enough to claim a discovery, it is valid as evidence.
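
For the curious, the sigma-to-odds conversion is a one-liner using the Gaussian tail probability (a sketch; `fluke_odds` is my name for it, and whether the tail is counted one-sided or two-sided varies by convention):

```python
import math

def fluke_odds(n_sigma, two_sided=True):
    """Return N such that a pure statistical fluctuation reaches
    n_sigma with odds of about 1-in-N."""
    p = math.erfc(n_sigma / math.sqrt(2))  # two-sided Gaussian tail probability
    if not two_sided:
        p /= 2
    return 1.0 / p

print(f"3.5 sigma: about 1 in {fluke_odds(3.5):,.0f}")
print(f"5.0 sigma: about 1 in {fluke_odds(5.0, two_sided=False):,.0f}")
```

With these conventions, 3.5 sigma comes out near the 1-in-2,000 quoted above, and the 5-sigma discovery threshold near the 3.5-million-to-1 odds mentioned later in the article.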

The particle, called a Bs meson and composed of a bottom antiquark and a strange quark, decayed into two muons. According to the SM, this is a complex and indirect decay process: the quarks exchange a W boson and turn into a top-antitop quark pair, which then decays into a Z boson or a Higgs boson. The boson in turn decays to two muons.

This indirect decay is called a quantum loop, and advanced theories like Supersymmetry predict new, short-lived particles to appear in such loops. The LHCb, which detected the decays, reported no such new particles.

The solid blue line shows post-decay muons from all events, and the red dotted line shows the muon-decay event from the B(s)0 meson. Because of a strong agreement with the SM, SUSY may as well abandon this bastion.

However, in June 2011, the LHCb had announced that it had spotted hints of supersymmetric particles at 3.9-sigma. Scientists will therefore continue to conduct tests until they can stack 3.5-million-to-1 odds for or against Supersymmetry and close the case.

As Prof. Chris Parkes, spokesperson for the UK participation in the LHCb experiment, told BBC News: “Supersymmetry may not be dead but these latest results have certainly put it into hospital.”

The symposium, which concluded on November 16, also saw the release of the first batch of data generated in search of the Higgs boson since the initial announcement on July 4 this year.

The LHC can’t observe the Higgs boson directly because it quickly decays into lighter particles. So, physicists count up the lighter particles and try to see if some of those could have come from a momentarily existent Higgs.

These are still early days, but the data seems consistent with the predicted properties of the elusive particle, giving further strength to the validity of the SM.

Dr. Rahul Sinha, a physicist at the Institute of Mathematical Sciences, Chennai, said, “So far there is nothing in the Higgs data that indicates that it is not the Higgs of Standard Model, but a conclusive statement cannot be made as yet.”

The scientific community, however, is disappointed as there are fewer channels for new physics to occur. While the SM is fairly consistent with experimental findings, it is still unable to explain some fundamental problems.

One, called the hierarchy problem, asks why some particles are much heavier than others. Supersymmetry is theoretically equipped to provide the answer, but experimental findings are only thinning down its chances.

Commenting on the results, Dr. G. Rajasekaran, scientific adviser to the India-based Neutrino Observatory being built at Theni, asked for patience. “Supersymmetry implies the existence of a whole new world of particles equaling our known world. Remember, we took a hundred years to discover the known particles starting with the electron.”

With each such tightening of the leash, physicists return to the drawing board and consider new possibilities from scratch. At the same time, they also hope that the initial results are wrong. “We now plan to continue analysing data to improve the accuracy of this measurement and others which could show effects of new physics,” said Campana.

So, while the area where a chink might be found in the SM armour is getting smaller, there is hope that there is a chink somewhere nonetheless.

On meson decay-modes in studying CP violation

In particle physics, CPT symmetry is an attribute of the universe that is held as fundamentally true by quantum field theory (QFT). It states that the laws of physics remain unchanged if a particle is replaced with its antiparticle (C symmetry), left and right are swapped (P symmetry), and all allowed motions are reversed in time (T symmetry).

What this implies is a uniformity of the particle’s properties across time, charge and orientation, effectively rendering them conjugate perspectives.

(T-symmetry, called so for the implied “time reversal”, requires that if a process can move one way in time, its opposite can move the other way in time.)

The more commonly studied version of CPT symmetry is CP symmetry, with the assumption that T-symmetry is preserved. This is because CP-violation, when it was first observed by James Cronin and Val Fitch, shocked the world of physics, implying that something was off about the universe. Particles that ought to have remained “neutral” in terms of their properties were taking sides! (Note: CPT-symmetry is considered to be a “weaker symmetry” than CP-symmetry.)

Val Logsdon Fitch (L) and James Watson Cronin

In 1964, Oreste Piccioni, who had just migrated to the USA and was working at the Lawrence Berkeley National Laboratory (LBNL), observed that kaons, mesons each composed of a strange quark and an up/down antiquark, had a tendency to regenerate in one form when shot as a beam into matter.

The neutral kaon, denoted as K0, has two forms, the short-lived (KS) and the long-lived (KL). Piccioni found that kaons decay in flight, so a beam of kaons, over a period of time, becomes pure KL because the KS all decay away before them. When such a beam is shot into matter, the K0 is scattered by protons and neutrons whereas the K0* (i.e., antikaons) contribute to the formation of a class of particles called hyperons.
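
The statement that an aging beam becomes pure KL follows directly from the two very different lifetimes. A minimal sketch (the lifetime values are the standard measured ones; the 50/50 starting mixture is my simplifying assumption):

```python
import math

# Mean lifetimes of the two neutral-kaon forms (proper time, seconds):
TAU_S = 8.95e-11  # K_S, the short-lived form
TAU_L = 5.12e-8   # K_L, the long-lived form

def ks_fraction(t, start_ks=0.5):
    """Fraction of surviving kaons that are K_S after proper time t,
    for a beam that starts as a K_S/K_L mixture."""
    n_s = start_ks * math.exp(-t / TAU_S)
    n_l = (1 - start_ks) * math.exp(-t / TAU_L)
    return n_s / (n_s + n_l)

for t in (0.0, 5 * TAU_S, 20 * TAU_S):
    print(f"t = {t:.2e} s -> K_S fraction = {ks_fraction(t):.6f}")
```

After only a few KS lifetimes, virtually every surviving kaon in the beam is a KL, which is what makes the regeneration of KS upon hitting matter so striking.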

Because of this asymmetric interaction, (quantum) coherence between the two batches of particles is lost, and the emergent beam is once again composed of both KS and KL; in other words, the KS component is regenerated by firing a KL-dominated beam into matter.

When the results of Piccioni’s experiment were duplicated by Robert Adair in the same year, regeneration as a physical phenomenon became a new chapter in the study of particle physics. Later that year, Cronin and Fitch set out to study regeneration as well. However, during the decay process, they observed a strange phenomenon.

According to a theory formulated in the 1950s by Murray Gell-Mann and Kazuo Nishijima, and then by Gell-Mann and Abraham Pais in 1955-1957, the KS meson was allowed to decay into two pions in order for certain quantum mechanical states to be conserved, and the KL meson was allowed to decay into three pions.

For instance, the KL (s*, u) decay happens thus:

  1. s* → u* + W+ (weak interaction)
  2. W+ → d* + u
  3. u → g + d + d* (strong interaction)
  4. u → u

A Feynman diagram depicting the decay of a KL meson into three pions.

In 1964, in their landmark experiment, Cronin and Fitch observed, however, that the KL meson was decaying into two pions, albeit at a frequency of 1-in-500 decays. This implied an indirect instance of CP-symmetry violation, and subsequently won the pair the 1980 Nobel Prize for Physics.

An important aspect of the observation of CP-symmetry violation in kaons is that the weak force is involved in the decay process (even as observed above in the decay of the KL meson). Even though the kaon is composed of a quark and an antiquark, i.e., held together by the strong force, its decay is mediated by the strong and the weak forces.

In all weak interactions, parity is not conserved. The interaction itself acts only on left-handed particles and right-handed anti-particles, and was parametrized in what is called the V-A Lagrangian for weak interactions, developed by Robert Marshak and George Sudarshan in 1957.

Prof. Robert Marshak

In fact, even in the case of the KS and KL kaons, their decay into pions can be depicted thus:

KS → π+ + π0
KL → π+ + π+ + π−

Here, the “+” and “-” indicate a particle’s parity, or handedness. When a KS decays into two pions, the result is one right-handed (“+”) and one neutral pion (“0”). When a KL decays into three pions, however, the result is two right-handed pions and one left-handed (“-”) pion.
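
The parity bookkeeping behind this can be made explicit in two lines (a sketch assuming, as is standard for these decays, zero orbital angular momentum, so the final state’s parity is just the product of the pions’ intrinsic parities of -1 each):

```python
# Each pion carries intrinsic parity -1; with zero orbital angular momentum,
# the parity of an n-pion final state is the product of the intrinsic parities.
def final_state_parity(n_pions):
    return (-1) ** n_pions

print("two-pion state parity:", final_state_parity(2))
print("three-pion state parity:", final_state_parity(3))
```

The two final states thus have opposite parities, which is why observing both decays from what turned out to be a single particle was so unsettling.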

When kaons were first investigated via their decay modes, the different final parities indicated that two different kaons were decaying. Over time, however, as increasingly precise measurements indicated that only one kaon (now called K+) was behind both decays, physicists concluded that the weak interaction was responsible, resulting in one kind of decay some of the time and in another kind the rest of the time.

To elucidate, in particle physics, the squares of the amplitudes of two transformations, B → f and B* → f*, are denoted thus.

Here,

B = Initial state (or particle); f = Final state (or particle)
B* = Initial antistate (or antiparticle); f* = Final antistate (or antiparticle)
P = Amplitude of transformation B → f; Q = Amplitude of transformation B* → f*
S = Corresponding strong part of amplitude; W = Corresponding weak part of amplitude; both treated as phases of the wave for which the amplitude is being evaluated

Subtracting (and applying some trigonometry):
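
The equations that belong here (and after the definitions above) were images in the original and have not survived. A plausible reconstruction in the standard two-path form, with my notation for the path amplitudes A1 and A2 and assumed sign conventions, is:

```latex
% Two interfering decay paths, with strong phases S_1, S_2 (unchanged under CP)
% and weak phases W_1, W_2 (which flip sign under CP):
P = A_1 e^{i(S_1 + W_1)} + A_2 e^{i(S_2 + W_2)}, \qquad
Q = A_1 e^{i(S_1 - W_1)} + A_2 e^{i(S_2 - W_2)}

% Subtracting the squared amplitudes and applying some trigonometry:
|P|^2 - |Q|^2 = -4 A_1 A_2 \sin(S_1 - S_2) \sin(W_1 - W_2)
```

The weak-phase factor sin(W_1 − W_2) is the term the next paragraph calls sin(WP − WQ): it vanishes unless there are at least two paths with different weak phases, so a lone decay path cannot show this kind of CP violation.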

The presence of the term sin(WP − WQ) is a sign that purely, or at least partly, weak interactions can occur in all transformations that can proceed in at least two ways, and thus will violate CP-symmetry. (It’s like having two paths to a common destination: #1 is longer and fairly empty; #2 is shorter and congested. If their lengths and congestion are fairly comparable, then facing some congestion becomes inevitable.)

Electromagnetism, strong interactions, and gravitation do not display any features that could give rise to the distinction between right and left, however. This disparity is also called the ‘strong CP problem’ and is one of the unsolved problems of physics. It is especially puzzling because the QCD Lagrangian, which is a function describing the dynamics of the strong interaction, includes terms that could break the CP-symmetry.

[youtube http://www.youtube.com/watch?v=KDkaMuN0DA0?rel=0]

(The best known resolution – one that doesn’t resort to spacetime with two time-dimensions – is the Peccei-Quinn theory put forth by Roberto Peccei and Helen Quinn in 1977. It suggests that the QCD-Lagrangian be extended with a CP-violating parameter whose value is 0 or close to 0.

This way, CP-symmetry is conserved during the strong interactions while CP-symmetry “breakers” in the QCD-Lagrangian have their terms cancelled by an emergent, dynamic field whose flux is encapsulated by light pseudo-Goldstone bosons called axions.)

Now, kaons are a class of mesons whose composition includes a strange quark (or antiquark). Another class of mesons, called B-mesons, are identified by their composition including a bottom antiquark, and are also notable for the role they play in studies of CP-symmetry violations in nature. (Note: A meson composed of a bottom antiquark and a bottom quark is not called a B-meson but a bottomonium.)

The six quarks, the fundamental (and proverbial) building blocks of matter

According to the Standard Model (SM) of particle physics, some particles, such as quarks and leptons, carry a property called flavor. Mesons, which are composed of quarks and antiquarks, inherit an overall flavor from their composition as a result. The presence of non-zero flavor is significant because the SM permits quarks and leptons of one flavor to transmute into the corresponding quarks and leptons of another flavor, a process called oscillation.

And the B-meson is no exception. Herein lies the rub: during oscillations, the B-meson is favored over its antiparticle counterpart. Given the CPT theorem’s assurance that particles and antiparticles differ only in charge and handedness, not mass, etc., the B*-meson’s greater preference for becoming the B-meson than the B-meson’s for becoming the B*-meson indicates a matter-asymmetry. Put another way, the B-meson decays at a slower rate than the B*-meson. Put yet another way, matter made of B-mesons is more stable than antimatter made of B*-mesons.
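
Oscillation itself can be sketched with the standard time-dependent mixing formulas (illustrative numbers for the neutral B system; note that this simple sketch is CP-symmetric, whereas the asymmetry described above shows up as a small difference between the B → B* and B* → B rates that is not modelled here):

```python
import math

# Illustrative values for the neutral B system: oscillation frequency
# dm ~ 0.51 per picosecond, mean lifetime tau ~ 1.52 ps.
DM = 0.51
TAU = 1.52

def p_unmixed(t):
    """Relative probability that a meson born as a B decays
    still as a B at proper time t (in picoseconds)."""
    return math.exp(-t / TAU) * (1 + math.cos(DM * t)) / 2

def p_mixed(t):
    """Relative probability that it has oscillated into its
    antiparticle by decay time t."""
    return math.exp(-t / TAU) * (1 - math.cos(DM * t)) / 2

for t in (0.0, 2.0, 6.0):
    print(f"t = {t} ps: unmixed {p_unmixed(t):.3f}, mixed {p_mixed(t):.3f}")
```

The cosine term is the oscillation: a meson born as a B has a growing chance of decaying as its own antiparticle, and it is in the fine structure of these rates that the B-factories hunt for CP violation.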

Further, if the early universe started off as a perfect symmetry (in every way), then the asymmetric formation of B-mesons would have paved the way for matter to take precedence over anti-matter. This is one of the first instances of the weak interaction possibly interfering with the composition of the universe. How? By promising never to preserve parity, and by participating in flavor-changing oscillations (in the form of the W/Z boson).

In this composite image of the Crab Nebula, matter and antimatter are propelled nearly to the speed of light by the Crab pulsar. The images came from NASA’s Chandra X-ray Observatory and the Hubble Space Telescope. (Photo by NASA; Caption from Howstuffworks.com)

The prevalence of matter over antimatter in our universe is credited to a hypothetical process called baryogenesis. In 1967, Andrei Sakharov, a Soviet nuclear physicist, proposed three conditions for asymmetric baryogenesis to have occurred.

  1. Baryon-number violation
  2. Departure from thermal equilibrium
  3. C- and CP-symmetry violation

The baryon-number of a particle is defined as one-third of the difference between the number of quarks and the number of antiquarks that make up the particle. For a B-meson composed of a bottom antiquark and a quark, the value is 0; for a baryon composed of three quarks, such as the proton, it is 1. Baryon-number violation, while theoretically possible, isn’t considered in isolation of what is called “B – L” conservation (“L” is the lepton number, and is equal to the number of leptons minus the number of antileptons).

Now, say a proton decays into a pion and a positron. A proton’s baryon-number is 1 and L-number 0; a pion has both baryon- and L-numbers 0; a positron has baryon-number 0 and L-number -1. Thus, neither the baryon-number nor the lepton-number is conserved, but their difference (1) definitely is. If this hypothetical process were ever to be observed, then baryogenesis would make the transition from hypothesis to reality (and the question of matter-asymmetry would be conclusively answered).
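
That bookkeeping is simple enough to check mechanically (a sketch; the particle table and the `totals` helper are mine, built from the definitions above):

```python
# (baryon number B, lepton number L) per particle, from the definitions above.
B_L = {
    "proton":   (1, 0),
    "pi0":      (0, 0),
    "positron": (0, -1),  # an antilepton, so L = -1
}

def totals(particles):
    """Sum B, L and B - L over a list of particle names."""
    b = sum(B_L[p][0] for p in particles)
    l = sum(B_L[p][1] for p in particles)
    return b, l, b - l

before = totals(["proton"])          # B and L before the decay
after = totals(["pi0", "positron"])  # B and L after the decay
print("B, L, B-L before:", before)
print("B, L, B-L after: ", after)
assert before[2] == after[2]  # B - L survives even though B and L each change
```

Neither B nor L alone balances across the arrow, but B − L equals 1 on both sides, which is exactly the conservation law the paragraph describes.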

The quark-structure of a proton (notice that the two up-quarks have different colors)

Therefore, in recognition of the role of B-mesons (in being able to present direct evidence of CP-symmetry violation through asymmetric B-B* oscillations mediated by the weak force) and their ability to confirm or deny an “SM-approved” baryogenesis in the early universe, what are called B-factories were built: collider-based machines whose only purpose is to spew out B-mesons so they can be studied in detail by high-precision detectors.

The earliest, and possibly best-known, B-factories were constructed in the 1990s and shut down in the 2000s: the BaBar experiment at SLAC, Stanford (2008), and the Belle experiment at the KEKB collider in Japan (2010). In fact, a Belle II detector is under construction and upon completion will boast the world’s highest-luminosity experiment.

The Belle detector (L) and the logo for Belle II under construction

Equations generated thanks to the Daum equations editor.

The travails of science communication

There’s an interesting phenomenon in the world of science communication, at least so far as I’ve noticed. Every once in a while, there comes along a concept that is gaining in research traction worldwide but is quite tricky to explain in simple terms to the layman.

Earlier this year, one such concept was the Higgs mechanism. Between December 13, 2011, when the first spotting of the Higgs boson was announced, and July 4, 2012, when the spotting of the piquingly-named “God particle” was confirmed, the phrase “cosmic molasses” was prevalent enough to prompt an annoyed (and struggling-to-make-sense) Daniel Sarewitz to hit back in Nature. While the article had a lot to say, and a lot more just waiting to be rebutted, it did include this remark:

If you find the idea of a cosmic molasses that imparts mass to invisible elementary particles more convincing than a sea of milk that imparts immortality to the Hindu gods, then surely it’s not because one image is inherently more credible and more ‘scientific’ than the other. Both images sound a bit ridiculous. But people raised to believe that physicists are more reliable than Hindu priests will prefer molasses to milk. For those who cannot follow the mathematics, belief in the Higgs is an act of faith, not of rationality.

Sarewitz is not wrong to remark on the problem as such, but he errs in attempting to use it to make a case for religion. Anyway: in bridging the gap between advanced physics, which is well-poised to “unlock the future”, and public understanding, which is well-poised to fund the future, there is good journalism. But does it have to come with the twisting and turning of complex theory, maintaining only a tenuous relationship between what the metaphor implies and what reality is?

The notion of a “cosmic molasses” isn’t that bad; it does get close to the original idea of a pervading field of energy whose forces are encapsulated under certain circumstances to impart mass to trespassing particles in the form of the Higgs boson. Even this is a “corruption”, I’m sure. But what I choose to include or leave out makes all the difference.

The significance of experimental physicists having probably found the Higgs boson is best conveyed in terms of what it means for the layman’s daily life, rather than by trying continuously to get him interested in the Large Hadron Collider. Common, underlying curiosities will suffice to get one thinking about the nature of God, the origins of the universe, and where the mass came from that bounced off Sir Isaac’s head. Shrouding it in a cloud of unrelated concepts only makes the physicists themselves sound defensive, as if they’re struggling to explain something that only they will ever understand.

In the process, if the communicator has left out things such as electroweak symmetry-breaking and Nambu-Goldstone bosons, that’s OK. They’re not part of what makes the find significant for the layman. If, however, you feel you need to explain everything, then change the question your post is answering, or merge it with your original idea. Do not over-indulge in the subject, and explain your concepts as you would a good story: your knowledge of the plot shouldn’t interfere with the reader’s process of discovery.

Another complex theory that’s doing the rounds these days is that of quantum entanglement. Those publications that cover news in the field regularly, such as R&D mag, don’t do it even as much justice as SciAm did to the Higgs mechanism (through the “cosmic molasses” metaphor). Consider, for instance, this explanation from a story that appeared on November 16:

Electrons have a property called “spin”: Just as a bar magnet can point up or down, so too can the spin of an electron. When electrons become entangled, their spins mirror each other.

The causal link has been omitted! If the story has set out to explain an application of quantum entanglement, which I think it has, then it has done a fairly good job. But what about entanglement-the-concept itself? Yes, it stands to lose a lot, because many communicators seem to be divesting it of its intricacies and spending more time explaining why it’s growing in relevance in modern electronics and computation. If relevance is to mean anything, then debate has to exist – even if that debate seems antithetical to the deployment of the technology, as in the case of nuclear power.
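To see what the “spins mirror each other” line leaves out, here is a minimal sketch (my own illustration, not from the quoted story): a simulation of measuring a spin-singlet pair along axes separated by an angle theta, where quantum mechanics predicts opposite outcomes with probability cos²(theta/2).

```python
import math
import random

def measure_singlet(theta):
    """Simulate measuring both electrons of a spin-singlet pair
    along axes separated by angle theta (radians).
    Outcomes are opposite with probability cos^2(theta / 2)."""
    a = random.choice([+1, -1])              # one side alone looks like a fair coin flip
    p_opposite = math.cos(theta / 2) ** 2
    b = -a if random.random() < p_opposite else a
    return a, b

# Along the same axis (theta = 0), the spins always "mirror" each other,
# even though each outcome by itself is random.
for _ in range(5):
    a, b = measure_singlet(0.0)
    assert a == -b
```

The catch, and the intricacy that gets lost in the one-line “mirror” explanation, is that this little local simulation reproduces the same-axis correlations perfectly; the genuinely quantum content of entanglement only shows up in the statistics across different angles, which no such local model can fully reproduce (Bell’s theorem).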

Without understanding what entanglement means, there can be no informed recognition of its wonderful capabilities, and no public dialog as to its optimum use to further public interests. And when scientific research stops contributing to the latter, it will definitely face collapse; that is the function, rather the purpose, that sensible science communication serves.

After less than 100 days, Curiosity renews interest in Martian methane

A version of this story, as written by me, appeared in The Hindu on November 15, 2012.

In the last week of October, the team behind the Mars rover Curiosity announced that the rover had found no methane on Mars. The conclusion is only a preliminary verdict, although it is already controversial because of the implications of the gas’s discovery (or non-discovery).

The presence of methane is one of the most important prerequisites for life to have existed in the planet’s past. Interest in the notion grew when Curiosity found signs, in the form of sedimentary deposits, that water may once have flowed through Gale Crater, the immediate neighbourhood of its landing spot.

The rover’s Tunable Laser Spectrometer (TLS), which analysed a small sample of Martian air to come to the conclusion, had in fact detected a few parts per billion of methane. However, because the reading was too low to be significant, the verdict was a “No”.

In an email to this Correspondent, Adam Stevens, a member of the science team of the NOMAD instrument on the ExoMars Trace Gas Orbiter due to be launched in January 2016, stressed: “No orbital or ground-based detections have ever suggested atmospheric levels anywhere above 10-30 parts per billion, so we are not expecting to see anything above this level.”

At the same time, he also noted that the 10-30 parts per billion (ppb) is not a global average. The previous detections of methane found the gas localised in the Tharsis volcanic plateau, the Syrtis Major volcano, and the polar caps, locations the rover is not going to visit. What continues to keep the scientists hopeful is that methane on Mars seems to get replenished by some geochemical or biological source.

The TLS will also have an important role to play in the future. At some point, the instrument will go into a higher-sensitivity operating mode and make more significant measurements by reducing errors.

It is pertinent to note that scientists still have an incomplete understanding of Mars’s natural history. As Mr. Stevens noted, “While not finding methane would not rule out extinct or extant life, finding it would not necessarily imply that life exists or existed.”

Apart from methane, there are very few “bulk” signatures of life that the Martian geography and atmosphere have to offer. Scientists are looking for small fossils, complex carbon compounds and other hydrocarbon gases, amino acids, and specific minerals that could be suggestive of biological processes.

While Curiosity has some fixed long-term objectives, they are constantly adapted according to what the rover finds. Commenting on its plans, Mr. Stevens said, “Curiosity will definitely move up Aeolis Mons, the mountain in the middle of Gale Crater, taking samples and analyses as it goes.”

Curiosity is not the last chance to look more closely for methane in the near future, however.

On the other side of the Atlantic, development of the ExoMars Trace Gas Orbiter (TGO), with which Mr. Stevens is working, is underway. A collaboration between the European Space Agency and the Russian Federal Space Agency, the TGO is planned to deploy a stationary lander and to map the sources of methane and other gases on Mars.

Its observations will contribute to selecting a landing site for the ExoMars rover due to be launched in 2018.

Curiosity completed 100 days on Mars on November 14, with 590 more to go, and it has already attracted attention from diverse fields of study. There is no doubt that over the trip from the rim of Gale Crater, where it is now, to the peak of Aeolis Mons, Curiosity will change our understanding of the enigmatic red planet.