Being a science journalist with dignity

Classes at NYU have started! On day one, Michael Balter, a senior correspondent for Science, kicked off the program with an introduction to interviewing by, simply enough, interviewing each one of us as we introduced ourselves. I’m not sure how much the others took away from it; I couldn’t take away much until Michael pointed out that he was getting each of us to say something interesting. Only in hindsight did his demonstration start to make sense to me.

After introductions, we got into discussing Michael’s classes: how they’d be structured, what we’d be expected to do and what goals we’d better have in mind. As the discussion wore on, what struck me hardest was my great inexperience as a science writer. Despite having spent two years at The Hindu reporting on science and grappling with tools to take the subject to a bigger audience, all the problems I’d assumed accrued only with time found mention in our classroom discussion on day one.

Maybe we’d take on these problems “in detail” in the coming months, but their quick acknowledgment was proof enough for me that I was in the right place and among the right people.

Participating in the discussion – led by Michael’s comments – finally gave me a sense of dignity in being a science journalist, one that I believe is not easy to acquire in India except, of course, bundled with being considered exotic. It was reassuring to discuss my problems in detail, and especially to pick at small, nagging issues – for example: what do you do when a scientist you’ve spoken to asks to see the story before it is published?

It seems the answer’s not always a simple “No”.

The class on Day 2, by Dan Fagin, was more introspective. Seemingly, it was the class that explored – and I suppose will continue to explore – the basics of journalism in detail: what a news story is, where story ideas come from, etc. – the class that will keep us thinking about what it is that we’re really doing and why we’re doing it. And just to make things more interesting – and obviously more educative – each one of us was assigned a beat to cover for the semester, chosen to lie completely outside our respective comfort zones.

Taking my cue from Masterchef USA, where so many attempts to cook the personally uncookable had paid off and trying to play it safe with “just chicken” had backfired, I got myself assigned genetics, secure in the knowledge that:

  1. If I do screw up, I will screw up gloriously.
  2. If I end up being able to write about experimental physics and genetics with equal ease, I will also likely feel up for anything.

Toward the end, and just like on orientation day, Dan had another nugget of golden advice. He said that while writing his stories, he had in mind not his entire potential audience but one reader in particular – a fantasy reader: a man named Stan whom Dan knew, who wanted to know everything about the world but actually didn’t know anything. “Pick someone like that, and my advice is don’t pick your mother because she will like everything you write.”

At this point, although I would like to keep writing, I’m going to have to get started on my assignments. So I’m going to leave you with this quote from an amazing blog post by Paige Brown Jarreau I read on SciLogs the other day, to give you a sense of why I’m writing “NYUlab” in the first place.

So if you are a student, especially a student of mass communication or a student studying at the intersection of two different fields, I highly encourage you to blog. Use your blog to make connections between concepts in vastly different fields of study, or that seemingly occupy different parts of your brain. Tie your art classes to science communication. Tie your biology classes to your information theory classes. Tie your knowledge of human cognition to environmental and scientific issues. Don’t let anything you learn or read about go un-applied.

Over time, I’m hoping my experiences at NYU will pay off in much the same way, by becoming closely tied to different aspects of my life. Have a nice day!

Who is a science writer?

August 28 was Orientation Day at the Arthur L. Carter Journalism Institute, New York University, where I’ve enrolled with the Science, Health & Environmental Reporting Program (SHERP) for the 2014 fall term. It was an exciting day for many reasons. The first such moment was meeting the wonderful people who are to be my classmates for the next 16 months and, if things work out as promised, friends for life as well. We were introduced to each other – 13 in all – by Dan Fagin, the SHERP program coordinator and, incidentally, this year’s winner of a Pulitzer Prize (general non-fiction category).

Dan’s icebreaker for the class centered on what made a good science writer, at least as far as SHERP was concerned. He had brought with him a burnt pine cone from somewhere near the Hamptons. We knew its seeds had been released because its scales were open. What was unique about the specimen at hand was that it was a pine cone that had adapted through evolution to release its seeds in a hostile environment. Dan explained that it had, over the centuries, acquired a resin coating that would pop only when burnt off by a forest fire. On one level, he said, it was a news story about burnt pine cones, but on a deeper level, it was a story about evolution.

Then came a more interesting perspective. Dan said it was possible the cone’s seeds were sterile. How? “It has to do with something that has changed since the last ice age, something that humans have done different for the last 150 years. What could it be?”

“The seeds could’ve been sterile because of humans putting out forest fires. These pine cones were evolutionarily adapted to releasing their seeds during naturally occurring forest fires, like when lightning strikes. But humans have learnt to put out forest fires,” and that means the resin wouldn’t have had the time to melt completely. “In the same way, the job of a science writer is to peel off the different layers of a story to reveal” deeper truths. “On one level, this is about a burnt pine cone. On a deeper level, it’s about evolution. On a still deeper level, it’s about how humans are influencing the natural world around them.”

For all my success, such as it is, in making it to one of the better science writing programs in the USA, Dan’s introduction was doubly empowering, and I now look forward to classes doubly eagerly!

(I’d promised my friends @AkshatRathi, @pradx and @vigsun that I’d let them know as much as possible about life studying science writing at NYU. Consider this blog post the first in the series.)

Even something will come of nothing

The Hindu
June 3, 2014

“In the 3,000 years since the philosophers of ancient Greece first contemplated the mystery of creation, the emergence of something from nothing, the scientific method has revealed truths that they could not have imagined.” Thus writes the British physicist Frank Close in a 2009 introductory book on the idea of nothingness. It is the ontology of these truths that the book Nothing: From Absolute Zero to Cosmic Oblivion – Amazing Insights Into Nothingness explores so succinctly, drawing on the communication skills of many of the renowned writers at New Scientist.

While at first glance the book may appear to be an anthology whose pieces share nothing more than a narration of what lies at today’s cutting edge of scientific research, a deeper homogeneity emerges toward the end as you, the reader, realize that what you’ve read are stories of what drives people: a compulsion toward the known, away from the unknown, in various forms. Because we are a species hardwired to recognize nature in terms of a cause-effect chain, we intuit that somewhere between nothing and something lies our origin. And by extrapolating between the two, the pieces’ authors explore how humankind’s curiosity is inseparable from its existence.

So, as is customary when thinking about such things, the book begins and ends with pieces on cosmology. This is a vantage point that presents sufficient opportunity to think about both the physical and the metaphysical of nothingness, and the pieces by Marcus Chown and Stephen Battersby show as much. Both writers present the intricate circumstances of our conception and ultimate demise with appreciable lucidity, in language that is never intimidating, although it could easily have been.

However, the best part of the book is that it dispels the notion that profound unknowns are limited to cosmology. Pieces on the placebo effect (Michael Brooks), vestigial organs (Laura Spinney) and anesthetics (Linda Geddes) reveal how scientists confront these mysteries when dealing with the human body: the diminishing space for its organs, its elusive mind and the switch that throws the bulb ‘on’ inside it. What makes sick people’s malfunctioning bodies heal with nothing? What is the brain doing when people are ‘put under’? We’ve known about these effects since the 19th century, yet to this day we’re having trouble getting a logical grip on them. And in the past, today and henceforth, we take what rough ideas of them pass for knowledge for granted.

There are other examples, too. Physicist Per Eklund writes a wonderful piece on how long it took the world’s enterprising minds to defy Aristotle and discover the vacuum, precisely because its existence is so far removed from ours. Jonathan Knight shows how animals that sit around and do nothing all day could actually die of starvation if they did anything more. Richard Webb awakens us to the staggering fact that modern electronics is based on the movement of holes, or locations in atoms where electrons are absent. And then, Nigel Henbest’s unraveling of the discourteous blankness of outer space leaves you feeling alone and… perhaps scared.

But relax. Matters are not so dire, if only because nothingness is unique and rare, and insured against by the presence of something. At the same time, it isn’t extinct either, even if places for it to exist on Earth are limited to laboratories and opinions, and even if it, unlike anything else, can be conjured out of thin air. A case in point is the titillating Casimir effect. In 1948, the Dutch physicist Hendrik Casimir predicted a “new” force that could act between two metallic plates held parallel to each other in a vacuum, only some tens of nanometers apart. Pointless though it seems, Casimir was actually working on a tip-off from Niels Bohr, and his calculations showed something.

He’d found that the plates would move closer, in an effect that has come to be named for him. What could have moved them? They would practically have been surrounded by nothingness. However, as Sherlock Holmes might have deduced, Casimir thought the answer lay with the nothingness itself. He explained that the vacuum of space didn’t imply an absolute nothingness but a volume that still contained some energy, called zero-point energy, continuously experiencing fluctuations. In this arena, bring two plates close enough and at some point, the strength of the fluctuations between the plates is outweighed by the strength of the fluctuations outside, pushing the plates together.
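The fluctuation picture boils down to a compact formula for ideal parallel plates, P = π²ħc/(240d⁴) – a standard textbook result I’m adding here as a sketch, not something from the book’s piece. It shows why the effect only matters at separations of tens of nanometers:

```python
import math

# Casimir pressure between two ideal, perfectly conducting parallel plates:
#   P = (pi^2 * hbar * c) / (240 * d^4)
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light in vacuum, m/s

def casimir_pressure(d):
    """Attractive pressure (in pascals) for plates separated by d meters."""
    return (math.pi ** 2 * HBAR * C) / (240 * d ** 4)

# Falling off as 1/d^4, the pressure is tiny at a micron but
# grows rapidly as the plates approach nanometer separations:
for d_nm in (1000, 100, 50):
    print(f"{d_nm:>5} nm -> {casimir_pressure(d_nm * 1e-9):.3g} Pa")
```

At a 1 µm separation the pressure is roughly a millipascal; halving the gap multiplies it sixteenfold.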

Although it wasn’t until 1958 that an experiment to test the Casimir effect was performed, and until 1996 that the attractive force was measured to within 15 per cent of the value predicted by theory, the prediction salvaged the vacuum of space from abject impotency and made it fertile. As counter-intuitive as this seems, such is what quantum mechanics makes possible, in the process setting up a curious but hopefully fruitful stage upon which, in the same vein as Paul Davies writes in the piece The Day Time Began, science and theology can meet and sort out their differences.

Because, if anything, Nothing from the writers at New Scientist is as much a spiritual exploration as it is a physical one. Each of the pieces has at its center a human who is lost, confused, looking for answers, but doesn’t yet know the questions – a situation we’re becoming increasingly familiar with as we move on from the “How” of things to the “Why”. Even if we’re not in a position to understand what exactly happened before the big bang, the promise of causality that has accompanied everything after says that the answers lie between the nothingness of then and the somethingness of now. And the more somethings we find, the more Nothing will help us understand them.

Buy the book.

Can science and philosophy mix constructively?

Quantum mechanics can sometimes be very hard to understand, so much so that even thinking about it becomes difficult. This could be because its foundations lie in an action-centric depiction of reality that slowly rejected its origins and assumed a thought-centric garb.

In his 1925 paper on the topic, physicist Werner Heisenberg used only observable quantities to denote physical phenomena. He also pulled up Niels Bohr in that great paper, saying, “It is well known that the formal rules which are used [in Bohr’s 1913 quantum theory] for calculating observable quantities such as the energy of the hydrogen atom may be seriously criticized on the grounds that they contain, as basic elements, relationships between quantities that are apparently unobservable in principle, e.g., position and speed of revolution of the electron.”

A true theory

Because of the uncertainty principle, and other principles like it, quantum mechanics started to develop into a set of theories that could be tested against observations, and that, to physicists, left very little to thought experiments. Put another way, there was nothing a quantum-physicist could think up that couldn’t be proved or disproved experimentally. This way of looking at the world – in philosophy – is called logical positivism.

This made quantum mechanics a true theory of reality, as opposed to a hypothetical, unverifiable one.

However, even before Heisenberg’s paper was published, positivism was starting to be rejected, especially by chemists. An important example was the advent of statistical mechanics and atomism in the early 19th century. Both inferred, without direct physical observation, that if two volumes of hydrogen and one volume of oxygen combined to form water vapor, then a water molecule would have to comprise two atoms of hydrogen and one atom of oxygen.

A logical positivist would have insisted on actually observing the molecule individually, but that was impossible at the time. This insistence on physical proof thus played an adverse role in the progress of science, delaying or denying success its due.
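The atomists’ inference leans on Avogadro’s hypothesis – equal volumes of gas contain equal numbers of molecules – so the combining volumes 2:1:2 translate directly into molecule counts. A quick atom-balancing sketch (my illustration, with hypothetical helper names, not part of the original argument) makes the logic concrete:

```python
from collections import Counter

# Avogadro's hypothesis turns the observed combining volumes
# 2 (hydrogen) : 1 (oxygen) : 2 (water vapor) into molecule ratios,
# which only balance if water is H2O, i.e. 2 H2 + O2 -> 2 H2O.

def atoms(formula, molecules):
    """Total atom tally for `molecules` copies of a molecule."""
    return Counter({el: n * molecules for el, n in formula.items()})

H2, O2, H2O = {"H": 2}, {"O": 2}, {"H": 2, "O": 1}

reactants = atoms(H2, 2) + atoms(O2, 1)  # 2 volumes H2 + 1 volume O2
products = atoms(H2O, 2)                 # 2 volumes of water vapor

print(reactants == products)  # → True: every atom is accounted for
```

No molecule is ever observed here – the composition follows entirely from the ratios, which is exactly the kind of reasoning a strict positivist would have disallowed.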

As time passed, the failures of positivism started to take hold on quantum mechanics. In a 1926 conversation with Albert Einstein, Heisenberg said, “… we cannot, in fact, observe such a path [of an electron in an atom]; what we actually record are the frequencies of the light radiated by the atom, intensities and transition probabilities, but no actual path.” And since he held that any theory ought only to be a true theory, he concluded that these parameters must feature in the theory, and what it projected, as themselves instead of the unobservable electron path.

This wasn’t the case.

Gaps in our knowledge

Heisenberg’s probe of the granularity of nature led to his distancing from the theory of logical positivism. And Steven Weinberg, physicist and Nobel Laureate, uses just this distancing to harshly argue in a 1994 essay, titled Against Philosophy, that physics has never benefited from the advice of philosophers, and when it does, it’s only to negate the advice of another philosopher – almost suggesting that ‘science is all there is’ by dismissing the aesthetic in favor of the rational.

In doing so, Weinberg doesn’t acknowledge the fact that science and philosophy go hand in hand; what he has done is simply to outline the failure of logical positivism in the advancement of science.

At the simplest, philosophy in various forms guides human thought toward ideals like objective truth and is able to establish their superiority over subjective truths. Philosophy also provides the framework within which we can conceptualize unobservables and contextualize them in observable space-time.

In fact, Weinberg’s conclusion brings to mind an article in Nature News & Comment by Daniel Sarewitz. In the piece, Sarewitz argued that for someone who didn’t really know the physics supporting the Higgs boson, its existence would have to be more a matter of faith than one of knowledge. Similarly, for someone who couldn’t translate the atom’s radiation to ‘mean’ the electron’s path, the latter would have to be a matter of faith or hope, not a bit of knowledge.

Efficient descriptions

A more well-defined example is the theory of quarks and gluons, particles that have never been observed in isolation but are believed by the scientific community to exist. The equipment to spot them is yet to be built; it will cost hundreds of billions of dollars and be orders of magnitude more sophisticated than the LHC.

In the meantime, contrary to what Weinberg would have you believe – and in line with Sarewitz – we do rely on philosophical principles, like that of sufficient reason (Spinoza 1663; Leibniz 1686), to fill up space-time at levels we can’t yet probe, and to guide us toward the directions we ought to probe after investing money in them.

This is actually no different from a layman going from understanding electric fields to supposedly understanding the Higgs field. At the end of the day, efficient descriptions make the difference.

Exchange of knowledge

This sort of dependence also implies that philosophy draws a lot from science, and uses it to define its own prophecies and shortcomings. We must remember that, while the rise of logical positivism may have shielded physicists from atomism, scientific verification through its hallowed method also pushed positivism toward its eventual rejection. There was human agency in both these timelines, each motivated by either the support for or the rejection of scientific and philosophical ideas.

The moral is that scientists must not reject philosophy for its passage through crests and troughs of credence because science also suffers the same passage. What more proof of this do we need than Popper’s and Kuhn’s arguments – irrespective of either of them being true?

Yes, we can’t figure things out with pure thought, and yes, the laws of physics underlying the experiences of our everyday lives are completely known. However, in the search for objective truth – whatever that is – we can’t neglect pure thought until, as Weinberg’s Heisenberg example itself seems to suggest, we know everything there is to know; until science and philosophy, or rather verification-by-observation and conceptualization-by-ideation, have completely and absolutely converged toward the same reality.

Until, in short, we can describe nature continuously instead of discretely.

Liberation of philosophical reasoning

By separating scientific advance from contributions from philosophical knowledge, we are advocating for the ‘professionalization’ of scientific investigation, that it must decidedly lack the attitude-born depth of intuition, which is aesthetic and not rational.

It is against such advocacy that the Austrian-born philosopher Paul Feyerabend protested vehemently: “The withdrawal of philosophy into a ‘professional’ shell of its own has had disastrous consequences.” He means, in other words, that scientists have become too specialized and are rejecting the useful bits of philosophy.

In his seminal work Against Method (1975), Feyerabend suggested that scientists occasionally subject themselves to methodological anarchism so that they may come up with new ideas, unrestricted by the constraints imposed by the scientific method – freed, in fact, by the liberation of philosophical reasoning. These new ideas, he suggests, can then be reformulated again and again according to where and how observations fit into them.

In the meantime, the ideas are not born from observations but pure thought that is aided by scientific knowledge from the past. As Wikipedia puts it neatly: “Feyerabend was critical of any guideline that aimed to judge the quality of scientific theories by comparing them to known facts.” These ‘known facts’ are akin to Weinberg’s observables.

So, until the day we can fully resolve nature’s granularity, and assume the objective truth of no reality before that, Pierre-Simon Laplace’s two-century-old words should show the way: “We may regard the present state of the universe as the effect of its past and the cause of its future” (A Philosophical Essay on Probabilities, 1814).

This article, as written by me, originally appeared in The Hindu’s science blog, The Copernican, on June 6, 2013.

The travails of science communication

There’s an interesting phenomenon in the world of science communication, at least so far as I’ve noticed. Every once in a while, there comes along a concept that is gaining in research traction worldwide but is quite tricky to explain in simple terms to the layman.

Earlier this year, one such concept was the Higgs mechanism. Between December 13, 2011, when the first spotting of the Higgs boson was announced, and July 4, 2012, when the spotting was confirmed as that of the piquingly named “God particle”, the use of the phrase “cosmic molasses” was prevalent enough to prompt an annoyed (and struggling-to-make-sense) Daniel Sarewitz to hit back in Nature. While the article had a lot to say, and a lot more just waiting to be rebutted, it did include this remark:

If you find the idea of a cosmic molasses that imparts mass to invisible elementary particles more convincing than a sea of milk that imparts immortality to the Hindu gods, then surely it’s not because one image is inherently more credible and more ‘scientific’ than the other. Both images sound a bit ridiculous. But people raised to believe that physicists are more reliable than Hindu priests will prefer molasses to milk. For those who cannot follow the mathematics, belief in the Higgs is an act of faith, not of rationality.

Sarewitz is not wrong to remark on the problem as such; he errs in attempting to use it to make a case for religion. Anyway: in bridging the gap between advanced physics, which is well-poised to “unlock the future”, and public understanding, which is well-poised to fund the future, there is good journalism. But does it have to come with the twisting and turning of complex theory, maintaining only a tenuous relationship between what the metaphor implies and what reality is?

The notion of a “cosmic molasses” isn’t that bad; it does get close to the original idea of a pervading field of energy whose forces are encapsulated under certain circumstances to impart mass to trespassing particles in the form of the Higgs boson. Even this is a “corruption”, I’m sure. But what I choose to include or leave out makes all the difference.

The significance of experimental physicists having probably found the Higgs boson is best conveyed in terms of what it means to the layman’s daily life, rather than by trying continuously to get him interested in the Large Hadron Collider. Common, underlying curiosities will suffice to get one thinking about the nature of God, the origins of the universe, and where the mass came from that bounced off Sir Isaac’s head. Shrouding it in a cloud of unrelated concepts only makes the physicists themselves sound defensive, as if they’re struggling to explain something that only they will ever understand.

In the process, if the communicator has left out things such as electroweak symmetry-breaking and Nambu-Goldstone bosons, it’s OK. They’re not part of what makes the find significant for the layman. If, however, you feel that you need to explain everything, then change the question that your post is answering, or merge it with your original idea. Don’t over-indulge in the subject, and make sure to explain your concepts as you would a proper work of fiction: your knowledge of the plot shouldn’t interfere with the reader’s process of discovery.

Another complex theory doing the rounds these days is quantum entanglement. Even publications that cover news in the field regularly, such as R&D Mag, don’t do it as much justice as SciAm did the Higgs mechanism (through the “cosmic molasses” metaphor). Consider, for instance, this explanation from a story that appeared on November 16.

Electrons have a property called “spin”: Just as a bar magnet can point up or down, so too can the spin of an electron. When electrons become entangled, their spins mirror each other.

The causal link has been omitted! If the story set out to explain an application of quantum entanglement, which I think it did, then it has done a fairly good job. But what about entanglement-the-concept itself? Yes, it stands to lose a lot, because many communicators seem to be divesting it of its intricacies and spending more time explaining why it’s increasingly relevant in modern electronics and computation. If relevance is to mean anything, then debate has to exist – even if it seems antithetical to the deployment of the technology, as in the case of nuclear power.

Without understanding what entanglement means, there can be no informed recognition of its wonderful capabilities, and no public dialog as to its optimum use to further public interests. When scientific research stops contributing to the latter, it will definitely face collapse, and that’s the function – rather, the purpose – that sensible science communication serves.

A revisitation inspired by Facebook’s opportunities

When a habit forms – or rather, becomes fully formed – it becomes difficult to recognize the drive behind its perpetuation. Am I still doing what I’m doing for the habit’s sake, or is it that I still love what I do, and that’s why I’m doing it? In the early stages of habit-formation, the impetus has to come from within – let’s say as a matter of spirit – because it’s a process of creation. Once the entity has been created, once it is fully formed, it begins to sustain itself. It begins to attract attention, the focus of other minds, perhaps even the labor of other wills. That’s the perceived pay-off of persevering at the beginning, persevering in the face of nil returns.

But where the perseverance really makes a difference is when, at the onset of that dull moment, of some lethargy or writer’s block, we somehow lose the ability to tell apart fatigue-of-the-spirit and suspension-of-the-habit. If I am no longer able to write, even if just for a day or so, I should be able to tell the difference between that pit-stop and a perceived threat to the habit’s survival. If we don’t learn to make that distinction – which is more palpable than fine or blurry most of the time – then we will have persevered for nothing but perseverance’s sake.

This realization struck me after I opened a Facebook page for my blog so that, given my incessant link-sharing on the social network, only the people who wanted to read the stuff I shared could sign up and receive the updates. I’d had no intention earlier of using Facebook as anything but a socialization platform, but after the true nature of my activity on Facebook was revealed to me (by myself), I realized my professional ambitions had invaded my social ones. So, to remind myself why the social was important, too, I decided to stop sharing news-links and analyses on my timeline.

However, after some friends expressed excitement – that I never quite knew was there – about being able to receive my updates in a more cogent manner, I understood that there were people listening to me, that they did spend time reading what I had to say on science news, and so on, not just on my blog but also wherever else I decided to post it! At the same moment, I thought to myself, “Now, why am I blogging?” I had no well-defined answer, and that’s when I knew my perseverance was being misguided by my own hand, misdirected by my own foolishness.

I opened astrohep.wordpress.com in January 2011, and whatever science- or philosophy-related stories I had to tell, I told here. After some time, during a period coinciding with the commencement of my formal education in journalism, I started to use isnerd more effectively: I beat down the habit of using big words (simply because they encapsulated better whatever I had to say) and started to put some effort into telling my stories differently; I did a whole lot of reading before and while writing each post, and I used quotations and references wherever I could.

But the reason I’d opened this blog stayed intact all the time (or at least I think it did): I wanted to tell my science/phil. stories because some of the people around me liked hearing them and I thought the rest of the world might like hearing them, too.

At some point, however, I crossed over to the other side of perseverance: I was writing some of my posts not because they were stories people might like to hear but because, hey, I was a story-writer and what do I do but write stories! I was lucky enough to draw no nasty responses to some absolutely egregious pieces of non-fiction on this blog, and, in parallel, I was unlucky enough not to understand that a reader, no matter how bored, would never want to be presented with crap.

Now, where I used to draw pride from pouring so much effort into a small blog in one corner of WordPress, I draw pride from telling stories somewhat effectively – although still not as effectively as I’d like. Now, astrohep.wordpress.com is not a justifiable encapsulation of my perseverance, and nothing is or will be until I have the undivided attention of my readers whenever I have something to present to them. I was wrong to assume that my readers would stay with me and take to my journey as theirs, too: a writer is never right to assume that.