The Symmetry Incarnations – Part I

Symmetry in nature is a sign of unperturbedness. It means nothing has interfered with a natural process, and that its effects at each step are simply scaled-up or scaled-down versions of each other. For this reason, symmetry is aesthetically pleasing, and often beautiful. Consider, for instance, faces. Symmetry of facial features about the central vertical axis is often read as beauty – not just by humans but also by monkeys.

However, this is just one of the many forms in which symmetry manifests. When it involves geometric features, it’s a case of geometric symmetry. When a process occurs similarly both forward and backward in time, it is temporal symmetry. If two entities that don’t seem geometrically congruent at first sight rotate, move or scale with similar effects on their forms, it is transformational symmetry. Similar definitions apply to theoretical models, musical progressions, knowledge, and many other fields besides.

Symmetry-breaking

One of the first (postulated) instances of symmetry-breaking is said to have occurred during the Big Bang, when the observable universe was born. A sea of particles was perturbed 13.75 billion years ago by a high-temperature event, setting up ripples in their system and eventually breaking their distribution in such a way that some particles got mass, some charge, some spin, some all of them, and some none of them. In physics, this event is called spontaneous (specifically, electroweak) symmetry-breaking. Because of the asymmetric properties of the resultant particles, matter as we know it was conceived.

Many invented scientific systems exhibit symmetry in that they allow for the conception of symmetry in the things they make possible. A good example is mathematics – yes, mathematics! On the real-number line, 0 marks the median. On either side of 0, 1 and -1 are equidistant from 0, 5,000 and -5,000 are equidistant from 0; possibly, ∞ and -∞ are equidistant from 0. Numerically speaking, 1 marks the same amount of something that -1 marks on the other side of 0. Not just that: even functions built on this system also behave symmetrically on either side of 0.
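To make that mirror behavior concrete, here’s a minimal Python sketch (my own illustration, not part of the original argument) checking that an even function such as f(x) = x² returns the same value at equal distances on either side of 0:

```python
# Check that an even function behaves symmetrically about 0.
def f(x):
    return x ** 2  # an even function: f(x) == f(-x)

for x in [1, 5000, 0.5]:
    assert f(x) == f(-x)  # same value at equal distances on either side of 0
    print(f"f({x}) = f({-x}) = {f(x)}")
```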

To many people, symmetry evokes an image of an object that, when cut in half along a specific axis, results in two objects that are mirror-images of each other. Cover one side of your face and place the other side against a mirror, and what you hope to see is the other side of the face – despite it being a reflection (interestingly, this technique was used by the neuroscientist V.S. Ramachandran to “cure” the pain of amputees when they tried to move a limb that wasn’t there). Like this, there are symmetric tables, chairs, bottles, houses, trees (although uncommonly), basic geometric shapes, etc.

A demonstration of V.S. Ramachandran’s mirror-technique

Natural symmetry

Symmetry at its best, however, is observed in nature. Consider germination: when a seed grows into a small plant and then into a tree, the seed doesn’t experiment with designs. The plant is not designed differently from the small tree, and the small tree is not designed differently from the big tree. If a leaf is given to sprout from the mineral-richest node on the stem, it will; if a branch is given to sprout from the mineral-richest node on the trunk, it will. So, is mineral-deposition in the arbor symmetric? It should be if its transport out of the soil and into the tree is radially symmetric. And so forth…

At times, repeated gusts of wind may push the tree to lean one way or another, shielding the leaves from the force and keeping them from shedding. The symmetry is then broken, but no matter. The sprouting of branches from branches, and branches from those branches, and leaves from those branches all follow the same pattern. This tendency to display an internal symmetry at every scale is called self-similarity, the defining property of fractals. A well-known example of a fractal geometry is the Mandelbrot set, shown below.

If you want to interact with a Mandelbrot set, check out this magnificent visualization by Paul Neave. You can keep zooming in, but at each step, you’ll only see more and more Mandelbrot sets. Unfortunately, this set is one of only a few exceptional sets that are exact geometric fractals.

Meta-geometry & Mulliken symbols

Now, geometric symmetry seems to be the most ubiquitous and accessible example available to us. Let’s take it one step further and look at the “meta-geometry” at play when a symmetrical shape is given an extra dimension. For instance, a circle exists in two dimensions; its three-dimensional counterpart is the sphere. Through such an up-scaling, we’re ensuring that all the properties of a circle in two dimensions stay intact in three dimensions, and then we’re observing what the resulting three-dimensional shape is.

A circle, thus, becomes a sphere; a square becomes a cube; a triangle becomes a tetrahedron (for those interested in higher-order geometry, the tesseract, or hypercube, may be of special interest!). In each case, the 3D shape is said to have been generated by the 2D shape, and each 2D shape is said to be the degenerate of the 3D shape. Further, such a relationship holds between corresponding shapes across many dimensions, with doubly and triply degenerate surfaces also having been defined.
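As a playful aside, this “up-scaling” is easy to mimic in code: adding a dimension to the unit square just means adding a coordinate. A minimal Python sketch (purely illustrative) that generates the corners of the square, the cube, and the tesseract:

```python
from itertools import product

# Vertices of the n-dimensional unit hypercube: every n-tuple of 0s and 1s.
# n = 2 gives the square, n = 3 the cube, n = 4 the tesseract.
def hypercube_vertices(n):
    return list(product((0, 1), repeat=n))

for n in (2, 3, 4):
    print(f"{n}D: {2 ** n} vertices, e.g. {hypercube_vertices(n)[:3]} ...")
```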

The tesseract (a.k.a. hypercube)

There are different kinds of degeneracy, 10 of which the physicist Robert S. Mulliken identified and laid out as the Mulliken symbols. These symbols are important because each one defines a degree of freedom that nature possesses while creating entities, symmetrical entities included. In other words, if a natural phenomenon is symmetrical in n dimensions, then the only way it can also be symmetrical in n+1 dimensions is by transforming through one or more of the degrees of freedom defined by Mulliken.

Robert S. Mulliken (1896-1986)

Apart from regulating the perpetuation of symmetry across dimensions, the Mulliken symbols also hint at nature wanting to keep things simple and straightforward. The symbols don’t behave differently for processes moving in different directions, through different dimensions, in different time-periods or in the presence of other objects, etc. The preservation of symmetry by nature is not a coincidental design; rather, it’s very well-defined.

Anastomosis

Now, if that’s the case – if symmetry is held desirable by nature, if it is not a haphazard occurrence but one that is well orchestrated when given a chance to be – why don’t we see symmetry everywhere? Why is natural symmetry broken? Is all of the asymmetry we see today the consequence of that electroweak symmetry-breaking event? It can’t be, because natural symmetry is still prevalent. Is it then implied that what symmetry we observe today exists in the “loopholes” of that symmetry-breaking? Or is it all part of the natural order of things, a restoration of past capabilities?

One of the earliest symptoms of symmetry-breaking was the appearance of the Higgs mechanism, which gave mass to some particles but not to others. The hunt for its residual particle, the Higgs boson, was spearheaded by the Large Hadron Collider (LHC) at CERN.

The last point – of natural order – is allegorical with, as well as exemplified by, a geological process called anastomosis. This property, commonly seen in quartz veins in metamorphic regions of Earth’s crust, allows mineral veins to form where shearing stresses between layers of rock have resulted in fracturing and faulting. Philosophically speaking, geological anastomosis allows for the displacement of materials from one location and their deposition in another, thereby offsetting large-scale symmetry in favor of the prosperity of microstructures.

Anastomosis, in a general context, is defined as the splitting of a stream of anything only for the tributaries to rejoin sometime later. It sounds really simple, but it is an exceedingly versatile phenomenon, if only because it happens in a variety of environments and for an equally large variety of purposes. For example, consider Gilbreath’s conjecture. It states that if the (absolute) forward-difference operator is repeatedly applied to the series of prime numbers, every resulting series starts with 1. To illustrate (a short code sketch follows the illustration):

2 3 5 7 11 13 17 19 23 29 … (prime numbers)

Applying the operator once: 1 2 2 4 2 4 2 4 6 … (successive differences between numbers)
Applying the operator twice: 1 0 2 2 2 2 2 2 …
Applying the operator thrice: 1 2 0 0 0 0 0 …
Applying the operator for the fourth time: 1 2 0 0 0 0

And so forth.
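The sketch below (a minimal Python illustration, assuming the absolute-difference reading of the operator used above) reproduces those rows:

```python
# Gilbreath's conjecture: repeatedly take absolute successive differences
# of the primes; each resulting row should begin with 1.
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

row = primes
for i in range(1, 5):
    row = [abs(b - a) for a, b in zip(row, row[1:])]
    print(f"Applying the operator {i} time(s): {row}")
    assert row[0] == 1  # the conjectured leading 1
```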

If each line of numbers were to be plotted on a graph, moving upwards each time the operator is applied, then a pattern for the zeros emerges, shown below.

This pattern is called that of the stunted trees, as if it were a forest populated by growing trees with clearings that are always well-bounded triangles. The numbers anastomose from one sequence to the next, only to come close together after every five lines! Another example is the vein skeleton on a hydrangea leaf. Both the stunted-trees and the hydrangea-vein patterns can be simulated using the rule-90 cellular automaton, a simple automaton built on the exclusive-or (XOR) function.
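A rule-90 automaton is easy to simulate: each cell’s next state is simply the XOR of its two neighbours. Here’s a minimal Python sketch (assuming a fixed-width row, zero boundaries, and a single seed cell) that grows the stunted-trees pattern:

```python
# Rule-90 cellular automaton: the next state of each cell is the XOR of its
# left and right neighbours. A single seed cell grows a Sierpinski-like
# pattern of "stunted trees".
WIDTH, STEPS = 31, 16
row = [0] * WIDTH
row[WIDTH // 2] = 1  # single seed in the middle

for _ in range(STEPS):
    print("".join("#" if c else " " for c in row))
    # zero boundaries: cells off the edge count as 0
    row = [(row[i - 1] if i > 0 else 0) ^ (row[i + 1] if i < WIDTH - 1 else 0)
           for i in range(WIDTH)]
```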

Nambu-Goldstone bosons

Now, what does this have to do with symmetry, you ask? While anastomosis may not have a direct relation with symmetry, and only a tenuous one with fractals, its presence indicates a source of perturbation in the system. Why else would the streamlined flow of something split off, only for the tributaries to unify later, unless possibly to reach out to richer lands? Either way, anastomosis is a sign of the system acquiring a new degree of freedom. By splitting a stream with x degrees of freedom into two new streams, each with x degrees of freedom, there are now more avenues through which change can occur.

Water entrainment in an estuary is an example of a natural asymptote or, in other words, a system’s “yearning” for symmetry

Particle physics simplifies this scenario by assigning a particle to every force and packet of energy. Thus, a force is said to be acting when a force-carrying particle is being exchanged between two bodies. Since each degree of freedom also implies a new force acting on the system, it wins itself a particle – actually, a class of particles called the Nambu-Goldstone (NG) bosons. Named for Yoichiro Nambu and Jeffrey Goldstone, who hypothesized their existence, the presence of n NG bosons in a system means that, broadly speaking, the system has n degrees of freedom.

Jeffrey Goldstone (L) & Yoichiro Nambu

How and when an NG boson is introduced into a system is not yet a well-understood phenomenon theoretically, let alone experimentally! In fact, it was only recently that a mathematical model was developed by a theoretical physicist at UC Berkeley, Haruki Watanabe, capable of predicting how many NG bosons a complex system should have, given its broken symmetries. However, at the most basic level, it is understood that when symmetry breaks, an NG boson is born!

The asymmetry of symmetry

In other words, when asymmetry is introduced in a system, so is a degree of freedom. This seems only intuitive. At the same time, you’d think the converse is also true: that when an asymmetric system is made symmetric, it loses a degree of freedom – but is this always true? I don’t think so, because then it would violate the third law of thermodynamics (specifically, the Lewis-Randall version of its statement). Therefore, there is an inherent irreversibility, an asymmetry of the system itself: it works fine one way, it doesn’t work fine the other – just like the split-off streams, but this time unable to reunify properly. Of course, there is the possibility of partial unification: in the case of the hydrangea leaf, symmetry is not restored upon anastomosis, but there is, evidently, an asymptotic attempt.

Each piece of a broken mirror-glass reflects an object entirely, shedding all pretensions of continuity. The most intriguing mathematical analogue of this phenomenon is the Banach-Tarski paradox, which, simply put, takes symmetry to another level.

However, it is possible that in some special frames, such as in outer space, where the influence of gravitational forces is weak if not entirely absent, the restoration of symmetry may be complete. Even though the third law of thermodynamics is still applicable here, it comes into effect only with the transfer of energy into or out of the system. In the absence of gravity (and, thus, friction), and other retarding factors, such as distribution of minerals in the soil for acquisition, etc., symmetry may be broken and reestablished without any transfer of energy.

The simplest example of this is a water droplet floating around. If a small globule of water breaks away from a bigger one, the bigger one becomes spherical quickly; when the seditious droplet joins another globule, that globule also reestablishes its spherical shape. Thermodynamically speaking, there is mass transfer, but at (almost) 100% efficiency, resulting in no additional degrees of freedom. Also, the force at play that establishes sphericality is surface tension, through which a water body seeks to occupy the shape that has the lowest surface area for a given volume (notice how that shape is incidentally also the one with the most axes of symmetry – or, put another way, no redundant degrees of freedom? Creating such spheres is hard!).
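A quick numerical sanity-check of that claim (an illustrative sketch, nothing more): for the same enclosed volume, the sphere’s surface area does come out smaller than, for instance, a cube’s:

```python
import math

V = 1.0  # unit volume for both shapes

# Sphere of volume V: r = (3V / 4*pi)^(1/3), area = 4*pi*r^2.
r = (3 * V / (4 * math.pi)) ** (1 / 3)
sphere_area = 4 * math.pi * r ** 2

# Cube of volume V: side = V^(1/3), area = 6 * side^2.
side = V ** (1 / 3)
cube_area = 6 * side ** 2

print(f"sphere: {sphere_area:.3f}, cube: {cube_area:.3f}")  # ~4.836 < 6.0
```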

A godless, omnipotent impetus

Perhaps the explanation of the roles symmetry assumes seems regressive: every consequence of it is no consequence but itself all over again (self-symmetry – there, it happened again). This only seems like a natural consequence of anything that is… well, naturally conceived. Why would nature deviate from itself? Nature, it seems, isn’t a deity in that it doesn’t create. It only recreates itself with different resources, lending itself and its characteristics to different forms.

A mountain will be a mountain to its smallest constituents, and an electron will be an electron no matter how many of them you bring together at a location. But put together mountains and you have ranges, sub-surface tectonic consequences, a reshaping of volcanic activity because of changes in the crust’s thickness, and a long-lasting alteration of wind and irrigation patterns. Bring together an unusual number of electrons to make up a high-density charge, and you have a high-temperature, high-voltage volume from which violent, permeating discharges of particles can occur – i.e., lightning. Why should stars, music, light, radioactivity, politics, manufacturing or knowledge be any different?

Thus concludes the introduction to symmetry. Yes, there is more, much more…

xkcd #849

Is anything meant to remain complex?

The first answer is “No”. I mean, whatever you’re writing about, the onus is on the writer to break his subject down to its simplest components, and then put them back together in front of the reader’s eyes. If the writer fails to do that, then the blame can’t be placed on the subject.

It so happens that the blame can be placed on the writer’s choice of subject. Again, the fault is the writer’s, but what do you do when the subject is important and ought to be written about because some recent contribution to it makes up a piece of history? Sure, the essentials are the same: read up long and hard on it, talk to people who know it well and are able to break it down in some measure for you, and try to use infographics to augment the learning process.

But these methods, too, have their shortcomings. For one, if the subject has only a roundabout connection to phenomena that affect reality, then strong comparisons have to make way for weak metaphors. A consequence of this is that the reader is more misguided in the long-term than he is “learned” in the short-term. For another, these methods require that the writer know what he’s doing, that what he’s writing about makes sense to him before he attempts to make sense of it for his readers.

This is not always the case: given the grey depths that advanced mathematics and physics are plumbing these days, science journalism concerning these areas is often written with a view to making the subject sound awesome, enigmatic and, sometimes, more consequential than it is, rather than to provide a full picture of on-goings.

Sometimes, we don’t have a full picture because things are that complex.

The reader is entitled to know – that’s the tenet of the sort of science writing that I pursue: informational journalism. I want to break the world around me down to small bits that remain eternally comprehensible. Somewhere, I know, I must be able to distinguish between my shortcomings and the subject’s; when I realize I’m not able to do that effectively, I will have failed my audience.

In such a case, am I confined to highlighting the complexity of the subject I’ve chosen?


The part of the post that makes some sense ends here. The part of the post that may make no sense starts here.

The impact of this conclusion on science journalism worldwide is that there is a barrage of didactic pieces once something is completely understood, and almost no literature during the finding’s formative years, despite public awareness that important, and legitimate, work was being done (this is the fine line that I’m treading).

I know this post sounds like a rant – it is a rant – against a whole bunch of things, not the least important of which is that groping-in-the-dark is a fact of life. However, somehow, I still have a feeling that a lot of scientific research is locked up in silence, yet unworded, because we haven’t received the final word on it. A safe course, of course: nobody wants to be that guy who announced something prematurely, only for the eventual result to be something else.

On the fear of the disgusting

Disgusting things are broken, unnatural manifestations of beauty in an otherwise beautiful world. If anything, isn’t it the symmetrical and the alluring that we must fear for their full mastery over chaos?

Disgusting things are defeated things.

With our fear comes the baleful regard we credit them with, the attention of our minds. They don’t deserve it, those weaklings. Promise ourselves not to look upon their world as we are walking, or they will all gather at the feet of our attention; no!

Trample them if we must, ignore them if we can.

One day, there shall be no screams, just a hollow silence, the memories of fear long dissolved into the flesh of our feet. That day, we will be Masters of the Diseases, and their reckoning!

A brief description of galactic clusters and their detection


The oldest galaxies are observed today as elliptical, and are to be found in clusters. These clusters are the remnants of older protoclusters that dominated the landscape of outer space in the universe’s early years, years that witnessed the formation of the first stars and, subsequently, the first galaxies. In regions of space where the population density of stars is low and the closest cluster lies far away, more recently formed galaxies may be found. These relatively emptier regions are called ‘general fields’.

A galaxy takes hundreds of millions of years to form fully, and involves processes quite complex; imagine, the simplest among them are nuclear transmutations! At the same time, the phenomenology of the entire sequence – the first steps taken, the interaction of matter and radiation at large scales, the influence of the rest of the universe on the galaxy’s formation itself – can be understood by peering into history through some of Earth’s most powerful telescopes. The farther through their lenses we look, the deeper into the universe’s history we are gazing. And so, looking hard enough, we may observe a protocluster in its formative years, and glean from the spectacle the various forces at play!

That is what two astronomers and their team from the National Astronomical Observatory of Japan (NAOJ) have done. Using the Multi-Object Infrared Camera and Spectrograph (MOIRCS), mounted on the Subaru Telescope, Drs. Masao Hayashi and Tadayuki Kodama identified a highly dense and active protocluster 11 billion light-years from Earth (announced September 20, 2012). In other words, the cluster they are looking at exists 11 billion years in our past, at a time when the universe was only 2.75 billion years old! Needless to say, it makes for an excellent laboratory, one that need only be carefully observed to answer our burning questions.

USS1558-003 (The horizontal and vertical axes show relative distances in right ascensions and declinations in arcminute units with respect to the radio galaxy. North is up, and east is to the left. The black dots are all galaxies selected in this field. Magenta dots show old, passively evolving galaxies. Blue squares represent star-forming galaxies with H-alpha emission lines, while red ones show red-burning galaxies. Large gray circles show the three clumps of galaxies.)

The first among those questions is why galaxies “choose” to cluster. The protocluster, named, as usual, inelegantly as USS1558-003, actually consists of three large, closely spaced clumps of galaxies, with a galaxy density as much as 15 times that of the general fields in the same cosmic period and a star-formation rate equivalent to a whopping 10,000 solar masses per year. These numbers effectively leave such clusters peerless in their formative libido, as well as naked in the eyes of infrared telescopes such as the MOIRCS, without which such bristling cosmic laboratories could not have been found.

Because of the higher star-formation rate, a lot of energy is traded between different bits of matter. However, there is an evident problem of plenty: what do the telescopes look for? Surely, they must somehow be able to measure the amount of energy riding on each exchange. However, the frequency of the associated radiation is not confined to any one bracket of the electromagnetic spectrum – even if only thermal or visible radiation is being tracked. What exactly do the telescopes look for, then?

Leave alone the quintillions of kilometers; the answer to this question lies in the angstroms, within the confines of hydrogen atoms. Have a look at the image below.

The galaxies marked by green circles are emitting radiation with a wavelength of 656 nm, also called H-alpha radiation. It falls within what is called the Balmer series of hydrogen’s emission spectrum, named for Johann Balmer, who discovered the formula for the eponymous series of emission frequencies in 1885. The presence of an H-alpha line in the emitted radiation is an unmistakable sign of the presence of hydrogen: radiation is emitted at precisely 656 nm (in the form of a photon of that wavelength) when excited electrons in the hydrogen atom drop from the third to the second energy level.
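That 656-nm figure drops straight out of the Rydberg formula for hydrogen; here’s a quick Python check (using the textbook Rydberg constant, nothing specific to the NAOJ study):

```python
# Rydberg formula for hydrogen: 1/lambda = R_H * (1/n1^2 - 1/n2^2).
# The Balmer series has n1 = 2; H-alpha is the n2 = 3 -> n1 = 2 transition.
R_H = 1.0968e7  # Rydberg constant for hydrogen, in 1/m

def balmer_wavelength_nm(n2):
    inv_lambda = R_H * (1 / 2 ** 2 - 1 / n2 ** 2)
    return 1e9 / inv_lambda  # metres -> nanometres

print(f"H-alpha (3 -> 2): {balmer_wavelength_nm(3):.1f} nm")  # ~656 nm
print(f"H-beta  (4 -> 2): {balmer_wavelength_nm(4):.1f} nm")  # ~486 nm
```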

The Balmer series, and the H-α line, are important tools in astronomical spectroscopy: they indicate not just the presence of hydrogen, but where it is and in what quantities, too. Those values throw light on which stages of formation the stars of that galaxy are in, the influence of the reactive region’s neighborhood, and where and how star-formation is initiated. In fact, by detecting and measuring the said properties of hydrogen, Kodama et al have already found that, in a cluster, star-formation begins at the core – just where the density of extant stars is already high – and gradually spreads outward to the periphery. This finding contrasts with the present-day shape and structure of elliptical galaxies, whose mass distribution differs from what an inside-out star-formation pattern suggests.

These are only the early stages of Kodama’s and his team’s research. As they sum it up,

We are now at the stage when we are using various new instruments to show in detail the internal structures of galaxies in formation so that we can identify the physical mechanisms that control and determine the properties of galaxies.

Not that it needs utterance, but: this is important research. Astronomy and astrophysics are costly affairs when the genre is experimental and the scale quite big, rendering finds such as that of USS1558-003 very providential as well as insightful. We may have just spotted the Higgs boson; we may just have begun a long journey to find the smallest things in the universe. As it stands, however, we have very little to say about the largest things in the universe, too.


Blunt the blade

Are destructive but naturally occurring environmental phenomena important to Earth’s atmosphere?

Does the mere acceptance of its innate “environmentalness” free it from having to tolerate human intervention?

For example, how do hurricanes contribute to Earth’s atmosphere and the atmosphere’s “well-being”? If humans were to prevent hurricanes from ever taking shape again – in the process not unleashing any collateral side-effect – would the ecosystem be any the better or worse for it?

The capacity of work for notoriety

Why is it considered OK to flaunt hard work? Will there come a time when it might be more prudent to mask long hours of work behind a finished product and instead behave as if the object was conceived with less work and more skill and intelligence?

Is it because hard work is considered a fundamental opportunity given all humankind?

But just the possession of will and spirit deep within doesn’t mean they have to be used, to be exhausted in the pursuit of success, albeit with their exhaustion accompanied by praise. Why is that praise justified?

“He worked hard and long, I worked not half-as-hard and not for half-as-long, and I give you something better”: With this example in mind, is hard work considered a nullifier, a currency that translates all forms of luck, ill-luck, opportunity and accident into the form of perspiration and blood? Why should it be?

Moreover, the tendency exists, too, that recognizes – nay, yearns – that the capacity for honest work is somehow more innate than the capacity to fool, trick, spy on, defame, slander, and kill; that honest work is more human than all these traits.

Is it really?

Who deigned that work would be that nullifier, a currency, and not intelligence? Is hard-work “more” fundamental than intelligence? Why is the flaunting of intelligence considered impudent while the flaunting of work a sign of the presence of humility? Is the capacity for work less volatile than the capacity to think smart? Is one acquired and the other only delivered at the time of birth?

Will a day come when the flaunting of hard-work is considered a sign of impudence and the flaunting of intelligence a sign of the presence of humility? Or – alas! – is it the implied notion of superiority that so scares us, that keeps us from acknowledging publicly that superior intelligence does imply a form of success, perhaps similar to the success implied by the capacity to work hard?

What sacrifice does one represent that the other, seemingly, rejects? Why does only intelligence suffer the curse of bigotry while honest work retains the privilege to socially unfettered use?

A revisitation inspired by Facebook’s opportunities

When a habit forms, rather becomes fully formed, it becomes difficult to recognize the drive behind its perpetuation. Am I still doing what I’m doing for the habit’s sake, or is it that I still love what I do and that’s why I’m doing it? In the early stages of habit-formation, the impetus has to come from within – let’s say as a matter of spirit – because it’s a process of creation. Once the entity has been created, once it is fully formed, it begins to sustain itself. It begins to attract attention, the focus of other minds, perhaps even the labor of other wills. That’s the perceived pay-off of persevering at the beginning, persevering in the face of nil returns.

But where the perseverance really makes a difference is when, upon the onset of that dull moment, upon the onset of some lethargy or writer’s block, we somehow lose the ability to tell apart fatigue-of-the-spirit and suspension-of-the-habit. If I am no longer able to write, even if for just a day or so, I should be able to tell the difference between that pit-stop and a perceived threat of the habit becoming endangered. If we don’t learn to make that distinction – which is more palpable than fine or blurry most of the time – then we will have persevered for nothing but perseverance’s sake.

This realization struck me after I opened a Facebook page for my blog so that, given my incessant link-sharing on the social network, only the people who wanted to read the stuff I shared could sign up and receive the updates. I had no intention earlier to use Facebook as anything but a socialization platform, but after the true nature of my activity on Facebook was revealed to me (by myself), I realized my professional ambitions had invaded my social ones. So, to remind myself why the social was important, too, I decided to stop sharing news-links and analyses on my timeline.

However, after some friends expressed excitement – that I never quite knew was there – about being able to get my updates in a more cogent manner, I understood that there were people listening to me, that they did spend time reading what I had to say on science news, etc., not just on my blog but also wherever else I decided to post it! At the same moment, I thought to myself, “Now, why am I blogging?” I had no well-defined answer, and that’s when I knew my perseverance was being misguided by my own hand, misdirected by my own foolishness.

I opened astrohep.wordpress.com in January 2011, and whatever science- or philosophy-related stories I had to tell, I told here. After some time, during a period coinciding with the commencement of my formal education in journalism, I started to use isnerd more effectively: I beat down the habit of using big words (simply because they encapsulated better whatever I had to say), started to put some effort into telling my stories differently, did a whole lot of reading before and while writing each post, and used quotations and references wherever I could.

But the reason I’d opened this blog stayed intact all the time (or at least I think it did): I wanted to tell my science/phil. stories because some of the people around me liked hearing them and I thought the rest of the world might like hearing them, too.

At some point, however, I crossed over to the other side of perseverance: I was writing some of my posts not because they were stories people might like to hear but because, hey, I was a story-writer and what do I do but write stories! I was lucky enough to warrant no nasty responses to some absolutely egregious pieces of non-fiction on this blog and, in parallel, unlucky enough to not understand that a reader, no matter how bored, never wants to be presented crap.

Now, where I used to draw pride from pouring so much effort into a small blog in one corner of WordPress, I draw pride from telling stories somewhat effectively – although still not as effectively as I’d like. Now, astrohep.wordpress.com is not a justifiable encapsulation of my perseverance, and nothing is or will be until I have the undivided attention of my readers whenever I have something to present them. I was wrong in assuming that my readers would stay with me and take to my journey as theirs, too: A writer is never right in assuming that.

An Indian supercomputer by 2017. Umm…

This is a tricky question. And for background, here’s the tweet from IBN Live that caught my eye.

(If you didn’t read the IBN piece, this is the gist: India, rather Kapil Sibal, our present telecom minister, will have a state-of-the-art supercomputer – 61 times faster than the current leader, Sequoia – built indigenously by 2017 at a cost of Rs. 4,700 crore over 5 years.)

Kapil Sibal

India already has many supercomputers: NAL’s Flosolver, C-DAC’s PARAM, DRDO’s PACE/ANURAG, BARC’s Anupam, IMS’s Kabru-Linux cluster and CRL’s Eka (both versions of PARAM), and ISRO’s Saga 220.

The most powerful among them, PARAM (through its latest version), is ranked 58th in the world. It was designed and deployed by the Pune-based Centre for Development of Advanced Computing (C-DAC) and the Department of Electronics and Information Technology (DEITY – how apt) in 1991. Its first version, PARAM 8000, used 8,000 Inmos transputers (a microprocessor architecture built with parallel processing in mind); subsequent versions include PARAM 10000, Padma, and the latest, Yuva. Yuva came into operation in November 2008 and boasts a peak speed of 54 teraflops (1 teraflops = 1 trillion floating-point operations per second; floating point is a data type that stores numbers as {significant digits × base^exponent}).
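Incidentally, Python exposes exactly that significand-times-power-of-base decomposition (base 2 for machine floats) via math.frexp; a tiny illustrative sketch:

```python
import math

# Decompose a float into significand * 2**exponent, the representation
# behind the "floating point" in "floating-point operations per second".
for x in (54.0e12, 0.15625, 3.14159):
    m, e = math.frexp(x)   # x == m * 2**e, with 0.5 <= |m| < 1
    print(f"{x} = {m} * 2**{e}")
    assert math.ldexp(m, e) == x  # ldexp reverses frexp exactly
```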

Interestingly, in July 2009, C-DAC had announced that a new version of PARAM was in the works and that it would be deployed in 2012 with a computing power of more than 1 petaflops (1 petaflops = 1,000 teraflops) at a cost of Rs. 500 crore. Where is it?

Then, in May 2011, it was announced that India would spend Rs. 10,000 crore on building a 132.8-exaflops supercomputer by 2017. Does that make today’s announcement an effective reduction in budget as well as a diminishing of ambitions? If so, then why? If not, then are we going to have two high-power supercomputers?!

The high-power supercomputers that the proposed 2017 machine will compete with usually find use in computational fluid dynamics simulations, weather forecasting, finite element analysis, seismic modelling, e-governance, telemedicine, and administering high-speed network activities. These are tasks that operate with a lot of probabilities thrown into the simulation and calculation mix, and require hundreds of millions of operations per second to be solved within an “acceptable” chance of the answer being right. As a result, and because of the broad scale of these applications, such supercomputers are built only when the need for the answers is already present. They are not installed to create needs but only to satisfy them.

So, that said, why does India need such a high-power supercomputer? Deploying a supercomputer is no easy task, and deploying one that’s so far ahead of the field also involves an overhaul of the existing system and network architectures. What needs is the government creating that might require so much power? Will we be able to afford it?

In fact, I worry that Mr. Kapil Sibal has announced the decision to build such a device simply because India doesn’t feature in the list of top 10 countries that have high-power supercomputers. Because, beyond being able to predict weather patterns and further extend the country’s space-faring capabilities, what will the device be used for? Are there records that the ones already in place are being used effectively?

Rubbernecking at redshifting

The interplay of energy and matter is simply wonderful because, given the presence of some intrinsic properties, the results of their encounters can be largely predicted. The presence of smoke indicates fire, the presence of shadows both darkness and light, the presence of winds a pressure gradient, the presence of mass a gravitational potential. And a special radiological extension of the last correlation gives rise to a phenomenon called gravitational redshift.

The wave-particle duality insists that electromagnetic radiation, if conceived as a stream of photons, can also be thought of as propagating as waves. All waves have two fundamental properties: wavelength and frequency. If a wave consists of a crest and a trough, the length of a crest-trough pair is its wavelength, and the number of wavelengths traversed by the wave in a second is its frequency. Also, the energy contained in a wave is directly proportional to its frequency.
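These properties tie together through c = λf and Planck’s relation E = hf; a quick illustrative calculation (with textbook constants) for red H-alpha light:

```python
# Planck's relation: the energy of a photon is proportional to its frequency.
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s

wavelength = 656e-9            # H-alpha red light, in metres
frequency = c / wavelength     # c = lambda * f  =>  f = c / lambda
energy_J = h * frequency       # E = h * f
print(f"f = {frequency:.3e} Hz, E = {energy_J:.3e} J "
      f"({energy_J / 1.602e-19:.2f} eV)")  # ~1.89 eV
```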

A wave undergoes a gravitational redshift when it moves from a region of lower gravitational potential to a region of higher gravitational potential. Such a potential gradient is experienced when one moves away from a massive body, from regions of stronger to weaker gravitational pull (note the inverse variation). So when you think of radiation, such as light, moving from the surface of a star toward a far-away observer, the light gets redshifted. The phenomenon was proposed, implicitly, in 1916 by Albert Einstein through his eponymous field equations (EFE), which describe the general theory of relativity (GR).

When radiation gets redshifted, its frequency gets reduced toward the red portion of the electromagnetic spectrum, hence the name. Agreed, the phenomenon is counter-intuitive. Usually, when the leash on an escaping object is loosened, the object speeds up. In the case of a redshift, however, the frequency is lowered (or the particle slowed).

The real wonder lies in the predictive power of such physics. It doesn’t matter whence the mass and what the wave: their interaction is always marked by a blueshift on the way in and a redshift on the way out. More, speaking from an application-oriented perspective, the radiation reaching Earth from outer space will always have been redshifted: the waves will have left the gravitational pull of some body behind on their way toward Earth. In thinking so, given some radiation, its source, and thus the radiation’s initial frequency, it becomes easy to calculate how much mass lies between the source and Earth.
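To put a number on this, here’s a minimal sketch of the standard gravitational-redshift formula for light escaping a non-rotating massive body, evaluated for the Sun (the formula is textbook GR; the script itself is my own illustration):

```python
import math

# Gravitational redshift for light escaping from radius r of a mass M:
#   z = 1 / sqrt(1 - 2GM / (r c^2)) - 1   (Schwarzschild, non-rotating body)
G = 6.674e-11    # gravitational constant, m^3 / (kg s^2)
c = 2.998e8      # speed of light, m/s

def gravitational_redshift(M, r):
    return 1 / math.sqrt(1 - 2 * G * M / (r * c ** 2)) - 1

M_sun, R_sun = 1.989e30, 6.957e8  # kg, m
z = gravitational_redshift(M_sun, R_sun)
print(f"z at the Sun's surface: {z:.2e}")  # ~2e-6: tiny but measurable
```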

A universal map of the cosmic microwave background (CMB) radiation as recorded by the Wilkinson Microwave Anisotropy Probe (WMAP) after its launch in 2001 (this map serves as a reliable benchmark against which to compare the locally observed frequencies of the CMB)

As a naturally available resource, consider the cosmic microwave background (CMB) radiation. The CMB was born when the universe was around 379,000 years old, when the ionic plasma born moments after the Big Bang had cooled to a temperature at which electrons and protons could combine to form hydrogen atoms, leaving the photons decoupled from matter and loosened upon the universe as residual radiation (currently at a temperature of  2.72548 ± 0.00057 K).

In the CMB context, this gravitational interplay appears as the Sachs-Wolfe effect, which is of two kinds: integrated and non-integrated. The non-integrated Sachs-Wolfe effect occurs at the surface-of-last-scattering, and the integrated version between the surface-of-last-scattering and Earth. The surface mentioned here can be thought of as an imaginary surface in space where the last matter-radiation decouplings occurred. What we’re interested in is the integrated Sachs-Wolfe effect.

Assuming that the photons have just left a star behind, and been gravitationally redshifted in the process, there is a lot of matter they could still encounter on their way to Earth even if our home planet maintains a clear line-of-sight to the star. This includes dust, stray rocks, gases blown around by stellar winds, and – if it does exist – dark energy.

The iconic Pillars of Creation, snapped by the Hubble Space Telescope on April 1, 1995, show columns of interstellar dust in the midst of star-creation while also being eroded by starlight and stellar winds from other stars in the neighborhood.

A great way to detect the presence of dark energy between two points in space suggests itself, then: all we’d have to do is measure the redshift in radiation coming from a selected region, as detected by a satellite in orbit around Earth, and compare it with a map of that region. An analysis of the redshift “left over” after subtracting the redshift due to matter should yield the amount of dark energy! (See also: WMAP)
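In spirit, the comparison reduces to cross-correlating two maps of the same patch of sky: the CMB temperature map and a map of the matter (galaxy) distribution. A toy numpy sketch of just that correlation step (with made-up arrays, emphatically not the teams’ actual analysis pipeline):

```python
import numpy as np

# Toy version of the ISW test: cross-correlate a CMB temperature map with a
# galaxy-density map of the same patch of sky. A positive correlation is the
# signature sought; the arrays here are random stand-ins, not real data.
rng = np.random.default_rng(0)
galaxy_density = rng.normal(size=10_000)                          # fake matter map
cmb_temperature = 0.1 * galaxy_density + rng.normal(size=10_000)  # fake CMB map

corr = np.corrcoef(galaxy_density, cmb_temperature)[0, 1]
print(f"cross-correlation: {corr:.3f}")  # nonzero => ISW-like signal in this toy
```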

The detection procedure itself was suggested in 1996 by Neil Turok and Robert Crittenden, now of the Perimeter Institute, Canada. However, after the first evidence of the integrated Sachs-Wolfe effect was detected in 2003, the correlation between the observed data and already-available maps was very low. This led some skeptics to suggest that the effect could instead have been caused by space dust. The possibility of their being right was indeed high, until September 11, 2012, when their skepticism was almost conclusively refuted by a team of scientists from the University of Portsmouth and LMU Munich.

The study, led by Tommaso Giannantonio and Crittenden, lasted two years and established at a confidence level of 5.4 sigma (or 99.996%) that the ’03 observation indeed corresponded to dark energy and not any other source of gravitational potential.

The phenomenological legacy of redshifts derives from their special place in Einstein’s GR: the descriptive EFE first opened up even the theoretical possibility of such redshifts and their applications in astrophysics research.

The invasion

The most fear I’ve ever experienced is when I smoked up for the first time. I thought I’d enjoy it – isn’t that always the case when you foray into an unknown realm of experiences, a world of as-yet uninhabited sensations? With that promise firmly in mind, I’d taken a few drags and settled back, waiting for some awakening to come dazzle me. And when it did hit, I was terrified. It started with my fingertips turning numb, followed by my face… I couldn’t feel the wind on that windy day. There was nothing about me that let me close my eyes lest they turn dry against the onslaught of dead, cold air. Next, there was the reeling imagination: flying colours, rifle-toting Russian stalkers, speeding cars that cannoned me into the wall in front of my chair, and then… a memory of standing in front of a painting wondering if it was really there.

Through all of this, a voice persisted at the back of my head – is there any other place whence subdued voices persist? – telling me that I was losing control. Now, I know that there were two of me: one moving forward like an untamed warhorse, trampling and snorting and drooling, the other attempting to rein it in, trying to snap my head back without breaking it altogether. I couldn’t possibly have sided with either force: each was as necessary as it was inexplicably just there. When I tried to stand up, the jockey brought to mind gravity, my uneven footing, and my sense of neuromuscular control, but they were quick to dissipate, to dissolve within the temptation of murky passions swimming in front of my eyes. My body was lost to me; just as suddenly, I was someone else. Sure, I could have appreciated the loss of all but some inhibitions, but the loss only served to further remind that it was just that: loss. In its passing was more betrayal than in its wake more promise.

Death, you see, is nothing different. Of course, it stops with the loss – there is no “otherside”, no after. And with the presence of that darkness continuously assured, the loss simply accentuates it, each passing moment stealing forever a sense. That must be a terrifying thing, ironical as it may sound, perhaps because it’s an irreversible, suffocating handicap, the last argument that you will have, and one that you will be forced to leave without a chance at rebuttal. And then… who will look through your eyes? Who will reason through your mind? Who will shiver against the oncoming cold under the sheath of your skin? It is hard to say, just like measuring the brightness of one candle with another: the first could be twice as bright as the second, but really how bright are they? It is a dead comparison, the life of any such glow trapped within the body of a burning wick. The luxury of universal constants doesn’t exist, does it?