A brief description of galaxy clusters and their detection


The oldest galaxies are observed today as ellipticals, and they tend to be found in clusters. These clusters are the remnants of older protoclusters that dominated the landscape of outer space in the universe’s early years, years that witnessed the formation of the first stars and, subsequently, the first galaxies. More recently formed galaxies tend to be found in regions of space where the population density is low and the closest cluster lies far away. These relatively emptier regions are called ‘general fields’.

A galaxy takes hundreds of millions of years to form fully, and its formation involves some quite complex processes; imagine, the simplest among them are nuclear transmutations! At the same time, the phenomenology of the entire sequence – the first steps taken, the interaction of matter and radiation at large scales, the influence of the rest of the universe on the galaxy’s formation itself – can be understood by peering into history through some of Earth’s most powerful telescopes. The farther through their lenses we look, the deeper into the universe’s history we gaze. And so, looking hard enough, we may observe a protocluster in its formative years, and glean from the spectacle the various forces at play!

That is what two astronomers and their team from the National Astronomical Observatory of Japan (NAOJ) have done. Using the Multi-object Infrared Camera and Spectrograph (MOIRCS) mounted on the Subaru Telescope, Drs. Masao Hayashi and Tadayuki Kodama identified a highly dense and active protocluster 11 billion light-years from Earth (announced September 20, 2012). In other words, the cluster they are looking at is seen as it was 11 billion years in our past, at a time when the universe was only 2.75 billion years old! Needless to say, it makes for an excellent laboratory, one that need only be carefully observed to answer our burning questions.

USS1558-003 (The horizontal and vertical axes show relative distances in right ascension and declination, in arcminutes, with respect to the radio galaxy. North is up, and east is to the left. The black dots are all galaxies selected in this field. Magenta dots show old, passively evolving galaxies. Blue squares represent star-forming galaxies with H-alpha emission lines, while red squares show red-burning galaxies. Large gray circles show the three clumps of galaxies.)

The first among them is why galaxies “choose” to cluster themselves. The protocluster, inelegantly named, as usual, USS1558-003, actually consists of three large, closely spaced clumps of galaxies, with a galaxy density as much as 15 times that of the general fields in the same cosmic period and a star-formation rate equivalent to a whopping 10,000 Suns per year. These numbers effectively leave such clusters peerless in their formative libido, as well as naked in the eyes of infrared instruments such as MOIRCS, without which such bristling cosmic laboratories could not have been found.

Because of the higher star-formation rate, a lot of energy is traded between different bits of matter. However, there is an evident problem of plenty: what do the telescopes look for? Surely, they must somehow be able to measure the amount of energy riding on each exchange. However, the frequency of the associated radiation is not confined to any one bracket of the electromagnetic spectrum – even if only thermal or visible radiation is being tracked. What exactly do the telescopes look for, then?

Leave alone the quintillions of kilometers; the answer to this question lies in the angstroms, within the confines of hydrogen atoms. Have a look at the image below.

The galaxies marked by green circles are emitting radiation with a wavelength of 656 nm, also called H-alpha radiation. It falls within what is called the Balmer series of hydrogen’s emission spectrum, named for Johann Balmer, who discovered the formula for the eponymous series of emission frequencies in 1885. The presence of an H-alpha line in the emitted radiation is an unmistakable sign of the presence of hydrogen: radiation is emitted at precisely 656 nm (in the form of a photon of that wavelength) when excited electrons in the hydrogen atom drop from the third to the second energy level.
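To make that concrete, here is a minimal sketch of the Balmer series computed from the Rydberg formula; the constant, function name, and range of levels are just illustrative choices, not anything taken from the study itself.

```python
# Balmer series: 1/lambda = R_H * (1/2^2 - 1/n^2) for an electron dropping
# from level n down to level 2. R_H is the Rydberg constant for hydrogen.
R_H = 1.0967758e7  # per metre

def balmer_wavelength_nm(n):
    """Wavelength (in nm) of the photon emitted in the n -> 2 transition."""
    inv_wavelength = R_H * (1.0 / 2**2 - 1.0 / n**2)
    return 1e9 / inv_wavelength

for n in range(3, 8):
    print(f"{n} -> 2 : {balmer_wavelength_nm(n):.1f} nm")
# The n = 3 line comes out at ~656 nm: the H-alpha line described above.
```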

The Balmer series, and the H-α line, are important tools in astronomical spectroscopy: they indicate not just the presence of hydrogen, but where and in what quantities, too. Those values throw light on which stages of formation the stars of that galaxy are in, the influence of the reactive region’s neighborhood, and where and how star-formation is initiated. In fact, by detecting and measuring the said properties of hydrogen, Kodama et al. have already found that, in a cluster, star-formation begins at the core – just where the density of extant stars is already high – and gradually spreads outward to the periphery. This finding contrasts with the present-day shape and structure of elliptical galaxies, which have a different mass distribution from what an inside-out star-formation pattern would suggest.

These are only the early stages of Kodama’s and his team’s research. As they sum it up,

We are now at the stage when we are using various new instruments to show in detail the internal structures of galaxies in formation so that we can identify the physical mechanisms that control and determine the properties of galaxies.

Not that it needs utterance, but: this is important research. Astronomy and astrophysics are costly affairs when the work is experimental and the scale quite big, rendering finds such as USS1558-003 providential as well as insightful. We may have just spotted the Higgs boson; we may just have begun a long journey to find the smallest things in the universe. As it stands, however, we still have very little to say about the largest things in the universe.


Blunt the blade

Are destructive but naturally occurring environmental phenomena important to Earth’s atmosphere?

Does the mere acceptance of its innate “environmentalness” free it from having to tolerate human intervention?

For example, how do hurricanes contribute to Earth’s atmosphere and the atmosphere’s “well-being”? If humans were to prevent hurricanes from ever taking shape again – in the process not unleashing any collateral side-effect – would the ecosystem be any the better or worse for it?

The capacity to notoriety of work

Why is it considered OK to flaunt hard work? Will there come a time when it might be more prudent to mask long hours of work behind a finished product and instead behave as if the object was conceived with less work and more skill and intelligence?

Is it because hard work is considered a fundamental opportunity given all humankind?

But just the possession of will and spirit deep within doesn’t mean it has to be used, to be exhausted in the pursuit of success, even if its exhaustion is accompanied by praise. Why is that praise justified?

“He worked hard and long, I worked not half-as-hard and not for half-as-long, and I give you something better”: With this example in mind, is hard work considered a nullifier, a currency that translates all forms of luck, ill-luck, opportunity and accident into the form of perspiration and blood? Why should it be?

Moreover, there exists a tendency, too, that recognizes, nay, yearns, that the capacity for honest work is somehow more innate than the capacity to fool, trick, spy on, defame, slander, and kill; that honest work is more human than all of these.

Is it really?

Who decreed that work would be that nullifier, a currency, and not intelligence? Is hard work “more” fundamental than intelligence? Why is the flaunting of intelligence considered impudent while the flaunting of work a sign of the presence of humility? Is the capacity for work less volatile than the capacity to think smart? Is one acquired and the other only delivered at the time of birth?

Will a day come when the flaunting of hard-work is considered a sign of impudence and the flaunting of intelligence a sign of the presence of humility? Or – alas! – is it the implied notion of superiority that so scares us, that keeps us from acknowledging publicly that superior intelligence does imply a form of success, perhaps similar to the success implied by the capacity to work hard?

What sacrifice does one represent that the other, seemingly, rejects? Why does only intelligence suffer the curse of bigotry while honest work retains the privilege of socially unfettered use?

A revisitation inspired by Facebook’s opportunities

When a habit forms, rather becomes fully formed, it becomes difficult to recognize the drive behind its perpetuation. Am I still doing what I’m doing for the habit’s sake, or is it that I still love what I do and that’s why I’m doing it? In the early stages of habit-formation, the impetus has to come from within – let’s say as a matter of spirit – because it’s a process of creation. Once the entity has been created, once it is fully formed, it begins to sustain itself. It begins to attract attention, the focus of other minds, perhaps even the labor of other wills. That’s the perceived pay-off of persevering at the beginning, persevering in the face of nil returns.

But where the perseverance really makes a difference is when, upon the onset of that dull moment, upon the onset of some lethargy or writers’ block, we somehow lose the ability to tell apart fatigue-of-the-spirit and suspension-of-the-habit. If I am no longer able to write, even if only for a day or so, I should be able to tell the difference between that pit-stop and a perceived threat of the habit becoming endangered. If we don’t learn to make that distinction – which is more palpable than fine or blurry most of the time – then we will have persevered for nothing but perseverance’s sake.

This realization struck me after I opened a Facebook page for my blog so that, given my incessant link-sharing on the social network, only the people who wanted to read the stuff I shared could sign up and receive the updates. I had had no intention earlier to use Facebook as anything but a socialization platform, but after the true nature of my activity on Facebook was revealed to me (by myself), I realized my professional ambitions had invaded my social ones. So, to remind myself why the social was important, too, I decided to stop sharing news-links and analyses on my timeline.

However, after some friends expressed excitement – that I never quite knew was there – about being able to avail themselves of my updates in a more cogent manner, I understood that there were people listening to me, that they did spend time reading what I had to say on science news, etc., not just on my blog but also wherever else I decided to post it! At the same moment, I thought to myself, “Now, why am I blogging?” I had no well-defined answer, and that’s when I knew my perseverance was being misguided by my own hand, misdirected by my own foolishness.

I opened astrohep.wordpress.com in January 2011, and whatever science- or philosophy-related stories I had to tell, I told here. After some time, during a period coinciding with the commencement of my formal education in journalism, I started to use isnerd more effectively: I beat down the habit of using big words (simply because they encapsulated better whatever I had to say) and started to put some effort into telling my stories differently, I did a whole lot of reading before and while writing each post, and I used quotations and references wherever I could.

But the reason I’d opened this blog stayed intact all the time (or at least I think it did): I wanted to tell my science/phil. stories because some of the people around me liked hearing them and I thought the rest of the world might like hearing them, too.

At some point, however, I crossed over to the other side of perseverance: I was writing some of my posts not because they were stories people might like to hear but because, hey, I was a story-writer and what do I do but write stories! I was lucky enough to receive no nasty responses to some absolutely egregious pieces of non-fiction on this blog, and in parallel, I was unlucky enough to not understand that a reader, no matter how bored, never wants to be presented crap.

Now, where I used to draw pride from pouring so much effort into a small blog in one corner of WordPress, I draw pride from telling stories somewhat effectively – although still not as effectively as I’d like. Now, astrohep.wordpress.com is not a justifiable encapsulation of my perseverance, and nothing is or will be until I have the undivided attention of my readers whenever I have something to present them. I was wrong in assuming that my readers would stay with me and take to my journey as theirs, too: A writer is never right in assuming that.

An Indian supercomputer by 2017. Umm…

This is a tricky question. And for background, here’s the tweet from IBN Live that caught my eye.

(If you didn’t read the IBN piece, this is the gist: India, rather Kapil Sibal, our present telecom minister, will have a state-of-the-art supercomputer, 61 times faster than the current leader, Sequoia, built indigenously by 2017 at a cost of Rs. 4,700 crore over 5 years.)

Kapil Sibal

India already has many supercomputers: NAL’s Flosolver, C-DAC’s PARAM, DRDO’s PACE/ANURAG, BARC’s Anupam, IMS’s Kabru-Linux cluster and CRL’s Eka (both versions of PARAM), and ISRO’s Saga 220.

The most powerful among them, PARAM (through its latest version), is ranked 58th in the world. It was designed and deployed by the Pune-based Centre for Development of Advanced Computing (C-DAC) and the Department of Electronics and Information Technology (DEITY – how apt) in 1991. Its first version, PARAM 8000, used Inmos transputers (a microprocessor architecture built with parallel processing in mind); subsequent versions include PARAM 10000, Padma, and the latest, Yuva. Yuva came into operation in November 2008 and boasts a peak speed of 54 teraflops (1 teraflops = 1 trillion floating point operations per second; floating point is a data type that stores numbers as {significant digits * base^exponent}).
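As a quick illustration of the two parenthetical definitions above, here is a small sketch; it assumes Python’s IEEE-754 doubles, which use base 2 for the exponent, and the numbers are only the ones quoted in this post.

```python
import math

# A floating point number is stored as significand * base**exponent.
# math.frexp splits a double into exactly those two parts (base 2).
mantissa, exponent = math.frexp(54.0)
print(mantissa, exponent)            # 0.84375 and 6, since 0.84375 * 2**6 == 54.0

# "54 teraflops" just means 54 trillion such floating point operations per second.
teraflops = 54
print(f"{teraflops * 10**12:.2e} floating point operations per second")
```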

Interestingly, in July 2009, C-DAC had announced that a new version of PARAM was in the works and that it would be deployed in 2012 with a computing power of more than 1 petaflops (1 petaflops = 1,000 teraflops) at a cost of Rs. 500 crore. Where is it?

Then, in May 2011, it was announced that India would spend Rs. 10,000 crore on building a 132.8-exaflops supercomputer by 2017. Does that make today’s announcement an effective reduction in budget as well as a diminishing of ambitions? If so, then why? If not, then are we going to have two high-power supercomputers?!

The high-power supercomputers that the proposed 2017 machine will compete with usually find use in computational fluid dynamics simulations, weather forecasting, finite element analysis, seismic modelling, e-governance, telemedicine, and administering high-speed network activities. Obviously, these are tasks that operate with a lot of probabilities thrown into the simulation and calculation mix, and require hundreds of millions of operations per second to be solved within an “acceptable” chance of the answer being right. As a result, and because of the broad scale of these applications, such supercomputers are built only when the need for the answers is already present. They are not installed to create needs but only to satisfy them.

So, that said, why does India need such a high-power supercomputer? Deploying a supercomputer is no easy task, and deploying one that’s so far ahead of the field also involves an overhaul of the existing system and network architectures. What needs is the government creating that might require so much power? Will we be able to afford it?

In fact, I worry that Mr. Kapil Sibal has announced the decision to build such a device simply because India doesn’t feature in the list of top 10 countries that have high-power supercomputers. Because, beyond being able to predict weather patterns and further extend the country’s space-faring capabilities, what will the device be used for? Are there records that the ones already in place are being used effectively?

Rubbernecking at redshifting

The interplay of energy and matter is simply wonderful because, given the presence of some intrinsic properties, the results of their encounters can be largely predicted. The presence of smoke indicates fire, the presence of shadows both darkness and light, the presence of winds a pressure gradient, the presence of mass a gravitational potential. And a special radiative extension of the last correlation gives rise to a phenomenon called gravitational redshift.

The wave-particle duality insists that electromagnetic radiation, if conceived as a stream of photons, can also be thought of as propagating as waves. All waves have two fundamental properties: wavelength and frequency. If a wave consists of a crest and a trough, the length of a crest-trough pair is its wavelength, and the number of wavelengths traversed by the wave in a second its frequency. Also, the energy contained in a wave is directly proportional to its frequency.
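As a worked example of that proportionality (assuming the photon picture, where E = hf = hc/λ, and reusing the H-alpha wavelength from earlier):

```python
h = 6.626e-34        # Planck's constant, in joule-seconds
c = 3.0e8            # speed of light, in metres per second

wavelength = 656e-9  # H-alpha, in metres
frequency = c / wavelength
energy = h * frequency

print(f"frequency ~ {frequency:.3e} Hz, photon energy ~ {energy:.3e} J")
# Double the frequency and the energy doubles too: E is directly proportional to f.
```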

A wave undergoes a gravitational redshift when it moves from a region of lower gravitational potential to a region of higher gravitational potential. Such a potential gradient is experienced when one moves away from a massive body, from regions of stronger to weaker gravitational pull (note the inverse variation). And when you think of radiation, such as light, moving from the surface of a star toward a far-away observer, the light gets redshifted. The phenomenon was proposed, implicitly, in 1916 by Albert Einstein through his eponymous Einstein Field Equations (EFE), which describe the general theory of relativity (GR).

When radiation gets redshifted, its frequency is shifted toward the red portion of the electromagnetic spectrum, hence the name. Agreed, the phenomenon is counter-intuitive. Usually, when the leash on an escaping object is loosened, the object speeds up. In the case of a redshift, however, the frequency is lowered: the photon loses energy, not speed.

The real wonder lies in the predictive power of such physics. It doesn’t matter whence the mass and what the wave: their interaction is always preceded and succeeded by a blueshift and a redshift. More, speaking from an application-oriented perspective, the radiation reaching Earth from outer space will always be redshifted. Consider it: the waves will have left the gravitational pull of some body behind on their way toward Earth. In thinking so, given some radiation, its source, and thus the radiation’s initial frequency, it becomes easy to calculate how much mass lies between the source and Earth.
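To show how predictable the effect is, here is a minimal sketch using the standard general-relativistic expression for light climbing out from radius r of a body of mass M, 1 + z = 1/√(1 − 2GM/rc²); the solar values plugged in at the end are only an illustration.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8            # speed of light, m/s

def gravitational_redshift(mass_kg, radius_m):
    """Redshift z of light emitted at radius_m from a body of mass_kg, seen far away."""
    return 1.0 / math.sqrt(1.0 - 2.0 * G * mass_kg / (radius_m * c**2)) - 1.0

M_sun, R_sun = 1.989e30, 6.96e8
print(f"z at the Sun's surface ~ {gravitational_redshift(M_sun, R_sun):.2e}")
# Comes out to roughly 2e-6: tiny, but measurable, and entirely predictable from M and r.
```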

A universal map of the cosmic microwave background (CMB) radiation as recorded by the Wilkinson Microwave Anisotropy Probe (WMAP) after its launch in 2001 (this map serves as a reliable benchmark against which to compare the locally observed frequencies of the CMB).

As a naturally available resource, consider the cosmic microwave background (CMB) radiation. The CMB was born when the universe was around 379,000 years old, when the ionic plasma born moments after the Big Bang had cooled to a temperature at which electrons and protons could combine to form hydrogen atoms, leaving the photons decoupled from matter and loosened upon the universe as residual radiation (currently at a temperature of 2.72548 ± 0.00057 K).

In the CMB context, gravitational potentials leave their imprint on these photons through what is called the Sachs-Wolfe effect, which is of two kinds: integrated and non-integrated. The non-integrated Sachs-Wolfe effect occurs at the surface-of-last-scattering, and the integrated version between the surface-of-last-scattering and Earth. The surface mentioned here can be thought of as an imaginary surface in space where the last matter-radiation decouplings occurred. What we’re interested in is the integrated Sachs-Wolfe effect.

Assuming that the photons have just left a star behind, and been gravitationally redshifted in the process, there is a lot of matter they could still encounter on their way to Earth even if our home planet maintains a clear line-of-sight to the star. This includes dust, stray rocks, gases blown around by stellar winds, and – if it does exist – dark energy.

The iconic Pillars of Creation, snapped by the Hubble Space Telescope on April 1, 1995, show columns of interstellar dust in the midst of star-creation while also being eroded by starlight and stellar winds from other stars in the neighborhood.

Detecting the presence of dark energy between two points in space should therefore be easy, shouldn’t it? All we’d have to do is measure the redshift in radiation coming from a selected region, as detected by a satellite in orbit around Earth, and compare it with a map of that region. An analysis of the redshift “leftover” after subtracting the redshift due to matter should yield the amount of dark energy! (See also: WMAP)
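A toy numerical sketch of that “leftover” idea, with every number made up purely for illustration (no real maps or measurements are involved):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000                                          # hypothetical lines of sight

z_from_matter = rng.normal(1.0e-5, 1.0e-6, n)       # redshift predicted from the matter map
z_dark_energy = 2.0e-6                              # pretend contribution we hope to recover
z_observed = z_from_matter + z_dark_energy + rng.normal(0.0, 5.0e-7, n)

leftover = z_observed - z_from_matter               # the "leftover" redshift
print(f"mean leftover: {leftover.mean():.2e} +/- {leftover.std() / np.sqrt(n):.0e}")
# The mean comes out near the injected 2e-6: the part the matter map cannot explain.
```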

This procedure was suggested in 1996 by Neil Turok and Robert Crittenden of the Perimeter Institute, Canada. However, after the first evidence of the integrated Sachs-Wolfe effect was detected in 2003, the correlation between the observed data and already-available maps was very low. This led some skeptics to suggest that the effect could instead have been caused by space dust. The possibility of their being right was indeed high, until September 11, 2012, when their skepticism was almost conclusively refuted by a team of scientists from the University of Portsmouth and LMU Munich.

The study, led by Tommaso Giannantonio and Crittenden, lasted two years and established at a confidence level of 5.4 sigma (or 99.996%) that the ’03 observation indeed corresponded to dark energy and not any other source of gravitational potential.

The phenomenological legacy of redshifts derives from their special place in Einstein’s GR: the descriptive EFE first opened up even the theoretical possibility of such redshifts and their applications in astrophysics research.

The invasion

The most fear I’ve ever experienced is when I smoked up for the first time. I thought I’d enjoy it – isn’t that always the case when you foray into an unknown realm of experiences, a world of as-yet uninhabited sensations? With that promise firmly in mind, I’d taken a few drags and settled back, waiting for some awakening to come dazzle me. And when it did hit, I was terrified. It started with my fingertips turning numb, followed by my face… I couldn’t feel the wind on that windy day. There was nothing about me that let me close my eyes lest they turn dry against the onslaught of dead, cold air. Next, there was the reeling imagination: flying colours, rifle-toting Russian stalkers, speeding cars that cannoned me into the wall in front of my chair, and then… a memory of standing in front of a painting wondering if it was really there.

Through all of this, a voice persisted at the back of my head – is there any other place whence subdued voices persist? – telling me that I was losing control. Now, I know that there were two of me: one moving forward like an untamed warhorse, trampling and snorting and drooling, the other attempting to rein it in, trying to snap my head back without breaking it altogether. I couldn’t possibly have sided with either force: each was as necessary as it was inexplicably just there. When I tried to stand up, the jockey brought to mind gravity, my uneven footing, and my sense of neuromuscular control, but they were quick to dissipate, to dissolve within the temptation of murky passions swimming in front of my eyes. My body was lost to me; just as suddenly, I was someone else. Sure, I could have appreciated the loss of all but some inhibitions, but the loss only served to further remind that it was just that: loss. In its passing was more betrayal than in its wake more promise.

Death, you see, is nothing different. Of course, it stops with the loss – there is no “otherside”, no after. And with the presence of that darkness continuously assured, the loss simply accentuates it, each passing moment stealing forever a sense. That must be a terrifying thing, ironical as it may sound, perhaps because it’s an irreversible, suffocating handicap, the last argument that you will have, and one that you will be forced to leave without a chance at rebuttal. And then… who will look through your eyes? Who will reason through your mind? Who will shiver against the oncoming cold under the sheath of your skin? It is hard to say, just like measuring the brightness of one candle with another: the first could be twice as bright as the second, but really how bright are they? It is a dead comparison, the life of any such glow trapped within the body of a burning wick. The luxury of universal constants doesn’t exist, does it?

The weakening measurement

Unlike the special theory of relativity, which the superluminal-neutrinos fiasco sought to defy, Heisenberg’s uncertainty principle presents very few, and equally iffy, measurement techniques by which it can stand verified. While both Einstein’s and Heisenberg’s foundations are close to fundamental truths, the uncertainty principle has guided, more than dictated, the applications that involve its consequences. Essentially, a defiance of Heisenberg is one for the statisticians.

And I’m pessimistic. Let’s face it, who wouldn’t be?

Anyway, the parameters involved in the experiment were:

  1. The particles being measured
  2. Weak measurement
  3. The apparatus

The experimenters claim that a value of the photon’s original polarization, X, was obtained upon a weak measurement. Then, a “stronger” measurement was made, yielding a value A. However, according to Heisenberg’s principle, the observation should have changed the polarization from A to some fixed value A’.

Now, the conclusions they drew:

  1. Obtaining X did not change A: X = A
  2. A’ – A < Limits set by Heisenberg

The terms of the weak measurement are understood with the following formula in mind:
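Assuming the standard weak-value expression (which is what the description after it corresponds to), the formula reads:

$$ A_w \;=\; \frac{\langle \varphi_2 \,|\, \hat{A} \,|\, \varphi_1 \rangle}{\langle \varphi_2 \,|\, \varphi_1 \rangle} $$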

(The bra-ket, or Dirac, notation signifies the dot-product between two vectors or vector-states.)

Here, φ(1,2) denote the pre- and post-selected states, A-hat the observable, and Aw the value of the weak measurement. Thus, when the pre-selected state tends toward becoming orthogonal to the post-selected state, the value of the weak measurement increases, becoming large, or “strong”, enough to affect the being-measured value of A-hat.

In our case: Aw = A – X; φ(1) = A; φ(2) = A’.

As listed above, the sources of error are:

  1. φ(1,2)
  2. X

To prove that Heisenberg was miserly all along, Aw would have been increased until φ(1) • φ(2) equaled 0 (through multiple runs of the same experiment), and then φ(2) – φ(1), or A’ – A, measured and compared to the different corresponding values of X. After determining the strength of the weak measurement thus, A’ – X can be determined.

I am skeptical because X signifies the extent of coupling between the measuring device and the system being measured, and its standard deviation, in the case of this experiment, is dependent on the standard deviation of A’ – A, which is in turn dependent on X.

The common tragedy

I have never been able to fathom poetry. Not because it’s unensnarable—which it annoyingly is—but because it never seems to touch upon that all-encompassing nerve of human endeavour supposedly running through our blood, transcending cultures and time and space. Is there a common trouble that we all share? Is there a common tragedy that is not death that we all quietly await that so many claim is described by poetry?

I, for one, think that that thread of shared memory is lost, forever leaving the feeble grasp of our comprehension. In fact, I believe that there is more to be shared, more to be found that will speak to the mind’s innermost voices, in a lonely moment of self-doubting. Away from a larger freedom, a “shared freedom”, we now reside in a larger prison, an invisible cell that assumes various shapes and sizes.

Sometimes, it’s in your throat, blocking your words from surfacing. Sometimes, it has your skull in a death-grip, suffocating all thoughts. Sometimes, it holds your feet to the ground and keeps you from flying, or sticks your fingers in your ears and never lets you hear what you might want to hear. Sometimes, it’s a cock in a cunt, a blade against your nerves, a catch on your side, a tapeworm in your intestines, or that cold sensation that kills wet dreams.

Today, now, this moment, the smallest of freedoms, the freedoms that belong to us alone, are what everyone shares, what everyone experiences. It’s simply an individuation of an idea, rather a belief, and the truth of that admission—peppered as it is with much doubt—makes us hold on more tightly to it. And as much as we partake of that individuation, like little gluons that emit gluons, we inspire more to pop into existence.

Within the confines of each small freedom, we live in worlds of our own fashioning. Poetry is, to me, the voice of those worlds. It is the resultant voice, counter-resolved into one expression of will and intention and sensation, that cannot, in turn, be broken down into one man or one woman, but only into whole histories that have bred them. Poetry is, to me, no longer a contiguous spectrum of pandered hormones or a conflict-indulged struggle, but an admission of self-doubt.

Credibility on the web

There are a finite number of sources from which anyone receives information. The most prominent among them are media houses (incl. newspapers, news channels, radio stations, etc.) and scientific journals (at least w.r.t. the subjects I work with).

Seen one way, these establishments generate the information that we receive. Without them, stories would remain localized, centralized, away from the ears that could accord them gravity.

Seen another way, these establishments are also motors: sans their motive force, information wouldn’t move around as it does, although this is assuming that they don’t mess with the information itself.

With more such “motors” in the media mix, the second perspective is becoming the norm of things. Even if information isn’t picked up by one house, it could be set sailing through a blog or a citizen-journalism initiative. The means through which we learn something, or stumble upon it for that matter, are growing to be more overlapped, lines crossing each other’s paths more often.

Veritably, it’s a maze. In such a labyrinthine setup, the entity that stands to lose the most is the faith of a reader/viewer/consumer in the credibility of the information received.

In many cases, with a more interconnected web – the largest “supermotor” – the credibility of one bit of information is checked in one location, by one entity. Then, as it moves around, all following entities inherit that credibility-check.

For instance, on Wikipedia, credibility is established by citing news websites, newspaper/magazine articles, journals, etc. Jimmy Wales’ enterprise doesn’t have its own process of verification in place. Sure, there are volunteers who almost constantly police its millions of pages, but all they can do is check whether the citation is valid, and whether there are any contrary reports to the claims being staked.

One way or another, if a statement has appeared in a publication, it can be used to have the reader infer a fact.

In this case, Wikipedia has inherited the credibility established by another entity. If the verification process had failed in the first place, the error would’ve been perpetuated by different motors, each borrowing from the credibility of the first.

Moreover, the more strata that the information percolates through, the harder it will be to establish a chain of accountability.

*

My largest sources of information are:

  1. Wikipedia
  2. Journals
  3. Newspapers
  4. Blogs

(The social media is just a popular aggregator of news from these sources.)

Wikipedia cites news reports and journal articles.

News reports are compiled with the combined efforts of reporters and editors. Reporters verify the information they receive by checking if it’s repeated by different sources under (if possible) different circumstances. Editors proofread the copy and are (or must remain) sensitive to factual inconsistencies.

Journals have the notorious peer-reviewing mechanism. Each paper is subject to a thorough verification process intended to weed out all mistakes, errors, information “created” by lapses in the scientific method, and statistical manipulations and misinterpretations.

Blogs borrow from such sources and others.

Notice: Even in describing the passage of information through these ducts, I’ve vouched for reporters, editors, and peer-reviews. What if they fail me? How would I find out?

*

The point of this post was to illustrate

  1. The onerous yet mandatory responsibility that verifiers of information must assume,
  2. That there aren’t enough of them, and
  3. That there isn’t a mechanism in place that periodically verifies the credibility of some information across its lifetime.

How would you ensure the credibility of all the information you receive?