‘Surface of last screaming’

This has nothing to do with anything in the news. I was reading up about the Big Bang for a blog post when I came across this lucid explanation – so good it’s worth sharing for that reason alone – of the surface of last scattering, the site of an important event in the history of the universe. A lot happens by this moment, even if it happens only 379,000 years after the bang, and it’s easy to get lost in the details. But as the excerpt below shows, coming at it from the PoV of phase transitions considerably simplifies the picture (assuming of course that you’re comfortable with phase transitions).

To visualise how this effect arises, imagine that you are in a large field filled with people screaming. You are screaming too. At some time t = 0 everyone stops screaming simultaneously. What will you hear? After 1 second you will still be able to hear the distant screaming of people more than 330 metres away (the speed of sound in air, v, is about 330 m/s). After 3 seconds you will be able to hear distant screams from people more than 1 kilometre away (even though those distant people stopped screaming when you did). At any time t, assuming a suitably heightened sense of hearing, you will hear some faint screams, but the closest and loudest will be coming from people a distance v*t away. This distance defines the ‘surface of last screaming’ and this surface is receding from you at the speed of sound. …

When something is hot and cools down it can undergo a phase transition. For example, hot steam cools down to become water, and when cooled further it becomes ice. The Universe went through similar phase transitions as it expanded and cooled. One such phase transition … produced the last scattering surface. When the Universe was cool enough to allow the electrons and protons to fall together, they ‘recombined’ to form neutral hydrogen. […] photons do not interact with neutral hydrogen, so they were free to travel through the Universe without being scattered. They decoupled from matter. The opaque Universe then became transparent.

Imagine you are living 15 billion years ago. You would be surrounded by a very hot opaque plasma of electrons and protons. The Universe is expanding and cooling. When the Universe cools down below a critical temperature, the fog clears instantaneously everywhere. But you would not be able to see that it has cleared everywhere because, as you look into the far distance, you would be seeing into the opaque past of distant parts of the Universe. As the Universe continues to expand and cool you would be able to see farther, but you would always see the bright opaque fog in the distance, in the past. That bright fog is the surface of last scattering. It is the boundary between a transparent and an opaque universe and you can still see it today, 15 billion years later.

Jayant Narlikar’s pseudo-defence of Darwin

Jayant Narlikar, the noted astrophysicist and emeritus professor at the Inter-University Centre for Astronomy and Astrophysics, Pune, recently wrote an op-ed in The Hindu titled ‘Science should have the last word’. There’s probably a tinge of sanctimoniousness there, echoing the belief many scientists I’ve met have that science will answer everything, often blithely oblivious to politics and culture. But I’m sure Narlikar is not one of them.

Nonetheless, the piece was IMO good but not great: not great because what Narlikar has written has already been written in the recent past by many others, in different words; good because the piece’s author was Narlikar. His position on the subject is now in the public domain, where it needs to be, if only so others can now bank on his authority to stand up for science themselves.

Speaking of authority: there is a gaffe in the piece that its fans – and The Hindu‘s op-ed desk – appear to have glossed over. If they didn’t, it’s possible that Narlikar asked for his piece to be published without edits, which could be further proof either of sanctimoniousness or, of course, of distrust of journalists. He writes:

Recently, there was a claim made in India that the Darwinian theory of evolution is incorrect and should not be taught in schools. In the field of science, the sole criterion for the survival of a theory is that it must explain all observed phenomena in its domain. For the present, Darwin’s theory is the best such theory but it is not perfect and leaves many questions unanswered. This is because the origin of life on earth is still unexplained by science. However, till there is a breakthrough on this, or some alternative idea gets scientific support, the Darwinian theory is the only one that should continue to be taught in schools.

@avinashtn, @thattai and @rsidd120 got the problems with this excerpt, particularly the part about the origin of life, just right in a short Twitter exchange, beginning with this tweet (please click through to Twitter to see all the replies):

Gist: the origin of life is different from the evolution of life.

But even if they were the same, as Narlikar conveniently assumes in his piece, something else should have stopped him. That something else is also what is specifically interesting for me. Sample what Narlikar said next and then the final line from the excerpt above:

For the present, Darwin’s theory is the best such theory but it is not perfect and leaves many questions unanswered. … However, till there is a breakthrough on this, or some alternative idea gets scientific support, the Darwinian theory is the only one that should continue to be taught in schools.

Darwin’s theory of evolution got many things right, and continues to, so there is a sizeable chunk of the domain of evolutionary biology where it remains both applicable and necessary. However, it is confusing that Narlikar seems to believe that, should explanations arise for some phenomena thus far not understood, Darwin’s theories as a whole could become obsolete. But why? It is futile to expect a scientific theory to be able to account for “all observed phenomena in its domain”. Such a thing is virtually impossible given the levels of specialisation scientists have been able to achieve in various fields. For example, an evolutionary biologist might know how migratory birds evolved but still not be able to explain how some birds are thought to use quantum entanglement with Earth’s magnetic field to navigate.

The example Mukund Thattai provides is fitting. The Navier-Stokes equations are used to describe fluid dynamics. However, scientists have been studying fluids in a variety of contexts, from two-dimensional vortices in liquid helium to gas outflow around active galactic nuclei. It is only in some of these contexts that the Navier-Stokes equations are applicable; that they are not entirely useful in others doesn’t render the equations themselves useless.

Additionally, this is where Narlikar’s choice of words in his op-ed becomes more curious. He must be aware that his own branch of study, quantum cosmology, has thin but unmistakable roots in a principle conceived in the 1910s by Niels Bohr, with many implications for what he says about Darwin’s theories.

Within the boundaries of physics, the principle of correspondence states that at larger scales, the predictions of quantum mechanics must agree with those of classical mechanics. It is an elegant idea because it acknowledges the validity of classical, a.k.a. Newtonian, mechanics when applied at a scale where the effects of gravity begin to dominate the effects of subatomic forces. In its statement, the principle does not say that classical mechanics is useless because it can’t explain quantum phenomena. Instead, it says that (1) the two mechanics each have their respective domain of applicability and (2) the newer one must resemble the older one when applied at the scale at which the older one is relevant.

Of course, while scientists have been able to satisfy the principle of correspondence in some areas of physics, an overarching understanding of gravity as a quantum phenomenon has remained elusive. If such a theory of ‘quantum gravity’ were to exist, its complicated equations would have to be able to resemble Newton’s equations and the laws of motion at larger scales.

But exploring the quantum nature of spacetime is extraordinarily difficult. It requires scientists to probe really small distances and really high energies. While lab equipment has been set up to meet this goal partway, it has been clear for some time that it might be easier to learn from powerful cosmic objects like black holes.

And Narlikar has done just that, among other things, in his career as a theoretical astrophysicist.

I don’t imagine he would say that classical mechanics is useless because it can’t explain the quantum, or that quantum mechanics is useless because it can’t be used to make sense of the classical. More importantly, should a theory of quantum gravity come to be, should we discard the use of classical mechanics altogether? No.

In the same vein: should we continue to teach Darwin’s theories for lack of a better option or because they are scientific, useful and, through the fossil record, demonstrable? And if, in the future, an overarching theory of evolution comes along with the capacity to subsume Darwin’s, his ideas will still be valid in their respective jurisdictions.

As Thattai says, “Expertise in one part of science does not automatically confer authority in other areas.” Doesn’t this sound familiar?

Featured image credit: sipa/pixabay.

A universe out of sight

Two things before we begin:

  1. The first subsection of this post assumes that humankind has colonised some distant extrasolar planet(s) within the observable universe, and that humanity won’t be wiped out in 5 billion years.
  2. Both subsections assume a pessimistic outlook, and the projections they dwell on may never come to pass while humanity still exists. Nonetheless, it’s still fun to consider them and their science, and, most importantly, their potential to fuel fiction.

Cosmology

Astronomers using the Hubble Space Telescope have captured the most comprehensive picture ever assembled of the evolving universe — and one of the most colourful. The study is called the Ultraviolet Coverage of the Hubble Ultra Deep Field. Caption and credit: hubble_esa/Flickr, CC BY 2.0

Note: An edited version of this post has been published on The Wire.

A new study whose results were reported this morning made for a disconcerting read: it seems the universe is expanding 5-9% faster than we figured it was.

That the universe is expanding at all is disappointing: it is growing in volume like a balloon, continuously birthing more emptiness within itself. Because of the suddenly larger distances between things, each passing day leaves us lonelier than we were yesterday. The universe’s expansion is accelerating, too, and that doesn’t simply mean objects getting farther away. It means some photons from those objects never reaching our telescopes despite travelling at lightspeed, doomed to yearn forever like Tantalus in Tartarus. At some point in the future, a part of the universe will become completely invisible to our telescopes, remaining that way no matter how hard we try.

And the darkness will only grow, until a day out of an Asimov story confronts us: a powerful telescope bearing witness to the last light of a star before it is stolen from us for all time. Even if such a day is far, far into the future – the effect of the universe’s expansion is perceptible only on intergalactic scales, as the Hubble constant indicates, and simply negligible within the Solar System – the day exists.

This is why we are uniquely positioned: to be able to see as much as we are able to see. At the same time, it is pointless to wonder how much more we are able to see than our successors because it calls into question what we have ever been able to see. Say the whole universe occupies a volume of X, that the part of it that remains accessible to us contains a volume Y, and what we are able to see today is Z. Then: Z < Y < X. We can dream of some future technological innovation that will engender a rapid expansion of what we are able to see, but with Y being what it is, we will likely forever play catch-up (unless we find tachyons, navigable wormholes, or the universe beginning to decelerate someday).

How fast is the universe expanding? A number that characterises this is the deceleration parameter:

q = −(1 + Ḣ/H²),

where H is the Hubble constant and Ḣ is its first time derivative. The Hubble constant is the speed at which an object one megaparsec from us is moving away. So, if q is positive, the universe’s expansion is slowing down. If q is zero, the universe is expanding at a constant rate and its age is simply 1/H. And if q is negative – as scientists have found to be the case – then the universe’s expansion is accelerating.
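If you want to play with these quantities, here is a minimal Python sketch (the helper name and the value of Ḣ below are my own, chosen purely for illustration) that computes q from H and Ḣ and classifies the expansion:

```python
# A throwaway sketch, not anyone's production code: classify the expansion
# from a Hubble parameter H (in s^-1) and its time derivative Hdot (in s^-2).

def deceleration_parameter(H, Hdot):
    """q = -(1 + Hdot / H**2)"""
    return -(1 + Hdot / H**2)

KM_PER_MPC = 3.086e19
H0 = 70 / KM_PER_MPC       # ~70 km/s/Mpc expressed in s^-1
Hdot = -0.45 * H0**2       # assumed value, purely for illustration; gives q < 0

q = deceleration_parameter(H0, Hdot)
if q > 0:
    print(f"q = {q:.2f}: the expansion is slowing down")
elif q == 0:
    print(f"q = {q:.2f}: coasting expansion; the age of the universe is 1/H")
else:
    print(f"q = {q:.2f}: the expansion is accelerating")
```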

The age and ultimate fate of the universe can be determined by measuring the Hubble constant today and extrapolating with the observed value of the deceleration parameter, uniquely characterised by values of density parameters (Ω_M for matter and Ω_Λ for dark energy). Caption and credit: Wikimedia Commons

We measure the expansion of the universe from our position: on its surface (because, no, we’re not inside the universe). We look at light coming from distant objects, like supernovae; we work out how much that light is ‘red-shifted’; and we compare that to previous measurements. Here’s a rough guide.

What kind of objects do we use to measure these distances? Cosmologists prefer type Ia supernovae. In a type Ia supernova, a white dwarf (the electron-degenerate core of a dead star) slowly sucks in matter from an object orbiting it until it becomes hot enough to trigger a runaway fusion reaction. In the next few seconds, the reaction expels about 10⁴⁴ joules of energy, visible as a bright fleck in the gaze of a suitable telescope. Such explosions have a unique attribute: the mass of the white dwarf that goes boom is uniform, which means type Ia supernovae across the universe are almost equally bright. This is why cosmologists refer to them as ‘cosmic candles’. Based on how faint these candles are, you can tell how far away they are burning.

After a type Ia supernova occurs, photons set off from its surface toward a telescope on Earth. However, because the universe is continuously expanding, the distance between us and the supernova is continuously increasing. The effective interpretation is that the explosion appears to be moving away from us, becoming fainter. How much it has moved away is derived from the redshift. The wave nature of radiation allows us to think of light as having a frequency and a wavelength. When an object that is moving away from us emits light toward us, the waves of light appear to become stretched, i.e. the wavelength seems to become distended. If the light is in the visible part of the spectrum when starting out, then by the time it reaches Earth, the increase in its wavelength will make it seem redder. And so the name.

The redshift, z – technically known as the cosmological redshift – can be calculated as:

z = (λ_observed – λ_emitted)/λ_emitted

In English: the redshift is the factor by which the observed wavelength is changed from the emitted wavelength. If z = 1, then the observed wavelength is twice as much as the emitted wavelength. If z = 5, then the observed wavelength is six times as much as the emitted wavelength. The farthest galaxy we know (MACS0647-JD) is estimated to be at a distance wherefrom z = 10.7 (corresponding to 13.3 billion lightyears).
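As a quick illustration of the formula – using the hydrogen Lyman-alpha emission line at 121.6 nm purely as an example wavelength – here are a few lines of Python:

```python
# Minimal sketch of the cosmological-redshift formula quoted above.
def redshift(lambda_observed, lambda_emitted):
    return (lambda_observed - lambda_emitted) / lambda_emitted

# Lyman-alpha light emitted at 121.6 nm but observed at 243.2 nm has z = 1:
# the observed wavelength is twice the emitted one.
print(redshift(243.2, 121.6))   # 1.0
print(redshift(729.6, 121.6))   # 5.0 -> observed wavelength is six times the emitted one
```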

Anyway, z is used to calculate the cosmological scale-factor, a(t). This is the formula:

a(t) = 1/(1 + z)

a(t) is then used to calculate the distance between two objects:

d(t) = a(t) d0,

where d(t) is the distance between the two objects at time t and d0 is the distance between them at some reference time t0. Since the scale factor would be constant throughout the universe, d(t) and d0 can be stand-ins for the ‘size’ of the universe itself.

So, let’s say a type Ia supernova lit up at a redshift of 0.6. This gives a(t) = 0.625 = 5/8. So: d(t) = 5/8 * d0. In English, this means that the universe was 5/8th its current size when the supernova went off. Using z = 10.7, we infer that the universe was one-twelfth its current size when light started its journey from MACS0647-JD to reach us.

As it happens, residual radiation from the primordial universe is still around today – as the cosmic microwave background radiation. It originated 378,000 years after the Big Bang, following a period called the recombination epoch, 13.8 billion years ago. Its redshift is 1,089. Phew.
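Here is a minimal Python sketch that ties these numbers together, computing the scale factor for the three redshifts quoted above – the hypothetical supernova, MACS0647-JD and the CMB:

```python
# Sketch: the scale factor a = 1/(1 + z) read as the universe's relative 'size'.
def scale_factor(z):
    return 1 / (1 + z)

for label, z in [("type Ia supernova", 0.6),
                 ("MACS0647-JD", 10.7),
                 ("cosmic microwave background", 1089)]:
    a = scale_factor(z)
    print(f"{label}: z = {z} -> the universe was 1/{1/a:.1f} of its current size (a = {a:.4f})")
```

The first line of output recovers the 5/8 figure (1/1.6), the second the ‘one-twelfth’ figure (1/11.7), and the third says the universe was about 1/1,090th its current size at recombination.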

The relation between redshift (z) and distance (in billions of light years). d_H is the comoving distance between you and the object you’re observing. Where it flattens out is the distance out to the edge of the observable universe. Credit: Redshiftimprove/Wikimedia Commons, CC BY-SA 3.0

A curious redshift is z = 1.4, corresponding to a distance of about 4,200 megaparsec (~0.13 trillion trillion km). Objects that are already this far from us will be moving away faster than at the speed of light. However, this isn’t faster-than-light travel because it doesn’t involve travelling. It’s just a case of the distance between us and the object increasing at such a rate that, if that distance was once covered by light in time t0, light will now need t > t0 to cover it*. The corresponding a(t) = 0.42. I wonder at times if this is what Douglas Adams was referring to (… and at other times I don’t because the exact z at which this happens is 1.69, which means a(t) = 0.37. But it’s something to think about).
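If you’d like to see roughly where this crossover happens, here is a sketch that assumes you have the astropy package installed and uses its built-in Planck18 parameters – which differ slightly from the older figures quoted above, so the exact crossover redshift it finds will differ too. It computes the naive recession speed H₀ × d for a few redshifts:

```python
# Sketch, assuming astropy (>= 4.2) is installed: where does H0 * d_comoving cross c?
import astropy.units as u
from astropy.constants import c
from astropy.cosmology import Planck18 as cosmo

for z in (1.4, 1.5, 1.6, 1.7):
    d = cosmo.comoving_distance(z)          # comoving distance, in Mpc
    v = (cosmo.H0 * d).to(u.km / u.s)       # naive recession speed today
    print(f"z = {z}: d = {d:.0f}, v/c = {(v / c).decompose().value:.2f}")
```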

Ultimately, we will never be able to detect any electromagnetic radiation from before the recombination epoch 13.8 billion years ago; then again, the universe has since expanded, leaving the supposed edge of the observable universe 46.5 billion lightyears away in any direction. In the same vein, we can imagine there will be a distance (closing in) at which objects are moving away from us so fast that the photons from their surface never reach us. These objects will define the outermost edges of the potentially observable universe, nature’s paltry alms to our insatiable hunger.

Now, a gentle reminder that the universe is expanding a wee bit faster than we thought it was. This means that our theoretical predictions, founded on Einstein’s theories of relativity, have been wrong for some reason; perhaps we haven’t properly accounted for the effects of dark matter? This also means that, in an Asimovian tale, there could be a twist in the plot.

*When making such a measurement, Earthlings assume that Earth as seen from the object is at rest and that it’s the object that is moving. In other words: we measure the relative velocity. A third observer will notice both Earth and the object to be moving away, and her measurement of the velocity between us will be different.


Particle physics

Candidate Higgs boson event from collisions in 2012 between protons in the ATLAS detector on the LHC. Credit: ATLAS/CERN

If the news that our universe is expanding 5-9% faster than we thought portends a stellar barrenness arriving sooner in the future, then another piece of news foretells a fecundity of opportunities: in the opening days of its 2016 run, the Large Hadron Collider produced more data in a single day than it did in the entirety of its first run (which led to the discovery of the Higgs boson).

Now, so much about the cosmos was easy to visualise, abiding as it all did by Einstein’s conceptualisation of physics: as inherently classical, and never violating the principles of locality and causality. However, Einstein’s physics explains only one of the two infinities that modern physics has been able to comprehend – the other being the world of subatomic particles. And the kind of physics that reigns over the particles isn’t classical in any sense, and sometimes takes liberties with locality and causality as well. At the same time, it isn’t arbitrary either. How then do we reconcile these two sides of physics?

Through the rules of statistics. Take the example of the Higgs boson: it is not created every time two protons smash together, no matter how energetic the protons are. It is created at a fixed rate – once every ~X collisions. Even better: we say that whenever a Higgs boson forms, it decays to a group of specific particles one-Yth of the time. The value of Y is related to a number called the coupling constant. The lower Y is, the higher the coupling constant is, and the more often the Higgs boson will decay into that group of particles. When estimating a coupling constant, theoretical physicists assess the various ways in which the decays can happen (e.g., Higgs boson → two photons).

A similar interpretation is that the coupling constant determines how strongly a particle and a force acting on that particle will interact. Between the electron and the electromagnetic force is the fine-structure constant,

α = e²/(2ε₀hc);

and between quarks and the strong nuclear force is the strong coupling constant, whose running with energy gives rise to asymptotic freedom:

α_s(k²) = [β₀ ln(k²/Λ²)]⁻¹
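As a sanity check on the first of these, here is a tiny Python snippet that plugs standard SI values into the formula for α; the answer should come out near the familiar 1/137:

```python
# Sketch: the fine-structure constant from alpha = e^2 / (2 * eps0 * h * c).
e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
h    = 6.62607015e-34      # Planck constant, J s
c    = 2.99792458e8        # speed of light, m/s

alpha = e**2 / (2 * eps0 * h * c)
print(alpha, 1 / alpha)    # ~0.0072974, ~137.036
```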

So, if the LHC’s experiments require some number P of Higgs bosons to make their measurements, and its detectors are tuned to detect that group of particles, then at least P × Y Higgs bosons – and correspondingly many more collisions – ought to have been produced. The LHC might be a bad example because it’s a machine on the Energy Frontier: it is tasked with attaining higher and higher energies so that, at the moment the protons collide, heavier and much shorter-lived particles can show themselves. A better example would be a machine on the Intensity Frontier: its aim would be to produce orders of magnitude more collisions in order to spot extremely rare processes. Then again, it’s not as straightforward as just being prolific.

It’s like rolling an unbiased die. The chance that you’ll roll a four is 1/6 (i.e. the coupling constant) – but it could happen that if you roll the die six times, you never get a four. The chance is only an expectation: 1/6 is the same as 10/60, so in 60 rolls you’d expect about ten fours, yet you could roll the die 60 times and still never get a four (though the odds of that happening are even lower). So you decide to take it to the next level: you build a die-rolling machine that rolls the die a thousand times. You would surely have gotten some fours – but the fraction of fours might still not be exactly one-sixth. So you take it up a notch: you make the machine roll the die a million times. The fraction of fours should by now start converging toward 1/6. This is how a particle accelerator-collider aims to work, and succeeds.
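The analogy is easy to simulate. Here is a short Python sketch (the seed is arbitrary) showing the fraction of fours creeping toward 1/6 only as the number of rolls grows:

```python
# Sketch of the die-rolling analogy: the observed fraction of fours converges
# on the 'coupling constant' of 1/6 only for large numbers of rolls.
import random

random.seed(42)
for n_rolls in (6, 60, 1_000, 1_000_000):
    fours = sum(1 for _ in range(n_rolls) if random.randint(1, 6) == 4)
    print(f"{n_rolls:>9} rolls: fraction of fours = {fours / n_rolls:.4f} (expected {1/6:.4f})")
```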

And this is why the LHC producing as much data as it already has this year is exciting news. That much data means a lot more opportunities for ‘new physics’ – phenomena beyond what our theories can currently explain – to manifest itself. Analysing all this data completely will take many years (physicists continue to publish papers based on results gleaned from data generated in the first run), and all of it will be useful in some way even if very little of it ends up contributing to new ideas.

The steady (logarithmic) rise in luminosity – the number of collision events detected – at the CMS detector on the LHC. Credit: CMS/CERN

Occasionally, an oddball will show up – like a pentaquark, a state of five quarks bound together. As particles in their own right, they might not be as exciting as the Higgs boson, but in the larger schemes of things, they have a role to call their own. For example, the existence of a pentaquark teaches physicists about what sorts of configurations of the strong nuclear force, which holds the quarks together, are really possible, and what sorts are not. However, let’s say the LHC data throws up nothing. What then?

Tumult is what. In the first run, the LHC smashed two beams of billions of protons – each beam accelerated to 4 TeV and separated into 2,000+ bunches – head on, at the rate of two opposing bunches every 50 nanoseconds. In the second run, after upgrades through early 2015, the LHC smashes bunches accelerated to 6.5 TeV once every 25 nanoseconds. In the process, the number of collisions per sq. cm per second increased tenfold, to 1 × 10³⁴. These heightened numbers are there so that new physics has fewer places to hide; we are on the verge of desperation to tease it out, to plumb the weakest coupling constants, because existing theories have not been able to answer all of our questions about fundamental physics (why things are the way they are, etc.). And even the barest hint of something new, something we haven’t seen before, will:

  • Tell us that we haven’t seen all that there is to see**, that there is yet more, and
  • Validate this or that speculative theory over a host of others, and point us down a new path to tread

Axiomatically, these are the desiderata at stake should the LHC find nothing, even more so now that it has yielded such a massive dataset. Of course, not all will be lost: larger, more powerful, more innovative colliders will be built – even as a disappointment will linger. Let’s imagine for a moment that all of them continue to find nothing, and that that far-off day comes to be when the cosmos falls out of our reach, too. Wouldn’t that be maddening?

**I’m not sure of what an expanding universe’s effects on gravitational waves will be, but I presume it will be the same as its effect on electromagnetic radiation. Both are energy transmissions travelling on the universe’s surface at the speed of light, right? Do correct me if I’m wrong.

As the ripples in space-time blow through dust…

The last time a big announcement in science was followed by an at least partly successful furor to invalidate it was when physicists at the Gran Sasso National Laboratory, Italy, claimed to have recorded a few neutrinos travelling faster than the speed of light. In that case, most if not all scientists knew something had to be wrong. That nothing can clock such speeds except electromagnetic radiation is set in stone for all practical purposes.


Although astronomers from Harvard University’s Center for Astrophysics (CfA) made a more plausible claim on March 17 on having found evidence of primordial gravitational waves, they do have something in common with the superluminal-neutrinos announcement: prematurity. Since the announcement, it has emerged that the CfA team didn’t account for some observations that would’ve seriously disputed their claims even though, presumably, they were aware that such observations existed. Something like willful negligence…

Imagine receiving a tight slap to the right side of face. If there was good enough contact, the slapper’s fingers should be visible for some time on your right cheek before fading away. Your left cheek should bear mostly no signs of you having just been slapped. The CfA astronomers were trying to look for a similar fingerprint in a sea of energy found throughout the universe. If they found the fingerprint, they’d know the energy was polarized, or ‘slapped’, by primordial gravitational waves more than 13 billion years ago. To be specific, the gravitational waves – which are ripples in space-time – would only have polarized one of two components the energy contains: the B-mode (‘right cheek’), the other being the E-mode (‘left cheek’).

The Dark Sector Lab (DSL), located 3/4 of a mile from the Geographic South Pole, houses the BICEP2 telescope (left) and the South Pole Telescope (right). Image: bicepkeck.org

On March 17, CfA astronomers made the announcement that they’d found evidence of B-mode polarization using a telescope situated at the South Pole called BICEP2, hallelujah! Everyone was excited. Many journalists wrote articles without exercising sufficient caution, including me. Then, just the next day I found an astronomy blog that basically said, “Hold on right there…” The author’s contention was that CfA had looked only at certain parts of the sea of energy to come to their conclusion. The rest of the ‘cheek’ was still unexplored, and the blogger believed that if they checked out those areas, the fingerprints actually might not be there (for the life of me I can’t find the blog right now).

“Right from the time of BICEP2 announcement, some important lacunae have been nagging the serious-minded,” N.D. Hari Dass, an adjunct professor at the Chennai Mathematical Institute, told me. From the instrumental side, he said, there was the possibility of cross-talk between measurements of polarization and of temperature, and between measurements on the E-mode and on the B-mode. On the observational front, CfA simply hadn’t studied all parts of the sky – just one patch above the South Pole where B-mode polarization seemed significant. And they had studied that one patch by filtering for one specific frequency, not a range of frequencies.

“The effect should be frequency-independent if it were truly galactic,” Prof. Dass said.

The Milky Way galaxy’s magnetic fingerprint according to observations by the Planck space telescope. Image: ESA

But the biggest challenge came from quarters that questioned how CfA could confirm the ‘slappers’ were indeed primordial gravitational waves and not something else. Subir Sarkar, a physicist at Oxford University, and his colleagues were able to show that what BICEP2 saw to be B-mode polarization could actually have been from diffuse radio emissions from the Milky Way and magnetized dust. The pot was stirred further when the Planck space telescope team released a newly composed map of magnetic fields across the sky but blanked out the region where BICEP2 had made its observations.

There was reasonable doubt, and it persists… More Planck data is expected by the end of the year and that might lay some contentions to rest.

On June 3, physicist Paul Steinhardt made a provocative claim in Nature: “The inflationary paradigm” – which accounts for B-mode polarization due to gravitational waves – “is fundamentally untestable, and hence scientifically meaningless”. Steinhardt was saying that the theory supposed to back the CfA quest was more like a tautology and that it would be true no matter the outcome. I asked Prof. Dass about this and he agreed.

A tautology at work.

“Inflation is a very big saga with various dimensions and consequences. One of Steinhardt’s points is that the multiverse aspect” – which it allows for – “can never be falsified, as every conceivable predicted value will manifest,” he explained. “In other words, there are no predictions.” Turns out the Nature claim wasn’t provocative at all, implying either that CfA did not set itself well-defined goals to overcome these ‘axiomatic’ pitfalls or that it did but fell prey to sampling bias. At this point, Prof. Dass said, “current debates have reached some startling professional lows with each side blowing their own trumpets.”

It wasn’t as if BICEP2 was the only telescope making these observations. Even in the week leading up to March 17, in fact, another CMB experiment, named Polarbear, announced that it had found some evidence for B-mode polarization in the sky. The right thing to do now, then, would be to do what we’re starting to find very hard: be patient and be critical.

The Big Bang did bang

The Hindu
March 19, 2014

On March 17, the most important day for cosmology in over a decade, the Harvard-Smithsonian Centre for Astrophysics made an announcement that swept even physicists off their feet. Scientists published the first pieces of evidence that a popular but untested theory called cosmic inflation is right. This has significant implications for the field of cosmology.

The results also highlight a deep connection between the force of gravitation and quantum mechanics. This has been the subject of one of the most enduring quests in physics.

Marc Kamionkowski, professor of physics and astronomy at Johns Hopkins University, said at a news conference that the results were a “smoking gun for inflation.” Avi Loeb, a theoretical physicist from Harvard University, added that “the results also tell us when inflation took place and how powerful the process was.” Neither was involved in the project.

Rapid expansion

Cosmic inflation was first hypothesized by American physicist Alan Guth. He was trying to answer the question of why distant parts of the universe were similar even though they couldn’t have shared a common history. In 1980, he proposed a radical solution. He theorized that 10⁻³⁶ seconds after the Big Bang happened, all matter and radiation was uniformly packed into a volume the size of a proton.

In the next few instants, its volume increased by 10⁷⁸ times – a period called the inflationary epoch. After this event, the universe was almost as big as a grapefruit, expanding to this day but at a slower pace. While this theory was poised to resolve many cosmological issues, it was difficult to prove. To get this far, scientists from the Centre used the BICEP2 telescope stationed at the South Pole.

BICEP (Background Imaging of Cosmic Extragalactic Polarization) 2 studies some residual energy of the Big Bang called the cosmic microwave background (CMB). This is a field of microwave radiation that permeates the universe. Its temperature is about 3 Kelvin. The CMB’s polarization consists of two components, called the E-mode and the B-mode (named by analogy with electric and magnetic fields).

Polarized radiation

Before proceeding further, consider this analogy. When sunlight strikes a smooth, non-metallic surface, like a lake, the particles of light start vibrating parallel to the lake’s surface, becoming polarized. This is what we see as glare. Similarly, the E-mode and B-mode of the CMB are also polarized in certain ways.

The E-mode is polarized because of interactions with scattered photons and electrons in the universe. It is easier to detect than the B-mode, and was studied in great detail until 2012 by the Planck space telescope. The B-mode, on the other hand, can be polarized only under the effect of gravitational waves. These are waves of purely gravitational energy capable of stretching or squeezing the space-time continuum.

The inflationary epoch is thought to have set off gravitational waves rippling through the continuum, in the process polarizing the B-mode.

To find this, a team of scientists led by John Kovac from Harvard University used the BICEP2 telescope from 2010 to 2012. It was equipped with a lens of aperture 26 cm, and devices called bolometers to detect the power of the CMB section being studied.

The telescope’s camera is actually a jumble of electronics. “The circuit board included an antenna to focus and filter polarized light, a micro-machined detector that turns the radiation into heat, and a superconducting thermometer to measure this heat,” explained Jamie Bock, a physics professor at the California Institute of Technology and project co-leader.

It scanned an effective area of two to 10 times the width of the Moon. The signal denoting effects of gravitational waves on the B-mode was confirmed with a statistical significance of over 5σ, sufficient to claim evidence.

Prof. Kovac said in a statement, “Detecting this signal is one of the most important goals in cosmology today.”

Unified theory

Despite many physicists calling the BICEP2 results the first direct evidence of gravitational waves, theoretical physicist Carlo Rovelli advised caution. “The first direct detection is not here yet,” he tweeted, alluding to the scientists having found only the waves’ signatures.

Scientists are also looking for the value of a parameter called r, which describes the level of impact that gravitational waves could have had on galaxy formation. That value has been found to be particularly high: 0.20 (+0.07 –0.05). This helps explain why galaxies formed so rapidly, how powerful inflation was and why the universe is so large.

Now, astrophysicists from other observatories around the world will try to replicate BICEP2’s results. Also, data from the Planck telescope on the B-mode is due in 2015.

It is notable that gravitational waves are a feature of theories of gravitation, and cosmic inflation is a feature of quantum mechanics. Thus, the BICEP2 results show that the two previously exclusive theories can be combined at a fundamental level. This throws open the door for theoretical physicists and string theorists to explore a unified theory of nature in new light.

Liam McAllister, a physicist from Cornell University, proclaimed, “In terms of impact on fundamental physics, particularly as a tool for testing ideas about quantum gravity, the detection of primordial gravitational waves is completely unprecedented.”