Our universe, the poor man’s accelerator

The Hindu
March 25, 2014

On March 17, radio astronomers from the Harvard-Smithsonian Center for Astrophysics, Massachusetts, announced a remarkable discovery. They found evidence of primordial gravitational waves imprinted on the cosmic microwave background (CMB), a field of energy pervading the universe.

A confirmation that these waves exist is the validation of a theory called cosmic inflation. It describes the universe’s behaviour less than one-billionth of a second after it was born in the Big Bang, about 14 billion years ago, when it witnessed a brief but tremendous growth spurt. The residual energy of the Bang is the CMB, and the effect of gravitational waves on it is like the sonorous clang of a bell (the CMB) that was struck powerfully by an effect of cosmic inflation. Thanks to the announcement, now we know the bell was struck.

Detecting these waves is difficult. In fact, astrophysicists used to think this day was many more years into the future. If it has come now, we must be thankful to human ingenuity. There is more work to be done, of course, because the results hold only for a small patch of the sky surveyed, and there is also data due from studies done until 2012 on the CMB. Should any disagreement with the recent findings arise, scientists will have to rework their theories.

Remarkable in other ways

The astronomers from the Harvard-Smithsonian used a telescope called BICEP2, situated at the South Pole, to make their observations of the CMB. In turn, BICEP2's readings of the CMB imply that when cosmic inflation occurred about 14 billion years ago, it happened at a tremendous energy of 10^16 GeV (GeV is a unit of energy used in particle physics). Astrophysicists didn't think it would be so high.

Even the Large Hadron Collider (LHC), the world's most powerful particle accelerator, manages a puny 10^4 GeV. The words of the physicist Yakov Zel'dovich, written in the 1970s, prove timeless: "The universe is the poor man's accelerator."
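To get a rough sense of that gap, here is a minimal back-of-the-envelope sketch in Python; the joules-per-GeV conversion is a standard physical constant, while the comparison to a speeding car is my own illustration, not something from the BICEP2 team.

```python
# Rough comparison of the inflation energy scale with the LHC's reach.
# Conversion: 1 GeV is about 1.602e-10 joules (standard value).
GEV_TO_JOULES = 1.602e-10

inflation_energy_gev = 1e16   # energy scale implied by BICEP2's readings
lhc_energy_gev = 1e4          # order of magnitude of LHC collision energies

ratio = inflation_energy_gev / lhc_energy_gev
inflation_energy_joules = inflation_energy_gev * GEV_TO_JOULES

print(f"Inflation scale / LHC scale ~ {ratio:.0e}")         # about 1e12
print(f"10^16 GeV ~ {inflation_energy_joules:.1e} joules")  # about 1.6 million joules,
# roughly the kinetic energy of a speeding car, concentrated at the scale
# of a single subatomic process.
```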

The energy at which inflation occurred has drawn the attention of physicists working on a range of problems because here, finally, is a window that lets humankind study high-energy physics by observing the cosmos. Such a view holds many possibilities, from the trivial to the grand.

For example, consider the four fundamental forces of nature: gravitation, the strong nuclear force, the weak nuclear force and the electromagnetic force. Normally, the strong nuclear, weak nuclear and electromagnetic forces act at very different energies and over very different distances.

However, as we traverse higher and higher energies, these forces start to behave differently, as they might have in the early universe. This gives physicists probing the fundamental texture of nature an opportunity to explore the forces’ behaviours by studying astronomical data — such as from BICEP2 — instead of relying solely on particle accelerators like the LHC.

In fact, at energies around 10^19 GeV, some physicists think gravity might become unified with the non-gravitational forces. However, this isn't a well-defined goal of science, and it commands less consensus than it invites speculation. Theories of quantum gravity operate at this level, with candidate frameworks such as string theory and loop quantum gravity.

Another perspective on cosmic inflation opens another window. Even though we now know that gravitational waves were sent rippling through the universe by cosmic inflation, we don’t know what caused them. An answer to this question has to come from high-energy physics — a journey that has taken diverse paths over the years.

Consider this: cosmic inflation is an effect associated with quantum field theory, which accommodates the three non-gravitational forces. Gravitational waves are an effect of the theories of relativity, which explain gravity. Because we may now have proof that the two effects are related, we know that quantum mechanics and relativity are capable of being combined at a fundamental level. This means a theory unifying all four forces could exist, although that doesn't mean we're on the right track.

At present, the Standard Model, a paradigm of quantum field theory, is proving to be a mostly valid description of particle physics, explaining the interactions between various fundamental particles. The questions it cannot answer could be addressed by even more comprehensive theories that use the Standard Model as a springboard to reach for solutions.

Physicists refer to such springboarders as “new physics”— a set of laws and principles capable of answering questions for which “old physics” has no answers; a set of ideas that can make seamless our understanding of nature at different energies.

Supersymmetry

One leading candidate for new physics is a theory called supersymmetry, an extension of the Standard Model that comes into its own at higher energies. Finding signs of supersymmetry is one of the goals of the LHC, but in over three years of experiments it has found none. This isn't the end of the road, however, because supersymmetry holds much promise to solve certain pressing issues in physics that the Standard Model can't, such as what dark matter is.

Thus, by finding evidence of cosmic inflation at very high energy, radio-astronomers from the Harvard-Smithsonian Center have twanged at one strand of a complex web connecting multiple theories. The help physicists have received from such astronomers is significant and will only mount as we look deeper into our skies.

The Big Bang did bang

The Hindu
March 19, 2014

On March 17, the most important day for cosmology in over a decade, the Harvard-Smithsonian Centre for Astrophysics made an announcement that swept even physicists off their feet. Scientists published the first pieces of evidence that a popular but untested theory called cosmic inflation is right. This has significant implications for the field of cosmology.

The results also highlight a deep connection between the force of gravitation and quantum mechanics. This has been the subject of one of the most enduring quests in physics.

Marc Kamionkowski, professor of physics and astronomy at Johns Hopkins University, said the results were a “smoking gun for inflation,” at a news conference. Avi Loeb, a theoretical physicist from Harvard University, added that “the results also tell us when inflation took place and how powerful the process was.” Neither was involved in the project.

Rapid expansion

Cosmic inflation was first hypothesized by the American physicist Alan Guth. He was trying to answer the question of why distant parts of the universe were similar even though they couldn't have shared a common history. In 1980, he proposed a radical solution. He theorized that 10^-36 seconds after the Big Bang happened, all matter and radiation was uniformly packed into a volume the size of a proton.

In the next few instants, its volume increased by 10^78 times, a period called the inflationary epoch. After this event, the universe was almost as big as a grapefruit, and it has kept expanding to this day, though at a slower pace. While this theory was poised to resolve many cosmological issues, it was difficult to prove. To get this far, scientists from the Centre used the BICEP2 telescope stationed at the South Pole.

BICEP (Background Imaging of Cosmic Extragalactic Polarization) 2 studies some residual energy of the Big Bang called the cosmic microwave background (CMB). This is a field of microwave radiation that permeates the universe, with a temperature of about 3 kelvin. Its polarization has two components, called E-modes and B-modes (named by analogy with electric and magnetic fields).

Polarized radiation

Before proceeding further, consider this analogy. When sunlight strikes a smooth, non-metallic surface, like a lake, the reflected light waves oscillate mostly parallel to the lake's surface: the light becomes polarized. This is what we see as glare. Similarly, the E-mode and B-mode of the CMB are also polarized in certain ways.

The E-mode arises from the scattering of the CMB's photons off electrons in the universe. It is easier to detect than the B-mode, and was studied in great detail until 2012 by the Planck space telescope. The B-mode, on the other hand, can be produced only under the effect of gravitational waves. These are waves of purely gravitational energy capable of stretching or squeezing the space-time continuum.

The inflationary epoch is thought to have set off gravitational waves rippling through the continuum, in the process imprinting the B-mode pattern on the CMB.

To find this signal, a team of scientists led by John Kovac of Harvard University used the BICEP2 telescope from 2010 to 2012. It was equipped with a lens of 26-cm aperture and with devices called bolometers to measure the power of the patch of CMB being studied.

The telescope’s camera is actually a jumble of electronics. “The circuit board included an antenna to focus and filter polarized light, a micro-machined detector that turns the radiation into heat, and a superconducting thermometer to measure this heat,” explained Jamie Bock, a physics professor at the California Institute of Technology and project co-leader.

It scanned an effective area of two to 10 times the width of the Moon. The signal denoting the effect of gravitational waves on the B-mode was detected with a statistical significance of over 5σ, the threshold physicists conventionally require before claiming a discovery.
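For a sense of what 5σ means, here is a minimal Python sketch that converts a significance level in standard deviations into the corresponding one-tailed probability of the signal being a pure statistical fluke; it assumes the usual Gaussian convention.

```python
import math

def one_tailed_p_value(n_sigma: float) -> float:
    """Probability of a Gaussian fluctuation at or above n_sigma standard deviations."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

# A 5-sigma result corresponds to odds of roughly 1 in 3.5 million
# that the signal arose by chance.
p = one_tailed_p_value(5.0)
print(f"5-sigma one-tailed p-value ~ {p:.2e}")   # about 2.9e-07
print(f"or about 1 in {1 / p:,.0f}")
```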

Prof. Kovac said in a statement, “Detecting this signal is one of the most important goals in cosmology today.”

Unified theory

Despite many physicists calling the BICEP2 results the first direct evidence of gravitational waves, theoretical physicist Carlo Rovelli advised caution. "The first direct detection is not here yet," he tweeted, alluding to the fact that the scientists had found only the waves' signatures.

Scientists also measured the value of a parameter called r, the tensor-to-scalar ratio, which compares the strength of the primordial gravitational waves with that of the density fluctuations that seeded galaxies. That value has been found to be particularly high: 0.20 (+0.07/−0.05). It speaks to how powerful inflation was, and bears on how rapidly galaxies formed and why the universe is so large.

Now, astrophysicists from other observatories around the world will try to replicate BICEP2’s results. Also, data from the Planck telescope on the B-mode is due in 2015.

It is notable that gravitational waves are a feature of theories of gravitation, while cosmic inflation is a feature of quantum mechanics. Thus, the BICEP2 results show that the two previously separate frameworks can be combined at a fundamental level. This throws open the door for theoretical physicists and string theorists to explore a unified theory of nature in a new light.

Liam McAllister, a physicist from Cornell University, proclaimed, “In terms of impact on fundamental physics, particularly as a tool for testing ideas about quantum gravity, the detection of primordial gravitational waves is completely unprecedented.”

Go slow on the social media

In a matter of months, India will overtake the US as Facebook's largest user base. According to various sources, the social media site is currently adding about 40 million users from the country per year, while the US adds some 5 million in the same period. Such growth is not likely to leave Facebook much enthused, as the Indian horde is worth only about $400 million in ads, but for the impending Lok Sabha polls, the Californian giant spells many possibilities, of varying efficacy, for propaganda.

Almost all our political parties, including the Congress, BJP and AAP, are active on Facebook and Twitter. Among them, the BJP and AAP are the most active, if only because their anti-incumbent and evangelical content, respectively, is highly viral, reaching millions within minutes and, unlike with TV, with a shelf life of forever. Although 5.5-11.2% of all Facebook accounts, and 32-64% of the Twitter followers of Indian political leaders (according to a rudimentary analysis by The Hindu), are fake, that still leaves tens of millions of users who can be swayed by opinions disseminated on the web.

However, this is also why it is hard to say whether social media will inspire direct mobilization. Even though most of India's 18-24-year-olds could be on Facebook, Twitter and YouTube, we know little, quantitatively, about how articulation online translates into action offline. This is why surveys showing that certain constituencies harbor more Facebook users than the margins of victory in previous Assembly elections are only engaging in empirical speculation. The 2014 Lok Sabha polls could be our first opportunity to understand this mechanism of influence.

(These inputs were provided for a piece that appeared in The Hindu on March 17, 2014.)

The stuff we learn after a plane goes missing

(A version of this post, as written by me, first appeared in The Hindu science blog, The Copernican, on March 16, 2014.)

It's likely that many of you knew some or all of the following, but these are things I became aware of from reading news items and analyses of the missing Malaysia Airlines flight 370, currently presumed either hijacked, crashed into a large water body, or whatever the next plausible occurrence is. While some of them may not directly apply to the search for survivors or the carrier, all of them shine important and interesting light on how things work.

Ringing phones aren’t actually ringing. Yet. – After the relative of a passenger on board flight 370 called up the person’s phone, it started to ring. This was flashed on TV channels as proof of the plane still being intact, whether or not it was in the air. A couple hours later, some telecom experts wrote in that the first few rings you hear aren’t rings that the call’s receiver is hearing, too. Instead, those are the rings the network relays to you so you don’t cut the call while it looks for the receiver’s device.

Air-traffic controllers don’t always know where the plane is* – Because planes are flying at 35,000 feet, controllers don’t anticipate much to happen to them, and they’re almost always right. This is why, while cruising at that altitude, pilots don’t constantly buzz home to controllers about where their flight is, its altitude, its speed, etc. To be on the safe side, they buzz home over specific intervals, a process that’s automated on some modern models. Between these intervals, of course, the flight might just as well be blinking in and out of extra dimensions but no one is going to have an eye on it.

Radar that controllers have access to doesn't work so well beyond a range of 150-350 km** – If civilian aircraft are farther than this, they no longer show up as pings on the scanning screen. In fact, in another system, called automatic dependent surveillance-broadcast (ADS-B), a plane determines its own location using GPS and transmits it down to a controller. Here again, there's a distance limit of up to 300 or so km. Beyond this, crews communicate over high-frequency radio. Of course, this depends on the quality of the equipment, but it's useful to know such limitations exist.
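To make that idea concrete, here is a minimal, hypothetical Python sketch of the kind of self-reported position broadcast an ADS-B-like system relies on; the field names, values and the range check are illustrative assumptions, not the actual ADS-B message format.

```python
from dataclasses import dataclass

@dataclass
class PositionReport:
    """Simplified, hypothetical ADS-B-style broadcast: the aircraft reports its
    own GPS-derived position; ground stations only hear it when within range."""
    flight_id: str
    latitude: float      # degrees, from the aircraft's own GPS
    longitude: float     # degrees
    altitude_ft: float
    speed_knots: float

def receiver_hears(distance_km: float, max_range_km: float = 300.0) -> bool:
    # Beyond roughly 300 km (the figure quoted above), a ground receiver
    # simply never gets the broadcast and the aircraft drops off its screen.
    return distance_km <= max_range_km

report = PositionReport("XY123", 6.9, 103.6, 35000, 470)  # made-up values
print(receiver_hears(distance_km=250))  # True: within receiver range
print(receiver_hears(distance_km=420))  # False: off the grid for this receiver
```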

If a plane's communication systems have been disabled, there's no Plan B – There's radar, then radio, then GPS, then a fourth system where the aircraft's computers communicate via satellite with the airline's offices. The effectiveness of radar and radio is contingent on weather conditions. Beyond a particular altitude and, again, depending on the weather, GPS is capable of blinking out. The fourth system can be manually disabled. If a renegade technician on the flight knows these things and how to work them, he/she can take the flight off the grid.

For pilots, it’s aviate, navigate, and then communicate – If the flight is in some kind of danger, the pilot’s primary responsibility is to do those things necessary to tackle the threat, and try and get the carrier away from the danger area. Only then is he/she obligated to get in touch with the controllers.

The ocean is a LARGE place – Sure, we studied in school that the oceans cover 71% of Earth's surface and contain 1.3 billion cubic km of water, but those were just numbers – big numbers, but numbers nonetheless. I think our sense of bigness isn't reliant on numbers at all but only on physical experiences. I'm 6'4" tall, but you'll have to come stand next to me to understand how tall I really am. That said, I now quote former US Navy sailor Jim Wright (from his Facebook post):

… even when you know exactly, and I mean EXACTLY, where to look, it’s still extremely difficult to find scattered bits of airplane or, to be blunt, scattered bits of people in the water. As a navy sailor, I’ve spent days searching for lost aircraft and airmen, and even if you think you know where the bird went down, the winds and the currents can spread the debris across hundreds or even thousands of miles of ocean in fairly short order. No machine, no computer, can search this volume, you have to put human eyeballs on every inch of the search area. You have to inspect every item you come across – and the oceans of the world are FULL of flotsam, jetsam, debris, junk, trash, crap, bits, and pieces. Often neither the sea nor the weather cooperates, it is INCREDIBLY difficult to spot [an] item the size of a human being in the water, among the swells and the spray, even if you know exactly where to look – and the sea conditions in this part of the world are some of the worst, especially this time of year.

Mr. Wright goes on to write that should flight 370 have crashed into the Bay of Bengal, the South China Sea or wherever, its leaked fuel wouldn't be visible as an oil slick, for two reasons: first, high-grade aviation fuel evaporates very fast (if it hasn't already vaporized on its way down from the sky); second, given the size of the fuel tank, such a slick might cover only a few square kilometers, and on an ocean that's a blip. The current extended search area spans 30,000 sq. km.

Military threats in militarized zones are discerned by the ballistic trajectories of bodies – One of the simplest ways armored units know that what they're seeing in the sky is not a missile but a civilian aircraft is its trajectory – the shape of its path. Most missiles are ballistic, which means their trajectories look like upturned Us. Aircraft, on the other hand, fly in a straight line. I suppose this really is common sense, but it is good to know just what's keeping me from getting bombed out of the air should I fly over, say, the East China Sea…

The global positioning system doesn’t continuously relay the aircraft’s location to controllers – See * and **.

Smaller nations advance pilots with fewer flying hours than is the norm in bigger nations – According to a piece on CNN, one of flight 370’s two pilots had clocked only 2,763 flying hours as a pilot, and was “transitioning from flight simulator training to the Boeing 777-200ER”. The other pilot had a little over 18,000 hours under his belt. As CNN goes on to explain, smaller nations tend to advance pilots they think are very talented, farther than they could go in the same time in other countries, through intensive training programs. I couldn’t find anything substantive on the nature of these supposedly advanced programs, so I can’t comment further.

Pilot suicide – Okay, what the hell. Nobody wants a person at the controls who's expressed suicidal tendencies, and it's the airline's responsibility to treat or otherwise deal with such people. However, the moment you've said that, you realize how difficult such situations can be to predict, not to mention how much more difficult to prevent. A report by the US Federal Aviation Administration titled 'Aircraft-Assisted Pilot Suicides in the United States', from February 2014, describes eight case studies of flights whose pilots killed themselves by crashing the aircraft. Each study describes the pilot's behavior during the flight and is careful to note that no other electrical or mechanical failures were present. In the case of flight 370, of course, pilot suicide is just a theory.

The Boeing 777 is one safe carrier – Since its first flight in 1994, the Boeing 777-200ER (for ‘Extended Range’) had an estimated full loss equivalent (FLE) of 0.01 as of December 31, 2012, over 6.9 million flights. According to AirSafe.com, the FLE…

… is the sum of the proportions of passengers killed for each fatal event. For example, 50 out of 100 passengers killed on a flight is an FLE of 0.50, 1 of 100 would be a FLE of 0.01. The fatal event rate for a set of fatal events is found by dividing the total FLE by the number of flights in millions.
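As a quick sketch of how those two numbers combine, here is a small Python example following AirSafe.com's definition quoted above; the passenger counts in the example list are made up for illustration, while the 0.01 FLE and 6.9 million flights are the figures quoted for the 777-200ER.

```python
def full_loss_equivalent(fatal_events):
    """Sum of the proportion of passengers killed in each fatal event."""
    return sum(killed / aboard for killed, aboard in fatal_events)

def fatal_event_rate(fle, flights_millions):
    """Fatal events per million flights, per AirSafe.com's definition."""
    return fle / flights_millions

# Hypothetical example: one event killing 50 of 100 aboard, another killing 1 of 100.
events = [(50, 100), (1, 100)]
print(full_loss_equivalent(events))            # 0.51

# The 777-200ER's quoted numbers: an FLE of 0.01 over 6.9 million flights.
print(round(fatal_event_rate(0.01, 6.9), 4))   # ~0.0014 fatal events per million flights
```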

The same site also lists the 777-200ER as having the second lowest crash rate – 0.001 per million flights – of all time, among all models with 2 million flights or more, as of September, 2013. Only the Airbus A340 is better with a crash rate of 0, although it has clocked 4 million fewer flights (just saying).

Southeast Asia is a busy area for aviation – Between April 2012 and October 2013, the number of airline seats per week per Southeast Asian country grew by an average of 19.4%. In the same 18 months, the entire region's population grew by 6% (both numbers courtesy of the Center for Asia-Pacific Aviation). Then, of course, there's Singapore's Changi Airport. It's one of Asia's busiest, if not the world's, handling 6,100 flights a week. And it was in this jam-packed area that people were trying to look for one flight.

For more on how we can manage to lose a plane in 2014, check out my previous post Airplanes Can Still Go Missing.

Airplanes can still go missing

Airplanes are one of our largest modes of transportation in terms of physical size. With the exception of ships, airplanes have the highest carrying capacity, are quite environmentally disruptive while in operation, and are equipped with some of the most sophisticated positional tracking technologies.

Yet, one still went missing last week. Fourteen years into the 21st century, while the NSA threatens the privacy of global telecommunications, one airplane goes missing. I don't mean to trivialize the issue of Malaysia Airlines Flight 370 turning untraceable, but to point out that even though some of us are smart enough to build invisibility cloaks, we still have our problems.

I went around the web trying to understand why this was the case and found some interesting stuff. Even though #370 is only the third flight to go missing in the 21st century (and almost the 1,900th to have crashed), it is the 111th flight to do so since radio sets were first installed on airplanes in 1917. On average, that's a little more than one disappearance per year.

One reason finding missing airplanes is so difficult is multiplicity. Airplanes are each made up of thousands of components. When one component malfunctions, it can lead to a form of failure that's very different from what would happen when a different component malfunctions. Watch an episode of Air Crash Investigation on National Geographic if you don't believe me: airline investigators trying to figure out what exactly could have gone wrong often find the blame lies with small deviations from normal practices by the pilots or maintenance crews. I remember an episode titled Disaster on the Potomac, which aired in December 2013 and details how the 1982 Air Florida crash that killed 78 people was caused by a faulty de-icing procedure that skewed instrument readings in the cockpit.

On top of this, you have environmental factors to deal with. According to a piece by Jordan Golson in Wired on March 11, Col. J. Joseph, an aviation consultant, thinks that when planes break up at higher altitudes, the debris is likely to be moved around by stronger winds. Given that flight #370 was over 11 km up, Col. Joseph thinks windspeed could have been over 180 km/hr, enough to blow pieces out of any geographical context.

Things get worse if the plane crashes into the water. Consider the oft-quoted example of Air France Flight 447, which crashed into the Atlantic Ocean in 2009 with 228 people on board. Rescue missions took almost two days to find the first signs of its wreckage. Before that, in 2007, an Indonesian Boeing 737 crashed into the Makassar Strait, near Sulawesi. Its wreckage took 10 days to find. There are many other sad yet interesting examples.

According to a Wall Street Journal analytical piece by Daniel Michaels and John Ostrower on March 11, the search for #370 could be further hampered by the fact that the region it was traversing, Southeast Asia, is one of the busiest on the planet. Moreover, according to Golson, radar isn't good enough once the plane is farther than about 200 km from the nearest control tower, while precise GPS locations aren't relayed continuously by the pilots to air-traffic controllers—this is why we rely on a 'last known location', not a definitive 'last location'. At the same time, controllers don't panic when pilots don't ping back frequently because, as pilot Patrick Smith explains on his blog:

In an emergency, communicating with the ground is secondary to dealing with the problems at hand. As the old adage goes: you aviate, navigate, and communicate — in that order. And so, the fact that no messages or distress signals were sent by the crew is not surprising or an indicator of anything specific.

However, what's stranger about flight 370 is that it's a Boeing 777, which comes with an emergency locator that beeps out location signals for many days after a crash. Rescuers are yet to spot one in the area they're combing. So, as the search for a missing airplane drags on, what endures is our conviction that some trace of the vehicle will surface, accompanied by ever stronger scrutiny of what facts we manage to gather. (In the meantime, the Daily Mail has something about an aeronautical black hole you might want to read.)

Riffing off the Rift

At first glance, the Oculus Rift is an ingenious invention – not something you can look at and go, "Hey, how come I didn't think of that?" By letting its users 'step inside the game', the Rift holds enormous potential not only to herald the proverbial revolution due from a disruptive bit of technology in gaming but also to change what gaming itself means. Years of studies and discussions have converged on what gaming truly represents – a proof-of-work-based rewards system that leaves players emotionally fulfilled – and with the Rift, developers are for the first time truly equipped to explore more complex gaming constructs involving even more sophisticated rewards systems, including incorporating them into training programs and simulations. That's just at first glance. At second glance, however, what's even more remarkable is that the Rift involves nothing new – at least nothing disruptively new – apart from the particular way it's been assembled.

The device is a melange of components working in sync: motion sensors, gyroscopes, accelerometers, a processor, a pair of stereoscopic lenses, a specially designed set of goggles and, necessarily, a game. Essentially, using the Rift is like playing an FPS in a theater with 3D glasses on. The game, built on the versatile Unity game engine, is rendered by the processor on the display device – a stripped-down tablet will do – mounted on the front of the goggles, which are then strapped onto your head. The goggles are fitted with the stereoscopic lenses that give the illusion of depth necessary for stepping inside the game. Next, with the gaming controller in your hand, you navigate the game playing out in front of your eyes. A camera in front of you tracks your head movements and relays them to the processor, which uses the positional information to move your head inside the game. Suddenly, you're thinking, "Hey, how come I didn't think of that?" Here's a story by a colleague and me for The Hindu on how different developers have decided to take the Rift forward.
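As a rough illustration of the head-tracking loop described above, here is a minimal, hypothetical Python sketch: it takes yaw and pitch readings, of the kind a headset's sensors would supply, and turns them into the direction the in-game camera should face. The function name and the pretend sensor values are placeholders, not Oculus SDK calls.

```python
import math

def camera_direction(yaw_deg: float, pitch_deg: float):
    """Convert head yaw/pitch (in degrees) into a unit look-direction vector.
    Yaw is turning left/right; pitch is looking up/down."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

# One iteration of the (hypothetical) tracking loop: read the latest head
# orientation and point the in-game camera along it.
head_yaw, head_pitch = 30.0, -10.0   # pretend sensor readings
print(camera_direction(head_yaw, head_pitch))
```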

What is a fusion reaction?

The Copernican
February 21, 2014

Last week, the National Ignition Facility, USA, announced that it had crossed the first milestone in triggering a fusion reaction. But what is a fusion reaction? Here are some answers from Prof. Bora, which require only prior knowledge of high-school physics and chemistry. We'll start with the basics (my comments are in square brackets).

What is meant by a nuclear reaction?

A process in which two nuclei, or a nucleus and a subatomic particle, collide to produce one or more different nuclei is known as a nuclear reaction. It implies an induced change in at least one nucleus and does not apply to any form of radioactive decay.

What is the difference between fission and fusion reactions?

The main difference is that fission is the splitting of an atom into two or more smaller ones, while fusion is the fusing of two or more smaller atoms into a larger one. Both are energy-releasing reactions in which energy is liberated from the powerful bonds between the particles within the nucleus.

Which elements are permitted to undergo nuclear fusion?

Technically, any two light nuclei below iron [in the Periodic Table] can be used for fusion, although some nuclei are better than others when it comes to energy production. As in fission, the energy in fusion comes from the "mass defect" (loss in mass) due to the increase in binding energy [the energy that holds the particles of a nucleus together]. The greater the change in binding energy (from lower to higher binding energy), the more mass is lost and the more energy is released.
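To make the mass-defect arithmetic concrete, here is a small Python sketch for the deuterium-tritium reaction, the easiest fusion reaction to achieve in practice; the atomic masses and the 931.494 MeV per atomic mass unit conversion are standard reference values.

```python
# Energy released by D + T -> He-4 + n, computed from the mass defect.
# Masses in atomic mass units (u); 1 u of mass corresponds to 931.494 MeV.
U_TO_MEV = 931.494

m_deuterium = 2.014102
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_mev = mass_defect * U_TO_MEV

print(f"Mass defect: {mass_defect:.6f} u")
print(f"Energy released: {energy_mev:.1f} MeV per reaction")   # about 17.6 MeV
```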

What are the steps of a nuclear fusion reaction?

To create fusion energy, extremely high temperatures (of the order of 100 million degrees Celsius) are required to overcome the electrostatic force of repulsion between the light nuclei, popularly known as the Coulomb barrier [arising from the protons' positive charges]. Fusion can therefore occur for any two nuclei, provided the requirements on temperature, plasma density [the plasma being the superheated soup of charged particles] and confinement duration are met.
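As a rough illustration of why such temperatures are needed, here is a Python sketch that estimates the Coulomb barrier between a deuterium and a tritium nucleus and compares it with the typical thermal energy at 100 million kelvin. The nuclear radius formula (r = 1.2 A^(1/3) femtometres) and the constants are textbook approximations, and real fusion rates also depend on quantum tunnelling and the fastest particles in the plasma, so the comparison is only indicative.

```python
# Rough estimate: Coulomb barrier for D-T versus thermal energy at 1e8 K.
K_E2_MEV_FM = 1.44               # Coulomb constant times e^2, in MeV*femtometre
BOLTZMANN_KEV_PER_K = 8.617e-8   # Boltzmann constant in keV per kelvin

def nuclear_radius_fm(mass_number: int) -> float:
    return 1.2 * mass_number ** (1 / 3)   # empirical nuclear radius, femtometres

# Barrier height when the two nuclei (charge Z = 1 each) just touch.
separation_fm = nuclear_radius_fm(2) + nuclear_radius_fm(3)
barrier_kev = 1000 * K_E2_MEV_FM / separation_fm
print(f"Coulomb barrier ~ {barrier_kev:.0f} keV")          # a few hundred keV

# Average thermal energy per particle at 100 million kelvin.
thermal_kev = 1.5 * BOLTZMANN_KEV_PER_K * 1e8
print(f"Thermal energy at 1e8 K ~ {thermal_kev:.0f} keV")  # only about 13 keV
```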

Under what conditions will a fusion chain-reaction occur?

When, say, a deuterium (D) and tritium (T) plasma is compressed to a very high density, the particles resulting from nuclear reactions give their energy mostly to the D and T ions, through nuclear collisions, rather than to electrons as usual. Fusion can thus proceed as a chain reaction, without the need for thermonuclear temperatures.

What are the natural forces at play during nuclear fusion?

The gravitational forces in stars compress matter, mostly hydrogen, to very large densities and temperatures at the centers of the stars, igniting the fusion reaction. The same gravitational field balances the enormous thermal expansion forces, maintaining the thermonuclear reactions in a star like the Sun at a controlled and steady rate.

In the laboratory, the gravitational force is replaced by magnetic forces in magnetic confinement systems, while in inertial confinement systems radiation pressure compresses the fuel, generating even higher pressures and temperatures and triggering a fusion reaction.

What approaches have human attempts to achieve nuclear fusion taken?

Two main approaches, namely magnetic confinement and inertial confinement, have been attempted to achieve fusion.

In the magnetic confinement scheme, various magnetic ‘cages’ have been used, the most successful being the tokamak configuration. Here, magnetic fields are generated by electric coils. Together with the current due to charged particles in the plasma, they confine the plasma into a particular shape. It is then heated to an extremely high temperature for fusion to occur.

In the inertial confinement scheme, extremely high-power lasers are concentrated on a tiny sphere consisting of the D-T mixture, creating tremendous pressure and compression. This generates even higher pressures and temperatures, creating a conducive environment for a fusion reaction to occur.

To create fusion energy in both the schemes, the reaction must be self-sustaining.

What are the hurdles that must be overcome to operate a working nuclear fusion power plant to generate electricity?

Fusion power emerges in the form of fast neutrons released with an energy of 14 MeV [although MeV is a unit of energy, it also denotes a certain particle mass according to mass-energy equivalence; for comparison, a proton at rest has an energy of 938.3 MeV]. This energy will be converted to thermal energy, which would then be converted to electrical energy. The hurdles lie in developing special materials capable of withstanding extremely high heat flux in a neutron environment. The reliability of operating fusion reactors is also a big challenge.

What kind of waste products/emissions would be produced by a fusion power plant?

All the plasma-facing components are bombarded by neutrons, which will make the first layers of the metallic confinement radioactive for a short period. The confinement will be made of different materials, and materials scientists are working to develop special grades of steel that are less affected when struck by neutrons. All said, such irradiated components will have to be stored for at least 50 years. The extent of contamination should come down with the newer structural materials.

Fusion reactions are intrinsically safe as the reaction terminates itself in the event of the failure of any sub-system.

India is one of the seven countries committed to the ITER program in France. Could you tell us what its status is?

The ITER project has gradually moved into its construction phase; fusion is no longer a dream but a reality. Construction at the site is progressing rapidly, and various critical components are being fabricated by the seven parties through their domestic agencies.

The first plasma is expected at the end of 2020, as per the 2010 baseline. Indian industries are also involved in producing various subsystems. R&D and prototyping of many of the high-tech components are progressing as per plan. India is committed to delivering its share on time.

Type 1a supernova spotted in M82

(A version of this piece appeared on The Hindu, Chennai, website on January 22 as written by me.)

A Type 1a supernova was spotted a few hours ago by stargazers in the starburst galaxy M82, which is only 11.4 million light-years from Earth (here's an interactive map and a helpful sky-chart). This is the closest such supernova to have been detected since 1972, and it is poised to give astronomers and cosmologists some invaluable insight into how such stellar explosions pan out, and into what we can learn from them about neutrinos, gamma rays and dark energy.

See the bright, blinking spot of light on the galaxy’s ‘lower’ half? That’s your SN1a. Animation by E. Guido, N. Howes, M. Nicolini

The supernova is a Type 1a supernova (SN1a), which means it’s not the explosion that happens when a star runs out of fuel and blows itself apart. Instead, it’s what happens when a white dwarf pulls in too much material from a nearby star and blows itself apart—having bitten off more than it could chew.

That M82 is a starburst galaxy means it's rapidly producing stars. It also means many of its stars are reaching the ends of their lives, dying either as Type 2 supernovae (the ran-out-of-fuel kind) or as Type 1. The one that's gone off now (i.e. so many millions of years ago) has gone off as a Type 1a, and that's a good thing because we haven't spotted a Type 1a this close since 1972.

When the explosion releases light, the light doesn't immediately start its journey and head straight for Earth. Instead, it gets trapped behind the matter thrown out by the explosion, and is delayed. The 'ghost particles' called neutrinos, which can pass through matter almost undetected, get a headstart: they reach us before light from the explosion does.

However, a Type 1a supernova produces far fewer neutrinos than a Type 2 does, so while the neutrinos flying our way will still be valuable, they might not be valuable enough to study a supernova with. On the other hand, the M82 SN1a could be our big chance to study supernova-origin gamma rays in the best detail in more than four decades.

However, since we didn't have detectors trained on M82 for neutrinos in particular, how do we know when that white dwarf blew up? We measure how the supernova's brightness varies over time. Because a Type 1a's brightness varies in a consistent, well-established pattern, that measurement tells us how intrinsically bright it is, and hence how far away it is: 11.4 million light-years, which means the thing blew up 11.4 million years ago. This is also why these exploding white dwarfs serve as cosmic candlesticks across the universe: astronomers use them to judge the distances of faraway objects.
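Here is a minimal Python sketch of that standard-candle logic, assuming the textbook distance-modulus relation and a typical peak absolute magnitude of about -19.3 for a Type 1a; the apparent magnitude plugged in is only an illustrative number, not a measurement of the M82 event.

```python
def distance_light_years(apparent_mag: float, absolute_mag: float = -19.3) -> float:
    """Distance from the distance-modulus relation m - M = 5 * log10(d / 10 pc)."""
    distance_parsecs = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return distance_parsecs * 3.262   # 1 parsec is about 3.262 light-years

# Illustrative: a Type 1a that peaks at apparent magnitude 8.4 would sit
# roughly 11 million light-years away, the right ballpark for M82.
print(f"{distance_light_years(8.4):.2e} light-years")
```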

In fact, white dwarfs did play an important role in astronomers discovering that the universe was expanding at an accelerating rate due to dark energy. Paraphrasing astronomer Katharine Mack’s tweet: “With a better estimate of the distance [as judged from their brightness], we get a better link between the distance and the universe’s expansion.”

M82's relative closeness is useful because the supernova's light reaches us carrying a lot more information, having been less adulterated by its long passage through space. In fact, according to astronomer Daniel Fischer, the supernova has been going on for a full week now and was missed by the bigger-budget telescopes because it was, and I quote, 'too bright'. As Brad Tucker, an astronomer at Berkeley, tweeted:

https://twitter.com/btucker22/status/425989187188187137

So, had it not been for the amateur astronomers who made this remarkable observation, we wouldn't have spotted this beauty. Already, according to Skymania's Paul Sutherland, astronomers believe they've caught this supernova early in its act and think it could brighten even further.

This particular find was made by Russian amateur astronomers on January 22, and later confirmed by multiple sources. In fact, M82-SN1a seems to have appeared in the photographs taken by noted Japanese amateur astronomer Koichi Itagaki on January 14 itself (beating Patrick Wiggins by a day). And if you’re interested in reporting such discoveries, check this page out. If you want to keep up with the social media conversation over M82, follow @astrokatie. She’s going nuts (in a good way).

Itagaki’s photos of M82

A useful book to have around

India's Rise as a Space Power is a book by Prof. Udupi Ramachandra Rao, former Chairman of ISRO (1984-1994), that provides some useful historical context for the space research organization from a scientist's perspective, not an administrator's.

Through it, Prof. Rao talks about how our space program was carefully crafted around a series of satellites and launch vehicles, and how each of them has contributed to where the organization is today: an immutable symbol of power in the Third World and India's pride. He starts with the foundation of ISRO, moves on to the visions of Vikram Sarabhai and Satish Dhawan, then tells the story of Aryabhata, our first satellite, followed by Bhaskara I and II, the IRS series, the INSAT program, the ASLV, PSLV and GSLV, and finally the contributions of all these instruments to the Indian economy. The period in which Prof. Rao served as Chairman coincided with an acceleration of innovation at ISRO: when he assumed the helm, the IRS was being developed; when he left, development of the cryogenic engine was underway.

However, India's Rise… leaves out the aspect of his work that he was best positioned to discuss: politics. The Indian polity is heavily invested in ISRO and constantly looks to it for solutions to a diverse array of problems, from telecommunications to meteorology. While ISRO may never have struggled for government funding, its run-ins with the 11 governments of its 45-year existence would have made for a telling story about the Indian government's association with one of its most successful scientific and technological bodies. Where Prof. Rao does comment, it is usually on one of two things: why scientists make better leaders of organizations like ISRO than administrators do, or how foreign governments floated or sank technology-transfer deals with India.

… Mr. T.N. Seshan, who was the Additional Secretary in the Department of Space, a senior member of the negotiation team deputed under my leadership, made this trip [to Glavkosmos, a Soviet company that was to equip and provide the launcher for the first-generation Indian Remote Sensing satellites] unpleasant by throwing up tantrums just because he was not the leader of the Indian delegation. Subsequently, Prof. Dhawan had to tell him in no uncertain terms that any high-level delegation such as the above would only be led by a scientist and not an administrator, a healthy practice followed in [the Dept. of Space] from the very beginning. (p. 124)

This aspect notwithstanding, India's Rise… is a useful book to have around now, when ISRO seems poised to enter its next era: the successful use of its cryogenic engines to lift heavier payloads into higher orbits. It contains a lot of interesting information about the different programs, and the attention to detail is distributed evenly, if sometimes unnecessarily. There is also an accompanying collection of possibly rare photographs; my favorite shows a rocket's nose cone being transported by bicycle to the launch pad. Overall, the book makes for excellent reference, and thanks to Prof. Rao's scientific background, the technical concepts are soundly represented, free of misrepresentation. Here's my review of it for The Hindu.

And the GSLV flew!

The Copernican
January 6, 2014

Congratulations, ISRO, for successfully launching the GSLV-D5 (and the GSAT-14 satellite with it) on January 5. Even as I write this, ISRO has put out an update on its website: “First orbit raising operation of GSAT-14 is successfully completed by firing the Apogee Motor for 3,134 seconds on Jan 06, 2014.”

With this launch comes the third success in eight launches of the GSLV program since 2001, and the first success with the indigenously developed cryogenic rocket engine. As The Hindu reported, the use of this technology widens India's launch capability to include 2-2.5-tonne satellites. It positions India as a cost-effective port for launching heavier satellites, not just the lighter ones it handled before.

The GSLV-D5 (which stands for 'developmental flight 5') is a variant of the GSLV Mark II rocket, the successor to the GSLV Mark I. Both rockets have three stages: solid, liquid and cryogenic. The solid stage carries the design heritage of the American Nike-Apache rocket; the liquid stage, of the French Viking engine. The third, cryogenic upper stage was developed at the Liquid Propulsion Systems Centre, Tamil Nadu—ISRO's counterpart of NASA's JPL.

There is a significant difference in capability based on which engines are used. ISRO's other, more successful launch vehicle, the Polar Satellite Launch Vehicle (PSLV), uses four stages, alternating solid and liquid ones. Its payload capacity to the geostationary transfer orbit (GTO), from which the Mars Orbiter Mission was launched, is 1,410 kg. With the cryogenic engine, the GSLV's capacity to the same orbit is 2,500 kg. Being able to lift more equipment means India will be able to launch more sophisticated instruments in the future.

The better engine

The cryogenic engine’s complexity resides in its ability to enhance the fuel’s flow through the engine.

An engine’s thrust—its propulsive force—is higher if the fuel flows faster through it. Solid fuels don’t flow, but they let off more energy when burnt than liquid fuels. Gaseous fuels barely flow and have to be stored in heavy, pressurised containers.

Liquid fuels flow, have higher energy density than gases, and they can be stored in light tanks that don’t weigh the rocket down as much. The volume they occupy can be further reduced by pressurising them. Recall that the previous launch attempt of the GSLV-D5, in August 2013, was called off 74 minutes before take-off because fuel had leaked from the liquid stage during the pre-pressurisation phase.

Even so, there seems to be no reason to use gaseous fuels. However, when hydrogen burns in the presence of oxygen, with both gases at normal pressure and temperature, the energy released provides an effective exhaust velocity of 4.4 km/s—one of the highest known (p. 23, 'Cosmic Perspectives in Space Physics', S. Biswas, 2000). It was to use these gases more effectively that cryogenic engines were developed.
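To see why a high exhaust velocity matters so much, here is a small Python sketch using the standard Tsiolkovsky rocket equation; the 3.0 km/s comparison value and the mass ratio of 5 are arbitrary illustrative assumptions, not GSLV figures.

```python
import math

def delta_v(exhaust_velocity_m_s: float, initial_mass: float, final_mass: float) -> float:
    """Tsiolkovsky rocket equation: the velocity change a stage can deliver."""
    return exhaust_velocity_m_s * math.log(initial_mass / final_mass)

# Compare the hydrogen-oxygen exhaust velocity (4.4 km/s) with an assumed
# 3.0 km/s for a conventional liquid propellant, at the same illustrative
# mass ratio of 5 (i.e. 80% of the stage's initial mass is propellant).
for v_e in (3000.0, 4400.0):
    dv = delta_v(v_e, initial_mass=5.0, final_mass=1.0)
    print(f"v_e = {v_e / 1000:.1f} km/s -> delta-v = {dv / 1000:.2f} km/s")

# Specific impulse is another common way of quoting the same figure:
print(f"Isp at 4.4 km/s ~ {4400 / 9.81:.0f} seconds")
```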

In a cryogenic engine, the gases are cooled to very low temperatures, at which point they become liquids, thereby acquiring the benefits of liquid fuels as well. However, not all gases are considered for use. Consider this excerpt from a NASA report written in the 1960s:

A gas is considered to be cryogen if it can be changed to a liquid by the removal of heat and by subsequent temperature reduction to a very low value. The temperature range that is of interest in cryogenics is not defined precisely; however, most researchers consider a gas to be cryogenic if it can be liquefied at or below -240 degrees fahrenheit [-151.11 degrees celsius]. The most common cryogenic fluids are air, argon, helium, hydrogen, methane, neon, nitrogen and oxygen.

The difficulties arose from accommodating tanks of super-cold liquid propellants—which include both the fuel and the oxidiser—inside a rocket engine. The liquefaction temperature of hydrogen is 20 kelvin, just above absolute zero; that of oxygen, about 90 kelvin.

Chain of problems

For starters, cryopumps are used to trap the gases and cool them. Then, special pumps called turbopumps are required to move the propellants into the combustion chamber at high flow rates and pressures. Next, relatively expensive igniters are required to set off combustion, which also has to be computer-controlled to keep the propellants from burning off too soon. And so forth.

Because cryogenic technology drove advancements in one area of the propulsion system, other areas also required commensurate upgrades. Space engineers learnt many lessons from the American Saturn launch vehicles, whose engines, advanced for their time, were born of cryogenic technology. They flew between 1961 and 1975.

In the book ‘Rocket Propulsion Elements’ (2010) by George Sutton and Oscar Biblarz, some other disadvantages of using cryogenic propellants are described (p. 697):

Cryogenic propellants cannot be used for long periods except when tanks are well insulated and escaping vapours are recondensed. Propellant loading occurs at the launch stand or test facility and requires cryogenic propellant storage facilities.

With cryogenic liquid propellants there is a start delay caused by the time needed to cool the system flow passage hardware to cryogenic temperatures. Cryogenically cooled fluids also continuously vaporise. Moreover, any moisture in the same tank could condense as ice, adulterating the fluid.

It was in simultaneously overcoming all these issues, with no help from other space-faring agencies, that ISRO took time. Now that the Mark II has been successfully launched, the organisation can set its eyes on loftier goals—such as successfully launching the next, mostly different variant of the GSLV: the Mark III, which is projected to have a payload capacity of 4,500-5,000 kg to GTO.

While we are some way off from considering the GSLV for manned missions, which require mastery of re-entry technology and spaceflight survival, the GSLV Mark III, if successful, could make India an invaluable hub for launching heavier satellites at lower costs than ESA's Ariane program, which India has used in lieu of the GSLV.

Good luck, ISRO!