The stuff we learn after a plane goes missing

(A version of this post, as written by me, first appeared in The Hindu science blog, The Copernican, on March 16, 2014.)

It’s likely many of you knew some or all of the following, but these are things I became aware of only from reading news items and analyses of the missing Malaysia Airlines flight 370, currently presumed hijacked, crashed into a large water body, or whatever the next plausible explanation is. While some of them may not directly apply to the search for survivors or the aircraft, all of them shed important and interesting light on how things work.

Ringing phones aren’t actually ringing. Yet. – After a relative of a passenger on board flight 370 called the person’s phone, it started to ring. This was flashed on TV channels as proof that the plane was still intact, whether or not it was in the air. A couple of hours later, some telecom experts wrote in to say that the first few rings you hear aren’t rings the call’s receiver is hearing, too. Instead, they are the rings the network relays to you so you don’t cut the call while it looks for the receiver’s device.

Air-traffic controllers don’t always know where the plane is* – Because planes fly at 35,000 feet, controllers don’t anticipate much happening to them, and they’re almost always right. This is why, while cruising at that altitude, pilots don’t constantly buzz home to controllers about where their flight is, its altitude, its speed, etc. To be on the safe side, they buzz home at specific intervals, a process that’s automated on some modern models. Between these intervals, of course, the flight might just as well be blinking in and out of extra dimensions, but no one is going to have an eye on it.

The radar that controllers have access to doesn’t work so well beyond a range of 150-350 km** – If civilian aircraft are farther out than this, they no longer show up as pings on the scanning screen. In fact, in another system, called automatic dependent surveillance-broadcast (ADS-B), a plane determines its location based on GPS and transmits it down to a controller. Here again, there’s a distance limit of up to 300 or so km. Beyond this, pilots and controllers communicate over high-frequency radio. Of course, this depends on the quality of equipment, but it’s useful to know such limitations exist.

If a plane’s communication systems have been disabled, there’s no Plan B – There’s radar, then radio, then GPS, then a fourth system in which the aircraft’s computers communicate via satellite with the airline’s offices. The effectiveness of radar and radio is contingent on weather conditions. Beyond a particular altitude and, again, depending on the weather, GPS is capable of blinking out. The fourth system can be manually disabled. If a renegade technician on the flight knows these things and how to work them, he/she can take the flight off the grid.

For pilots, it’s aviate, navigate, and then communicate – If the flight is in some kind of danger, the pilot’s primary responsibility is to do whatever is necessary to tackle the threat and try to get the aircraft away from the danger area. Only then is he/she obligated to get in touch with the controllers.

The ocean is a LARGE place – Sure, we studied in school that the oceans cover 71% of Earth’s surface and contain 1.3 billion cubic km of water, but those were just numbers – big numbers, but numbers nonetheless. I think our sense of bigness isn’t built on numbers so much as on physical experience. I’m 6’4″ tall, but you’ll have to come stand next to me to understand how tall I really am. That said, I now quote former US Navy sailor Jim Wright (from his Facebook post):

… even when you know exactly, and I mean EXACTLY, where to look, it’s still extremely difficult to find scattered bits of airplane or, to be blunt, scattered bits of people in the water. As a navy sailor, I’ve spent days searching for lost aircraft and airmen, and even if you think you know where the bird went down, the winds and the currents can spread the debris across hundreds or even thousands of miles of ocean in fairly short order. No machine, no computer, can search this volume, you have to put human eyeballs on every inch of the search area. You have to inspect every item you come across – and the oceans of the world are FULL of flotsam, jetsam, debris, junk, trash, crap, bits, and pieces. Often neither the sea nor the weather cooperates, it is INCREDIBLY difficult to spot [an] item the size of a human being in the water, among the swells and the spray, even if you know exactly where to look – and the sea conditions in this part of the world are some of the worst, especially this time of year.

Mr. Wright goes on to write that should flight 370 have crashed into the Bay of Bengal, the South China Sea or wherever, its leaked fuel wouldn’t exactly be visible as an oil slick, for two reasons: first, high-grade aircraft fuel evaporates really fast (if it hasn’t already been vaporized on its way down from the sky); second, given the size of the fuel tank, such a slick might cover a few square kilometers, which on an ocean is a blip. The current extended search area spans 30,000 sq. km.
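To put that “blip” in perspective, here’s a quick back-of-the-envelope calculation. The slick size below is an assumed, purely illustrative figure; only the 30,000 sq. km search area comes from the text above.

```python
# A rough sense of scale for the 'blip' claim above. The slick size is an assumed,
# illustrative figure; only the 30,000 sq. km search area is quoted in the post.
slick_km2 = 5.0            # hypothetical fuel slick: 'a few square kilometers'
search_area_km2 = 30_000   # the extended search area quoted above

fraction = slick_km2 / search_area_km2
print(f"slick covers {100 * fraction:.3f}% of the search area")   # ~0.017%
```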

Military threats in militarized zones are discerned by the ballistic trajectories of bodies – One of the simplest ways armed units can tell that what they’re seeing in the sky is a civilian aircraft and not a missile is its trajectory – the shape of its path. Most missiles are ballistic, which means their trajectories look like upturned Us. Aircraft, on the other hand, fly in more or less straight lines. I suppose this really is common sense, but it is good to know just what’s keeping me from getting bombed out of the air should I fly over, say, the East China Sea…

The global positioning system doesn’t continuously relay the aircraft’s location to controllers – See * and **.

Smaller nations advance pilots with fewer flying hours than is the norm in bigger nations – According to a piece on CNN, one of flight 370’s two pilots had clocked only 2,763 flying hours as a pilot, and was “transitioning from flight simulator training to the Boeing 777-200ER”. The other pilot had a little over 18,000 hours under his belt. As CNN goes on to explain, smaller nations tend to advance pilots they think are very talented, farther than they could go in the same time in other countries, through intensive training programs. I couldn’t find anything substantive on the nature of these supposedly advanced programs, so I can’t comment further.

Pilot suicide – Okay, what the hell. Nobody wants a person at the controls who’s expressed suicidal tendencies, and it’s the airline’s responsibility to treat or otherwise deal with such people. However, the moment you’ve said that, you realize how difficult such situations can be to predict, not to mention how much more difficult to prevent. A report by the US Federal Aviation Administration titled ‘Aircraft-Assisted Pilot Suicides in the United States’, from February 2014, describes eight case studies of flights whose pilots killed themselves by crashing the aircraft. Each study describes the pilot’s behavior over the flight’s duration and is careful to note that no electrical/mechanical failures were present. In the case of flight 370, of course, pilot suicide is just one theory.

The Boeing 777 is one safe aircraft – Since its first flight in 1994, the Boeing 777-200ER (for ‘Extended Range’) had an estimated full loss equivalent (FLE) of 0.01 as of December 31, 2012, over 6.9 million flights. According to AirSafe.com, the FLE…

… is the sum of the proportions of passengers killed for each fatal event. For example, 50 out of 100 passengers killed on a flight is an FLE of 0.50, 1 of 100 would be a FLE of 0.01. The fatal event rate for a set of fatal events is found by dividing the total FLE by the number of flights in millions.

The same site also lists the 777-200ER as having the second-lowest crash rate – 0.001 per million flights – of all time among models with 2 million flights or more, as of September 2013. Only the Airbus A340 does better with a crash rate of 0, although it has clocked 4 million fewer flights (just saying).
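If AirSafe.com’s definition above reads a little densely, here is a minimal worked example of the arithmetic. The two events are invented purely to show the calculation (they are in fact the examples in the quoted definition); only the 6.9-million-flights figure is from this post.

```python
# A toy illustration of the FLE and fatal-event-rate arithmetic, following the
# AirSafe.com definition quoted above. The two events are hypothetical examples.
events = [          # proportion of passengers killed in each fatal event
    50 / 100,       # e.g. 50 of 100 on board killed -> 0.50
    1 / 100,        # e.g. 1 of 100 on board killed  -> 0.01
]
fle = sum(events)               # full loss equivalent
flights_millions = 6.9          # flights flown by the type, in millions (from the post)

rate = fle / flights_millions   # fatal event rate per million flights
print(f"FLE = {fle:.2f}, rate = {rate:.3f} per million flights")
```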

Southeast Asia is a busy area for aviation – Between April 2012 and October 2013, the number of seats per week per Southeast Asian country grew by an average of 19.4%. In the same 18 months, the entire region’s population grew by 6% (both numbers courtesy of the Centre for Asia Pacific Aviation). Then, of course, there’s Singapore’s Changi Airport. It’s one of Asia’s busiest, if not the world’s, handling 6,100 flights a week. And it was in this jam-packed area that people were trying to look for one flight.

For more on how we can manage to lose a plane in 2014, check out my previous post Airplanes Can Still Go Missing.

Airplanes can still go missing

Airplanes are one of our largest modes of transportation in terms of physical size. With the exception of ships, airplanes have the highest carrying capacity, are quite environmentally disruptive while in operation, and are equipped with some of the most sophisticated positional tracking technologies.

Yet, one still went missing last week. Fourteen years into the 21st century, while the NSA threatens the privacy of global telecommunications, one airplane goes missing. I don’t mean to trivialize the issue of Malaysia Airlines flight 370 turning untraceable, only to note that even though some of us are smart enough to build invisibility cloaks, we still have our problems.

I went around the web trying to understand why this was the case and found some interesting stuff. Even though #370 was only the third flight to go missing in the 21st century (and almost the 1,900th to have crashed), it is the 111th flight to do so since radio-sets were first installed on airplanes in 1917. On average, that’s a little more than one disappearance per year.

One reason finding missing airplanes is so difficult is multiplicity. Airplanes are made up of thousands of components each. When one component malfunctions, it could lead to a form of failure that’s very different from what would happen when a different component malfunctions. Watch an episode of Air Crash Investigation on National Geographic if you don’t believe me—airline investigators trying to figure out what exactly could have gone wrong often find the blame lies with small deviations from normal practices by the pilots or maintenance crews. I remember an episode titled Disaster on the Potomac, which aired in December 2013 and details how the 1982 Air Florida crash that killed 78 people was due to a faulty de-icing procedure that skewed instrument readings in the cockpit.

On top of this, you have environmental factors to deal with. According to a piece by Jordan Golson in Wired on March 11, Col. J. Joseph, an aviation consultant, thinks that when planes break up at higher altitudes, the debris is likely to be moved around by stronger winds. Given that flight #370 was over 11 km up, Col. Joseph thinks wind speeds could have been over 180 km/hr, enough to blow pieces out of any geographical context.

Things get worse if the plane crashes into the water. Consider the oft-quoted example of Air France flight 447, which crashed into the Atlantic Ocean in 2009 with 228 people on board. Rescue missions took almost two days to find the first signs of its wreckage. Before that, in 2007, an Indonesian Boeing 737 crashed into the Makassar Strait near Sulawesi. Its wreckage took 10 days to find. There are many other sad yet interesting examples.

According to a Wall Street Journal analytical piece by Daniel Michaels and John Ostrower on March 11, the search for #370 could be further hampered by the fact that the region it was traversing, Southeast Asia, is one of the busiest on the planet. Moreover, according to Golson, radar isn’t good enough once the plane is farther than about 200 km from the nearest control tower, while precise GPS locations aren’t relayed continuously by the pilots to air-traffic controllers—this is why we rely on a ‘last known location’, not a definitive ‘last location’. At the same time, controllers don’t panic when pilots don’t ping back frequently because, according to pilot Patrick Smith’s blog:

In an emergency, communicating with the ground is secondary to dealing with the problems at hand. As the old adage goes: you aviate, navigate, and communicate — in that order. And so, the fact that no messages or distress signals were sent by the crew is not surprising or an indicator of anything specific.

However, what’s stranger about flight 370 is that it’s a Boeing 777, which comes with an emergency locator that beeps out location signals for many days after a crash. Rescuers are yet to spot one in the area they’re combing. So, as the search for a missing airplane drags on, all that lasts is our conviction that some trace of the vehicle will surface, accompanied by stronger and stronger scrutiny of what facts we manage to gather. (In the meantime, the Daily Mail has something about an aeronautical black hole you might want to read.)

Riffing off the Rift

At first glance, the Oculus Rift is an ingenious invention – not something you can look at and go “Hey, how come I didn’t think of that?” By letting its users ‘step inside the game’, the Rift holds enormous potential not only to herald the proverbial revolution due from a disruptive bit of technology in gaming but to change what gaming itself means. Building on many studies and discussions over the years about what gaming truly represents – a proof-of-work-based reward system that leaves players emotionally fulfilled – developers are, with the Rift, for the first time truly equipped to explore more complex gaming constructs involving even more sophisticated reward systems, including incorporating them into training programs and simulations. That’s just at first glance. At second glance, however, what’s even more remarkable is that the Rift involves nothing new – at least nothing that’s disruptively new – apart from the particular way it’s been assembled.

The device is a melange of components working in sync: motion sensors, gyroscopes, accelerometers, a processor, a pair of stereoscopic lenses, a specially designed set of goggles and, necessarily, a game. Essentially, using the Rift is like playing an FPS in a theater with 3D glasses on. The game, built on the versatile Unity game engine, is rendered by the processor on the display device – a stripped-down tablet will do – mounted on the front of the goggles, which are then strapped onto your head. These goggles are fitted with the stereoscopic lenses that give the illusion of depth necessary to stepping inside the game. Next, with the gaming controller in your hand, you navigate the game you’re seeing play out in front of your eyes. A camera in front of you tracks your head movements and relays them to the processor, which uses the positional information to move your head inside the game. Suddenly, you’re thinking “Hey, how come I didn’t think of that?” Here’s a story by a colleague and me for The Hindu on how different developers have decided to take the Rift forward.
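For a rough sense of the tracking loop described above, here is a minimal sketch: the tracker reports an orientation, and the renderer applies the same rotation to the in-game camera so the view follows the player’s head. The function and numbers are invented for illustration; this is not the Rift SDK’s actual API.

```python
# Minimal head-tracking sketch: rotate the in-game camera by the yaw angle the
# head tracker reports, so the rendered view follows the player's head.
import math

def yaw_rotation(forward, yaw_deg):
    """Rotate the camera's forward vector (x, z) about the vertical axis."""
    yaw = math.radians(yaw_deg)
    x, z = forward
    return (x * math.cos(yaw) - z * math.sin(yaw),
            x * math.sin(yaw) + z * math.cos(yaw))

camera_forward = (0.0, 1.0)   # looking straight 'into' the scene
head_yaw = 30.0               # the tracker says the head turned 30 degrees

camera_forward = yaw_rotation(camera_forward, head_yaw)
print(camera_forward)         # the game now renders the view 30 degrees over
```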

What is a fusion reaction?

The Copernican
February 21, 2014

Last week, the National Ignition Facility, USA, announced that it had crossed the first threshold on the way to triggering a fusion reaction. But what is a fusion reaction? Here are some answers from Prof. Bora, which require no more than a prior knowledge of high-school physics and chemistry. We’ll start with the basics (my comments are in square brackets).

What is meant by a nuclear reaction?

A process in which two nuclei, or a nucleus and a subatomic particle, collide to produce one or more different nuclei is known as a nuclear reaction. It implies an induced change in at least one nucleus and does not apply to spontaneous radioactive decay.

What is the difference between fission and fusion reactions?

The main difference between fusion and fission reactions is that fission is the splitting of an atom into two or more smaller ones while fusion is the fusing of two or more smaller atoms into a larger one. They are two different types of energy-releasing reactions in which energy is released from the powerful bonds between the particles within the nucleus.

Which elements are permitted to undergo nuclear fusion?

Technically, any two light nuclei below iron [in the Periodic Table] can be used for fusion, although some nuclei are better than others when it comes to energy production. As in fission, the energy in fusion comes from the “mass defect” (loss in mass) due to the increase in binding energy [the energy that holds subatomic particles inside a nucleus together]. The greater the change in binding energy (from lower binding energy to higher binding energy), the more mass is lost and the more energy is released.
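To see the mass-defect bookkeeping in numbers, here is a minimal worked sketch using the deuterium-tritium reaction (the fuel taken up later in the interview). The atomic masses are standard textbook values, not figures quoted in this post.

```python
# Back-of-the-envelope check of the 'mass defect' idea, for the D-T reaction:
# D + T -> He-4 + n. Masses are standard values in unified atomic mass units (u).
m_D, m_T = 2.014102, 3.016049     # deuterium, tritium
m_He4, m_n = 4.002602, 1.008665   # helium-4, neutron

mass_defect = (m_D + m_T) - (m_He4 + m_n)   # mass 'lost' in the reaction
energy_MeV = mass_defect * 931.494          # 1 u of mass is equivalent to 931.494 MeV

print(f"mass defect = {mass_defect:.6f} u -> {energy_MeV:.1f} MeV per reaction")
# ~17.6 MeV, the textbook figure for D-T fusion
```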

What are the steps of a nuclear fusion reaction?

To create fusion energy, extremely high temperatures (100 million degrees Celsius) are required to overcome the electrostatic force of repulsion that exists between the light nuclei, popularly known as the Coulomb barrier [which arises from the protons’ positive charges]. Fusion, therefore, can occur for any two nuclei provided the required temperature, plasma density [the plasma being the superheated soup of charged particles] and confinement duration are met.
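For a sense of the scales involved, here is a hedged back-of-the-envelope calculation of that barrier, and of the thermal energy corresponding to the 100-million-degree figure quoted above. The constants are standard; the 3-femtometre separation is an assumed illustrative value.

```python
# Rough estimate of the Coulomb barrier between two singly charged light nuclei,
# and of the thermal energy scale at 100 million kelvin.
k_e2 = 1.44          # Coulomb constant times e^2, in MeV * femtometre
r = 3.0              # assumed separation at which the nuclear force takes over, fm
barrier_MeV = k_e2 * (1 * 1) / r          # both D and T carry charge +1
print(f"Coulomb barrier ~ {barrier_MeV*1000:.0f} keV")   # a few hundred keV

k_B = 8.617e-5       # Boltzmann constant, eV per kelvin
thermal_keV = k_B * 1e8 / 1000
print(f"thermal energy scale at 10^8 K ~ {thermal_keV:.1f} keV")
# Far below the barrier; fusion still proceeds thanks to quantum tunnelling and
# the high-energy tail of the particles' speed distribution.
```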

Under what conditions will a fusion chain-reaction occur?

When, say, a deuterium (D) and tritium (T) plasma is compressed to a very high density, the particles resulting from nuclear reactions give their energy mostly to D and T ions, through nuclear collisions, rather than to electrons as usual. Fusion can thus proceed as a chain reaction, without the need for thermonuclear temperatures.

What are the natural forces at play during nuclear fusion?

The gravitational forces in the stars compress matter, mostly hydrogen, up to very large densities and temperatures at the star-centers, igniting the fusion reaction. The same gravitational field balances the enormous thermal expansion forces, maintaining the thermonuclear reactions in a star, like the sun, at a controlled and steady rate.

In the laboratory, the gravitational force is replaced by magnetic forces in magnetic confinement systems, whereas in inertial confinement systems radiation pressure compresses the fuel, generating even higher pressures and temperatures and resulting in a fusion reaction.

What approaches have human attempts to achieve nuclear fusion taken?

Two main approaches, namely magnetic confinement and inertial confinement, have been attempted to achieve fusion.

In the magnetic confinement scheme, various magnetic ‘cages’ have been used, the most successful being the tokamak configuration. Here, magnetic fields are generated by electric coils. Together with the current due to charged particles in the plasma, they confine the plasma into a particular shape. It is then heated to an extremely high temperature for fusion to occur.

In the inertial confinement scheme, extremely high-power lasers are concentrated on a tiny sphere consisting of the D-T mixture, creating tremendous pressure and compression. This generates even higher pressures and temperatures, creating a conducive environment for a fusion reaction to occur.

To create fusion energy in both the schemes, the reaction must be self-sustaining.

What are the hurdles that must be overcome to operate a working nuclear fusion power plant to generate electricity?

Fusion power emerges in the form of fast neutrons released with an energy of 14 MeV [although MeV is a unit of energy, it also denotes a certain mass of a particle according to mass-energy equivalence; for comparison, a non-excited proton has a rest energy of 938.2 MeV]. This energy will be converted to thermal energy, which would then be converted to electrical energy. The hurdles take the form of special materials that need to be developed, capable of withstanding extremely high heat flux in a neutron environment. The reliability of operating fusion reactors is also a big challenge.
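Where does the 14 MeV figure come from? A hedged sketch: in the deuterium-tritium reaction, the roughly 17.6 MeV released is shared between the neutron and the helium-4 nucleus, and conservation of momentum hands the lighter neutron the larger share. The 17.6 MeV value is a standard textbook figure, not one quoted in the interview.

```python
# Why roughly 14 MeV? In the D-T reaction, momentum conservation splits the
# kinetic energy in inverse proportion to the product masses (non-relativistic
# approximation), so the lighter neutron takes most of the ~17.6 MeV released.
Q = 17.6                 # MeV released per D-T reaction (textbook value)
m_n, m_He4 = 1.0, 4.0    # only the mass ratio matters here

E_neutron = Q * m_He4 / (m_n + m_He4)
print(f"neutron energy ~ {E_neutron:.1f} MeV")   # ~14.1 MeV
```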

What kind of waste products/emissions would be produced by a fusion power plant?

All the plasma-facing components are bombarded by neutrons, which will make the first layers of the metallic confinement radioactive for a short period. The confinement will be made of different materials, and efforts are being made by materials scientists to develop special grades of steel that are less affected when struck by neutrons. All said, such irradiated components will have to be stored for at least 50 years. The extent of contamination should be reduced with the newer structural materials.

Fusion reactions are intrinsically safe as the reaction terminates itself in the event of the failure of any sub-system.

India is one of the seven countries committed to the ITER program in France. Could you tell us what its status is?

The ITER project has gradually moved into its construction phase; fusion, therefore, is no longer a dream but a reality. Construction at the site is progressing rapidly, and various critical components are being fabricated by the seven parties through their domestic agencies.

The first plasma is expected at the end of 2020 as per the 2010 baseline. Indian industries are also involved in producing various subsystems. R&D and prototyping of many of the high-tech components are progressing as per plan. India is committed to delivering its share in time.

Type 1a supernova spotted in M82

(A version of this piece appeared on The Hindu, Chennai, website on January 22 as written by me.)

Stargazers spotted a Type 1a supernova a few hours ago in the starburst galaxy M82, which is only 11.4 million light-years away from Earth (here’s an interactive map and a helpful sky-chart). This is the closest such supernova to have been detected since 1972, and it is poised to give astronomers and cosmologists some invaluable insight into how such stellar explosions pan out, and what we can learn about neutrinos, gamma rays and dark energy from them.

See the bright, blinking spot of light on the galaxy’s ‘lower’ half? That’s your SN1a. Animation by E. Guido, N. Howes, M. Nicolini

The supernova is a Type 1a supernova (SN1a), which means it’s not the explosion that happens when a star runs out of fuel and blows itself apart. Instead, it’s what happens when a white dwarf pulls in too much material from a nearby star and blows itself apart—having bitten off more than it could chew.

That M82 is a starburst galaxy means it’s rapidly producing stars. This also means it has a lot of old stars, many of which are continuously dying. They could be dying either as Type 2 supernovae—the run-out-of-fuel kind—or as Type 1. The supernova that’s gone off now (i.e., some 11.4 million years ago) happens to be a Type 1a, and that’s a good thing because we haven’t spotted a Type 1a this close since 1972.

When the explosion releases light, it doesn’t immediately start its journey and head straight for Earth. Instead, the light gets trapped in the explosion behind lots of matter, and is delayed. In fact, the ‘ghost particles’ that can pass through matter almost undetected, neutrinos, get a headstart. They reach us before light from the explosion does.

However, a Type 1a supernova produces far fewer neutrinos than does a Type 2, so while the neutrinos flying our way will still be valuable, they might not be valuable enough to study a supernova with. On the other hand, the M82-SN1a could be our big chance to study SN-origin gamma rays in the best detail for the first time in more than four decades.

However, since we haven’t had our detectors trained on neutrinos from M82 in particular, how do we know when that white dwarf in M82 blew up? We measure how its brightness varies over time. Using that information, together with the galaxy’s distance, we know the thing blew up about 11.4 million years ago. Because a 1a’s variation of brightness over time consistently follows a well-established pattern, such exploding white dwarfs across the universe can be used as ‘standard candles’: astronomers use them to judge the distances of faraway objects.
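Here’s a hedged sketch of the standard-candle arithmetic. Type 1a supernovae peak at roughly the same absolute brightness (about magnitude -19.3, a standard figure not quoted in this post), so comparing that with the observed peak brightness yields a distance; run the other way, M82’s known distance predicts roughly how bright the supernova should appear. Real measurements also correct for dust extinction, which this ignores.

```python
# Standard-candle sketch using the distance modulus: m - M = 5 * log10(d / 10 pc).
import math

M_peak = -19.3                 # typical SN 1a peak absolute magnitude (textbook value)
d_parsec = 11.4e6 * 0.3066     # 11.4 million light-years in parsecs (1 ly ~ 0.3066 pc)

m_expected = M_peak + 5 * math.log10(d_parsec / 10)
print(f"expected peak apparent magnitude ~ {m_expected:.1f}")
# roughly magnitude 8-9 before correcting for the dust in M82
```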

In fact, white dwarfs did play an important role in astronomers discovering that the universe was expanding at an accelerating rate due to dark energy. Paraphrasing astronomer Katharine Mack’s tweet: “With a better estimate of the distance [as judged from their brightness], we get a better link between the distance and the universe’s expansion.”

M82’s relative closeness is useful because it provides a lot more information to work with before the light gets (further) degraded on its journey through space. In fact, according to astronomer Daniel Fischer, the supernova has been going on for a full week now, and was missed by the bigger-budget telescopes because it was, and I quote, ‘too bright’. As Brad Tucker, an astronomer from Berkeley, tweeted,

https://twitter.com/btucker22/status/425989187188187137

So, had it not been for amateur astronomers, who made this remarkable observation, we might not have spotted this beauty. Already, according to Skymania’s Paul Sutherland, astronomers believe they’ve caught this supernova early in its act and think it could brighten even further.

This particular find was made by Russian amateur astronomers on January 22, and later confirmed by multiple sources. In fact, M82-SN1a seems to have appeared in the photographs taken by noted Japanese amateur astronomer Koichi Itagaki on January 14 itself (beating Patrick Wiggins by a day). And if you’re interested in reporting such discoveries, check this page out. If you want to keep up with the social media conversation over M82, follow @astrokatie. She’s going nuts (in a good way).

Itagaki’s photos of M82

A useful book to have around

India’s Rise as a Space Power is a book by Prof. Udupi Ramachandra Rao, former Chairman, ISRO (1984-1994), that provides some useful historical context for the space research organization from a scientist’s perspective, not an administrator’s.

Through it, Prof. Rao talks about how our space program was carefully crafted with a series of satellites and launch vehicles, and how each one of them has contributed to where the organization, as such, is today: an immutable symbol of power in the Third World and India’s pride. He starts with the foundation of ISRO, goes on to the visions of Vikram Sarabhai and Satish Dhawan, then introduces the story of Aryabhata, our first satellite, followed by Bhaskara I and II, the IRS series, the INSAT program, the ASLV, PSLV and GSLV, and finally, the contributions of all these instruments to the Indian economy. The period in which Prof. Rao served as Chairman coincided with an acceleration of innovations at ISRO – when he assumed the helm, the IRS was being developed; when he left, development of the cryogenic engine was underway.

However, India’s Rise… leaves out the aspect of his work that he was best positioned to discuss all along: politics. The Indian polity is heavily invested in ISRO, and constantly looks to it for solutions to a diverse array of problems, from telecommunications to meteorology. While ISRO may never have struggled to receive government funding, its run-ins with the 11 governments during its 45-year tenure would have made for a telling story about the Indian government’s association with one of its most successful scientific/technological bodies. Where Prof. Rao makes comments, it is usually on one of two things: either to discuss why scientists are better leaders of organizations like ISRO than administrators, or how foreign governments floated or sank technology-transfer deals with India.

… Mr. T.N. Seshan, who was the Additional Secretary in the Department of Space, a senior member of the negotiation team deputed under my leadership, made this trip [to Glavkosmos, a Soviet company that was to equip and provide the launcher for the first-generation Indian Remote Sensing satellites] unpleasant by throwing up tantrums just because he was not the leader of the Indian delegation. Subsequently, Prof. Dhawan had to tell him in no uncertain terms that any high-level delegation such as the above would only be led by a scientist and not an administrator, a healthy practice followed in [the Dept. of Space] from the very beginning. (p. 124)

This aspect notwithstanding, India’s Rise… is a useful book to have around now, when ISRO seems poised to enter its next era: that of the successful use of its cryogenic engines to lift heavier payloads into higher orbits. It contains a lot of interesting information about different programs, and the attention to detail is distributed evenly, if sometimes unnecessarily. There is also an accompanying collection of possibly rare photographs; my favorite shows a rocket’s nose-cone being transported by bicycle to the launchpad. Overall, the book makes for excellent reference, and thanks to Prof. Rao’s scientific background, technical concepts are represented soundly and without distortion. Here’s my review of it for The Hindu.

And the GSLV flew!

The Copernican
January 6, 2014

Congratulations, ISRO, for successfully launching the GSLV-D5 (and the GSAT-14 satellite with it) on January 5. Even as I write this, ISRO has put out an update on its website: “First orbit raising operation of GSAT-14 is successfully completed by firing the Apogee Motor for 3,134 seconds on Jan 06, 2014.”

With this launch comes the third success in eight launches of the GSLV program since 2001, and the first success with the indigenously developed cryogenic rocket engine. As The Hindu reported, use of this technology widens India’s launch capability to include 2-2.5-tonne satellites. This propels India toward becoming a cost-effective port for launching heavier satellites, not just lighter ones as before.

The GSLV-D5 (which stands for ‘developmental flight 5’) is a variant of the GSLV Mark II rocket, the successor to the GSLV Mark I. Both these rockets have three stages: solid, liquid and cryogenic. The solid stage possesses the design heritage of the American Nike-Apache rocket; the liquid stage, of the French Viking engine. The third, cryogenic upper stage was developed at the Liquid Propulsion Systems Centre, Tamil Nadu—ISRO’s counterpart of NASA’s JPL.

There is a significant difference in capability based on which engines are used. ISRO’s other, more successful launch vehicle, the Polar Satellite Launch Vehicle (PSLV), uses four stages: alternating solid and liquid ones. Its payload capacity to the geostationary transfer orbit (GTO), from which the Mars Orbiter Mission was launched, is 1,410 kg. With the cryogenic engine, the GSLV’s capacity to the same orbit is 2,500 kg. By being able to lift more equipment, the GSLV points to our ability to launch more sophisticated instruments in the future.

The better engine

The cryogenic engine’s complexity resides in its ability to enhance the fuel’s flow through the engine.

An engine’s thrust—its propulsive force—is higher if the fuel flows faster through it. Solid fuels don’t flow, but they let off more energy when burnt than liquid fuels. Gaseous fuels barely flow and have to be stored in heavy, pressurised containers.

Liquid fuels flow, have higher energy density than gases, and they can be stored in light tanks that don’t weigh the rocket down as much. The volume they occupy can be further reduced by pressurising them. Recall that the previous launch attempt of the GSLV-D5, in August 2013, was called off 74 minutes before take-off because fuel had leaked from the liquid stage during the pre-pressurisation phase.

Even so, there seems to be no reason to use gaseous fuels. However, when hydrogen burns in the presence of oxygen, both gases at normal pressure and temperature, the energy released provides an effective exhaust velocity of 4.4 km/s—one of the highest (p. 23, ‘Cosmic Perspectives in Space Physics’, S. Biswas, 2000). It was to use these gases more effectively that cryogenic engines were developed.
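The relationship the last few paragraphs hint at—a faster exhaust buys more velocity per kilogram of propellant—can be put in numbers with the ideal rocket (Tsiolkovsky) equation. This is a minimal sketch: only the 4.4 km/s exhaust velocity comes from the text; the stage masses are invented for illustration.

```python
# Why exhaust velocity matters: thrust scales as (mass flow rate) x (exhaust
# velocity), and the total velocity gained follows the ideal rocket equation,
# delta_v = v_e * ln(m_full / m_empty).
import math

v_e = 4400.0                    # effective exhaust velocity for hydrogen-oxygen, m/s (from the text)
m_full, m_empty = 100.0, 40.0   # hypothetical stage masses before and after the burn, in tonnes

delta_v = v_e * math.log(m_full / m_empty)
print(f"delta-v ~ {delta_v/1000:.1f} km/s")   # ~4.0 km/s; a lower v_e gives proportionally less
```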

In a cryogenic engine, the gases are cooled to very low temperatures, at which point they become liquids—acquiring the benefits of liquid fuels also. However, not all gases are considered for use. Consider this excerpt from a NASA report written in the 1960s:

A gas is considered to be cryogen if it can be changed to a liquid by the removal of heat and by subsequent temperature reduction to a very low value. The temperature range that is of interest in cryogenics is not defined precisely; however, most researchers consider a gas to be cryogenic if it can be liquefied at or below -240 degrees fahrenheit [-151.11 degrees celsius]. The most common cryogenic fluids are air, argon, helium, hydrogen, methane, neon, nitrogen and oxygen.

The difficulties arose from accommodating tanks of super-cold liquid propellants—which include both the fuel and the oxidiser—inside a rocket. The liquefaction temperature for hydrogen is 20 kelvin, just above absolute zero; for oxygen, it is about 90 kelvin.

Chain of problems

For starters, cryopumps are used to trap the gases and cool them. Then, special pumps called turbopumps are required to move the propellants into the combustion chamber at higher flow rates and pressures. Next, relatively expensive igniters are required to set off combustion, which also has to be controlled with computers to prevent the propellants from burning off too soon. And so forth.

Because using cryogenic technology drove advancements in one area of a propulsion system, other areas also required commensurate upgrades. Space engineers learnt many lessons from the American Saturn launch vehicles, whose advanced engines (for the time) were born of using cryogenic technology. They flew between 1961 and 1975.

In the book ‘Rocket Propulsion Elements’ (2010) by George Sutton and Oscar Biblarz, some other disadvantages of using cryogenic propellants are described (p. 697):

Cryogenic propellants cannot be used for long periods except when tanks are well insulated and escaping vapours are recondensed. Propellant loading occurs at the launch stand or test facility and requires cryogenic propellant storage facilities.

With cryogenic liquid propellants there is a start delay caused by the time needed to cool the system flow passage hardware to cryogenic temperatures. Cryogenically cooled fluids also continuously vaporise. Moreover, any moisture in the same tank could condense as ice, adulterating the fluid.

It was in simultaneously overcoming all these issues, with no help from other space-faring agencies, that ISRO took time. Now that the Mark II has been successfully launched, the organisation can set its eyes on loftier goals—such as successfully launching the next, mostly different variant of the GSLV: the Mark III, which is projected to have a payload capacity of 4,500-5,000 kg to GTO.

While we are some way off from considering the GSLV for manned missions, which require mastery of re-entry technology and spaceflight survival, the GSLV Mark III, if successful, could make India an invaluable hub for launching heavier satellites at lower costs than ESA’s Ariane program, which India has used in lieu of the GSLV.

Good luck, ISRO!

Rethinking cryptocurrency

I’m still unsure about bitcoins’ future as far as mainstream adoption is concerned, but such issues have been hogging the media limelight so much that people are missing out on why bitcoins are actually awesome. They’re not awesome because they’re worth about $800 apiece (at the time of writing this) or because they threaten to trivialize the existence of banks. These concerns have nothing to do with bitcoins – they’re simply anti-establishment frustrations in the post-recession era. Bitcoins, and other cryptocurrencies like them, are awesome because of their technical framework, which enables:

  1. Public verification of validity (as opposed to third-party verification)
  2. Zero transaction costs (although this is likely to change)

Thinking about bitcoins as alternatives to dollars only five years into the cryptocurrency’s existence is stupid. Even scoffing at how steep the learning curve is (to learn how to acquire and mobilize bitcoins) is stupid. Instead, what we must be focusing on are the characteristics of the technology that make the two features above possible, because they have great reformative potential in a country like India (if adopted correctly, which I suppose is a subjective ideal, but hey). Zero transaction costs enable individuals and small enterprises to avoid painful scaling costs, while public verification enables only value to be transferred across a network, instead of forcing two parties to share information unrelated to the transaction itself with a bank, etc. Here’s my OpEd on this idea for The Hindu.
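For a concrete feel of the ‘public verification’ point, here is a minimal toy sketch: anyone can recompute the hashes in a chain of blocks and confirm that no one has tampered with the history, without trusting a bank or any other third party. It illustrates the idea only; it is not how Bitcoin is actually implemented.

```python
# Toy hash-chain ledger: each block records the hash of the previous block, so
# anyone holding the chain can verify its integrity independently.
import hashlib, json

def block_hash(block):
    # Hash the block's contents (which include the previous block's hash)
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def verify(chain):
    # Every block must correctly reference the hash of the block before it
    return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

genesis = {"prev": None, "tx": "A pays B 1 unit"}
block2 = {"prev": block_hash(genesis), "tx": "B pays C 0.5 units"}
chain = [genesis, block2]

print(verify(chain))            # True: the ledger is internally consistent
genesis["tx"] = "A pays B 100 units"
print(verify(chain))            # False: tampering with history breaks the links
```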

Predatory publishing, vulnerable prey

On December 29, the International Conference on Recent Innovations in Engineering, Science and Technology (ICRIEST) kicked off in Pune. It’s not a very well-known conference, but it might as well be, for all the wrong reasons.

On December 16 and 20, Navin Kabra, from Pune, submitted two papers to ICRIEST. Both were accepted and, following a notification from the conference’s organizers, Mr. Kabra was told he could present the papers on December 29 if he registered himself at a cost of Rs. 5,000.

Herein lies the rub. The papers that Mr. Kabra submitted are meaningless. They claim to be about computer science, but were created entirely by the SCIGen fake-paper generator available here. The first one, titled “Impact of Symmetries on Cryptoanalysis”, is rife with tautological statements, and could not possibly have cleared peer review. However, in the acceptance letter that Mr. Kabra received by email, the paper is claimed to have been accepted after being subjected to some process of scrutiny, scoring 60, 70, 80 and 90.75 with its reviewers.

Why did the conference not reject such a paper, then? Is it subsisting on the incompetence of its secretarial staff? Or is it so desperate for papers that rejection rates are absurdly low?

Mr. Kabra’s second paper, “Use of cloud-computing and social media to determine box office performance”, might say otherwise. This one is even more brazen, containing these lines in its introduction:

As is clear from the title of this paper, this paper deals with the entertainment industry. So, we do provide entertainment in this paper. So, if you are reading this paper for entertainment, we suggest a heuristic that will allow you to read this paper efficiently. You should read any paragraph that starts with the first 4 words in bold and italics – those have been written by the author in painstaking detail. However, if a paragraph does not start with bold and italics, feel free to skip it because it is gibberish auto-generated by the good folks at SCIGen.

If this paragraph went through unnoticed, then the administrators of ICRIEST likely possess no semblance of interest in academic research. In fact, they could be running the conference as a front to make some quick bucks.

Mr. Kabra professes an immediate reason for his perpetrating this scheme. “Lots of students are falling prey to such scams, and I want to raise awareness amongst students,” he wrote in an email.

He tells me that for the last three years, students pursuing a Bachelor of Engineering in colleges affiliated with the University of Pune have been required to submit their final project to a conference—“a ridiculous requirement”, thinks Mr. Kabra. As usual, not all colleges are enforcing this rule; those that are, on the other hand, are pushing students hard. Beyond falsifying data and plagiarizing reports to get them past evaluators, the next best way to secure a good grade is to sneak the project into some conference.

Research standards at the university are likely not helping, either. The successful submissions hoped for by teachers at Indian institutions will never happen for as long as the quality of research in the institution itself is low. Enough scientometric data exists from the last decade to support this, although I don’t know how it breaks down between graduate and undergraduate research.

(While it may be argued that scientific output is not the only way to measure the quality of scientific research at an institution, you should know something’s afoot when the quantity of output is either very high or very low relative to, say, the corresponding number of citations and the country’s R&D expenditure.)

Another reason to think neither the university nor the students’ ‘mentors’ are helping is that someone who spoke to Mr. Kabra on behalf of the University had no idea about ICRIEST. To quote from the Mid-Day article that covered this incident:

“I don’t know of any research organisation named IRAJ. I am sorry, I am just not aware about any such conference happening in the city,” said Dr Gajanan Kharate, dean of engineering in the University of Pune.

Does the University of Pune care if students have submitted papers to bogus journals? Does it check the contents of the research itself, or does it rely on whether students’ ‘papers’ are accepted or not? No matter; what will change from here? I’m not sure. I won’t be surprised if nothing changes at all. However, there is a place to start.

Prof. Jeffrey Beall is the Scholarly Initiatives Librarian at the University of Colorado, Denver, and he maintains an exhaustive list of questionable journals and publishers. This list is well-referenced, constantly updated, and commonly consulted to check for dubious characters that might have approached research scholars.

On the list is the Institute for Research and Journals (IRAJ), which is organizing ICRIEST. In an article in The Hindu on September 26, 2012, Prof. Beall says, “They want others to work for free, and they want to make money off the good reputations of honest researchers.”

Mr. Kabra told me he had registered himself for the presentation—but not before he was able to bargain with the organizers, “like … with a vegetable vendor”, and secure a 50 per cent discount on the fees. As silly as it sounds, this is not the mark of a reputable institution but the telltale sign of a publisher incapable of understanding the indignity of such bargains.

Another publisher on Prof. Beall’s list, Asian Journal of Mathematical Sciences, is sly enough to offer a 50 per cent fee-waiver because they “do not want fees to prevent the publication of worthy work”. Yet another journal, Academy Publish, is just honest: “We currently offer a 75 per cent discount to all invitees.”

Other signs, of course, include misspelt words, as in “Dear Sir/Mam”.

At the end of the day, Mr. Kabra did not go ahead with the presentation because, he said, he was depressed by the sight of Masters students at ICRIEST—some of whom had come there, on the west coast, from the east-coast state of Odisha. That’s the journey they’re willing to make when pushed by the lure of grades on one side and the existence of conferences like ICRIEST on the other.

Solving mysteries, by William & Adso

The following is an excerpt from The Name of the Rose, Umberto Eco’s debut novel from 1980. The story is set in an Italian monastery in 1327, and is an intellectually heady murder mystery doused in symbolism and linguistic ambivalence. Two characters, William of Baskerville and Adso of Melk, are conversing about using deductive reasoning to solve mysteries.

“Adso,” William said, “solving a mystery is not the same as deducing from first principles. Nor does it amount simply to collecting a number of particular data from which to infer a general law. It means, rather, facing one or two or three particular data apparently with nothing in common, and trying to imagine whether they could represent so many instances of a general law you don’t yet know, and which perhaps has never been pronounced. To be sure, if you know, as the philosopher says, that man, the horse, and the mule are all without bile and are all long-lived, you can venture the principle that animals without bile live a long time. But take the case of animals with horns. Why do they have horns? Suddenly you realize that all animals with horns are without teeth in the upper jaw. This would be a fine discovery, if you did not also realize that, alas, there are animals without teeth in the upper jaw who, however, do not have horns: the camel, to name one. And finally you realize that all animals without teeth in the upper jaw have four stomachs. Well, then, you can suppose that one who cannot chew well must need four stomachs to digest food better. But what about the horns? You then try to imagine a material cause for horns—say, the lack of teeth provides the animal with an excess of osseous matter that must emerge somewhere else. But is that sufficient explanation? No, because the camel has no upper teeth, has four stomachs, but does not have horns. And you must also imagine a final cause. The osseous matter emerges in horns only in animals without other means of defense. But the camel has a very tough hide and doesn’t need horns. So the law could be …”

“But what have horns to do with anything?” I asked impatiently. “And why are you concerned with animals having horns?”

“I have never concerned myself with them…”

When I first read this book almost seven years ago, I remember reading these lines with awe (I was reading my first books on the philosophy of science then). Like a fool on whom the common sense of the passage was lost but somehow not its meaning, I memorized the lines and then promptly forgot the context in which they appeared. While randomly surfing the web today, I found them once more, so here they are. They belong to the chapter titled “In which Alinardo seems to give valuable information, and William reveals his method of arriving at a probable truth through a series of unquestionable errors.”