Crowd-sourcing a fantasy fiction tale

What if thousands of writers, economists, philosophers, scientists, teachers, industrialists and many other people from other professions besides were able to pool their intellectual and creative resources to script one epic fantasy-fiction story?

Such an idea would probably form the crux of an average-to-poor book, but the story itself would be awesome, methinks.

Here’s an example. Every great writer of sci-fi/fantasy I’ve read – most notably Asimov – has speculated upon the rapidly changing nature of different professions in their works.

The simplest example manifests in Asimov’s 1957 short story Profession. In the story, children are educated no longer within classrooms but almost instantaneously through a brain-computer interface, a process called taping.

Where are the teachers in this world? They, it seems, would come later, in the guise of professionals who compose and compile those information-heavy tapes. Seeing as Profession is set in the 66th century of human civilization, the taping scenario is entirely plausible. We could get there.

But this is one man’s way of constructing a possible future among infinite others. Upon closer scrutiny, many inconsistencies between Asimov’s world and ours could be chalked up. For one, the author could have presupposed events in our future which might never really happen.

However, such scrutiny would be meaningless because that is not the purpose of Asimov’s work. He writes to amaze, to draw parallels – not necessarily contiguous ones – between our world and one in the future.

But what if Asimov had been an economist instead of a biochemist? Would he have written his stories any differently?

My (foolish) idea is to just draw up a very general template of a future world, to assign different parts of that template to experts from different professions, and then see how they think their professions would have changed. More than amaze, such a world might enlighten us… and I think it ought to be fascinating for just that reason.

The cloak of fantasy, the necessity of stories to engage all those professions and their intricate gives-and-takes and weave them into an empathetic narrative – that could then be the work of writers and other such creatively inclined people or, as I like to call them, ‘imagineers’.

This idea has persisted for a long time. It was stoked when I first encountered in my college days the MMORPG called World of Warcraft. In it, many players from across the world come together and play a game set in the fictitious realm called Azeroth, designed by Blizzard, Inc.

However, the game has already been drawn up to its fullest, so to speak. For example, there are objectives for players to attain by playing a certain character. If the character fails, he simply tries again. For the game to progress, the objectives must be attained. That’s what makes a game by definition, anyway.

 

My idea wouldn’t be a game because there are no objectives. My idea would be a game to define those objectives, and in a much more inclusive way. Imagine an alternate universe for all of us to share in. The story goes where our all-encompassing mind would take us.

The downside, of course, would be the loss of absolute flexibility, with so many clashing ideas and, more powerful still, egos. But… play it.

How hard is it to violate Pauli’s exclusion principle?

Ultracooled rubidium atoms are bosonic, and start to behave as part of a collective fluid, their properties varying together. Bosons can do this because they don’t obey Pauli’s principle. Photo: Wikimedia Commons

A well-designed auditorium always has all its seats positioned on an inclined plane. ​Otherwise it wouldn’t be well-designed, would it? Anyway, this arrangement solves an important problem: It lets people sit anywhere they want to irrespective of their heights.

It won’t matter if a taller person sits in front of a shorter one – the inclination will render their height-differences irrelevant.

However, if the plane had been flat – if all the seats were just placed one behind another instead of being raised or lowered from the floor – then people would have been forced to follow a particular seating order. Like the discs in a game of Tower of Hanoi, the seats must be filled with shorter people coming first if everyone’s view of the stage is to be unobstructed.

It’s only logical.​

A similar thing happens inside atoms. While protons and neutrons are packed into a tiny nucleus, electrons orbit the nucleus in relatively much larger orbits. For instance, if the nucleus were 2 m across, the electrons would be orbiting it up to 10 km away. This is because each electron can come only so close to the nucleus before the attraction between its negative charge and the positively charged nucleus pulls it in.

However, this doesn’t mean all electrons orbit the nucleus at the same distance. They follow an order. Like the seats on the flat floor, where shorter people must sit in front of taller ones, less energetic electrons must orbit closer to the nucleus than more energetic ones. And all electrons of the same energy must orbit the nucleus at the same distance.

Over the years, scientists have observed that around every atom of a known element there are well-defined energy levels, each accommodating a fixed and known number of electrons. These quantities are determined by various properties of the electrons, designated by each particle’s four quantum numbers: n, l, m_l and m_s.

Nomenclature:

1. n is the principal quantum number, and designates the energy level of the electron.

2. l is the azimuthal quantum number, and describes the orbital angular momentum with which the electron is zipping around the nucleus.

3. m_l is the magnetic quantum number, and gives the component of l along a specified axis.

4. m_s is the spin quantum number, and describes the “intrinsic” angular momentum, a quantity that doesn’t have a counterpart in Newtonian mechanics.

So, an electron’s occupation of some energy slot around a nucleus depends on the values of the four quantum numbers. And the most significant relation between them is the Pauli exclusion principle (PEP): no two electrons can occupy the same quantum state, i.e. no two electrons in an atom can have the same values of all four quantum numbers.

An energy level is an example of a quantum state. This means if two electrons exist at the same level inside an atom, and if their n, l and m_l values are equal, then their m_s value (i.e., spin) must be different: one up, one down. Two electrons with equal n, l, m_l, and m_s values couldn’t occupy the same level in the same atom.
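To see how the counting works, here is a minimal Python sketch – just the textbook rules, nothing specific to any experiment – that enumerates every allowed combination of the four quantum numbers for a given n:

```python
# Count the electron states the Pauli exclusion principle allows per shell:
# for a given n, l runs from 0 to n-1, m_l from -l to +l, and m_s is +1/2 or -1/2.
from fractions import Fraction

def allowed_states(n):
    states = []
    for l in range(n):                                     # azimuthal quantum number
        for m_l in range(-l, l + 1):                       # magnetic quantum number
            for m_s in (Fraction(1, 2), Fraction(-1, 2)):  # spin quantum number
                states.append((n, l, m_l, m_s))
    return states

for n in (1, 2, 3):
    print(f"n = {n}: {len(allowed_states(n))} states")     # 2, 8, 18 -- i.e., 2n^2
```

Each tuple is a distinct quantum state, and the PEP lets each hold at most one electron – which is where the familiar shell capacities of 2, 8, 18, … electrons come from.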

But why?​

The PEP is named for its discoverer, Wolfgang Pauli. Interestingly, Pauli himself couldn’t put a finger on why the principle was the way it was. From his Nobel lecture, 1945 (PDF):

Already in my original paper I stressed the circumstance that I was unable to give a logical reason for the exclusion principle or to deduce it from more general assumptions. I had always the feeling and I still have it today, that this is a deficiency. … The impression that the shadow of some incompleteness [falls] here on the bright light of success of the new quantum mechanics seems to me unavoidable.

Nor was the principle’s ontology sorted out over time. In 1963, Richard Feynman said:

“…. Why is it that particles with half-integral spin are Fermi particles (…) whereas particles with integral spin are Bose particles (…)? We apologize for the fact that we can not give you an elementary explanation. An explanation has been worked out by Pauli from complicated arguments from quantum field theory and relativity. He has shown that the two must necessarily go together, but we have not been able to find a way to reproduce his arguments on an elementary level. It appears to be one of the few places in physics where there is a rule which can be stated very simply, but for which no one has found a simple and easy explanation. (…) This probably means that we do not have a complete understanding of the fundamental principle involved. For the moment, you will just have to take it as one of the rules of the world.

(R. Feynman, The Feynman Lectures on Physics, Vol. III, Chap. 4, Addison-Wesley, Reading, Massachusetts, 1963)

The Ramberg-Snow experiment​

In 1990, two scientists, Ramberg and Snow, devised a simple experiment to study the principle. They connected a thin strip of copper to a 50-ampere current source. Then, they placed an X-ray detector over the strip. When electric current passed through the strip, X-rays would be emitted, which would then be picked up by the detector for analysis.

How did this happen?​

When electrons jump from a higher-energy (i.e., farther) energy level to a lower-energy (closer) one, they must lose some energy to be permitted their new status. The energy can be lost as light, X-rays, UV radiation, etc. Because we know how many distinct energy levels there are in the atoms of each element and how much energy each of those levels has, electrons jumping levels in different elements must lose different, but fixed, amounts of energy.

So, when current is passed through copper, extra electrons are introduced into the metal, precipitating the forced occupation of some energy-level, like people sitting in the aisles of a full auditorium.

In this scenario, or in any other one for that matter, an electron jumping from the 2p level to the 1s level in a copper atom ​must lose 8.05 keV as X-rays – no more, no less, no differently.
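If you want a rough feel for where that 8.05 keV comes from, Moseley’s law – a textbook approximation, and not the analysis Ramberg and Snow actually used – gets you close:

```python
# Rough estimate of copper's 2p -> 1s (K-alpha) X-ray energy using Moseley's law.
# This is a back-of-the-envelope check, not the experiment's own calculation.
RYDBERG_EV = 13.6   # hydrogen ground-state binding energy, in eV
Z_COPPER = 29       # atomic number of copper

# One remaining 1s electron screens the nucleus, hence (Z - 1); the jump is n = 2 -> n = 1.
e_kalpha_ev = RYDBERG_EV * (Z_COPPER - 1) ** 2 * (1 / 1**2 - 1 / 2**2)
print(round(e_kalpha_ev / 1000, 2), "keV")   # ~8.0 keV, close to the measured 8.05 keV
```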

However, Ramberg and Snow found that, after over two months of data-taking in a basement at Fermilab, Illinois, about 1 in 170 trillion trillion X-ray signals contained not 8.05 keV but 7.7 keV.

The 1s orbital usually has space for two electrons going by the PEP. If one slot’s taken and the other’s free, then an electron wanting to jump in from the 2p level must lose 8.05 keV. However, ​if an electron was losing 7.7 keV, where was it going?

After some simple calculations, the scientists made a surprising discovery.​ The electron was squeezing itself in with two other electrons in the 1s level itself – instead of resorting to the aisles, it was sitting on another electron’s lap! This meant that the PEP was being violated with a probability of 1 in 170 trillion trillion.
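In scientific notation, that is a probability of roughly

$$\frac{1}{1.7 \times 10^{26}} \approx 5.9 \times 10^{-27}$$

per level-jump.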

While this is a laughably minuscule number, it’s nevertheless a positive number even taking into account possibly large errors arising out of the unsophisticated nature of the Ramberg-Snow apparatus. Effectively, where we thought there ought to be no violations, there were.

Just like that, there was a hole in our understanding of the exclusion principle.​

And it was the sort of hole with which we could make lemonade.

Into the kitchen​

So, fast-forward to 2006: 26 lemonade-hungry physicists, one pretentiously titled experiment, and one problem statement – could the PEP be violated much more or much less often than once in 170 trillion trillion?

The setup was called the VIP for ‘VIolation of the Pauli Exclusion Principle Experiment’.​ How ingenious. Anyway, the idea was to replicate the Ramberg-Snow experiment in a more sophisticated environment. Instead of a simple circuit that you could build on the table top, they used one in the Gran Sasso National Lab that looked like this.

This is the DEAR (DAΦNE Exotic Atom Research) setup, slightly modified to make way for the VIP setup. Everything’s self-evident, I suppose. CCD stands for charge-coupled device, which is basically an X-ray detector.

(The Gran Sasso National Lab, or Laboratori Nazionali del Gran Sasso, is one of the world’s largest underground particle physics laboratories, consisting of around 1,000 scientists working on more than 15 experiments.​ It is located near the Gran Sasso mountain, between the towns of L’Aquila and Teramo in Italy.)​

​After about three years of data-taking, the team of 26 announced that it had bettered the Ramberg-Snow data by three orders of magnitude. According to data made available in 2009, they declared the PEP had been violated only once every 570,000 trillion trillion electronic level-jumps.​

Fewer yet, surely

Hurrah! The principle was being violated 1,000 times less often than thought, but it was being violated still. At this stage, the VIP team seemed to think the number could be much lower, perhaps even 100 times lower. On March 5, 2013, it submitted a paper (PDF) to the arXiv pre-print server containing a proposal for the more sensitive VIP2.

You might think that, since the number is positive anyway, VIP’s efforts are an attempt to figure out how many angels can dance on the head of a pin.

Well, think about it this way. The moment we zero in on one value – one frequency with which anomalous level-jumps take place – we’ll be in a position to stick the number into a formula and see what that means for the world around us.

Also, electrons are only one kind of a class of particles called fermions, all of which are thought to obey the PEP. Perhaps other experiments conducted with other fermions, such as tau leptons and muons, will throw up some other rate of violation. In that case, we’ll be able to say the misbehavior is actually dependent on some property of the particle, like its mass, spin, charge, etc.

Until that day, we’ve got to keep trying.​

(This blog post first appeared at The Copernican on March 11, 2013.)

Where does the Higgs boson come from?

When the Chelyabinsk meteor – dubbed Chebarkul – entered Earth’s atmosphere at around 17 km/s, it started to heat up due to friction. After a point, cracks already present in the 9,000-tonne chunk of rock were free to widen, eventually splitting Chebarkul into smaller parts.

While the internal structure of Chebarkul was responsible for where the cracks widened and at what temperature and other conditions, the rock’s heating was the tipping point. Once it got hot enough, its crystalline structure began to disintegrate in some parts.

Spontaneous symmetry-breaking

About 13.75 billion years ago, this is what happened to the universe. At first, there was a sea of energy, a symmetrically uniform block. Suddenly, this block was rapidly exposed to extreme heat. Once it hit about 10^15 kelvin – 173 billion times hotter than our Sun’s surface – the block disintegrated into smaller packets called particles. Its symmetry was broken. The Big Bang had happened.
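Taking the Sun’s surface to be about 5,800 K, that comparison is simple arithmetic:

$$\frac{10^{15}\ \text{K}}{5.8 \times 10^{3}\ \text{K}} \approx 1.7 \times 10^{11} \approx 173\ \text{billion}.$$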


The Big Bang splashed a copious amount of energy across the universe, whose residue is perceivable as the CMBR.

Quickly, the high temperature fell off, but the particles couldn’t return to their original state of perfect togetherness. The block was broken forever, and the particles now had to fend for themselves. There was a disturbance, or perturbations, in the system, and the forces started to act. Physicists today call this the Nambu-Goldstone (NG) mode, named for Jeffrey Goldstone and Yoichiro Nambu.

In the tradition of particle physics, which treats everything in terms of particles, the forces in the NG mode were characterised in terms of NG bosons. The exchange of these bosons between two particles meant they were exchanging forces. Since each boson is also a packet of energy, a force can be thought of as the exchange of energy between two particles or bodies.

This is just like the concept of phonons in condensed matter physics: when atoms that are part of a perfectly arranged array vibrate, physicists know they contain some extra energy that makes them restless. They isolate this surplus in the form of a particle called a phonon, and address the entire array’s surplus in terms of multiple phonons. So, as this restlessness moves through the solid, it behaves like a sound wave moving through it. Simplifies the math.

Anyway, the symmetry-breaking also gave rise to some fundamental forces. They’re called ‘fundamental’ because of their primacy, and because they’re still around. They were born because the disturbances in the energy block, encapsulated as the NG bosons, were interacting with an all-pervading background field called the Higgs field.

The Higgs field has four components, two charged and two uncharged. Another, more common, example of a field is the electric field, which has two components: some strength at a point (charged) and the direction of the strength at that point (neutral). Components of the Higgs field perturbed the NG bosons in a particular way to give rise to four fundamental forces, one for each component.

So, just like in Chebarkul’s case, where its internal structure dictated where the first cracks would appear, in the block’s case, the heating had disturbed the energy block to awaken different “cracks” at different points.

The Call of Cthulhu

The first such “crack” to be born was the electroweak force. As the surroundings of these particles continued to cool, the electroweak force split into two: electromagnetic (eM) and weak forces.

The force-carrier for the eM force is called a photon. Photons can exist at different energies, and at each energy-level, they have a corresponding frequency. If a photon happens to be in the “visible range” of energy-levels, then each frequency shows itself as a colour. And so on…
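The relation at work here is Planck’s:

$$E = h\nu.$$

A photon of about 2.5 eV, for example, corresponds to a frequency of roughly $6 \times 10^{14}$ Hz, or a wavelength near 500 nm – which our eyes register as green.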

The force-carriers of the weak force are the W+, W-, and Z bosons. At the time the first W/Z bosons came to life, they were massless. We know, because of Einstein’s mass-energy equivalence, that this means the bosons had no rest energy. How were they particulate, then?

Imagine an auditorium where an important lecture’s about to be given. You get there early, your friend is late, and you decide to reserve a seat for her. Then, your friend finally arrives 10 minutes after the lecture’s started and takes her seat. In this scenario, after your arrival, the seat was there all along as ‘friend’s seat’, even though your friend took her time to get there.

Similarly, the W/Z bosons, which became quite massive later on, were initially massless. They had to have existed when the weak force came to life, if only to account for a new force that had been born. The debut of massiveness happened when they “ate” the NG bosons – the disturbed block’s surplus energy – and became very heavy.

Unfortunately for them, their snacking was irreversible. The W/Z bosons couldn’t regurgitate the NG bosons, so they were doomed to be forever heavy and, consequently, short-ranged. That’s why the force that they mediate is called the weak force: because it acts over very small distances.

You’ll notice that the W+, W-, and Z bosons account for only three components of the Higgs field. What about the fourth component?

Enter: Higgs boson

That’s the Higgs boson. And now, getting closer to pinning down the Higgs boson means we’re also getting closer to validating the Higgs mechanism, the quantum mechanical formulation within which we understand the behaviours of these particles and forces. The larger framework it belongs to is called the Standard Model.

(This blog post first appeared at The Copernican on March 8, 2013.)

Higgs boson closer than ever

The article, as written by me, appeared in The Hindu on March 7, 2013.

Ever since CERN announced that it had spotted a Higgs boson-like particle on July 4, 2012, its flagship Large Hadron Collider (LHC), along with similar colliders around the world, has continued running experiments to gather more data on the elusive particle.

The latest analysis of the results from these runs was presented at a conference now underway in Italy.

While it is still too soon to tell if the particle spotted in July 2012 was the Higgs boson as predicted in 1964, the data converges toward the conclusion that the long-sought particle does exist, and with the expected properties. More results will be presented over the coming weeks.

In time, particle physicists hope that it will once and for all close an important chapter in physics called the Standard Model (SM).

The announcements were made by more than 15 scientists from CERN on March 6 via a live webcast from the Rencontres de Moriond, an annual particle physics forum that has been held in La Thuile, Italy, since 1966.

“Since the properties of the new particle appear to be very close to the ones predicted for the SM Higgs, I have personally no further doubts,” Dr. Guido Tonelli, former spokesperson of the CMS detector at CERN, told The Hindu.

Interesting results from searches for other particles, as well as the speculated nature of fundamental physics beyond the SM, were also presented at the forum, which runs from March 2-16.

Physicists exploit the properties of the Higgs to study its behaviour in a variety of environments and see if it matches with the theoretical predictions. A key goal of the latest results has been to predict the strength with which the Higgs couples to other elementary particles, in the process giving them mass.

This is done by analysing the data to infer the rates at which the Higgs-like particle decays into known lighter particles: W and Z bosons, photons, bottom quarks, tau leptons, electrons, and muons. These particles’ signatures are then picked up by detectors to infer that a Higgs-like boson decayed into them.

The SM predicts these rates with good precision.

Thus, any deviation from the expected values could be the first evidence of new, unknown particles. By extension, it would also be the first sighting of ‘new physics’.

Bad news for new physics, good news for old

After analysis, the results were found to be consistent with a Higgs boson of mass near 125-126 GeV, measured at both 7- and 8-TeV collision energies through 2011 and 2012.

The CMS detector observed that there was fairly strong agreement between how often the particle decayed into W bosons and how often it ought to happen according to theory. The ratio between the two was pinned at 0.76 +/- 0.21.

Dr. Tonelli said, “For the moment, we have been able to see that the signal is getting stronger and even the difficult-to-measure decays into bottom quarks and tau-leptons are beginning to appear at about the expected frequency.”

The ATLAS detector, in parallel, was able to observe with 99.73 per cent confidence that the analysed particle had zero spin, another property that brings it closer to the predicted SM Higgs boson.

At the same time, the detector also observed that the particle’s decay to two photons was 2.3 standard-deviations higher than the SM prediction.

Dr. Pauline Gagnon, a scientist with the ATLAS collaboration, told this Correspondent via email, “We need to assess all its properties in great detail and extreme rigour,” adding that for some aspects they would need more data.

Even so, the developments rule out signs of any new physics around the corner until 2015, when the LHC will reopen after a two-year shutdown and multiple upgrades to smash protons at doubled energy.

As for the search for Supersymmetry, a favoured theoretical concept among physicists to accommodate phenomena that haven’t yet found definition in the Standard Model: Dr. Pierluigi Campana, LHCb detector spokesperson, told The Hindu that there have been only “negative searches so far”.

Ironing out an X-ray wrinkle

A version of this post, as written by me, originally appeared in The Copernican science blog on March 1, 2013.

One of the techniques to look for and measure the properties of a black hole is to spot X-rays of specific energies coming from a seemingly localized source. The radiation emanates from heated objects like gas molecules and dust particles in the accretion disc around the black hole that have been compressed and heated to very high temperatures as the black hole prepares to gorge on them.

However, the X-rays are often obscured by gas clouds surrounding the black hole, even at farther distances, and then by other objects on the path of their long journey to Earth. And this is a planet whose closest black hole is 246 quadrillion km away. This is why the better telescopes that study X-rays are often in orbit around Earth instead of on the ground, to minimize further distortions due to our atmosphere.

NASA’s NuSTAR

One of the most powerful such X-ray telescopes is NASA’s NuSTAR (Nuclear Spectroscopic Telescope Array), and on February 28, the government body released data from the orbiting eye almost a year after it was launched in June 2012. NuSTAR studies the sources and properties of higher-energy X-rays in space. In this task, it is complemented by the ESA’s XMM-Newton space telescope, which studies lower-energy X-rays.

The latest data concerns the black hole at the centre of the galaxy NGC 1365, which is two million times the mass of our Sun, earning it the title of Supermassive Black Hole (SMBH). Around this black hole is an accretion disc, a swirling vortex of gases, metals, molecules, basically anything unfortunate enough to have come close and subsequently been ripped apart. Out of this, NuSTAR and XMM-Newton zeroed in on the X-ray emissions characteristic of iron.

What the data revealed, as has been the wont of technology these days, is a surprise.

What are signatures for?

Emission signatures are used because we know everything about them and we know what makes each one unique. For instance, knowing the rise and dip of X-ray brightness coming from an object at different temperatures allows us to tell whether the source is iron or something else.

By extension, knowing that the source is iron lets us attribute the signature’s distortions to different causes.

And the NuSTAR data has provided the first experimental proof that the distortion of iron’s signature is not due to gas obscuration but, as a model called prograde rotation holds, due to the black hole’s gravitational pull.

A clearer picture

As scientists undertake a detailed analysis of the NASA data, they will also resolve iron’s signature. This means the plot of its emissions at different temperatures and times will be split up into different “colours” (i.e., frequencies) to see how much each colour has been distorted.

With NuSTAR in the picture, this analysis will assume its most precise avatar yet because the telescope’s ability to track higher-energy X-rays lets it penetrate well enough into the gas clouds around black holes, something that optical telescopes like the one at the Keck Observatory can’t. What’s more, the data will also be complete at the higher-energy levels, earlier left blank because XMM-Newton or NASA’s Chandra couldn’t see in that part of the spectrum.

If the prograde rotation model is conclusively proven after continued analysis and more measurements, then for the first time, scientists will have a precision-measurement tool on their hands to calculate black-hole spin.

How? If the X-ray distortions are due to the black hole’s gravitational pull and nothing else, then the rate of its spin should show itself as the amount of distortion in the emission signature. The finer details can be picked out from the resolved data, and a black hole’s exact spin can, for the first time, be pinned down.

The singularity

NGC 1365 is a spiral galaxy about 60 million light-years away in the direction of the constellation Fornax, and a prominent member of the much-studied Fornax galaxy cluster. Apart from the black hole, the galaxy hosts other interesting features such as a central bar of gas and dust adjacent to prominent star-forming regions, and a type-Ia supernova discovered as recently as October 27, 2012.

As we sit here, I can’t help but imagine us peering into our tiny telescopes, picking up on feebly small bits of information, and adding an extra line in our textbooks, but in reality being thrust into an entirely new realm of knowledge, understanding, and awareness.

Now, we know there’s a black hole out there – a veritable freak of nature – spinning as fast as the general theory of relativity will allow!

For once, a case against the DAE.

I met with physicist M.V. Ramana on February 18, 2013, for an interview after his talk on nuclear energy in India at the Madras Institute of Development Studies. He is currently appointed jointly with the Nuclear Futures Laboratory and the Program on Science and Global Security, both at Princeton University.

Unlike many opponents of nuclear power and purveyors of environmentalist messages around the world, Ramana kept away from polemics and catharsis. He didn’t have to raise his voice to make a good argument; he simply raised the quality of his reasoning. For once, it felt necessary to sit down and listen.

What was striking about Ramana was that he was not against nuclear power – although there’s no way to tell otherwise – but against how it has been handled in the country.

With the energy crisis currently facing many regions, I feel that the support for nuclear power is becoming consolidated at the cost of having to overlook how it’s being mishandled. One look at how the government’s let the Kudankulam NPP installation fester will tell you all that you need to know: The hurry, the insecurity about a delayed plant, etc.

For this reason, Ramana’s stance was and is important. The DAE is screwing things up, and the Center’s holding hands with it on this one. After February 28, when the Union Budget was announced, we found the DAE has been awarded a whopping 55.59% YoY increase from last year for this year: Rs. 8,920 crore (2012-2013 RE) to Rs. 13,879 crore (2013-2014 BE).

That’s a lot of money, so it’d pay to know what they might be doing wrong, and make some ‘Voice’ about it.

Here’s a transcript of my interview of Ramana. I’m sorry, there’s no perceptible order in which we’ve discussed topics. My statements/questions are in bold.

You were speaking about the DAE’s inability to acquire and retain personnel. Is that still a persistent problem?

MVR: This is not something we know a lot about. We’ve heard this. It has been rumoured for a while, and in around 2007, [the DAE] spoke about it publicly for the first time that I knew of. We’d heard these kinds of things since the mid-1990s, as we saw a wave of multinationals – the Motorolas and the Texas Instruments – come; they drew people not just from the DAE but also from the DRDO. So, they see these people as technically trained, and so on. The structural elements that cause that migration – I think they are still there.

That is one thing. The second, I think, is a longer trend. If you look at the people who get into the DAE – I’ve heard informally, anecdotally, etc. – you’ve got the best and the brightest, so to say. It was considered to be a great career to have, so people from the metropolises would go there, people who had studied in the more elite colleges would go there, people with PhDs from abroad would go there.

Increasingly, I’m told, the DAE has to set its sights on mofussil towns, on people who want to come to the DAE from wherever they are, so that they’ll get people. The questions are: what kind of people? What is the caliber of those people? There are some questions about that. We don’t know a great deal, except anecdotally.

You spoke about how building reactors with unproven designs was spelling a lot of problems for the DAE. Could you give us a little primer on that?

MVR: If you look at the many different reactors that they have built, the bulk were based on HWRs (heavy water reactors), which were imported from Canada. And when the first HWR was imported – into Rajasthan – it was based upon one in Canada, in a place called Pickering. The early reactors were at Pickering and Douglas Point and so on.

These had started functioning when the construction of the Rajasthan plant started. You found that many of them had lots of problems with N-shields, etc., and they were being reproduced here as well. So that was one set of things.

The second problem, actually, is more interesting in some ways: these coolant channels were sagging. This was a problem that manifested itself not initially but after roughly 15-20 years, in the mid-80s. Then, only in the 90s did the DAE get into retubing and trying to use a different material for that. So, that tells you that these problems didn’t manifest on day #1; they happened much later.

These are two examples. Currently, the kind of reactors that are being built – for example, the PFBR – have borrowed elements from different countries, but the exact design hasn’t been done anywhere else. There are no exact precedents for these reactors. The example I would give is that of the French design.

France went from a small design called Rapsodie to a 250-MW one called the Phénix, and then moved to a 1,200-MW design called the Superphénix. The Phénix has actually operated relatively OK, though it’s a small reactor. In India, Rapsodie’s clone is the FBTR, in some sense. The FBTR itself could not be exactly cloned because of the 1974 nuclear test: they had to change the design – they didn’t have enough plutonium, and so on.

That’s a different story. Instead of going the route France did, from Rapsodie to the Phénix to the Superphénix, India went from what is essentially a roughly 10-MW reactor (in fact, less than that in terms of electrical capacity) to 500 MW – a big jump, a huge jump.

In the case of going from the Phénix to the Superphénix, you saw huge numbers of problems which had not been seen in the Phénix. So, I would assume that the PFBR would have a lot of teething troubles again. If you think that BRs (breeder reactors) are the way to go, what I would expect to see is that the DAE takes some time to see what kinds of problems arise and then builds the next set of reactors, preferably trying to clone the design or correcting it only for those very features that went wrong.

These criticisms are aimed not at India’s nuclear program but the DAE’s handling of it.

MVR: Well, both of them are intertwined.

They are intertwined, but if you had an alternative, you’d go for someone else to handle the program…?

MVR: Would I go for it? I think that, you know, that’s wishful thinking. In India, for better or for worse, if you have nuclear power, you have the DAE, and if you have DAE, you have nuclear power. You’re not going to get rid of one without the other. It’s wishful for me to think that, you know, somehow the DAE is going to go away and its system of governance is going to go away, and a new set of players come in.

So you’d be cynical about private parties entering the sector, too.

MVR: Private parties I think are an interesting case.

What about their feasibility in India?

MVR: I’m not a lawyer, but right now, as far as I can understand the Atomic Energy Act (1962) and all its subsequent editions, the Indian law allows for up to 49 per cent participation by the private sector. So far, no company has been willing to do that.

This is something which could change and one of the things that the private sector wanted to be in place before they do anything of that sort is this whole liability legislation. Now that the liability legislation is taking shape, it’s possible at some point the Reliances and the Tatas might want to enter the scene.

But I think that the structure of legislation is not going to change any time in the near future. Private parties will be able to put some money in and take some money out, but NPCIL will be the controlling body. To the extent that private parties want to enter this business: They want to enter the business to try and make money out of it, and not necessarily to try and master the technology and try new designs, etc. That’s my reading.

Liquid sodium as coolant: Pros and cons?

MVR: The main pro is that, because it’s a molten metal, it can conduct heat more efficiently compared to water. The other pro is that if you have water at the kind of temperatures at which the heat transfer takes place, the water will actually become steam.

So what you do is you increase the pressure. You have pressurized water and, because of that, whenever you have a break or a crack in the pipe, the water can flash into steam. That’s a problem. In sodium, that’s not the case. Those are the only two pros I can think off the top of my head.

The cons? The main con is that sodium has bad interactions with water and with air, and two, it becomes radioactive, and-

It becomes radioactive?

MVR: Yeah. Normal sodium is sodium-23, and when it works its way through a reactor, it can absorb a neutron and become sodium-24, which is a gamma-emitter. When there are leaks, for example, the whole area becomes a very high gamma dose. So you have to actually wait for the sodium to become cool and so on and so forth. That’s another reason why, if there are accidents, it takes a much longer time [to clean up].

Are there any plants around the world that use liquid sodium as a coolant?

MVR: All BRs use liquid sodium as coolant. The only exceptions primarily are in Russia where they’ve used lead, and both Pb and sodium have another problem: Like sodium, lead at room temperature is actually solid, so you have to always keep it heated. Even when you shutdown the reactor, you’ve to keep a little heater going on and stuff like that. In principle, for example, if you have a complete power shutdown – like the station blackout that happened at Fukushima, etc. – you can imagine the sodium getting frozen.

Does lead suffer from the same neutron-absorption problem that sodium does?

MVR: Probably, yes; I don’t know off the top of my head because there’s not that much experience. It should absorb neutrons and become an isotope of lead and so on. But what kind of an emitter it is, I don’t know.

Problems with Na – continued…

MVR: One more important problem is this whole question of sodium void coefficients. Since you’re a science man, let me explain more carefully what happens. Imagine that you have a reactor, using liquid sodium as a coolant, and for whatever reason, there is some local heating that happens.

For example, there may be a blockage of flow inside the pipes, or something like that, so less amount of sodium is coming, and as the sodium is passing through, it’s trying to take away all the heat. What will happen is that the sodium can actually boil off. Let’s imagine that happens.

Then you have a small bubble; in this, sort of, stream of liquid sodium, you have a small bubble of sodium vapor. When the sodium becomes vapor, it’s less effective at scattering neutrons and slowing them down. What exactly happens is that- There are multiple effects which are happening.

Some neutrons go faster and that changes the probability of their interaction, some of them are scattered out, etc. What essentially happens is that the reactivity of the reactor could increase. When that happens, you call it a positive sodium void coefficient. The opposite is a negative.

The ‘positive’ means that the feedback loop is positive. There’s a small amount of increase in reactivity, the feedback loop is positive, the reactivity becomes more, and so on. If the reactor isn’t quickly shut down, this can actually spiral into a major accident.

So, it’s good if at all times a negative void coefficient is maintained.

MVR: Yes. This is what happened in Chernobyl. In the RBMK-type reactor in Chernobyl, the void coefficient at low power levels was positive. In the normal circumstances it was not positive – for whatever reasons (because of the nature of cross-sections, etc. – we don’t need to get into that).

On that fateful day in April 1986, they were actually conducting an experiment in the low-power range without presumably realizing this problem, and that’s what actually led to the accident. Ever since then, the nuclear-reactor-design community has typically emphasized either having a negative void coefficient, or at least trying to reduce it as much as possible.

As far as I know, the PFBR being constructed in Kalpakkam has the highest positive void coefficient amongst all the BRs I know of. It’s +4.3 or something like that.

What’s the typical value?

MVR: The earlier reactors are all of the order of +2, +2.5, something of that sort. You can actually lower it. One way, for example, is to make sure that some of these neutrons, as they escape, don’t cause further fissions, but instead, they go into some of the blanket elements. They’re absorbed. When you do that, that’ll lower the void coefficient.

So, these are control rods?

MVR: These aren’t control rods. In a BR, there’s a core, and then there are blanket elements outside. Imagine that I don’t keep my blanket just outside but also put it inside, in some spots so some of these neutrons, instead of going and causing further fissions and increasing the reactivity, they will go hit one of the blanket elements, be absorbed by those. So, that neutron is out of the equation.

Once you take away a certain number of neutrons, you change the function from an exponentially increasing one to an exponentially decreasing one. To do that, what you’ll have to do is actually increase the amount of fissile plutonium that you put in at the beginning, to compensate for the fact that most of [the neutrons] are not going and hitting the other things. So, your price, as it were, for reducing the void coefficient is more plutonium, which means more cost.

So you’re offsetting the risk with cost.

MVR: Yeah, and also, if you’re thinking about BRs as a strategy for increasing the amount of nuclear power, you’re probably reducing the breeding ratio (the amount of energy the extra Pu will produce, and so on and so forth). So, the time taken to set up each reactor will be more. So, those kinds of issues are tradeoffs. In those tradeoffs, what the DAE has done is to use a design that’s riskier, probably at some cost.

They’re going for a quicker yield.

MVR: Yes. I think what they’re doing in part is that they’ve convinced themselves that this is not going to have any accidents, that it’s perfectly safe – that has to do with a certain ideology. The irony is that, despite that, you’re going to be producing very expensive power.

Could you comment on the long-term global impact of the Fukushima accident? And not just in terms for what it means for the nuclear-research community.

MVR: I would say two things. One is that the impact of Fukushima has been very varied across different countries. Broadly speaking, I characterized it [for a recent piece I wrote] in three ways, following an old economist called Albert Hirschman. I called them ‘Exit’, ‘Voice’, and ‘Loyalty’.

This economist looked at how people responded to organizational decline. Let’s say there’s a product you’ve bought, and over time it doesn’t do well. There are three things that you can do. You can choose not to buy it and buy something else; this is ‘Exit’. You can write to the manufacturer or the organization that you belong to, you make noise about it. You try to improve it and so on. This is ‘Voice’.

The third is to keep quiet and continue persisting with it. This is ‘Loyalty’. And if you look at countries, they’ve done roughly similar things. There are countries like Germany and Switzerland which have just exited. This is true with other countries also which didn’t have nuclear power programs but were planning to. Venezuela, for example: Chavez had just signed a deal with Russia. After Fukushima, he said, “Sorry, it’s a bad idea.” Also, Israel: Netanyahu also said that.

Then, there are a number of countries where the government has said “we actually want to go on with nuclear power but because of public protest, we’ve been forced to change direction”. Italy is probably the best example. And before the recent elections, Japan would’ve been a good example. You know, these are fluid categories, and things can change.

Mostly political reasons.

MVR: Yes, for political reasons. For also reasons of what kind of or nature of government you have, etc.

And then finally there are a whole bunch of countries which have been loyal to nuclear power. India, China, United States, and so on. In all these countries, what you find is two things. One is a large number of arguments why Fukushima is inapplicable to their country. Basically, DAE and all of these guys say, “Fukushima is not going to happen here.” And then maybe they will set up a sort of task force, they’ll say, “We’ll have a little extra water always, we’ll have some strong backup diesel generators,” blah-blah-blah.

Essentially, the idea is [Fukushima] is not going to change our loyalty to nuclear power.

The second thing is that there’s been one real effect: The rate at which nuclear power is going to grow has been slowed down. There’s no question about that. Fukushima in many cases consolidated previous trends, and in some cases started new trends. I would not have expected Venezuela to do this volte-face, but in Germany, it’s not really that surprising. Different places have different dynamics.

But I think that, overall, it has slowed down things. The industry’s response to this has been to say, “Newer reactors are going to be safer.” And they talk about passive safety and things like that. I don’t know how to evaluate these things. There are problems with passive safety also.

What’re you skeptical about?

MVR: I’m skeptical about whether new reactors are going to be safer.

Are you cynical?

MVR: I’m not cynical. Well, I think there’s some cynicism involved, but I’d call it observation. The skepticism is about whether new reactor designs are going to be immune to accidents. Because of incomplete knowledge, and so on, you might be protecting against Fukushima, but you’ll not be protecting against Chernobyl. Chernobyl didn’t have a tsunami triggering it – things of that sort.

EUCLID/ESA: A cosmic vision looking into the darkness

I spoke to Dr. Giuseppe Racca and Dr. Rene Laureijs, both of the ESA, regarding the Euclid mission, which will fly the world’s first space telescope dedicated to studying dark energy and dark matter. For the ESA, Euclid will be the centerpiece of its Cosmic Vision programme (2015-2025). Dr. Racca is the mission’s project manager while Dr. Laureijs is a project scientist.

Could you explain, in simple terms, what the Lagrange point is, and how being able to study the universe from that vantage point could help the study? 

GR: The Sun-Earth Lagrangian point 2 (SEL2) is a point in space about 1.5 million km from Earth in the direction opposite to the Sun, co-rotating with the Earth around the Sun. It is a nice and calm point from which to make observations. It is not disturbed by the heat fluxes from the Earth, but at the same time is not too far away to send back to Earth the large amount of data from the observations. The orbit around SEL2 that Euclid will employ is rather large; it is easy to reach (in terms of launcher capability) and not expensive to control (in terms of the fuel required for orbit corrections and maintenance manoeuvres).

Does Euclid in any way play into a broader program by ESA to delve into the Cosmic Frontier? Are there future upgrades/extensions planned? 

RL: Euclid is the second approved medium class mission of ESA’s Cosmic Vision programme. The first one is Solar Orbiter, which studies the Sun at short distance. The Cosmic Vision programme sets out a plan for Large, Medium and Small size missions in the decade 2015-2025. ESA’s missions Planck, which is presently in operation in L2, and Euclid will study the beginning, the evolution, and the predicted end of our Universe.

GR: A theme of this programme is: “How did the Universe originate and what is it made of?” Euclid is the first mission of this part of Cosmic Vision 2015-2025. There will be other missions, which have not been selected yet.

What’s NASA’s role in all of this? What are the different ways in which they will be participating in the Euclid mission? Is this a mission-specific commitment or, again, is it encompassed by a broader participation agreement?

GR: The NASA participation in the Euclid mission is very important but rather limited in extent. They will provide the near-infrared detectors for one of the two Euclid instruments. In addition, they will contribute to the scientific investigation with a team of about 40 US scientists. Financially speaking, NASA’s contribution is limited to some 3-4% of the total Euclid mission cost.

RL: The Euclid Memorandum of Understanding between ESA and NASA is mission specific and does not involve a broader participation agreement. First of all, NASA will provide the detectors for the infrared instrument. Secondly, NASA will support 40 US scientists to participate in the scientific exploitation of the data. These US scientists will be part of the larger Euclid Consortium, which contains nearly 1000 mostly European scientists.

Do you have any goals in mind? Anything specific or exciting that you expect to find? Who gets the data?

GR: The goals of the Euclid mission are extremely exciting: in a few words, we want to investigate the nature and origin of the unseen Universe: the dark matter, five times more abundant than the ordinary matter made of atoms, and the dark energy, which is causing the accelerating expansion of the Universe. The “dark Universe” is reckoned today to amount to 95% of the total matter-energy density. Euclid will survey about 40% of the sky, looking back in cosmic time up to 10 billion years. A smaller part (1% of the sky) will look back to when the universe was only a few million years old. This three-dimensional survey will allow us to map the extent and history of dark matter and dark energy. The results of the mission will allow us to understand the nature of dark matter and its position as part of an extension of the current standard model. Concerning dark energy, we will be able to distinguish between so-called “quintessence” and a modification to current theories of gravity, including general relativity.

RL: Euclid’s goals are to measure the accelerated expansion of the universe, which tells us about dark energy, to determine the properties of gravity on cosmic scales, to learn about the properties of dark matter, and to refine the initial conditions leading to the Universe we see now. These goals have been chosen carefully, and the instrumentation of Euclid is optimised to reach them as best as possible. The Euclid data opens the discovery space for many other areas in astronomy: Euclid will literally measure billions of stars and galaxies at visible and infrared wavelengths, with a very high image quality, comparable to that of the Hubble Space Telescope. The most exciting prospect is the availability of these sharp images, which will certainly reveal new classes of objects and new science. The nominal mission will last for 6 years, but the first year of data will already become public 26 months after the start of the survey.

When will the EUCLID data be released?

GR: The Euclid data will be released to the public one year after their collection and will be made available to all researchers in the world.

A different kind of experiment at CERN

This article, as written by me, appeared in The Hindu on January 24, 2012.

At the Large Hadron Collider (LHC) at CERN, near Geneva, Switzerland, experiments are conducted by many scientists who don’t quite know what they will see, but know how to conduct the experiments that will yield answers to their questions. They accelerate beams of particles called protons to smash into each other, and study the fallout.

There are some other scientists at CERN who know approximately what they will see in experiments, but don’t know how to do the experiment itself. These scientists work with beams of antiparticles. According to the Standard Model, the dominant theoretical framework in particle physics, every particle has a corresponding particle with the same mass and opposite charge, called an anti-particle.

In fact, at the little-known AEgIS experiment, physicists will attempt to produce an entire beam composed of not just anti-particles but anti-atoms by mid-2014.

AEgIS is one of six antimatter experiments at CERN that create antiparticles and anti-atoms in the lab and then study their properties using special techniques. The hope, as Dr. Jeffrey Hangst, the spokesperson for the ALPHA experiment, stated in an email, is “to find out the truth: Do matter and antimatter obey the same laws of physics?”

Spectroscopic and gravitational techniques will be used to make these measurements. They will improve upon “precision measurements of antiprotons and anti-electrons” that “have been carried out in the past without seeing any difference between the particles and their antiparticles at very high sensitivity,” as Dr. Michael Doser, AEgIS spokesperson, told this Correspondent via email.

The ALPHA and ATRAP experiments will achieve this by trapping anti-atoms and studying them, while ASACUSA and AEgIS will form atomic beams of anti-atoms. All of them, in any case, will continue testing and upgrading through 2013.

Working principle

Specifically, AEgIS will attempt to measure the interaction between gravity and antimatter by shooting an anti-hydrogen beam horizontally through a vacuum tube and then measuring how much it sags due to the gravitational pull of the Earth, to a precision of 1 per cent.
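To get a sense of how small that sag is, treat the anti-atoms as simple projectiles: over a flight path of length $L$ at speed $v$, the vertical drop is

$$\Delta y \approx \frac{1}{2}\, g \left(\frac{L}{v}\right)^{2}.$$

With purely illustrative numbers – say a 1-m flight path at 500 m/s, which are not AEgIS’s actual parameters – that works out to about $2 \times 10^{-5}$ m, or some 20 micrometres, which gives a sense of the precision required.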

The experiment is not so simple because preparing anti-hydrogen atoms is difficult. As Dr. Doser explained, “The experiments concentrate on anti-hydrogen because that should be the most sensitive system, as it is not much affected by magnetic or electric fields, contrary to charged anti-particles.”

First, antiprotons are derived from the Antiproton Decelerator (AD), a particle storage ring which “manufactures” the antiparticles at a low energy. At another location, a nanoporous plate is bombarded with anti-electrons, which bind with electrons to form a highly unstable system called positronium (Ps).

The Ps is then excited to a specific energy state by exposure to a 205-nanometre laser, and then to an even higher energy state, called a Rydberg level, using a 1,670-nanometre laser. Last, the excited Ps traverses a special chamber called a recombination trap, where it mixes with antiprotons that are controlled by precisely tuned magnetic fields. With some probability, an antiproton will “trap” an anti-electron to form an anti-hydrogen atom.
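For reference, those two wavelengths correspond to photon energies of about 6 eV and 0.7 eV respectively – plain $E = hc/\lambda$ arithmetic, not figures from the AEgIS collaboration:

$$\frac{1240\ \text{eV·nm}}{205\ \text{nm}} \approx 6.0\ \text{eV}, \qquad \frac{1240\ \text{eV·nm}}{1670\ \text{nm}} \approx 0.74\ \text{eV}.$$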

Applications

Before a beam of such anti-hydrogen atoms is generated, however, there are problems to be solved. They involve large electric and magnetic fields to control the speed of and collimate the beams, respectively, and powerful cryogenic systems and ultra-cold vacuums. Thus, Dr. Doser and his colleagues will spend many months making careful changes to the apparatus to ensure these requirements work in tandem by 2014.

While the first antiparticles were discovered in 1932, “until recently, it was impossible to measure anything about anti-hydrogen,” Dr. Hangst wrote. Thus, the ALPHA and AEgIS experiments at CERN provide a seminal setting for exploring the world of antimatter.

Anti-particles have been used effectively in many diagnostic devices such as PET scanners. Consequently, improvements in our understanding of them feed immediately into medicine. To name an application: Antiprotons hold out the potential of treating tumors more effectively.

In fact, the feasibility of this application is being investigated by the ACE experiment at CERN.

In the words of Dr. Doser: “Without the motivation of attempting this experiment, the experts in the corresponding fields would most likely never have collaborated and might well never have been pushed to solve the related interdisciplinary problems.”

Aaron Swartz is dead.

This article, as written by me and a friend, appeared in The Hindu on January 16, 2013.

In July 2011, Aaron Swartz was indicted by the district of Massachusetts for allegedly stealing more than 4.8 million articles from the online academic literature repository JSTOR via the computer network at the Massachusetts Institute of Technology. He was charged with, among other things, wire fraud, computer fraud, obtaining information from a protected computer, and criminal forfeiture.

After paying a $100,000-bond for release, he was expected to stand trial in early 2013 to face the charges and, if found guilty, a 35-year prison sentence and $1 million in fines. More than the likelihood of the sentence, however, what rankled him most was that he was labelled a “felon” by his government.

On January 11, a Friday, Swartz’s fight – against information localisation as well as the label given to him – ended when he hanged himself in his New York apartment. He was only 26. At the time of his death, JSTOR did not intend to press charges and had decided to release 4.5 million of its articles into the public domain. It seems as though this crime had no victims.

But, he was so much more than an alleged thief of intellectual property. His life was a perfect snapshot of the American Dream. But the nature of his demise shows that dreams are not always what they seem.

At the age of 14, Swartz became a co-author of the RSS (RDF Site Summary) 1.0 specification, now a widely used method for subscribing to web content. He went on to attend Stanford University, dropped out, founded a popular social news website and then sold it, leaving him a near-millionaire a few days short of his 20th birthday.

Recurring themes in his life and work, however, were internet freedom and public access to information, which led him to political activism. Demand Progress, an activist organisation he co-founded, campaigned heavily against the Stop Online Piracy Act (SOPA) bill and helped kill it. Had it passed, SOPA would have affected much of the world’s browsing.

At a time that is rife with talk of American decline, Swartz’s life reminds us that for now, the United States still remains the most innovative society on Earth, while his death tells us that it is also a place where envelope pushers discover, sometimes too late, that the line between what is acceptable and what is not is very thin.

The charges that he faced, in the last two years before his death, highlight the misunderstood nature of digital activism — an issue that has lessons for India. For instance, with Section 66A of the Indian IT Act in place, there is little chance of organising an online protest and blackout on par with the one that took place over the SOPA bill.

While civil disobedience and street protests usually carry light penalties, why should Swartz have faced long-term incarceration just because he used a computer instead? In an age of Twitter protests and online blackouts, his death sheds light on the disparity in how digital activism is treated.

His act of trying to liberate millions of scholarly articles was undoubtedly political activism. But had he undertaken such an act in the physical world, he would have faced only light penalties for trespassing as part of a political protest. One could even argue that MIT encouraged such free exchange of information — it is no secret that its campus network has long been extraordinarily open with minimal security.

What, then, was the point of the public prosecutors highlighting his intent to profit from stolen property worth “millions of dollars”, when Swartz’s only aim was to make the articles public as a statement on the problems facing the academic publishing industry? After all, any academic will tell you that there is no way to profit off a hoard of scientific literature unless you dammed the flow and released it only for a fee.

In fact, JSTOR’s decision not to press charges against him came only after it had reclaimed its “stolen” articles, even though Laura Brown, JSTOR’s managing director, had announced in September 2011 that journal content from 500,000 articles would be released for free public viewing and download. In the meantime, Swartz was made to face 13 charges anyway.

Assuming the charges were reasonable at all, his demise means that the gap between those who hold on to information and those who would use it is spanned only by what the government thinks is criminal. That the hammer fell so heavily on someone who tried to bridge this gap is tragic. Worse, long-drawn, expensive court cases are becoming roadblocks on the path towards change, especially when they involve prosecutors incapable of judging the difference between innovation and damage on the digital frontier. It doesn’t help that such prosecution also neatly avoids the aura of illegitimacy that imprisoning peaceful activists would carry for any government.

Today, Aaron Swartz is dead. All that it took to push a brilliant mind over the edge was a case threatening to wipe out his fortune and ruin the rest of his life. In the words of Lawrence Lessig, the American academic and activist who was his mentor at Harvard University’s Edmond J. Safra Centre for Ethics: “Somehow, we need to get beyond the ‘I’m right so I’m right to nuke you’ ethics of our time. That begins with one word: Shame.”

LHC to re-awaken in 2015 with doubled energy, luminosity

This article, as written by me, appeared in The Hindu on January 10, 2013.

After a successful three-year run that saw the discovery of a Higgs-boson-like particle in mid-2012, the Large Hadron Collider (LHC) at CERN, near Geneva, Switzerland, will shut down for 18 months for maintenance and upgrades.

This is the first of three long shutdowns, scheduled for 2013, 2017, and 2022. Physicists and engineers will use these breaks to ramp up one of the most sophisticated experiments in history even further.

According to Mirko Pojer, Engineer-in-Charge of LHC operations, most of these changes were planned in 2011. They will largely concern fixing known glitches in the ATLAS and CMS particle-detectors. The collider itself will receive upgrades to increase its collision energy and frequency.

Presently, the LHC smashes two beams, each composed of precisely spaced bunches of protons, at 3.5-4 tera-electron-volts (TeV) per beam.

By 2015, the beam energy will be pushed up to 6.5-7 TeV per beam. Moreover, the bunches, which were previously collided at intervals of 50 nanoseconds, will be collided every 25 nanoseconds.

After the upgrades, “in terms of performance, the LHC will deliver twice the luminosity,” Dr. Pojer noted in an email to this Correspondent, referring to the integrated luminosity: a measure of the total number of collisions the machine can deliver, per unit area, over a period of running, which the detectors can then track.

The instantaneous luminosity, which is the luminosity delivered per second, will be increased to 1×10³⁴ per square centimetre per second, ten times greater than before, and well on its way to peaking at 7.73×10³⁴ per square centimetre per second by 2022.
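
To make those figures concrete, here is a rough back-of-the-envelope sketch. The number of effective seconds of collisions per year (about 10⁷) is a ballpark I am assuming for illustration, not a figure from CERN or from this article.

# Back-of-the-envelope arithmetic on the quoted luminosity figures.
# The running time per year is an assumed ballpark, not an official number.

INSTANTANEOUS_LUMINOSITY = 1e34        # per square centimetre per second, post-upgrade
SECONDS_OF_COLLISIONS_PER_YEAR = 1e7   # assumption: effective collision time in a year
CM2_PER_INVERSE_FEMTOBARN = 1e39       # 1 fb^-1 corresponds to 1e39 per square centimetre
BUNCH_SPACING_SECONDS = 25e-9          # bunch spacing after the upgrade

integrated = INSTANTANEOUS_LUMINOSITY * SECONDS_OF_COLLISIONS_PER_YEAR
print(f"Integrated luminosity: {integrated / CM2_PER_INVERSE_FEMTOBARN:.0f} fb^-1 per year")
print(f"Bunch-crossing rate: {1.0 / BUNCH_SPACING_SECONDS / 1e6:.0f} MHz")

# About 100 inverse femtobarns per year at the new instantaneous luminosity,
# with bunches crossing 40 million times a second.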

As Steve Myers, CERN’s Director for Accelerators and Technology, announced in December 2012, “More intense beams mean more collisions and a better chance of observing rare phenomena.” One such phenomenon is the appearance of a Higgs-boson-like particle.

The CMS experiment, one of the detectors on the LHC ring, will receive new pixel sensors, the technology used to track the paths of particles emerging from collisions. To make use of the impending new luminosity regime, an extra layer of these advanced sensors will be inserted around a smaller beam pipe.

If results from this new layer are successful, CMS will receive the full unit in late 2016.

In the ATLAS experiment, unlike in CMS, which was built with greater luminosities in mind, the pixel sensors are expected to wear out within a year of the upgrades. As an intermediate solution, a new layer of sensors, called the B-layer, will be inserted into the detector to serve until 2018.

Because of the risk of radiation damage from the more numerous collisions, dedicated neutron shields will be fitted, according to Phil Allport, ATLAS Upgrade Coordinator.

Both ATLAS and CMS will also receive evaporative cooling systems and new superconducting cables to accommodate the higher performance that will be expected of them in 2015. The other experiments, LHCb and ALICE, will also undergo inspections and upgrades to cope with higher luminosity.

An improved failsafe system will be installed and the existing one upgraded to prevent accidents such as the one in 2008.

In that incident, an electrical failure damaged 29 magnets and released six tonnes of liquid helium into the tunnel, precipitating an eight-month shutdown.

Generally, as Martin Gastal, CMS Experimental Area Manager, explained via email, “All sub-systems will take the opportunity of this shutdown to replace failing parts and increase performance when possible.”

All these changes have been optimised to fulfil the LHC’s future agenda, which includes studying the properties of the newly discovered particle and looking for signs of new physics such as supersymmetry and extra dimensions.

(Special thanks to Achintya Rao, CMS Experiment.)