Returning to WordPress… fourth time round.

My blog got me my job. After all, it did make a cameo appearance during my interview, drawing an “Impressive!” from the Editor of the newspaper sitting opposite me. Ever since that episode in early June, I’ve felt justified in spending almost three hours on it each day: checking the stats, making small changes to the design, keeping an eye out for new options and themes, weeding out spam comments, etc.

It is also since then that I have been comfortable spending some money on it. First, I bought a domain, rented some space at Hostgator, and set up WordPress. That didn’t last long because I had a bad time with Hostgator. It could’ve just been that once, but I decided to move. Next in line was Posterous, but once I learnt that it was being bought by Twitter, I decided to move again. I didn’t like the idea of my blog’s host being affiliated with a social networking service, you see.

The third option was Blogger. While its immense flexibility was welcoming, I found it too… unclassy, if I may say so. It didn’t proffer any style of its own, nor did it show any inclination for one. While WordPress.com restricts access to themes’ CSS files, Blogger has almost no restrictions – but no offerings either. This means that if I wanted a particularly styled theme, I’d have to code it up from scratch instead of being able to choose from over 200 themes, as on WordPress. That much flexibility isn’t always great, I learnt.

The final stop was Squarespace. At the end of the day, Squarespace doesn’t fall short on many counts (if it falls short at all). For $96 p.a., it offers one free domain, 20 GB of hosting space, a wealth of templates all easily customized, and a minimalist text editor that I think I will miss the most. Where I think it doesn’t match up to WordPress is social networking.

Bloggers on WordPress have the option of following other blogs, liking posts, sharing stuff they like on their own blogs, and generally availing the option to interact more strongly than just by sharing posts on Facebook/Twitter or leaving comments. In fact, I think WordPress also has a lot of “bloggers” who don’t have blogs of their own but are logged in to interact with authors they like.

So, for the fourth time, I returned to WordPress… and here I am. Will I continue to be here? I don’t know. I’m sure something else will come along and I’ll put some of my money in it, perhaps only to find out why WordPress is so awesome for the fifth time.

Some more questions concerning Herschel…

The Herschel Space Observatory, a.k.a. Herschel, was the largest space telescope at the time of its launch and still is. With a collecting area twice as large as the Hubble Space Telescope’s, and operating in the far-infrared part of the spectrum, Herschel could look through clouds of gas and dust in the farthest reaches of the universe and pick up even really faint signals from distant stars and nebulae.

To do this, it depends on three instruments – PACS, SPIRE and HIFI – that are cooled to fractions of a degree above absolute zero, much colder than anything you could find in the solar system. At these temperatures, the instruments are at their most sensitive. Herschel achieves this frigidity using liquid helium, a superfluid coolant that constantly boils off as it carries heat away from the instruments. By the end of this month (March 2013), all the helium will have boiled off, leaving PACS, SPIRE and HIFI effectively blind.

I wrote an article in The Hindu on March 28, a run-of-the-mill news piece that had to leave out some interesting bits of information I’d gathered from the lead scientist, mission manager, and project scientist I’d spoken to. I’ve attached my questions and their answers, which contain said bits of information. I think they’re important because they capture details the news format couldn’t accommodate.

Here are the answers (My questions are in bold).

Herschel was foreseen to run out of helium in early 2013. Considering its unique position in space, why wasn’t a “warm” experiment installed on-board?

MATT GRIFFIN – Lead Scientist, SPIRE Instrument, Herschel Space Observatory

Herschel was designed to operate in the far infrared part of the spectrum – wavelengths typically hundreds of times longer than the wavelengths of visible light. For the far infrared, extreme cooling is always required. For a telescope operating at shorter wavelengths (about ten times longer in wavelength than visible light), a “warm mission” is feasible. This could have been done with Herschel, but it would have required that the surface of the telescope be made far more precise and smooth. That would have made it very much more expensive, leaving less money available for the rest of the spacecraft and the instruments.

Any space mission must be built within a certain budget, and it is usually best to design it to be as effective as possible for a certain wavelength range.  Herschel actually covers a very wide range – from 55 to nearly 700 microns in wavelength.  That’s more than a factor of ten, which is very impressive. To make a warm mission possible would have meant making the telescope good enough to work at ten times shorter wavelength, and adding a fourth instrument.

Herschel was and is the only space telescope observing in the submillimeter and far infrared part of the spectrum. After it goes blind, are there any plans to replace it with a more comprehensive telescope? Or to what extent do you think the loss of data because of its close-of-ops will be offset by upcoming ground-based telescopes such as ALMA?

GORAN PILBRATT – Project Scientist, European Space Research and Technology Centre, Noordwijk

There are currently no concrete ESA (or anywhere else) plans for a replacement or follow-up mission. What many people hope for is the Japanese SPICA mission, which may fly beyond 2020 with an ESA telescope and a European instrument called SAFARI, both based on Herschel experience. Time will tell. Of course the NASA JWST will be important to almost every astronomer when it finally flies. In the meantime, ALMA and also the airborne SOFIA observatory are of interest. There is also a lot of follow-up observing to be done with many different ground-based telescopes based on Herschel data. This is already happening.

LEO METCALFE – Mission Manager, ESA Centre, Madrid

After the anticipated exhaustion of the liquid helium cryogen which keeps the Herschel instruments cold, scientific observations with Herschel will cease. However, the data gathered during the four years of operations, stored in the Herschel Science Archive (HSA) at ESAC, will remain available to the worldwide astronomical community for the foreseeable future. Until the end of 2017, ESA – for much of the time in collaboration with the instrument teams and the NASA Herschel Science Centre – will actively support users in the exploitation of the data.

That said, there is no comparable mission in the currently approved ESA programme considering launches into the early 2020s. The Japanese Aerospace Exploration Agency (JAXA) mission SPICA is of comparable size to Herschel and will operate out to wavelengths a little over 210 microns – in the far-infrared, but only barely reaching what would generally be termed the sub-millimetre region. It may be launched before 2020.

Because of absorption of infrared radiation by the Earth’s atmosphere, ground-based telescopes have limited capacity to compete with orbital systems over much of the Herschel wavelength range.

However, the Atacama Large Millimeter Array (ALMA) in the Chilean Andes overlaps in its wavelength coverage with the sub-millimeter parts of the Herschel range. But a typical map size for ALMA might be on the order of, say, 10 arcseconds (the full Moon spans about 1,800 arcseconds, to give some scale), while a typical Herschel map might cover an area 10 arcminutes (600 arcseconds) on a side. Instead of large area coverage, ALMA provides extremely high spatial resolution (down to small fractions of an arcsecond), far finer than Herschel could achieve.

So ALMA is well suited to the detailed follow-up of Herschel observations of single high-interest sources, rather than providing comparable coverage to Herschel.

There must be a lot of data gathered by Herschel that is still to be analysed. While creating a legacy archive, will you also be following some threads of investigation over others?

MATT GRIFFIN – Lead Scientist, SPIRE Instrument, Herschel Space Observatory

Although Herschel’s life was limited, it was designed to make observations very quickly and efficiently, and it has collected a huge amount of data.  It will be very important during the next few years, in what we call the Post-Operations period, to process all the data in the best and the most uniform way, and to make it available in an easy-to-use archive for future astronomers.

This means that the real scientific power of Herschel is still to be realised, as its results will be used for many years in the future. Only a small fraction of the data from Herschel has so far been fully investigated.

It is clear that when the data are fully explored, and when Herschel’s observations are followed up with other telescopes, a great deal more will be learned.  This is especially true for the large surveys that Herschel has done – surveys of many thousands of distant galaxies, and surveys of clouds of gas and dust in our own galaxy in which stars are forming. In the coming years, although Herschel will no longer operate, its scientific project will continue – to understand the birth processes of stars and galaxies.

When did you start working with the Herschel mission? How has your experience been with it? What does the team that worked on Herschel move on to after this?

LEO METCALFE – Mission Manager, ESA Centre, Madrid

In 1982 the Far Infrared and Sub-millimetre Telescope (FIRST) was proposed to ESA. This mission concept evolved and eventually was named Herschel, in honour of the great German/English astronomer William Herschel.

The build-up of the ESA team for Herschel started in earnest in the early 2000s. I came on board as Herschel Science Operations Manager in 2007, with the main task of integrating and training the ESA Science Operations Team and the wider Science Ground Segment (SGS – which includes the external-to-ESA Instrument Control Centres) to be a smoothly functioning system in time for launch, which took place in May 2009.

So my experience of Herschel began with the recruitment of many of the operational team members and the integration of the Science Ground Segment (SGS), focussed on the pre-launch end-to-end testing of the entire observatory system – with data flowing from the spacecraft, then on the ground in the test facility at ESTEC in the Netherlands – and continued through a series of pre-flight simulations which put the SGS through all the procedures they would need to follow during operations.

As a result we “hit the ground running” after launch, and the operations of the SGS have been smooth throughout the mission. Those operations have spanned the Launch and Early Orbit (LEOP) Phase, the in-flight Commissioning Phase, the Performance Verification, Science Demonstration, and Routine Operations Phases of the mission, and have included the recovery from the early failure of the prime chain of the HIFI instrument, and the handling of various lesser contingencies caused by ionising radiation induced corruptions of on-board instrument memory, among others.

It has been a fast paced and exciting mission which in the end has returned data from almost 35,000 individual science observations. It’s going to be hard to adjust to not having an active spacecraft up there.

Concerning what happens to the team(s) that have worked on Herschel: The ESA team that supervised the construction of the Spacecraft already moved on to other missions soon after Herschel was launched.

The Science Operations Team at the Herschel Science Centre at HSC/ESAC in Spain, together with the Instrument Control Centres (ICCs) formed by the teams that built the scientific instruments (distributed through the ESA member states) and the Mission Operations Centre at ESOC in Germany, have been responsible for the operation of the Spacecraft and its instruments in flight. Those teams will now run down.

A fraction of the people will continue to work in the Herschel project through its almost 5-year Post-operations Phase mentioned already above, while the remainder have or will seek positions with upcoming missions like Rosetta, Gaia, BepiColombo, Solar Orbiter, Euclid … or in some cases may move on to other phases of their careers outside the space sector.

We are talking about people who are highly experienced software engineers, or PhD physicists or astronomers. Generally they are highly employable.

EOM


Big science, bigger nationalism.

Nature India ran a feature on March 21 about three Indian astrophysicists who had contributed to the European Space Agency’s Planck mission that studied the universe’s CMBR, etc. I was wary even before I started to read it. Why? Because of that first farce in July, 2012, that’s why.

That was when many Indians called for the ‘boson’ in the ‘Higgs boson’ to be celebrated with as much zest as the ‘Higgs’. Oddly, Kolkata sported none of the cultural drapes taking ownership of the ‘boson’, unlike Edinburgh, which was quick to embrace the ‘Higgs’.

Why? Because a show of Indians celebrating India’s contributions to science through claims of ownership betrays that it’s not a real sense of ownership at all, but just another attempt to hog the limelight. If we wanted to own the ‘boson’ in honor of Satyendra Nath Bose, we’d have ensured he was remembered by the common man even outside the context of the Higgs boson. For his work with Einstein in establishing the Bose-Einstein statistics, for starters.

This is an attitude I find divisive and abhorrent. At the least, that circumstantial shout-out gives us no cause to remember S.N. Bose the rest of the time. At the most, it paints a false picture of how ownership of scientific knowledge manifests itself in the 21st century. The Indian contribution, the Chilean contribution, the Russian contribution… these are divisive tendencies in a world constantly aspiring to Big Science that is more seamless and painless.

Ownership of scientific knowledge in the 21st century, I believe, cannot be individuated. It belongs to no one and everyone at the same time. In the past, using science-themed decorations to play up our contributions to science may have inspired someone to believe we did good. Today, however, it’s simply taking a stand on a very slippery slope.

I understand how scientific achievement in the last century or so acquired a colonial attitude, and how far more Indians have received the Nobel Prize as Americans than as Indians. However, the scientific method has also become more rigorous, more demanding in terms of resources and time. While America may have shot ahead in the last century of scientific achievement, the many individuals on its rosters of academic excellence are also a tribute to some other country’s money and intellectual property.

I understand how news items of a nation’s contributions to an international project could improve the public’s perception of where and how their tax-money is being spent. However, the alleviation of any ills in this area must not arise solely from the notification that a contribution was made. It should arise through a dissemination of the importance of that contribution, too. The latter is conspicuous by its absence… to me, at least.

We put faces to essentially faceless achievements and then forget their features over time.

I wish there had been an entity to point my finger at. It could’ve been just the government, it could’ve been just a billion Indians. It could’ve been just misguided universities. It could’ve been just the Indian media. Unfortunately, it’s a potent mix of all these possibilities, threatening to blow up with nationalistic fervor in a concordant world.

As for that Nature India article, it did display deference to the jingoism. How do I figure? Because it’s an asymmetric celebration of achievement – an achievement not even rooted in governmental needs.

~

This post also appeared in ‘The Copernican’ science blog at The Hindu on March 28, 2013.

Crowd-sourcing a fantasy fiction tale

What if thousands of writers, economists, philosophers, scientists, teachers, industrialists and many other people from other professions besides were able to pool their intellectual and creative resources to script one epic fantasy-fiction story?

Such an idea would probably form the crux of an average to poor book idea, but the story itself would be awesome, methinks.

Here’s an example. Every great writer of sci-fi/fantasy I’ve read – most notably Asimov – has speculated upon the rapidly changing nature of different professions in their works.

The simplest example manifests in Asimov’s 1957 short story Profession. In the story, children are educated no longer within classrooms but almost instantaneously through a brain-computer interface, a process called taping.

Where are the teachers in this world? They, it seems, would come later, in the guise of professionals who compose and compile those information-heavy tapes. Seeing as Profession is set in the 66th century of human civilization, the taping scenario is entirely plausible. We could get there.

But this is one man’s way of constructing a possible future among infinite others. Upon closer scrutiny, many inconsistencies between Asimov’s world and ours could be pointed out. For one, the author could have presupposed events in our future which might never really happen.

However, such scrutiny would be meaningless because that is not the purpose of Asimov’s work. He writes to amaze, to draw parallels – not necessarily contiguous ones – between our world and one in the future.

But what if Asimov had been an economist instead of a biochemist? Would he have written his stories any differently?

My (foolish) idea is to just draw up a very general template of a future world, to assign different parts of that template to experts from different professions, and then see how they think their professions would have changed. More than amaze, such a world might enlighten us… and I think it ought to be fascinating for just that reason.

The cloak of fantasy – the necessity of stories to engage all those professions and their intricate gives-and-takes and weave them into an empathetic narrative – could then be the work of writers and other such creatively inclined people or, as I like to call them, ‘imagineers’.

This idea has persisted for a long time. It was stoked when, in my college days, I first encountered the MMORPG called World of Warcraft. In it, many players from across the world come together and play a game set in the fictitious realm of Azeroth, designed by Blizzard, Inc.

However, the game has already been drawn up to its fullest, so to speak. For example, there are objectives for players to attain by playing a certain character. If the character fails, he simply tries again. For the game to progress, the objectives must be attained. That’s what makes it a game, by definition.


My idea wouldn’t be a game because there are no objectives. My idea would be a game to define those objectives, and in a much more inclusive way. Imagine an alternate universe for all of us to share in. The story goes where our all-encompassing mind would take us.

The downside, of course, would be the loss of absolute flexibility, with so many clashing ideas and, more powerful still, egos. But… play it.

How hard is it to violate Pauli’s exclusion principle?

Ultracooled rubidium atoms are bosonic, and start to behave as part of a collective fluid, their properties varying together as shown above. Bosons can do this because they don’t obey Pauli’s principle. Photo: Wikimedia Commons

A well-designed auditorium always has all its seats positioned on an inclined plane. ​Otherwise it wouldn’t be well-designed, would it? Anyway, this arrangement solves an important problem: It lets people sit anywhere they want to irrespective of their heights.

It won’t matter if a taller person sits in front of a shorter one – the inclination will render their height-differences irrelevant.

However, if the floor had been flat – if all the seats were simply placed one behind another instead of being raised or lowered – then people would have been forced to follow a particular seating order. Like the discs in a game of Tower of Hanoi, the seats would have to be filled with shorter people coming first if everyone’s view of the stage is to be unobstructed.

It’s only logical.​

A similar thing happens inside atoms. While protons and neutrons are packed into a tiny nucleus, electrons orbit the nucleus in relatively much larger orbits. For instance, if the nucleus were 2 m across, the electrons would be orbiting it up to 10 km away. This is because each electron can come only so close to the nucleus before the attraction between its negative charge and the positively charged nucleus would pull it in.

However, this doesn’t mean all electrons orbit the nucleus at the same distance. They follow an order. Like the seats on the flat floor, where shorter people must sit in front of taller ones, less energetic electrons must orbit closer to the nucleus than more energetic ones. Similarly, all electrons of the same energy must orbit the nucleus at the same distance.

Over the years, scientists have observed that around every atom of a known element there are well-defined energy levels, each accommodating a fixed and known number of electrons. These quantities are determined by various properties of electrons, designated by the particle’s four quantum numbers: n, l, m_l and m_s.

Nomenclature:

1. n is the principal quantum number, and designates the energy level of the electron.

2. l is the azimuthal quantum number, and describes the orbital angular momentum with which the electron is zipping around the nucleus.

3. m_l is the magnetic quantum number, and gives the projection of l along a specified axis.

4. m_s is the spin quantum number, and describes the “intrinsic” angular momentum of the electron, a quantity that doesn’t have a counterpart in Newtonian mechanics.

So, an electron’s occupation of some energy slot around a nucleus depends on the values of the four quantum numbers. ​And the most significant relation between all of them is the Pauli exclusion principle (PEP): no two electrons with all four same quantum numbers can occupy the same quantum state.

An energy level is an example of a quantum state. This means if two electrons exist at the same level inside an atom, and if their n, l and m_l values are equal, then their m_s value (i.e., spin) must be different: one up, one down. Two electrons with equal n, l, m_l, and m_s values couldn’t occupy the same level in the same atom.
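
To see how these four numbers limit occupancy, here’s a minimal Python sketch – my own illustration, not something from the original post – that enumerates the (n, l, m_l, m_s) combinations the rules allow for a given principal quantum number and counts them; the familiar capacity of 2n^2 electrons per shell drops out.

```python
def allowed_states(n):
    """List every (n, l, m_l, m_s) combination the quantum rules permit for a
    given principal quantum number n:
      l   runs over 0, 1, ..., n-1
      m_l runs over -l, ..., +l
      m_s is +1/2 or -1/2
    The exclusion principle lets each combination hold at most one electron,
    so the length of this list is the shell's electron capacity."""
    states = []
    for l in range(n):
        for m_l in range(-l, l + 1):
            for m_s in (+0.5, -0.5):
                states.append((n, l, m_l, m_s))
    return states

for n in range(1, 5):
    print(f"n = {n}: {len(allowed_states(n))} states (2n^2 = {2 * n * n})")
# n = 1: 2 states -- the 1s level holds two electrons, one spin-up and one spin-down;
# n = 2: 8 states, n = 3: 18 states, n = 4: 32 states, and so on.
```

Squeezing a third electron into the n = 1 shell – which is what the anomalous X-ray events described below amount to – would force two electrons to share all four numbers, which is exactly what the principle forbids.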

But why?​

The PEP is named for its discoverer, Wolfgang Pauli. Interestingly, Pauli himself couldn’t put a finger on why the principle was the way it was. From his Nobel lecture, 1945 (PDF):

Already in my original paper I stressed the circumstance that I was unable to give a logical reason for the exclusion principle or to deduce it from more general assumptions. I had always the feeling and I still have it today, that this is a deficiency. … The impression that the shadow of some incompleteness [falls] here on the bright light of success of the new quantum mechanics seems to me unavoidable.

Nor was the principle’s ontology sorted out over time. In 1963, Richard Feynman said:

“…. Why is it that particles with half-integral spin are Fermi particles (…) whereas particles with integral spin are Bose particles (…)? We apologize for the fact that we can not give you an elementary explanation. An explanation has been worked out by Pauli from complicated arguments from quantum field theory and relativity. He has shown that the two must necessarily go together, but we have not been able to find a way to reproduce his arguments on an elementary level. It appears to be one of the few places in physics where there is a rule which can be stated very simply, but for which no one has found a simple and easy explanation. (…) This probably means that we do not have a complete understanding of the fundamental principle involved. For the moment, you will just have to take it as one of the rules of the world.

(R. Feynman, The Feynman Lectures on Physics, Vol. 3, Chap. 4, Addison-Wesley, Reading, Massachusetts, 1963)

The Ramberg-Snow experiment​

In 1990, two scientists, Ramberg and Snow, devised a simple experiment to study the principle. They connected a thin strip of copper to a 50-ampere current source. Then, they placed an X-ray detector over the strip. When electric current passed through the strip, X-rays would be emitted, which would then be picked up by the detector for analysis.

How did this happen?​

When electrons jump from a higher-energy (i.e., farther) energy level to a lower-energy (closer) one, they must lose some energy to be permitted their new status. The energy can be lost as light: X-rays, UV radiation, etc. Because we know how many distinct energy levels there are in the atoms of each element and how much energy each of those orbitals has, electrons jumping levels in different elements must lose different, but fixed, amounts of energy.

So, when current is passed through copper, extra electrons are introduced into the metal, precipitating the forced occupation of some energy-level, like people sitting in the aisles of a full auditorium.

In this scenario, or in any other one for that matter, an electron jumping from the 2p level to the 1s level in a copper atom ​must lose 8.05 keV as X-rays – no more, no less, no differently.
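
As a rough cross-check on that figure, here’s a back-of-the-envelope sketch. The binding energies are rounded, indicative values from standard X-ray data tables – my own assumption for illustration, not numbers taken from the experiment.

```python
# Approximate electron binding energies in copper, in keV (rounded, indicative
# textbook values -- assumptions for illustration only).
E_1s = 8.98   # K shell (1s)
E_2p = 0.93   # L3 subshell (2p)

# An electron dropping from 2p to 1s must shed the difference as a photon.
photon_kev = E_1s - E_2p
print(f"2p -> 1s photon energy: {photon_kev:.2f} keV")  # ~8.05 keV: copper's K-alpha line
```

The 7.7 keV anomaly described next fits the same jump made into a 1s level that already holds two electrons; roughly speaking, the extra electron screens the nucleus a little more and nudges the photon energy down.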

However, Ramberg and Snow found, after over two months of data-taking in a basement at Fermilab, Illinois, that about 1 in 170 trillion trillion X-ray signals carried not 8.05 keV but 7.7 keV.

The 1s orbital usually has space for two electrons going by the PEP. If one slot’s taken and the other’s free, then an electron wanting to jump in from the 2p level must lose 8.05 keV. However, ​if an electron was losing 7.7 keV, where was it going?

After some simple calculations, the scientists made a surprising discovery.​ The electron was squeezing itself in with two other electrons in the 1s level itself – instead of resorting to the aisles, it was sitting on another electron’s lap! This meant that the PEP was being violated with a probability of 1 in 170 trillion trillion.

While this is a laughably minuscule number, it’s nevertheless a positive number even taking into account possibly large errors arising out of the unsophisticated nature of the Ramberg-Snow apparatus. Effectively, where we thought there ought to be no violations, there were.

Just like that, there was a hole in our understanding of the exclusion principle.​

And it was the sort of hole with which we could make lemonade.

Into the kitchen​

So, fast-forward to 2006, to 26 lemonade-hungry physicists, one pretentiously titled experiment, and one problem statement: Could the PEP be violated much more or much less often than once in 170 trillion trillion?

The setup was called the VIP for ‘VIolation of the Pauli Exclusion Principle Experiment’.​ How ingenious. Anyway, the idea was to replicate the Ramberg-Snow experiment in a more sophisticated environment. Instead of a simple circuit that you could build on the table top, they used one in the Gran Sasso National Lab that looked like this.

This is the DEAR (DAΦNE Exotic Atom Research) setup, slightly modified to make way for the VIP setup. Everything’s self-evident, I suppose. CCD stands for charge-coupled device, which is basically an X-ray detector.

(The Gran Sasso National Lab, or Laboratori Nazionali del Gran Sasso, is one of the world’s largest underground particle physics laboratories, consisting of around 1,000 scientists working on more than 15 experiments.​ It is located near the Gran Sasso mountain, between the towns of L’Aquila and Teramo in Italy.)​

​After about three years of data-taking, the team of 26 announced that it had bettered the Ramberg-Snow data by three orders of magnitude. According to data made available in 2009, they declared the PEP had been violated only once every 570,000 trillion trillion electronic level-jumps.​

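A quick sanity check of those two numbers, restated in scientific notation so the ‘three orders of magnitude’ claim is easy to see – this is just arithmetic on the figures quoted above:

```python
# Probability of a PEP-violating level-jump, as quoted above.
# "1 in 170 trillion trillion"     -> 1 / (170 * 10**24)     = 1 / 1.7e26  (Ramberg & Snow)
# "1 in 570,000 trillion trillion" -> 1 / (570_000 * 10**24) = 1 / 5.7e29  (VIP, 2009 data)
ramberg_snow = 1 / 1.7e26
vip = 1 / 5.7e29

print(f"VIP's limit is ~{ramberg_snow / vip:.0f}x tighter")  # ~3353x, i.e. about three orders of magnitude
```
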
Fewer yet surely

Hurrah! The principle was being violated 1,000 times less often than thought, but it was being violated still. At this stage, the VIP team seemed to think the number could be smaller still, perhaps 100 times smaller. On March 5, 2013, it submitted a paper (PDF) to the arXiv pre-print server containing a proposal for the more sensitive VIP2.

You might think that since the number is still positive, VIP’s efforts are just an attempt to figure out how many angels are dancing on the head of a pin.

Well, think about it this way. The moment we zero in on one value – one frequency with which anomalous level-jumps take place – we’ll be in a position to stick the number into a formula and see what that means for the world around us.

Also, electrons are only one kind of a class of particles called fermions, all of which are thought to obey the PEP. Perhaps other experiments conducted with other fermions, such as tau leptons and muons, will throw up some other rate of violation. In that case, we’ll be able to say the misbehavior is actually dependent on some property of the particle, like its mass, spin, charge, etc.

Until that day, we’ve got to keep trying.​

(This blog post first appeared at The Copernican on March 11, 2013.)

Where does the Higgs boson come from?

When the Chelyabinsk meteor – dubbed Chebarkul – entered Earth’s atmosphere at around 17 km/s, it started to heat up due to friction. After a point, cracks already present in the 9,000-tonne chunk of rock became licensed to widen, eventually splitting Chebarkul into smaller parts.

While the internal structure of Chebarkul was responsible for where the cracks widened and at what temperature and other conditions, the rock’s heating was the tipping point. Once it got hot enough, its crystalline structure began to disintegrate in some parts.

Spontaneous symmetry-breaking

About 13.75 billion years ago, this is what happened to the universe. At first, there was a sea of energy, a symmetrically uniform block. Suddenly, this block was rapidly exposed to extreme heat. Once it hit about 10^15 kelvin – 173 billion times hotter than our Sun’s surface – the block disintegrated into smaller packets called particles. Its symmetry was broken. The Big Bang had happened.


The Big Bang splashed a copious amount of energy across the universe, whose residue is perceivable as the CMBR.

Quickly, the high temperature fell off, but the particles couldn’t return to their original state of perfect togetherness. The block was broken forever, and the particles now had to fend for themselves. There were disturbances, or perturbations, in the system, and forces started to act. Physicists today call this the Nambu-Goldstone (NG) mode, named for Jeffrey Goldstone and Yoichiro Nambu.

In the tradition of particle physics, which treats everything in terms of particles, the forces in the NG mode were characterised in terms of NG bosons. The exchange of these bosons between two particles meant they were exchanging forces. Since each boson is also a particle, a force can be thought of as the exchange of energy between two particles or bodies.

This is just like the concept of phonons in condensed matter physics: when atoms that are part of a perfectly arranged array vibrate, physicists know they contain some extra energy that makes them restless. They isolate this surplus in the form of a particle called a phonon, and address the entire array’s surplus in terms of multiple phonons. So, as a ripple of restlessness moves through the solid, it behaves like a sound wave moving through it. Simplifies the math.

Anyway, the symmetry-breaking also gave rise to some fundamental forces. They’re called ‘fundamental’ because of their primacy, and because they’re still around. They were born because the disturbances in the energy block, encapsulated as the NG bosons, were interacting with an all-pervading background field called the Higgs field.

The Higgs field has four components, two charged and two uncharged. Another, more common, example of a field is the electric field, which has two components at every point: a strength and a direction. Components of the Higgs field perturbed the NG bosons in a particular way to give rise to four fundamental forces, one for each component.

So, just like in Chebarkul’s case, where its internal structure dictated where the first cracks would appear, in the block’s case, the heating had disturbed the energy block to awaken different “cracks” at different points.

The Call of Cthulhu

The first such “crack” to be born was the electroweak force. As the surroundings of these particles continued to cool, the electroweak force split into two: electromagnetic (eM) and weak forces.

The force-carrier for the eM force is called a photon. Photons can exist at different energies, and at each energy-level, they have a corresponding frequency. If a photon happens to be in the “visible range” of energy-levels, then each frequency shows itself as a colour. And so on…
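
Since the post leans on this energy-frequency link, here’s a minimal sketch of the relation (E = hf, λ = c/f). The constants and the rough 380-750 nm visible band are standard textbook values, used here purely for illustration.

```python
# Planck relation: a photon's energy E, frequency f and wavelength lambda are
# tied together by E = h*f and lambda = c/f.
H = 6.626e-34    # Planck constant, J*s
C = 3.0e8        # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def describe_photon(energy_ev):
    """Print a photon's wavelength and whether it lands in the visible band."""
    freq = (energy_ev * EV) / H           # f = E/h
    wavelength_nm = (C / freq) * 1e9      # lambda = c/f, in nanometres
    band = "visible" if 380 <= wavelength_nm <= 750 else "not visible"
    print(f"{energy_ev:10.2f} eV -> {wavelength_nm:10.2f} nm ({band})")

describe_photon(2.3)      # green-ish visible light
describe_photon(8050.0)   # an ~8 keV X-ray: far outside the visible band
```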

The force-carriers of the weak force are the W+, W-, and Z bosons. At the time the first W/Z bosons came to life, they were massless. We know now, because of Einstein’s mass-energy equivalence, that this means the bosons had no rest energy. How were they particulate, then?

Imagine an auditorium where an important lecture’s about to be given. You get there early, your friend is late, and you decide to reserve a seat for her. Then, your friend finally arrives 10 minutes after the lecture’s started and takes her seat. In this scenario, after your arrival, the seat was there all along as ‘friend’s seat’, even though your friend took her time to get there.

Similarly, the W/Z bosons, which became quite massive later on, were initially massless. They had to have existed when the weak force came to life, if only to account for a new force that had been born. The debut of massiveness happened when they “ate” the NG bosons – the disturbed block’s surplus energy – and became very heavy.

Unfortunately for them, their snacking was irreversible. The W/Z bosons couldn’t regurgitate the NG bosons, so they were doomed to be forever heavy and, consequently, short-ranged. That’s why the force that they mediate is called the weak force: because it acts over very small distances.

You’ll notice that the W+, W-, and Z bosons make up only three components of the Higgs field. What about the fourth component?

Enter: Higgs boson

That’s the Higgs boson. And now, getting closer to pinning down the Higgs boson means we’re also getting closer to validating the Higgs mechanism within the quantum mechanical formulation in which we understand the behaviours of these particles and forces. This formulation is called the Standard Model.

(This blog post first appeared at The Copernican on March 8, 2013.)

Higgs boson closer than ever

The article, as written by me, appeared in The Hindu on March 7, 2013.

Ever since CERN announced that it had spotted a Higgs boson-like particle on July 4, 2012, its flagship Large Hadron Collider (LHC), apart from similar colliders around the world, has continued running experiments to gather more data on the elusive particle.

The latest analysis of the results from these runs was presented at a conference now underway in Italy.

While it is still too soon to tell if the one spotted in July 2012 was the Higgs boson as predicted in 1964, the data is converging toward the conclusion that the long-sought particle does exist and has the expected properties. More results will be presented over the upcoming weeks.

In time, particle physicists hope that it will once and for all close an important chapter in physics called the Standard Model (SM).

The announcements were made by more than 15 scientists from CERN on March 6 via a live webcast from the Rencontres de Moriond, an annual particle physics forum that has been held in La Thuile, Italy, since 1966.

“Since the properties of the new particle appear to be very close to the ones predicted for the SM Higgs, I have personally no further doubts,” Dr. Guido Tonelli, former spokesperson of the CMS detector at CERN, told The Hindu.

Interesting results from searches for other particles, as well as the speculated nature of fundamental physics beyond the SM, were also presented at the forum, which runs from March 2-16.

Physicists exploit the properties of the Higgs to study its behaviour in a variety of environments and see if it matches the theoretical predictions. A key goal of the latest analyses has been to measure the strength with which the Higgs couples to other elementary particles, in the process giving them mass.

This is done by analysing the data to infer the rates at which the Higgs-like particle decays into known lighter particles: W and Z bosons, photons, bottom quarks, tau leptons, electrons, and muons. These particles’ signatures are then picked up by detectors to infer that a Higgs-like boson decayed into them.

The SM predicts these rates with good precision.

Thus, any deviation from the expected values could be the first evidence of new, unknown particles. By extension, it would also be the first sighting of ‘new physics’.

Bad news for new physics, good news for old

After analysis, the results were found to be consistent with a Higgs boson of mass near 125-126 GeV, measured at both 7- and 8-TeV collision energies through 2011 and 2012.

The CMS detector observed that there was fairly strong agreement between how often the particle decayed into W bosons and how often it ought to happen according to theory. The ratio between the two was pinned at 0.76 +/- 0.21.
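
One hedged way to read that 0.76 +/- 0.21 figure, with my own arithmetic rather than anything from the article: the Standard Model expectation for this ratio is 1 by construction, so the measurement sits about one standard deviation away – no tension at all.

```python
# H -> WW signal strength (ratio of observed to SM-predicted rate), as quoted above.
measured, sigma = 0.76, 0.21
sm_expected = 1.0   # the SM prediction for this ratio, by construction

deviation = abs(measured - sm_expected) / sigma
print(f"Deviation from the SM expectation: {deviation:.1f} sigma")  # ~1.1 sigma
```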

Dr. Tonelli said, “For the moment, we have been able to see that the signal is getting stronger and even the difficult-to-measure decays into bottom quarks and tau-leptons are beginning to appear at about the expected frequency.”

The ATLAS detector, in parallel, was able to observe with 99.73 per cent confidence that the analysed particle had zero spin, which is another property that brings it closer to the predicted SM Higgs boson.

At the same time, the detector also observed that the particle’s decay to two photons was 2.3 standard deviations higher than the SM prediction.

Dr. Pauline Gagnon, a scientist with the ATLAS collaboration, told this Correspondent via email, “We need to assess all its properties in great detail and with extreme rigour,” adding that for some aspects they would need more data.

Even so, the developments rule out signs of any new physics around the corner until 2015, when the LHC will reopen after a two-year shutdown and multiple upgrades to smash protons at doubled energy.

As for the search for Supersymmetry, a favoured theoretical concept among physicists to accommodate phenomena that haven’t yet found definition in the Standard Model: Dr. Pierluigi Campana, LHCb detector spokesperson, told The Hindu that there have been only “negative searches so far”.

Ironing out an X-ray wrinkle

A version of this post, as written by me, originally appeared in The Copernican science blog on March 1, 2013.

One of the techniques to look for and measure the properties of a black hole is to spot X-rays of specific energies coming from a seemingly localized source. The radiation emanates from heated objects like gas molecules and dust particles in the accretion disc around the black hole that have been compressed and heated to very high temperatures as the black hole prepares to gorge on them.

However, the X-rays are often obscured by gas clouds surrounding the black hole, even at large distances from it, and then by other objects along their long journey to Earth – and this is a planet whose closest black hole is 246 quadrillion km away. This is why the better X-ray telescopes are often in orbit around Earth instead of on the ground, to minimize further distortions due to our atmosphere.

NASA’s NuSTAR

One of the most powerful such X-ray telescopes is NASA’s NuSTAR (Nuclear Spectroscopic Telescope Array), and on February 28, the agency released data from the orbiting eye almost a year after it was launched in June 2012. NuSTAR studies the sources and properties of higher-energy X-rays in space. In this task, it is complemented by the ESA’s XMM-Newton space telescope, which studies lower-energy X-rays.

The latest data concerns the black hole at the centre of the galaxy NGC 1365, which is two million times the mass of our Sun, earning it the title of Supermassive Black Hole (SMBH). Around this black hole is an accretion disc, a swirling vortex of gases, metals, molecules, basically anything unfortunate enough to have come close and subsequently been ripped apart. Out of this, NuSTAR and XMM-Newton zeroed in on the X-ray emissions characteristic of iron.

What the data revealed, as has been the wont of technology these days, is a surprise.

What are signatures for?

Emission signatures are used because we know everything about them and we know what makes each one unique. For instance, knowing the rise and dip of X-ray brightness coming from an object at different temperatures allows us to tell whether the source is iron or something else.

By extension, knowing that the source is iron lets us attribute the signature’s distortions to different causes.

And the NuSTAR data has provided the first experimental proof that the distortion of iron’s signature is not due to obscuration by gas but, as a competing model called prograde rotation holds, due to the black hole’s gravitational pull.

A clearer picture

As scientists undertake a detailed analysis of the NASA data, they will also resolve iron’s signature. This means the plot of its emissions at different temperatures and times will be split up into different “colours” (i.e., frequencies) to see how much each colour has been distorted.

With NuSTAR in the picture, this analysis will assume its most precise avatar yet because the telescope’s ability to track higher-energy X-rays lets it penetrate well enough into the gas clouds around black holes, something that optical telescopes like the one at the Keck Observatory can’t. What’s more, the data will also be complete at the higher-energy levels, earlier left blank because XMM-Newton or NASA’s Chandra couldn’t see in that part of the spectrum.

If the prograde rotation model is conclusively proven after continued analysis and more measurements, then for the first time, scientists will have a precision-measurement tool on their hands to calculate black hole spin.

How? If the X-ray distortions are due to the black hole’s gravitational pull and nothing else, then the rate of its spin should show itself as the amount of distortion in the emission signature. The finer details of this can be picked out from the resolved data, and a black hole’s exact spin can, for the first time, be pinned down.

The singularity

NGC 1365 is a spiral galaxy about 60 million light-years away in the direction of the constellation Fornax, and a prominent member of the much-studied Fornax galaxy cluster. Apart from the black hole, the galaxy hosts other interesting features, such as a central bar of gas and dust adjacent to prominent star-forming regions, and a type Ia supernova discovered as recently as October 27, 2012.

As we sit here, I can’t help but imagine us peering into our tiny telescopes, picking up on feebly small bits of information, and adding an extra line in our textbooks, but in reality being thrust into an entirely new realm of knowledge, understanding, and awareness.

Now, we know there’s a black hole out there – a veritable freak of nature – spinning as fast as the general theory of relativity will allow!

For once, a case against the DAE.

I met with physicist M.V. Ramana on February 18, 2013, for an interview after his talk on nuclear energy in India at the Madras Institute of Development Studies. He is currently appointed jointly with the Nuclear Futures Laboratory and the Program on Science and Global Security, both at Princeton University.

Unlike many opponents of nuclear power and purveyors of environmentalist messages around the world, Ramana kept away from polemics and catharsis. He didn’t have to raise his voice to make a good argument; he simply raised the quality of his reasoning. For once, it felt necessary to sit down and listen.

What was striking about Ramana was that he was not against nuclear power – although there’s no way to tell otherwise – but against how it has been handled in the country.

With the energy crisis currently facing many regions, I feel that the support for nuclear power is becoming consolidated at the cost of having to overlook how it’s being mishandled. One look at how the government’s let the Kudankulam NPP installation fester will tell you all that you need to know: The hurry, the insecurity about a delayed plant, etc.

For this reason, Ramana’s stance was and is important. The DAE is screwing things up, and the Centre’s holding hands with it on this one. After February 28, when the Union Budget was announced, we found the DAE had been awarded a whopping 55.59 per cent year-on-year increase, from Rs. 8,920 crore (2012-2013 RE) to Rs. 13,879 crore (2013-2014 BE).

That’s a lot of money, so it’d pay to know what they might be doing wrong, and make some ‘Voice’ about it.

Here’s a transcript of my interview with Ramana. I’m sorry there’s no perceptible order to the topics we discussed. My statements/questions are in bold.

You were speaking about the DAE’s inability to acquire and retain personnel. Is that still a persistent problem?

MVR: This is not something we know a lot about. We’ve heard this. It has been rumoured for a while, and around 2007, [the DAE] spoke about it publicly for the first time that I knew of. We’d heard these kinds of things since the mid-1990s, as we saw a wave of multinationals – the Motorolas and the Texas Instruments – come in; they drew people not just from the DAE but also from the DRDO. So, they see these people as technically trained, and so on. The structural elements that cause that migration – I think they are still there.

That is one thing. The second, I think, is a longer trend. If you look at the people who get into the DAE – I’ve heard informally, anecdotally, etc. – you’ve got the best and the brightest, so to say. It was considered to be a great career to have, so people from the metropolises would go there, people who had studied in the more elite colleges would go there, people with PhDs from abroad would go there.

Increasingly, I’m told, the DAE has to set its sights on mofussil towns, on people who want to join the DAE to get out of where they are – so it will still get people. The questions are: what kind of people? What is the calibre of those people? There are some questions about that. We don’t know a great deal, except anecdotally.

You spoke about how building reactors with unproven designs was spelling a lot of problems for the DAE. Could you give us a little primer on that?

MVR: If you look at the many different reactors that they have built, the bulk were based on these HWRs, which were imported from Canada. And when the first HWR was imported – into Rajasthan – it was based upon one in Canada, in a place called Pickering. The early reactors were at Pickering and Douglas Point and so on.

These had started functioning when the construction of the Rajasthan plant started. You found that many of them had lots of problems with N-shields, etc., and they were being reproduced here as well. So that was one set of things.

The second problem, actually, is more interesting in some ways: These coolant channels were sagging. This was a problem that manifested itself not initially but after roughly about 15-20 years, in the mid-80s. Then, only in the 90s did the DAE get into retubing and trying to use a different material for that. So, that tells you that these problems didn’t manifest on day #1; they happen much later.

These are two examples. Currently, the kind of reactors that are being built – for example, the PFBR, with a precise design that hasn’t been done elsewhere – have borrowed elements from different countries. The exact design hasn’t been done anywhere else. There are no exact precedents for these reactors. The example I would give is that of the French design.

France went from a small design called the Rhapsody to one which is 250-MW called Phoenix and then moved to a 1,200 MW design called the Super Phoenix. The Phoenix has actually operated relatively OK, though it’s a small reactor. In India, Rhapsody’s clone is the FBTR in some sense. The FBTR itself could not be exactly cloned because of the 1974 nuclear test: they had to change the design: they didn’t have enough plutonium, and so on.

That’s a different story. Instead of going the route of France – from Rhapsody to Phoenix to Super Phoenix – India went from what is essentially a roughly 10-MW reactor (in fact, less than that in terms of electrical capacity) straight to 500 MW. A big jump, a huge jump.

In the case of going from Phoenix to Super Phoenix, you saw huge numbers of problems which had not been seen in Phoenix. So, I would assume that the PFBR would have a lot of teething troubles again. When you think that BRs are the way to go, what I would expect to see is that the DAE takes some time to see what kinds of problems arise and then build the next set of reactors, preferably trying to clone it or only correcting it for those very features that went wrong.

These criticisms are aimed not at India’s nuclear program but at the DAE’s handling of it.

MVR: Well, the both of them are intertwined.

They are intertwined, but if you had an alternative, you’d go for someone else to handle the program…?

MVR: Would I go for it? I think that, you know, that’s wishful thinking. In India, for better or for worse, if you have nuclear power, you have the DAE, and if you have DAE, you have nuclear power. You’re not going to get rid of one without the other. It’s wishful for me to think that, you know, somehow the DAE is going to go away and its system of governance is going to go away, and a new set of players come in.

So you’d be cynical about private parties entering the sector, too.

MVR: Private parties I think are an interesting case.

What about their feasibility in India?

MVR: I’m not a lawyer, but right now, as far as I can understand the Atomic Energy Act (1962) and all its subsequent editions, the Indian law allows for up to 49 per cent participation by the private sector. So far, no company has been willing to do that.

This is something which could change and one of the things that the private sector wanted to be in place before they do anything of that sort is this whole liability legislation. Now that the liability legislation is taking shape, it’s possible at some point the Reliances and the Tatas might want to enter the scene.

But I think that the structure of legislation is not going to change any time in the near future. Private parties will be able to put some money in and take some money out, but NPCIL will be the controlling body. To the extent that private parties want to enter this business: They want to enter the business to try and make money out of it, and not necessarily to try and master the technology and try new designs, etc. That’s my reading.

Liquid sodium as coolant: Pros and cons?

MVR: The main pro is that, because it’s a molten metal, it can conduct heat more efficiently compared to water. The other pro is that if you have water at the kind of temperatures at which the heat transfer takes place, the water will actually become steam.

So what you do is you increase the pressure. You have pressurized water and, because of that, whenever you have a break or a crack in the pipe, the water can flash into steam. That’s a problem. With sodium, that’s not the case. Those are the only two pros I can think of off the top of my head.

The cons? The main con is that sodium has bad interactions with water and with air, and two, it becomes radioactive, and-

It becomes radioactive?

MVR: Yeah. Normal sodium is sodium-23, and when it works its way through a reactor, it can absorb a neutron and become sodium-24, which is a gamma-emitter. When there are leaks, for example, the whole area becomes a very high gamma dose. So you have to actually wait for the sodium to become cool and so on and so forth. That’s another reason why, if there are accidents, it takes a much longer time [to clean up].

Are there any plants around the world that use liquid sodium as a coolant?

MVR: All BRs use liquid sodium as coolant. The only exceptions primarily are in Russia where they’ve used lead, and both Pb and sodium have another problem: Like sodium, lead at room temperature is actually solid, so you have to always keep it heated. Even when you shutdown the reactor, you’ve to keep a little heater going on and stuff like that. In principle, for example, if you have a complete power shutdown – like the station blackout that happened at Fukushima, etc. – you can imagine the sodium getting frozen.

Does lead suffer from the same neutron-absorption problem that sodium does?

MVR: Probably, yes; I don’t know off the top of my head because there’s not that much experience. It should absorb neutrons and become an isotope of lead and so on. But what kind of an emitter it is, I don’t know.

Problems with Na – continued…

MVR: One more important problem is this whole question of sodium void coefficients. Since you’re a science man, let me explain more carefully what happens. Imagine that you have a reactor, using liquid sodium as a coolant, and for whatever reason, there is some local heating that happens.

For example, there may be a blockage of flow inside the pipes, or something like that, so less amount of sodium is coming, and as the sodium is passing through, it’s trying to take away all the heat. What will happen is that the sodium can actually boil off. Let’s imagine that happens.

Then you have a small bubble; in this, sort of, stream of liquid sodium, you have a small bubble of sodium vapor. When the sodium becomes vapor, it’s less effective at scattering neutrons and slowing them down. What exactly happens is that- There are multiple effects which are happening.

Some neutrons go faster and that changes the probability of their interaction, some of them are scattered out, etc. What essentially happens is that the reactivity of the reactor could increase. When that happens, you call it a positive sodium void coefficient. The opposite is a negative.

The ‘positive’ means that the feedback loop is positive. There’s a small increase in reactivity, the feedback loop is positive, the reactivity increases further, and so on. If the reactor isn’t quickly shut down, this can actually spiral into a major accident.
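
To make the sign-of-the-feedback argument concrete, here’s a deliberately crude toy loop – my own illustration, nothing like real reactor kinetics, which involve delayed neutrons and much else – in which the power is nudged each step in proportion to a feedback coefficient: a positive coefficient runs away, a negative one settles back down.

```python
# Toy positive-vs-negative feedback loop. NOT a reactor model: it only shows why
# the sign of the coefficient decides between runaway growth and damping.

def evolve_power(feedback_coeff, steps=10, power=1.0, gain=0.05):
    """Each step, the power changes by a fraction proportional to the coefficient."""
    history = [round(power, 2)]
    for _ in range(steps):
        power *= 1.0 + gain * feedback_coeff
        history.append(round(power, 2))
    return history

print("positive coefficient (+4.3):", evolve_power(+4.3))   # grows step after step
print("negative coefficient (-2.0):", evolve_power(-2.0))   # the excursion dies away
```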

So, it’s good if at all times a negative void coefficient is maintained.

MVR: Yes. This is what happened in Chernobyl. In the RBMK-type reactor in Chernobyl, the void coefficient at low power levels was positive. In the normal circumstances it was not positive – for whatever reasons (because of the nature of cross-sections, etc. – we don’t need to get into that).

On that fateful day in April 1986, they were actually conducting an experiment in the low-power range without presumably realizing this problem, and that’s what actually led to the accident. Ever since then, the nuclear-reactor-design community has typically emphasized either having a negative void coefficient, or at least trying to reduce it as much as possible.

As far as I know, the PFBR being constructed in Kalpakkam has the highest positive void coefficient amongst all the BRs I know of. It’s +4.3 or something like that.

What’s the typical value?

MVR: The earlier reactors are all of the order of +2, +2.5, something of that sort. You can actually lower it. One way, for example, is to make sure that some of these neutrons, as they escape, don’t cause further fissions, but instead, they go into some of the blanket elements. They’re absorbed. When you do that, that’ll lower the void coefficient.

So, these are control rods?

MVR: These aren’t control rods. In a BR, there’s a core, and then there are blanket elements outside. Imagine that I don’t keep my blanket just outside but also put it inside, in some spots, so that some of these neutrons, instead of going on to cause further fissions and increase the reactivity, will hit one of the blanket elements and be absorbed. So, that neutron is out of the equation.

Once you take away a certain number of neutrons, you change the function from an exponentially increasing one to an exponentially decreasing one. To do that, you’ll have to actually increase the amount of fissile plutonium you put in at the beginning, to compensate for the fact that most of [the neutrons] are not going on to hit the other things. So your price, as it were, for reducing the void coefficient is more plutonium, which means more cost.
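(An aside from me, not MVR: to make the exponential point concrete, here’s a toy sketch with made-up numbers – the generation time and reactivity values below are purely illustrative, not the PFBR’s.)

```python
# Toy illustration (my own, with made-up numbers): how the sign of the net
# reactivity determines whether the neutron population grows or dies away.
import math

GEN_TIME = 1e-3  # assumed mean neutron generation time, in seconds (hypothetical)

def neutron_population(n0, reactivity, t):
    """Neutron population after t seconds at a constant net reactivity."""
    return n0 * math.exp(reactivity * t / GEN_TIME)

for reactivity in (+0.001, -0.001):  # positive vs. negative net reactivity
    trend = [round(neutron_population(1.0, reactivity, t), 2) for t in (0.0, 0.5, 1.0)]
    print(reactivity, trend)
# A positive void coefficient pushes reactivity up as voids form, so growth
# feeds further growth; absorbing neutrons in blanket elements tips the net
# reactivity negative and the excursion decays instead.
```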

So you’re offsetting the risk with cost.

MVR: Yeah, and also, if you’re thinking about BRs as a strategy for increasing the amount of nuclear power, you’re probably reducing the breeding ratio (the amount of energy the extra Pu will produce, and so on and so forth). So, the time taken to set up each reactor will be more. Those kinds of issues are tradeoffs. In those tradeoffs, what the DAE has done is to use a design that’s riskier, probably at some cost.

They’re going for a quicker yield.

MVR: Yes. I think what they’re doing in part is that they’ve convinced themselves that this is not going to have any accidents, that it’s perfectly safe – that has to do with a certain ideology. The irony is that, despite that, you’re going to be producing very expensive power.

Could you comment on the long-term global impact of the Fukushima accident? And not just in terms of what it means for the nuclear-research community.

MVR: I would say two things. One is that the impact of Fukushima has been very varied across different countries. Broadly speaking, I characterized it [for a recent piece I wrote] in three ways, following an old economist called Albert Hirschman. I called them ‘Exit’, ‘Voice’, and ‘Loyalty’.

This economist looked at how people respond to organizational decline. Let’s say there’s a product you’ve bought, and over time it doesn’t do well. There are three things you can do. You can choose not to buy it and buy something else; this is ‘Exit’. You can write to the manufacturer or the organization you belong to, make noise about it, try to improve it, and so on. This is ‘Voice’.

The third is to keep quiet and persist with it. This is ‘Loyalty’. And if you look at countries, they’ve done roughly similar things. There are countries like Germany and Switzerland which have just exited. This is true of other countries as well, ones which didn’t have nuclear power programs but were planning to. Venezuela, for example: Chavez had just signed a deal with Russia. After Fukushima, he said, “Sorry, it’s a bad idea.” Israel, too: Netanyahu said the same.

Then, there are a number of countries where the government has said “we actually want to go on with nuclear power but because of public protest, we’ve been forced to change direction”. Italy is probably the best example. And before the recent elections, Japan would’ve been a good example. You know, these are fluid categories, and things can change.

Mostly political reasons.

MVR: Yes, for political reasons. Also for reasons of what kind or nature of government you have, etc.

And then finally there are a whole bunch of countries which have been loyal to nuclear power. India, China, United States, and so on. In all these countries, what you find is two things. One is a large number of arguments why Fukushima is inapplicable to their country. Basically, DAE and all of these guys say, “Fukushima is not going to happen here.” And then maybe they will set up a sort of task force, they’ll say, “We’ll have a little extra water always, we’ll have some strong backup diesel generators,” blah-blah-blah.

Essentially, the idea is [Fukushima] is not going to change our loyalty to nuclear power.

The second thing is that there’s been one real effect: The rate at which nuclear power is going to grow has been slowed down. There’s no question about that. Fukushima in many cases consolidated previous trends, and in some cases started new trends. I would not have expected Venezuela to do this volte-face, but in Germany, it’s not really that surprising. Different places have different dynamics.

But I think that, overall, it has slowed down things. The industry’s response to this has been to say, “Newer reactors are going to be safer.” And they talk about passive safety and things like that. I don’t know how to evaluate these things. There are problems with passive safety also.

What’re you skeptical about?

MVR: I’m skeptical about whether new reactors are going to be safer.

Are you cynical?

MVR: I’m not cynical. Well, I think there’s some cynicism involved, but I’d call it observation. The skepticism is about whether new reactor designs are going to be immune to accidents. Because of incomplete knowledge, and so on, you might be protecting against Fukushima but not against Chernobyl. Chernobyl didn’t have a tsunami triggering it – things of that sort.

EUCLID/ESA: A cosmic vision looking into the darkness

I spoke to Dr. Giuseppe Racca and Dr. Rene Laureijs, both of ESA, regarding the EUCLID mission, which will be the world’s first space telescope launched to study dark energy and dark matter. For ESA, EUCLID will be a centerpiece of its Cosmic Vision program (2015-2025). Dr. Racca is the mission’s project manager and Dr. Laureijs is its project scientist.

Could you explain, in simple terms, what the Lagrange point is, and how being able to study the universe from that vantage point helps the mission?

GR: The Sun-Earth Lagrangian point 2 (SEL2) is a point in space about 1.5 million km from Earth in the direction opposite to the Sun, co-rotating with the Earth around the Sun. It is a nice, calm point from which to make observations. It is not disturbed by heat fluxes from the Earth, but at the same time it is not so far away that the large amount of observation data cannot be sent back to Earth. The orbit around SEL2 that Euclid will employ is rather large; it is easy to reach (in terms of launcher capability) and not expensive to control (in terms of the fuel required for orbit corrections and maintenance manoeuvres).
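(An aside from me, not Dr. Racca: the “about 1.5 million km” figure can be recovered with the standard first-order approximation for the L1/L2 distance, r ≈ a·(m/3M)^(1/3); a quick sketch below.)

```python
# Rough check (mine, not from the interview) of the Earth-SEL2 distance using
# the first-order approximation r ≈ a * (m_Earth / (3 * m_Sun))**(1/3).
AU_KM      = 1.496e8   # mean Earth-Sun distance, km
M_EARTH_KG = 5.972e24
M_SUN_KG   = 1.989e30

r_sel2 = AU_KM * (M_EARTH_KG / (3 * M_SUN_KG)) ** (1 / 3)
print(f"Approximate Earth-SEL2 distance: {r_sel2:,.0f} km")  # ~1,500,000 km
```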

Does Euclid in any way play into a broader program by ESA to delve into the Cosmic Frontier? Are there future upgrades/extensions planned? 

RL: Euclid is the second approved medium-class mission of ESA’s Cosmic Vision programme. The first one is Solar Orbiter, which will study the Sun from a short distance. The Cosmic Vision programme sets out a plan for large-, medium- and small-size missions in the decade 2015-2025. ESA’s missions Planck, which is presently in operation at L2, and Euclid will study the beginning, the evolution, and the predicted end of our Universe.

GR: A theme of this programme is: “How did the Universe originate and what is it made of?” Euclid is the first mission of this part of Cosmic Vision 2015-2025. There will be other missions, which have not been selected yet.

What’s NASA’s role in all of this? What are the different ways in which they will be participating in the Euclid mission? Is this a mission-specific commitment or, again, is it encompassed by a broader participation agreement?

GR: NASA’s participation in the Euclid mission is very important but rather limited in extent. They will provide the near-infrared detectors for one of the two Euclid instruments. In addition, they will contribute to the scientific investigation with a team of about 40 US scientists. Financially speaking, NASA’s contribution is limited to some 3-4% of the total Euclid mission cost.

RL: The Euclid Memorandum of Understanding between ESA and NASA is mission-specific and does not involve a broader participation agreement. First of all, NASA will provide the detectors for the infrared instrument. Secondly, NASA will support 40 US scientists to participate in the scientific exploitation of the data. These US scientists will be part of the larger Euclid Consortium, which comprises nearly 1,000 mostly European scientists.

Do you have any goals in mind? Anything specific or exciting that you expect to find? Who gets the data?

GR: The goals of the Euclid mission are extremely exciting: in a few words, we want to investigate the nature and origin of the unseen Universe: the dark matter, five times more abundant than the ordinary matter made of atoms, and the dark energy, which is causing the accelerating expansion of the Universe. The “dark Universe” is reckoned today to amount to 95% of the total matter-energy density. Euclid will survey about 40% of the sky, looking back in cosmic time up to 10 billion years. A smaller part (1% of the sky) will look back to when the universe was only a few million years old. This three-dimensional survey will allow us to map the extent and history of dark matter and dark energy. The results of the mission will allow us to understand the nature of dark matter and its place in an extension of the current standard model. Concerning dark energy, we will be able to distinguish between the so-called “quintessence” and a necessary modification to current theories of gravity, including General Relativity.
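(An aside from me, not Dr. Racca: a quick bit of arithmetic on how those fractions fit together – ordinary matter at about 5%, dark matter about five times that, dark energy the rest of the 95% “dark Universe”.)

```python
# My own arithmetic (not from the interview): splitting the quoted fractions.
dark_sector = 0.95              # "dark Universe" ~95% of total matter-energy
ordinary    = 1.0 - dark_sector # ordinary matter made of atoms
dark_matter = 5 * ordinary      # "five times more abundant than ordinary matter"
dark_energy = dark_sector - dark_matter
print(round(ordinary, 2), round(dark_matter, 2), round(dark_energy, 2))
# 0.05 0.25 0.7 -- close to the commonly quoted ~5%/27%/68% split
```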

RL: Euclid’s goals are to measure the accelerated expansion of the universe, which tells us about dark energy; to determine the properties of gravity on cosmic scales; to learn about the properties of dark matter; and to refine the initial conditions that led to the Universe we see now. These goals have been chosen carefully, and Euclid’s instrumentation is optimised to reach them as well as possible. The Euclid data also opens the discovery space for many other areas in astronomy: Euclid will literally measure billions of stars and galaxies at visible and infrared wavelengths, with a very high image quality, comparable to that of the Hubble Space Telescope. The most exciting prospect is the availability of these sharp images, which will certainly reveal new classes of objects and new science. The nominal mission will last for 6 years, but the first year of data will already become public 26 months after the start of the survey.

When will the EUCLID data be released?

GR: The Euclid data will be released to the public one year after their collection and will be made available to all researchers in the world.