Another window on ‘new physics’ closes

This reconstructed image of two high-energy protons colliding at the LHC shows a B_s meson (blue) being produced and then decaying into two muons (pink), about 50 mm from the collision point. Image: LHCb/CERN

The Standard Model of particle physics is a theory that has been pieced together over the last 40 years through careful experiments. It accurately predicts the behaviour of various subatomic particles across a range of situations. Even so, it’s not complete: it can explain neither gravity nor anything about the so-called dark universe.

Physicists searching for a more complete theory have to pierce through the Standard Model. This can mean finding some inconsistency in its mathematics or detecting something it can’t explain, such as particles ‘breaking down’, i.e. decaying, into smaller ones at a rate greater than the Model allows.

The Large Hadron Collider (LHC) at CERN, on the France-Switzerland border, produces the particles, and particle detectors straddling the collider are programmed to look for aberrations in their decay, among other things. One detector in particular, called LHCb, looks for signs of a particle called the B_s (read “B sub s”) meson decaying into two smaller particles called muons.

On July 19, physicists from the LHCb experiment confirmed at an ongoing conference in Stockholm that the B_s meson decays to two muons at a rate consistent with the Model’s predictions (full paper here). The implication is that one more window through which physicists could have peeked at the physics beyond the Model is now shut.

The B_s meson

This meson has been studied for around 25 years, and its decay rate to two muons was predicted to be about thrice every billion decays (3.56 ± 0.29 per billion, to be exact). The physicists’ measurements from the LHCb showed that it was happening about 2.9 times per billion. A team working with another detector, the CMS, reported that it happens thrice every billion decays. These numbers are pretty consistent with the Model’s. In fact, scientists think the chance that the LHCb signal is a statistical fluke is 1 in 3.5 million, low enough to claim a discovery.
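That “1 in 3.5 million” figure is the conventional five-sigma discovery threshold: the probability that a Gaussian fluctuation alone would produce a signal at least that strong. A quick check of the mapping itself (this is the standard statistics convention, not a number taken from the paper):

```python
from math import erfc, sqrt

# One-sided tail probability of a Gaussian at 5 standard deviations:
# p = P(X > 5*sigma) = 0.5 * erfc(5 / sqrt(2))
p = 0.5 * erfc(5 / sqrt(2))

print(f"p = {p:.3e}")      # about 2.87e-07
print(f"1 in {1/p:,.0f}")  # about 1 in 3.5 million
```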

However, this doesn’t mean the search for ‘new physics’ is over. There are many other windows, such as the search for dark matter, observations of neutrino oscillations, studies of antimatter and exotic theories like Supersymmetry, to keep scientists going.

The ultimate goal is to find one theory that can explain all phenomena observed in the universe – from subatomic particles to supermassive black holes to dark matter – because they are all part of one nature.

In fact, physicists are fond of Supersymmetry, a theory that posits one as-yet undetected particle for every one we have detected, because it promises to retain naturalness. In contrast, the Standard Model has many perplexing, yet accurate, explanations that keep physicists from piecing together the known universe in a smooth way. However, in order to find any evidence for Supersymmetry, we’ll have to wait until at least 2015, when the LHC reopens after upgrades for higher-energy experiments.

And as one window has closed after an arduous 25-year journey, the focus on all the other windows will intensify, too.

(This blog post first appeared at The Copernican on July 19, 2013.)

Second star found to have magnetic-field flips also flips them fast

Tau Bootis A is a Sun-like star about 51 light-years from Earth. Its magnetic field changes polarity once every year, as opposed to the 11 years our Sun’s takes. While astronomers don’t really know why this is the case, they have a pretty interesting hypothesis: Tau Bootis A has a giant planet orbiting really close to it, and its gravitational field could be ‘dragging’ on the outer, convective layers of its host star to speed up the polarity reversals. Here’s an explanation of how this could work. It’s pretty fascinating that while we had the Sun’s cycle figured out, just the second star we’ve studied that shows this behaviour defies most of our expectations.

Ambitious gamma-ray telescope takes shape

I wrote a shortened version of this piece for The Hindu on July 4, 2013. This is the longer version, with some more details thrown in.

Scientists and engineers from 27 countries including India are pitching for a next-generation gamma-ray telescope that could transform the future of high-energy astrophysics.

Called the Cherenkov Telescope Array (CTA), the proposed project is a large array of telescopes to complement existing observatories, the most potent of which are in orbit around Earth. By building it on land, scientists feel the CTA could be much more sophisticated than orbiting observatories, which are limited by logistical constraints.

Werner Hofmann, CTA spokesperson and a scientist at the Max Planck Institute for Nuclear Physics, Germany, told Nature that a comparable orbiting telescope would have to be “the size of a football stadium”.

The CTA’s preliminary designs reveal that it boasts greater angular resolution, and 10 times more sensitivity and energy coverage, than existing telescopes. The collaboration will finalise the locations for the CTA, which will consist of two networked arrays in the northern and southern hemispheres, by end-2013. Construction is slated to begin in 2015, at a cost of $268 million.

One proposed northern hemisphere location is in Ladakh, Jammu and Kashmir.

Indian CTA collaboration

Dr. Pratik Majumdar, Saha Institute of Nuclear Physics (SINP), Kolkata, said via email, “A survey was undertaken in the late 1980s. Hanle, in Ladakh, was a good site fulfilling most of our needs: very clear and dark skies throughout the year, with a large number of photometric and spectroscopic nights at par with other similar places in the world, like La Palma in Canary Islands and Arizona desert, USA.”

However, it serves to note that the Indian government does not permit foreign nationals to visit Hanle. “I do think India needs to be more proactive about opening up to people from abroad, especially in science and technology, in order to benefit from international collaboration – unfortunately this is not happening,” said Dr. Subir Sarkar, Rudolf Peierls Centre for Theoretical Physics, Oxford University, via email. Dr. Sarkar is a member of the collaboration.

Each network will consist of four 23-metre telescopes to image the weaker gamma-ray signals, and dozens of 12-metre and 2-4-metre telescopes to image the really strong ones. Altogether, they will cover an area of 10 sq. km on the ground.

Scientists from SINP are also conducting simulations to better understand the performance of CTA.

Led by SINP, the Indian collaboration comprises the Indian Institute of Astrophysics, the Bhabha Atomic Research Centre, and the Tata Institute of Fundamental Research (TIFR). It will be responsible for building the calibration system with the Max Planck Institute, and for developing structural sub-systems of various telescopes to be fabricated in India.

Dr. B.S. Acharya, TIFR, believes the CTA can add great value to existing telescopes in India, especially the HAGAR gamma-ray telescope array in Hanle. “It is a natural extension of our work on ground-based gamma-ray astronomy in India, since 1969,” he said in an email to this Correspondent.

Larger, more powerful

While existing telescopes, like MAGIC (Canary Islands) and VERITAS (Arizona), and the orbiting Fermi-LAT and Swift, are efficient up to the 100-GeV energy mark, the CTA will be able to reach up to 100,000 GeV with the same efficiency.

Gamma rays originate from sources like dark-matter annihilation, dying stars and supermassive black holes, whose physics is barely understood. Such sources accelerate protons and electrons to huge energies; these interact with ambient matter, radiation and magnetic fields to generate gamma rays, which then travel through space.

When such a high-energy gamma ray hits atoms in Earth’s upper atmosphere, a shower of particles is produced that cascades downward. Individual telescopes pick these up for analysis, but a network of telescopes spread over a large area collects much more, tracking the showers back to their sources more precisely.

Here, the CTA’s large collection area will come into play.

“No telescope based at one point on Earth can see the whole sky. The proposed CTA southern observatory will be able to study the centre of the galaxy, while the northern observatory will focus on extragalactic sources,” said Dr. Sarkar.

Gamma-ray astronomy has seen global interest since the early 1950s, when astronomers began to believe some cosmic phenomena ought to emit the radiation. Since telescopes to analyse it were developed in the 1960s, some 150 sources have been mapped. The CTA is expected to chart 1,000 more.

The HESS II gamma-ray telescope in the Khomas Highland, Namibia, is currently the world’s largest telescope for gamma-ray astrophysics, with a 28-metre-wide mirror.

O Voyager, where art thou?

On September 5, 1977, NASA launched the Voyager 1 space probe to study the Jovian planets Jupiter and Saturn and their moons, as well as the interstellar medium, the gigantic chasm between star systems. It’s been 35 years and 9 months, and Voyager 1 has kept on, recently entering the boundary between our Solar System and interstellar space.

In 2012, however, when about four times farther from the Sun than Neptune, the probe entered a part of space completely unknown to astronomers.

On June 27, three papers were published in Science discussing what Voyager 1 had encountered, a region at the outermost edge of the Solar System they’re calling the ‘heliosheath depletion region’. They think it’s a feature of the heliosphere, the imagined bubble in space beyond whose borders the Sun has no influence.

“The principal result of the magnetic field observations made by our instrument on Voyager is that the heliosheath depletion region is a previously undetected part of the heliosphere,” said Dr. Leonard Burlaga, an astrophysicist at the NASA-Goddard Space Flight Centre, Maryland, and an author of one of the papers.

“If it were the region beyond the heliosphere, the interstellar medium, we would have expected a change in the magnetic field direction when we crossed the boundary of the region. No change was observed.”

More analysis of the magnetic field observations showed that the heliosheath depletion region has a weak magnetic field – of 0.1 nano-Tesla (nT), 0.6 million times weaker than Earth’s – oriented in such a direction that it could only have arisen because of the Sun. Even so, this weak field was twice as strong as the field just outside the region. Astronomers would’ve known why, Burlaga clarifies, if the necessary instrument on the probe hadn’t long been out of commission.
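A quick sanity check on that “0.6 million times weaker” figure, assuming a typical value of about 60 microtesla for Earth’s surface field (an assumption for illustration; the real value varies between roughly 25 and 65 µT depending on location):

```python
earth_field_nT = 60_000    # ~60 microtesla, a typical surface value (assumed)
depletion_region_nT = 0.1  # the field strength reported by Voyager 1

ratio = earth_field_nT / depletion_region_nT
print(f"{ratio:,.0f} times weaker")  # 600,000, i.e. 0.6 million
```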

When the probe crossed over into the region, this spike in strength was recorded within a day. Moreover, Burlaga and others have found that the spike happened thrice and a drop in strength twice, leaving Voyager 1 within the region at the time of their analysis. In fact, after August 25, 2012, no drops have been recorded. The implication is that it is not a smooth region.

“It is possible that the depletion region has a filamentary character, and we entered three different filaments. However, it is more likely that the boundary of the depletion region was moving toward and away from the sun,” Burlaga said.

The magnetic field and its movement through space are not the only oddities characterising the heliosheath depletion region. Low-energy ions blown outward by the Sun constantly emerge out of the heliosphere, but they were markedly absent within the depletion region. Burlaga was plainly surprised: “It was not predicted or even suggested.”

Analysis by Dr. Stamatios Krimigis, the NASA principal investigator for the Low-Energy Charged Particle (LECP) experiment aboard Voyager 1 and an author of the second paper, also found that cosmic rays, which are highly energised charged particles produced by various sources outside the System through unknown mechanisms, weren’t striking Voyager’s detectors equally from all directions. Instead, more hits were being recorded in certain directions inside the heliosheath depletion region.

Burlaga commented, “The sharp increase in the cosmic rays indicates that cosmic rays were able to enter the heliosphere more readily along the magnetic fields of the depletion region.”

Even though Voyager 1 was out there, Krimigis feels that humankind is blind: astronomers’ models were, and are, clearly inadequate, and there is no roadmap of what lies ahead. “I feel like Columbus who thought he had gotten to West India, when in fact he had gone to America,” Krimigis contemplates. “We find that nature is much more imaginative than we are.”

With no idea of how or where the strange region originated, we’ll just have to wait and see what additional measurements tell us. Until then, the probe will continue approaching the gateway to the Galaxy.

This blog post, as written by me, first appeared in The Hindu‘s science blog on June 29, 2013.

Self-siphoning beads

This is the coolest thing I’ve seen all day, and I’m pretty sure it’ll be the coolest thing you’d have seen all day, too: The Chain of Self-siphoning Beads, a.k.a. Physics Brainmelt.

[youtube=http://www.youtube.com/watch?feature=player_embedded&v=6ukMId5fIi0]

It’s so simple; just think of the forces acting on the beads. Once a chain link is pulled up and let down, its kinetic and potential energies give it momentum going downward, and this pulls the rest of the chain up. The reason the loop doesn’t collapse is that it’s got some energy travelling along itself in the form of the beads’ momentum as they traverse that path. If the beads had been stationary, then the mass of the beads in the loop would’ve brought it down. Like a bicycle: A standing one would’ve toppled over; a moving one keeps moving.
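A back-of-the-envelope sketch of the steady state described above, with made-up numbers (the chain’s linear density and the drop height are assumptions, not measurements from the video):

```python
from math import sqrt

g = 9.81    # gravitational acceleration, m/s^2
h = 1.0     # drop height from pot to floor, metres (assumed)
lam = 0.02  # linear mass density of the bead chain, kg/m (assumed)

# Ignoring losses at pickup, a rough steady-state force balance on the
# moving chain - the weight of the falling section (lam*g*h) against the
# momentum flux (lam*v^2) needed to accelerate stationary beads up to
# speed v - gives v = sqrt(g*h).
v = sqrt(g * h)

# That momentum flux acts like a tension along the chain; it's what holds
# the arch up and keeps pulling fresh beads out of the pot.
tension = lam * v**2

print(f"chain speed ~ {v:.2f} m/s")
print(f"momentum-flux tension ~ {tension * 1000:.0f} mN")
```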

Who talks like that?

Four-and-a-half years of engineering, one year of blogging and one year of J-school later, I’m a sub-editor with an Indian national daily and not doing badly at all, if you ask me. I’m not particularly important to the organization as such, but among my friends, given my background, I’m the one with a newspaper. I’m the one they call if they need an ad printed, if they need a product reviewed, if they need a chance to be published.

So, when a 20-year-old from BITS, Dubai (where I studied) mailed me this, I had no f**king idea what to say.

NIce to hear bck frm u……. actually s i said m a growing writer…. i jzzt completed my frst novel n [name removed] is editing it…. i wanted to write articles n get dem published in reputed newspapers like urs….so i wanted help wid dat…. cn u jzzt give me a few guidelines so dat i cud creat sm f my best works n send dem to u…..

  1. I’m given to understand the QWERTY keyboard was designed to make typing easier for words spelled like they were originally spelled using fingers designed by evolution for a human hand. So, doesn’t typing ‘just’ have to be easier than ‘jzzt’, ‘.’ easier than ‘…………’? It’s one thing to make language work for you; it’s another to use symbols like you have no idea how they should be.
  2. Why are you so lazy that you can’t finish a word before going on to the next one? Do you think a journalist – who has lots to lose by spelling words wrong – would appreciate ‘creat’, ‘cn’, ‘sm’, ‘frst’? Don’t you think the vowel is an important part of language? It’s the letter that permits the sounding, genius.
  3. If you’re looking for a chance to get published, don’t assume I will give you the chance to be published if the best I’ve seen from you is “i jzzt completed my frst novel n so-so”, “cn u jzzt-” I cannot even.

And then to think anyone with a smartphone and a Twitter account can be stereotyped to be this way. Ugh.


A Periodic Table of history lessons

This is pretty cool. Twitter user @jamiebgall tweeted this picture he’d made of the Periodic Table, showing each element alongside the nationality of its discoverer.


It’s so simple, yet it says a lot about different countries’ scientific programs and, if you googled a bit, their focuses during different years in history. For example,

  1. A chunk of the transuranic actinides originated in American labs, possibly arising out of the rapid developments in particle-accelerator technology in the mid-20th century.
  2. Hydrogen was discovered by a British scientist (Henry Cavendish) in the late 18th century, pointing at the country’s early establishment of research and experiment institutions. UK scientists were also responsible for the discovery of 23 elements in all.
  3. The 1904 Nobel Prizes in physics and chemistry went to Lord Rayleigh and William Ramsay, respectively, for discovering four of the six noble gases. One of the other two, helium, was co-discovered by Pierre Janssen (France) and Joseph Lockyer (UK). Radon was discovered by Friedrich Dorn (Germany) in 1898.
  4. Elements 107 to 112 were discovered by Germans at the Gesellschaft für Schwerionenforschung, Darmstadt. Elements 107, 108 and 109 were discovered by Peter Armbruster and Gottfried Münzenberg in 1982-1994. Elements 110 to 112 were discovered by Sigurd Hofmann, et al, in 1994-1996. All of them owed their origination to the UNILAC (Universal Linear Accelerator) commissioned in 1975.
  5. The discovery of aluminium, the most abundant metal in the Earth’s crust, is attributed to Hans Christian Oersted (Denmark in 1825) even though Humphry Davy had developed an aluminium-iron alloy before him. The Dane took the honours because he was the first to isolate the metal.
  6. Between 1944 and 1952, scientists in the USA discovered seven elements; this ‘discovery density’ is beaten only by the UK, which saw six elements discovered in 1807 and 1808. In both countries, however, these discoveries were made by a small group of people finding one element after another. In the USA, elements 93-98 and 101 were discovered by teams led by Glenn T. Seaborg at UC Berkeley. In the UK, Humphry Davy took the honours, isolating one new metal after another by electrolysis.

And so forth…

A battery of power

Lithium-ion batteries have found increasing use in recent times, in everything from portable electronics to heavy transportation. While they have their own set of problems, those aren’t unsolvable. And once they are solved, the batteries will still have to find other reasons to persist in a market whose demands are soaring.

The simplest upgrade is to increase the battery’s charge capacity, so it lasts longer per application and needs replacing less often. During charging, electrical energy is stored chemically in a material inside the battery, so the battery’s charge capacity is that material’s charge capacity.

At the moment, that material is graphite, which is widely available and easy to handle. Replacing it without disrupting how a battery is made, or the conditions in which it has to be stored, would be helpful. Thus, a material as ‘easy’ as graphite would be the ideal substitute. Like silicon.

Silicon v. graphite

Studies have shown that silicon can store roughly ten times as much charge per gram as graphite. It is abundantly available, very resilient to heat, and easy to produce, store and dispose of. However, there’s a big problem. “The lithium-silicon system has a much higher capacity than Li-graphite, but shows a strong volume change during charging and discharging,” said Dr. Thomas Fassler, Chair of Inorganic Chemistry, Technical University of Munich.
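The theoretical capacities behind that comparison follow from Faraday’s law, Q = nF/(3.6M) mAh per gram of host material, using the standard fully lithiated phases: LiC6 for graphite, and Li22Si5 (i.e. 4.4 Li per Si) for silicon. These are textbook values, not figures from the studies discussed here:

```python
F = 96485.0  # Faraday constant, C/mol

def capacity_mAh_per_g(n_electrons: float, molar_mass: float) -> float:
    """Theoretical gravimetric capacity of a host material, in mAh/g."""
    return n_electrons * F / (3.6 * molar_mass)

# Graphite hosts one Li per six carbons (LiC6): 1 electron per 72.07 g of C
graphite = capacity_mAh_per_g(1, 6 * 12.011)
# Silicon alloys up to Li22Si5, i.e. 4.4 Li per Si: 4.4 electrons per 28.09 g
silicon = capacity_mAh_per_g(4.4, 28.086)

print(f"graphite: ~{graphite:.0f} mAh/g")     # ~372
print(f"silicon:  ~{silicon:.0f} mAh/g")      # ~4200
print(f"ratio:    ~{silicon / graphite:.0f}x")
```

By mass, these standard values put silicon at roughly ten times graphite’s theoretical capacity.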

When charging, an external voltage is provided that overpowers the battery’s internal voltage, forcing lithium ions to migrate from the positive to the negative electrode, where they’re stored in the material in question. When discharging, the ions move out of the negative electrode and into the positive, generating a current that a connected appliance draws.

If the storage material at the negative electrode is silicon, lithium ions entering the silicon lattice swell it. With repeated charging, the large volume change can fracture and eventually break the lattice apart. At the same time, silicon’s abundance and ubiquity are enticing attributes for materials scientists.

Two recent studies, from June 4 and June 6, propose workarounds to this problem. The earlier one was from researchers at Stanford University, Yi Cui and Zhenan Bao, assisted by scientists from Tsinghua University, Beijing, and the University of Texas, Austin. Use silicon, they say, but bolster its ability to withstand expansion while charging.

The hydrogel bolster

“Our team has used silicon-hydrogel composites to replace carbon to increase charge storage capacity by many times,” said Dr. Yi Cui. He is the David Filo and Jerry Yang Faculty Scholar, Department of Materials Science and Engineering.

Using a process called in situ synthesis polymerisation, they gave silicon nanoparticles a uniform coating of a hydrogel, a network of polyaniline polymer chains dispersed in water. This substance is porous and flexible yet strong. When lithium ions enter the silicon lattice, the silicon expands into the space created by the hydrogel’s pores while being held in place.

Cui and Bao also found that the network of polymer chains formed a pathway through which the lithium ions could be transported. At the same time, because the hydrogel contains water, with which lithium is highly reactive, the battery could ignite if not handled properly.

For such a significant problem, the scientists found a very simple solution. “We baked the water off before sealing the battery,” Bao said.

Hard to make, hard to break

The second study, from June 6, was published in the Angewandte Chemie International Edition. Instead of the elegant and industrially reproducible hydrogel solution, Dr. Fassler, who led the study, synthesized a new, sophisticated material called lithium borosilicide. He’s calling it ‘tum’ after his university.

Tum is a unique material: it is as hard as diamond. Unlike diamond, however, the arrangement of atoms in the tum lattice forms channels, like tubes, throughout the crystal. These facilitate increased storage of lithium ions and assist in their transport.

About the choice of boron to go with silicon, Fassler said, “Intuition and extended experimental experience is necessary to find out the proper ratio of starting materials as well as the correct parameters.” To test their out-of-the-box solution, Fassler, and his student Michael Zeilinger, went to Arizona State University and used their high-pressure chemistry lab to apply 100,000 atmospheres of pressure and 900 degrees Celsius to synthesize tum.

They found that it was stable to air and moisture, and could withstand up to 800 degrees Celsius. However, they still don’t know what the charge capacity of this new compound is. “We will build a so-called electrochemical half-cell and test it versus elemental lithium,” Fassler said.

The synthesis route is, no doubt, inhibiting. The high pressures and temperatures required to produce industrially commensurate quantities of tum are clearly incompatible with the ubiquity that lithium-ion batteries enjoy. Fassler is hopeful, though. “In case the electrochemical performance turns out good, chemists will look for other, cheaper, synthetic approaches,” he said.

Rethinking the battery

Another solution to increasing the performance of lithium-ion batteries was proposed at Oak Ridge National Laboratory (ORNL), Tennessee, in the first week of June.

Led by Chengdu Liang, the team reinvented the internal structure of the battery, replacing the liquid electrolyte with a solid, sulphur-based one. This eliminated the risk of flammability and increased the charge capacity of the setup almost 100-fold, but necessitated elevated temperatures to enhance the ionic conductivity of the materials.

Commenting on the ORNL solution, Yi Cui said, “Recently, high ionic conductivity of solid electrolytes was discovered, it looks promising down the road. However, the high inter-facial resistance at the solid-solid interface still needs to be addressed. Also, the new electrode materials have very large deadweight.” He added that the cyclic performance was good – at 300 charge-discharge cycles – but not outstanding.

A battery of power

As the Stanford team continues testing its hydrogel solution and awaits commercial deployment, the Munich team will verify tum’s electrochemical capability, and the ORNL team will try to improve its battery’s performance. These solutions are important for America because, in many other countries, the battery industry is a critical part of the economy. As The Economist is quick to detail, Japan, South Korea and China are great examples.

Knowing that rechargeable, portable sources of power would play a critical role in the then-emerging electronics industry, Japan invested big in lithium-ion batteries in the 1990s. Soon, South Korea and China followed suit. America, on the other hand, kept away because manufacturing these batteries offered low returns on investment at a time when it wanted only rapid growth. Now, it’s playing catch-up.

All because it didn’t see coming how lithium-ion batteries would become sources of power – electrochemical and economic.

This post, as written by me, first appeared in The Copernican science blog on June 19, 2013.

Hello and welcome to my personal blog. I’m a science reporter and blogger at The Hindu, an Indian national daily. I’m interested in high-energy physics, the history and philosophy of science, and photography. When no one’s looking, I fiddle with code and call myself a programmer. I enjoy working with the infrastructure that props up newsrooms.

(I can’t delete this post because Walter Murch has commented on it.)