How do you study a laser firing for one-quadrillionth of a second?

I’m grateful to Mukund Thattai, at the National Centre for Biological Sciences, Bengaluru, for explaining many of the basic concepts at work in the following article.

An important application of lasers today is in the form of extremely short-lived pulses used to illuminate extremely short-lived events that often play out across extremely short distances. The liberal use of ‘extreme’ here is justified: these pulses last for no more than one-quadrillionth of a second each. By the time you blink your eye once, 100 trillion of these pulses could have been fired. Some of the more advanced applications even require pulses that are 1,000 times shorter.

In fact, thanks to advances in laser physics, there are branches of study today called attophysics and femtochemistry that employ such fleeting pulses to reveal hidden phenomena that even the most powerful detectors may be too slow to catch. The atto- prefix denotes an order of magnitude of -18: one attosecond is 1 × 10⁻¹⁸ seconds and one attometre is 1 × 10⁻¹⁸ metres. To quote from this technical article, “One attosecond compares to one second in the way one second compares to the age of the universe. The timescale is so short that light in vacuum … travels only about 0.3 nanometers during 1 attosecond.”
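That comparison is easy to sanity-check with a few lines of arithmetic. The sketch below assumes an age of the universe of roughly 13.8 billion years – a standard figure, not one given in the article:

```python
# Sanity-check the quoted comparison: 1 as : 1 s  versus  1 s : age of the universe
attosecond = 1e-18                                 # seconds
age_of_universe = 13.8e9 * 365.25 * 24 * 3600      # ~13.8 billion years, in seconds

ratio_atto_to_second = attosecond / 1.0
ratio_second_to_age = 1.0 / age_of_universe

# Both ratios are of order 10^-18, so the comparison holds to within a small factor
```

The two ratios differ by a factor of about two, which is close enough for the analogy to stand.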

One of the more common applications is in the form of the pump-probe technique. An ultra-fast laser pulse is first fired at, say, a group of atoms, which causes the atoms to move in an interesting way. This is the pump. After an extremely short, precisely controlled delay, a similarly brief ‘probe’ pulse is fired at the atoms to discern their positions. By repeating this process many times over, and fine-tuning the delay between the pump and the probe, researchers can figure out exactly how the atoms responded across very short timescales.
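As a toy illustration of the scheme – mine, not any particular experiment – the sketch below assumes the pumped atoms relax exponentially with a hypothetical 50-femtosecond time constant; scanning the pump-probe delay and averaging many noisy shots at each delay recovers the underlying response:

```python
import math
import random

random.seed(42)   # reproducible noise for this sketch
TAU = 50e-15      # hypothetical 50 fs relaxation time of the pumped atoms

def probe_shot(delay, noise=0.05):
    """One pump-probe cycle: the probe samples the relaxing signal at this delay."""
    return math.exp(-delay / TAU) + random.gauss(0.0, noise)

# Scan the pump-probe delay in 5 fs steps, averaging 200 shots per delay
delays = [i * 5e-15 for i in range(21)]
trace = [sum(probe_shot(d) for _ in range(200)) / 200 for d in delays]
# 'trace' now approximates exp(-delay / TAU): the atoms' response, resolved in time
```

The shot-to-shot noise averages away, which is why the delay between pump and probe has to be so tightly controlled: each delay value is one pixel of the reconstructed movie.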

In this application and others like it, the pulses have to be fired at controllable intervals and deliver very predictable amounts of energy. The devices that generate these pulses usually provide these features, but it is still necessary to independently study the pulses and fine-tune them to different applications’ needs. This post discusses one such way and how physicists improved on it.

As electromagnetic radiation, every laser pulse is composed of an electric field and a magnetic field oscillating perpendicular to each other. Of these, consider the electric field (only because it’s easier to study; thanks to Maxwell’s equations, what we learn about the electric field can be inferred accordingly for the magnetic field as well):

Credit: Peter Baum & Stefan Lochbrunner, LMU München Fakultät für Physik, 2002

The blue line depicts the oscillating electric wave, also called the carrier wave (because it carries the energy). The dotted line around it depicts the wave’s envelope. It’s desirable to have the carrier’s crest and the envelope’s crest coincide – i.e. for the carrier wave to peak at the same point the envelope as a whole peaks. However, trains of laser pulses, generated for various applications, typically drift: the crest of every subsequent carrier wave is slightly more out of step with the envelope’s crest. According to one paper, it arises “due to fluctuations of dispersion, caused by changes in path length, and pump energy experienced by consecutive pulses in a pulse train.” In effect, the researcher can’t know the exact amount of energy contained in each pulse, and how that may affect the target.

The extent to which the carrier wave and the envelope are out of step is expressed in terms of the carrier-envelope offset (CEO) phase, measured in degrees (or radians). Knowing the CEO phase is crucial for experiments that involve ultra-precise measurements because the phase is likely to affect the measurements in question, and needs to be adjusted for. According to the same paper, “Fluctuations in the [CEO phase] translate into variations in the electric field that hamper shot-to-shot reproducibility of the experimental conditions and deteriorate the temporal resolution.”
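The relationship can be written compactly as E(t) = A(t)·cos(ωt + φ), where A is the envelope and φ the CEO phase. A short sketch – with assumed, purely illustrative pulse parameters (an 800 nm carrier under a 5 fs Gaussian envelope) – shows that a nonzero φ lowers the peak field the pulse actually delivers:

```python
import math

def field(t, fwhm=5e-15, wavelength=800e-9, phi_ceo=0.0):
    """E(t) = A(t)*cos(wt + phi): Gaussian envelope times carrier with a CEO phase."""
    omega = 2 * math.pi * 2.998e8 / wavelength       # carrier angular frequency
    sigma = fwhm / (2 * math.sqrt(2 * math.log(2)))  # Gaussian width from FWHM
    envelope = math.exp(-t ** 2 / (2 * sigma ** 2))
    return envelope * math.cos(omega * t + phi_ceo)

# Sample the pulse finely around its centre and find the peak |E| for two phases
ts = [i * 1e-18 - 10e-15 for i in range(20000)]
peak_in_step = max(abs(field(t, phi_ceo=0.0)) for t in ts)            # crests aligned
peak_out_of_step = max(abs(field(t, phi_ceo=math.pi / 2)) for t in ts)
# peak_out_of_step comes out a few per cent lower than peak_in_step
```

For such a short pulse the difference is a few per cent of the peak field – exactly the shot-to-shot variation a drifting CEO phase inflicts on an experiment.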

Ignore all the symbols and notice the carrier wave – especially how its peak within the envelope shifts with each successive pulse. The offset between the two peaks is called the carrier-envelope offset phase. Credit: HartmutG/Wikimedia Commons, CC BY-SA 3.0

This is why, in turn, physicists have developed techniques to measure the CEO phase and other properties of propagating waves. One of them is called attosecond streaking. Physicists confine a gas of atoms in a container and fire a laser at it to ionise the atoms and release electrons. The field to be studied is then fired into this gas, so that its electric-field component pushes on these electrons. Specifically, as the electric field’s waves rise and fall, they accelerate the electrons to different extents over time, giving rise to streaks of motion – hence the technique’s name. A time-of-flight spectrometer measures this streaking to determine the field’s properties. (The magnetic field also affects the electrons, but it suffices to focus on the electric field for this post.)

This sounds straightforward but the setup is cumbersome: the study needs to be conducted in a vacuum, and electron time-of-flight spectrometers are expensive. But while there are other ways to measure the wave properties of extreme fields, attosecond streaking has been one of the most successful (in one instance, it was used to measure the CEO phase at a rate of 400,000 shots per second).

As a workaround, physicists from Germany and Canada recently reported in the journal Optica a simpler way, based on one change. Instead of setting up a time-of-flight spectrometer, they propose using the pushed electrons to induce an electric current in electrodes, in such a way that the properties of the current contain information about the CEO phase. This way, researchers can drop both the spectrometer and, because the electrons aren’t being investigated directly, the vacuum chamber.

The researchers used fused silica, a material with a wide band-gap, for the electrodes. The band-gap is the amount of energy that must be imparted to a material’s electrons for them to ‘jump’ from the valence band to the conduction band, turning the material into a conductor. The band-gap in metals is zero: place a metallic object in an electric field and it will develop an internal current linearly proportional to the field strength. Semiconductors have a small band-gap, which means some electric fields can give rise to a current while others can’t – a feature that modern electronics exploit very well.

Dielectric materials have a (relatively) large band-gap. When exposed to a weak electric field, a dielectric won’t conduct electricity, but its internal arrangement of positive and negative charges will shift slightly, creating a minor internal electric field. When the field strength crosses a particular threshold, however, the material will ‘break down’ and become a conductor – like a bolt of lightning piercing the air.
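To put rough numbers on why the wide band-gap matters here (the values below are my assumptions, not figures from the paper): fused silica’s band-gap is about 9 eV, while one photon of 800 nm near-infrared light – a typical ultrafast-laser wavelength – carries only about 1.55 eV, so no single photon can promote an electron across the gap:

```python
import math

h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

wavelength = 800e-9                          # assumed laser wavelength, in metres
photon_energy_ev = h * c / wavelength / eV   # energy of one photon, in eV

band_gap_ev = 9.0                            # approximate band-gap of fused silica
photons_needed = math.ceil(band_gap_ev / photon_energy_ev)
```

At least six photons have to act together, which only happens at high field strengths – so the electrodes stay insulating until the pulse itself is intense enough to free electrons.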

Next, the team circularly polarised the laser pulse to be studied. Polarisation refers to the electric field’s orientation in space; the effect of circular polarisation is to make the electric field rotate, so that as the wave moves forward, the field’s tip traces a spiral, like so:

A circularly polarised electric field. Credit: Dave3457/Wikimedia Commons

The reason for doing this, according to the team’s paper, is that when the circularly polarised laser pulse knocks electrons out of atoms, the electrons’ momentum is “perpendicular to the direction of the maximum electric field”. So as the CEO phase changes, the electrons’ direction of drift also changes. The team used an arrangement of three electrodes, connected to each other in two circuits (see diagram below), such that electrons flowing in different directions induce currents of proportionately different strengths in the two arms. Amplifiers attached to the electrodes then magnify these currents and open them up for further analysis. Since the envelope’s peak, or maximum, can be determined beforehand and doesn’t drift over time, the CEO phase can be calculated straightforwardly.
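A toy model – mine, not the paper’s – of why two orthogonal current readings suffice: if the electrons’ drift direction rotates with the CEO phase, the two circuits effectively record the two components of that direction, and the phase falls out of an arctangent:

```python
import math

def electrode_signals(phi_ceo):
    """Toy model: electrons drift perpendicular to the maximum field, so their
    drift direction rotates with the CEO phase; each circuit picks up one
    orthogonal component of that drift."""
    drift_angle = phi_ceo + math.pi / 2   # perpendicular to the field maximum
    return math.cos(drift_angle), math.sin(drift_angle)

def recover_phase(s1, s2):
    """Invert the two current readings back into a CEO phase in [0, 2*pi)."""
    return (math.atan2(s2, s1) - math.pi / 2) % (2 * math.pi)

s1, s2 = electrode_signals(1.2)
# recover_phase(s1, s2) gives back 1.2: the two currents pin down the phase uniquely
```

One current alone would leave a sign ambiguity; two orthogonal measurements resolve the full 0–360º range of the phase.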

(The experimental setup, shown below, is a bit different: since the team had to check whether their method works, they deliberately inserted a CEO phase into the pulse and checked whether the setup picked up on it.)

The two tips of the triangular electrodes are located 60 µm apart, on the same plane, and the horizontal electrode is 90 µm below the plane. The beam moves from the red doodle to the mirror, and then towards the electrodes. The two wedges are used to create the ‘artificial’ CEO phase. Source: https://doi.org/10.1364/OPTICA.7.000035

The team writes towards the end of the paper, “The most important asset of the new technique, besides its striking simplicity, is its potential for single-shot [CEO phase] measurements at much higher repetition rates than achievable with today’s techniques.” Attosecond streaking, the team notes, is limited by the speed of the time-of-flight spectrometer, whereas its setup is limited in the kHz range only by the time the amplifiers need to boost the electric signals, and in the “multi-MHz” range by how rapidly the volume of gas being struck can respond to the laser pulses. The team also states that its electrode-mediated measurement method makes the setup suitable for radiation of longer wavelengths as well.

Featured image: A collection of lasers of different frequencies in the visible-light range. Credit: 彭嘉傑/Wikimedia Commons, CC BY 2.5 Generic.

Where is the coolest lab in the universe?

The Large Hadron Collider (LHC) performs an impressive feat every time it accelerates billions of protons to nearly the speed of light – and not in terms of the energy alone. You release more energy when you clap your palms together once than is imparted to any single proton in the LHC. The impressiveness arises from the fact that the energy of your clap is distributed among billions of atoms while the proton’s energy all resides in a single particle. It’s impressive because of the energy density.

A proton like this has a very high kinetic energy. When lots of particles with such energies come together to form a macroscopic object, the object has a high temperature. This is the relationship between subatomic particles and the temperature of the object they make up. The outermost layer of a star is so hot because its constituent particles have very high kinetic energies. Blue hypergiant stars like Eta Carinae, thought to be among the hottest stars in the universe, have a surface temperature of around 36,000 K and a surface area some 57,600 times larger than the Sun’s. This is impressive not on the temperature scale alone but also on the energy-density scale: Eta Carinae ‘maintains’ a higher temperature over a much larger area.
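The link between particle energies and temperature can be made concrete with the kinetic-theory relation ⟨E⟩ = (3/2)kT for an ideal gas – a simplification for a stellar surface, but good enough for a sense of scale. A quick, illustrative calculation at Eta Carinae’s quoted surface temperature:

```python
k_B = 1.381e-23  # Boltzmann constant, J/K

def mean_kinetic_energy(T):
    """Average kinetic energy per particle of an ideal gas at temperature T (kelvin)."""
    return 1.5 * k_B * T

E_star = mean_kinetic_energy(36_000)   # at Eta Carinae's quoted surface temperature
E_room = mean_kinetic_energy(300)      # at room temperature, for comparison
# E_star / E_room == 120: each particle carries 120 times the room-temperature energy
```

Temperature is just this per-particle energy read off a macroscopic body, which is why energy density, not temperature alone, is the fairer measure of how extreme an object is.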

Now, the following headline and variations thereof have been doing the rounds of late, and they piqued me because I’m quite reluctant to believe they’re true:

This headline, as you may have guessed by the fonts, is from Nature News. To be sure, I’m not doubting the veracity of any of the claims. Instead, my dispute is with the “coolest lab” claim and on entirely qualitative grounds.

The feat mentioned in the headline involves physicists using lasers to cool a tightly controlled group of atoms to near absolute zero, causing quantum-mechanical effects to become visible on the macroscopic scale – the feature Bose-Einstein condensates are celebrated for. Most, if not all, atomic cooling techniques endeavour in different ways to extract as much of the atoms’ kinetic energy as possible. The more energy they remove, the lower the resulting temperature.

The reason the headline piqued me was that it trumpets a place in the universe as the “universe’s coolest lab”. Be that as it may (though it may not technically be so; the physicist Wolfgang Ketterle has achieved lower temperatures before), lowering the temperature of an object to a remarkable sliver of a kelvin above absolute zero is one thing; lowering the temperature over a very large area or volume must be quite another. An extremely cold object inside a tight container the size of a shoebox (I presume) is missing far less energy than a not-so-extremely cold volume the size of, say, a star.

This is the source of my reluctance to acknowledge that the International Space Station could be the “coolest lab in the universe”.

While we regularly equate heat with temperature without much consequence to our judgment, the latter can be described by a single number pertaining to a single object whereas the former – heat – is energy flowing from a hotter to a colder region of space (or the other way with the help of a heat pump). In essence, the amount of heat is a function of two differing temperatures. In turn it could matter, when looking for the “coolest” place, that we look not just for low temperatures but for lower temperatures within warmer surroundings. This is because it’s harder to maintain a lower temperature in such settings – for the same reason we use thermos flasks to keep liquids hot: if the liquid is exposed to the ambient atmosphere, heat will flow from the liquid to the air until the two achieve a thermal equilibrium.
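The thermos-flask argument can be caricatured with Newton’s law of cooling, dT/dt = -k(T - T_env): left to itself, a cold object in warmer surroundings drifts up to the ambient temperature. (The rate constant and time step below are arbitrary, chosen purely for illustration.)

```python
T_obj, T_env = 1.0, 2.7   # a 1 K object surrounded by 2.7 K space, in kelvin
k, dt = 0.1, 0.01         # arbitrary rate constant and time step

# Integrate dT/dt = -k * (T_obj - T_env) with simple Euler steps
for _ in range(10_000):
    T_obj += -k * (T_obj - T_env) * dt

# Without a constant supply of cooling, the object ends up at ~2.7 K
```

The only way to stay below the surroundings indefinitely is to keep removing energy – which is what makes a persistently cold region in warm surroundings remarkable.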

An object is said to be cold if its temperature is lower than that of its surroundings. Vladivostok in Russia is cold relative to most of the world’s other cities, but if Vladivostok were the sole human settlement, beyond which no one had ever ventured, the human idea of cold would have to be recalibrated from, say, 10º C to -20º C. Likewise, the temperature required to achieve a Bose-Einstein condensate is that at which non-quantum-mechanical effects are so stilled that they stop interfering with the much weaker quantum-mechanical effects – given by a formula but typically lower than 1 K.

The deep nothingness of space itself has a temperature of 2.7 K (-270.45º C); when all the stars in the universe die and there are no more sources of energy, all hot objects – like neutron stars, colliding gas clouds or molten rain over an exoplanet – will eventually have to cool to 2.7 K to achieve equilibrium (notwithstanding other eschatological events).

This brings us, figuratively, to the Boomerang Nebula – in my opinion the real coolest lab in the universe because it maintains a very low temperature across a very large volume, i.e. its coolness density is significantly higher. This is a protoplanetary nebula, which is a phase in the lives of stars within a certain mass range. In this phase, the star sheds some of its mass that expands outwards in the form of a gas cloud, lit by the star’s light. The gas in the Boomerang Nebula, from a dying red giant star changing to a white dwarf at the centre, is expanding outward at a little over 160 km/s (576,000 km/hr), and has been for the last 1,500 years or so. This rapid expansion leaves the nebula with a temperature of 1 K. Astronomers discovered this cold mass in late 1995.

(“When gas expands, the decrease in pressure causes the molecules to slow down. This makes the gas cold”: source.)

The experiment to create a Bose-Einstein condensate in space – or for that matter anywhere on Earth – transpired in a well-insulated container that, apart from the atoms to be cooled, held a vacuum. To the atoms, then, the container was their universe, their Vladivostok. They were not at risk of the container’s coldness inviting heat from its surroundings and destroying the condensate. The Boomerang Nebula doesn’t have this luxury: as a nebula, it’s exposed to the vast emptiness, and the 2.7 K, of space at all times. So even though the temperature difference between itself and space is only 1.7 K, the nebula has to constantly contend with the equilibrating ‘pressure’ imposed by space.

Further, according to Raghavendra Sahai (as quoted by NASA), one of the discoverers of the nebula’s cold spots, it’s “even colder than most other expanding nebulae because it is losing its mass about 100 times faster than other similar dying stars and 100 billion times faster than Earth’s Sun.” This implies there is a great mass of gas, and so of atoms, whose temperature is around 1 K.

Altogether, the fact that the nebula has maintained a temperature of 1 K for around 1,500 years (plus a 5,000-year offset, to account for the light-travel time from the nebula) and over more than 3.14 trillion km makes it a far cooler “coolest” place, lab, whatever.
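The scale claim can be sanity-checked from the numbers quoted earlier: gas expanding at a little over 160 km/s for about 1,500 years covers several trillion kilometres – comfortably more than the 3.14 trillion km figure:

```python
speed_km_s = 160                        # expansion speed quoted above, km/s
years = 1_500                           # roughly how long the nebula has been expanding
seconds = years * 365.25 * 24 * 3600    # elapsed time, in seconds

distance_km = speed_km_s * seconds      # several trillion km of travel
```

So the shell of 1 K gas spans a volume light takes months to cross – against which a shoebox-sized condensate, however cold, doesn’t compare on the coolness-density scale.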

Good writing is an atom

https://twitter.com/HochTwit/status/1174875013708746752

The act of writing well is like an atom, or the universe. There is matter but it is thinly distributed, with lots of empty space in between. Removing this seeming nothingness won’t help, however. Its presence is necessary for things to remain the way they are and work just as well. Similarly, writing is not simply the deployment of words. There is often the need to stop mid-word and take stock of what you have composed thus far and what the best way to proceed could be, even as you remain mindful of the elegance of the sentence you are currently constructing and its appropriate situation in the overarching narrative. In the end, there will be lots of words to show for your effort but you will have spent even more time thinking about what you were doing and how you were doing it. Good writing, like the internal configuration of a set of protons, neutrons and electrons, is – physically speaking – very little about the labels attached to describe them. And good writing, like the vacuum energy of empty space, acquires its breadth and timelessness because it encompasses a lot of things that one cannot directly see.