Powerful microscopy technique brings proteins into focus

Cryo-electron microscopy (cryo-EM) has become more important as a technology because the field it revolutionised – structural biology – has become more important. The international scientific community acknowledged this rise in fortunes, so to speak, when the Nobel Prize for chemistry was awarded to three people in 2017 for perfecting its use to study important biomolecules and molecular processes.

(Who received the prize is immaterial, considering more than just three people are likely to have contributed to the development of cryo-EM; however, the prize-giving committee’s choice of field to spotlight is a direction worth following.)

In 2015, two separate groups of scientists used cryo-EM to image objects 2.8 Å and 2.2 Å wide (1 nm is one-billionth of a metre; 1 Å is one-tenth of this). These distances are considered atomic because they represent the ability to image features about as big as a smallish atom – comparable to, say, an atom of sodium. Before cryo-EM, scientists could image such distances only with X-ray crystallography, which requires the samples under study to be crystallised first. This isn’t always possible.

But though cryo-EM didn’t require specimens to be crystallised, they had to be placed in a vacuum first. In a vacuum, water evaporates, and when water evaporates from biological material like tissue, the specimen can lose its structural integrity and collapse or deform. The trio that won the chemistry prize in 2017 developed multiple workarounds for this and other problems. Taken together, their innovations made cryo-EM more and more valuable for research.

One of the laureates, Joachim Frank, developed computational techniques in the 1970s and 1980s to enhance, correct and in other ways modify images obtained with cryo-EM. And one of these techniques in turn was particularly important.

An object will reflect a wave if the object’s size is comparable to the wave’s wavelength. Humans see a chair or table because the chair or table reflects visible light, and our eyes detect the reflected electromagnetic waves. A cryo-electron microscope ‘sees’ its samples using electrons, which have a much shorter wavelength than visible light and can thus reveal much smaller objects.

However, there’s a catch. The more energetic an electron is, the shorter its wavelength, and the smaller the features it can resolve – but a high-energy electron can also damage the specimen. Frank’s contributions allowed scientists to use fewer, or less energetic, electrons and still obtain equally good images of their specimens, leading to resolutions of 2.2 Å.
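
For a sense of scale, an electron’s wavelength follows from the relativistic de Broglie relation. As a rough, illustrative calculation (the 300 keV figure simply matches the accelerating voltage of the microscope pictured at the end of this article):

$$\lambda = \frac{h}{\sqrt{2 m_e E_k \left(1 + \frac{E_k}{2 m_e c^2}\right)}} \approx 0.02\ \text{Å} \quad \text{at } E_k = 300\ \text{keV}$$

That is far smaller than the 1-3 Å features structural biologists care about, so the wavelength itself is not the bottleneck – radiation damage and noise are.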

Today, structural biology continues to be important, but its demands have become more exacting. To elucidate the structures of smaller and smaller molecules, scientists need cryo-EM and other tools to be able to resolve smaller and smaller features, but come up against significant physical barriers.

For example, while Frank’s techniques allowed scientists to reduce the number of electrons required to obtain the image of a sample, using fewer probe particles also meant a lower signal-to-noise ratio (SNR). So the need for new techniques, new solutions, to these old problems has become apparent.

In a paper published online on October 21, a group of scientists from Belgium, the Netherlands and the UK describe “three technological developments that further increase the SNR of cryo-EM images”. These are a new kind of electron source, a new energy filter and a new electron camera.

The electron source is something the authors call a cold field emission electron gun (CFEG). Some electron microscopes use field emission guns (FEGs) to shoot sharply focused, coherent beams of electrons optimised to have energies that will produce a bright image. A CFEG is a FEG that trades some of that brightness for a smaller average difference in energies between electrons. The higher this difference – the energy spread – is, the more blurred the image will be.

The authors’ pitch is that FEGs help produce brighter but more blurred images than CFEGs, and that CFEGs help produce significantly better images when the goal is to image features smaller than 2 Å. Specifically, they write, the SNR increases 2.5x at a resolution of 1.5 Å and 9.5x at 1.2 Å.

The second improvement has to do with the choice of electrons used to compose the final image. The electrons fired by the gun (CFEG or otherwise) go on to have one of two types of collisions with the specimen. In an elastic collision, the electron’s kinetic energy doesn’t change – i.e. it doesn’t impart its kinetic energy to the specimen. In an inelastic collision, the electron’s kinetic energy changes because the electron has passed on some of it to the specimen itself. This energy transfer can produce noise, lower the SNR and distort the final image.

The authors propose using a filter that removes electrons that have undergone inelastic collisions before the final image is composed. In simple terms, the filter comprises a slit through which only electrons of a certain energy can pass and a prism that bends their path towards a detector. This said, they do acknowledge that it will be interesting to explore in future whether inelastically scattered electrons can be better accounted for instead of being eliminated altogether – akin to silencing a classroom by expelling unruly children versus retaining them and teaching them to keep quiet.
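
The idea is easy to mimic in a toy simulation. The sketch below is not the actual instrument or the paper’s method – the beam energy, slit width and scattering fractions are illustrative assumptions – but it shows the principle: electrons that lost energy to the specimen fall outside a narrow energy window around the beam energy and are discarded.

```python
import numpy as np

# Toy sketch of zero-loss energy filtering (illustrative numbers only):
# electrons that lost energy to the specimen (inelastic scattering) fall
# outside a narrow slit around the beam energy and are rejected.
rng = np.random.default_rng(0)
beam_energy_ev = 300_000.0  # assumed 300 keV beam
n = 10_000

# Assume ~80% of electrons scatter elastically (energy nearly unchanged)
# and ~20% inelastically (losing tens of eV to the specimen).
elastic = rng.random(n) < 0.8
energies = np.where(
    elastic,
    beam_energy_ev + rng.normal(0.0, 0.3, n),      # small intrinsic spread
    beam_energy_ev - rng.uniform(10.0, 100.0, n),  # inelastic energy loss
)

slit_half_width_ev = 5.0  # only electrons within ±5 eV of the beam pass
passed = np.abs(energies - beam_energy_ev) < slit_half_width_ev

print(f"{passed.sum()} of {n} electrons pass the filter")
print(f"fraction rejected as inelastic: {1 - passed.mean():.2f}")
```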

The final improvement is to use the “next-generation” Falcon 4 direct-electron detector. This is the latest iteration in a line of products developed by Thermo Fisher Scientific to count the electrons impinging on its surface as accurately as possible, record where each one lands and do so at a desirable exposure. The Falcon 4 has square pixels 14 µm to a side, a sampling frequency of 248 Hz and a “sub-pixel accuracy” (according to the authors) that allows the device to not lose track of electrons even if they impinge close to each other on the detector.
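
One common way to localise an electron to better than a pixel – not necessarily the Falcon 4’s actual algorithm, just an illustration of the idea – is to take the intensity-weighted centroid of the small cluster of pixels that a single electron lights up:

```python
import numpy as np

# Toy sketch of sub-pixel electron localisation by centroiding.
# A single electron deposits signal over a few neighbouring pixels;
# the intensity-weighted centroid estimates where it landed with
# better-than-one-pixel precision. Numbers are illustrative.

def centroid(patch: np.ndarray) -> tuple[float, float]:
    """Return the intensity-weighted (row, column) centroid of a patch."""
    rows, cols = np.indices(patch.shape)
    total = patch.sum()
    return (rows * patch).sum() / total, (cols * patch).sum() / total

# A 3x3 patch of counts left by one electron that landed slightly
# off-centre of the middle pixel.
patch = np.array([
    [1.0, 4.0, 2.0],
    [3.0, 9.0, 5.0],
    [1.0, 3.0, 2.0],
])

row, col = centroid(patch)
print(f"estimated landing position: row {row:.2f}, column {col:.2f}")
```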

A schematic overview of the experimental setup. Credit: https://doi.org/10.1038/s41586-020-2829-0

Combining all three improvements, the authors write that they were able to image a human membrane protein called β3 GABA_A R with a resolution of 1.7 Å and mouse apoferritin at 1.22 Å. (The protein called ferritin binds to iron and stores/releases it; apoferritin is ferritin sans iron.)

A reconstructed image of GABA_A R. The red blobs are water molecules. NAG is N-acetyl glucosamine. Credit: https://doi.org/10.1038/s41586-020-2829-0

“The increased SNR of cryo-EM images enabled by the technology described here,” the authors conclude, “will expand [the technique] to more difficult samples, including membrane proteins in lipid bilayers, small proteins and structurally heterogeneous macromolecular complexes.”

At these resolutions, scientists are closing in on images not just of macromolecules of biological importance but of parts of these molecules – and can in effect elucidate the structures that correspond to specific functions or processes. This is somewhat like going from knowing that viruses infect cells to determining the specific parts of a virus and a cell implicated in the infiltration process.

A very germane example is that of the novel coronavirus. In April this year, a group of researchers from France and the US reported the cryo-EM structure of the virus’s spike glycoprotein, which binds to the ACE2 protein on the surface of some cells to gain entry. Knowing this structure, other researchers can design better inhibitors to disrupt the glycoprotein’s function, as well as vaccines that mimic its presence to provoke the desired immune response.

In this regard, a resolution of 1-2 Å corresponds to the dimensions of individual covalent bonds (a carbon-carbon single bond, for example, is about 1.54 Å long). So by extending cryo-EM’s ability to decipher smaller and smaller features, researchers can strike at smaller, more precise molecular mechanisms to produce more efficient, perhaps more closely controlled and finely targeted, effects.

Featured image: Scientists using a 300-kV cryo-EM at the Max Planck Institute of Molecular Physiology, Dortmund. Credit: MPI Dortmund.

Where is the coolest lab in the universe?

The Large Hadron Collider (LHC) performs an impressive feat every time it accelerates billions of protons to nearly the speed of light – and not in terms of the energy alone. For example, you release more energy when you clap your palms together once than is imparted to a proton accelerated by the LHC. The impressiveness arises from the fact that the energy of your clap is distributed among billions of atoms while the proton’s energy resides in a single particle. It’s impressive because of the energy density.
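
A rough calculation makes the point. Taking the LHC’s design energy of 7 TeV per proton (and a clap as releasing something of the order of a joule, which is only an order-of-magnitude guess):

$$E_\text{proton} = 7 \times 10^{12}\ \text{eV} \times 1.602 \times 10^{-19}\ \text{J/eV} \approx 1.1 \times 10^{-6}\ \text{J}$$

That is about a microjoule – far less than the clap in total, but the clap’s energy is shared among all the atoms in your palms while the microjoule sits in a single proton.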

A proton like this should have a very high kinetic energy. When lots of protons with such amounts of energy come together to form a macroscopic object, the object will have a high temperature. This is the relationship between subatomic particles and the temperature of the object they make up. The outermost layer of a star is so hot because its constituent particles have very high kinetic energies. Blue hypergiant stars like Eta Carinae, thought to be among the hottest stars in the universe, have a surface temperature of 36,000 K and a surface area some 57,600-times larger than the Sun’s. This is impressive not on the temperature scale alone but also on the energy-density scale: Eta Carinae ‘maintains’ a higher temperature over a larger area.
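
That 57,600 figure is presumably just a squared ratio of radii – if Eta Carinae’s radius is taken to be roughly 240 times the Sun’s (one commonly quoted estimate; the true value is uncertain), then

$$\frac{A_\star}{A_\odot} = \left(\frac{R_\star}{R_\odot}\right)^2 \approx 240^2 = 57{,}600$$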

Now, the following headline and variations thereof have been doing the rounds of late, and they piqued me because I’m quite reluctant to believe they’re true:

This headline, as you may have guessed by the fonts, is from Nature News. To be sure, I’m not doubting the veracity of any of the claims. Instead, my dispute is with the “coolest lab” claim and on entirely qualitative grounds.

The feat mentioned in the headline involves physicists using lasers to cool a tightly controlled group of atoms to near-absolute-zero, causing quantum mechanical effects to become visible on the macroscopic scale – the feature that Bose-Einstein condensates are celebrated for. Most, if not all, atomic cooling techniques endeavour in different ways to extract as much of an atom’s kinetic energy as possible. The more energy they remove, the cooler the indicated temperature.

The reason the headline piqued me was that it trumpets a place in the universe as the “universe’s coolest lab”. Be that as it may (though it may not technically be so; the physicist Wolfgang Ketterle has achieved lower temperatures before), lowering the temperature of an object to a remarkable sliver of a kelvin above absolute zero is one thing, but lowering the temperature over a very large area or volume must be quite another. For example, an extremely cold object inside a tight container the size of a shoebox (I presume) must be missing far less energy than a not-so-extremely-cold volume the size of, say, a star.

This is the source of my reluctance to acknowledge that the International Space Station could be the “coolest lab in the universe”.

While we regularly equate heat with temperature without much consequence to our judgment, the latter can be described by a single number pertaining to a single object whereas the former – heat – is energy flowing from a hotter to a colder region of space (or the other way with the help of a heat pump). In essence, the amount of heat is a function of two differing temperatures. In turn it could matter, when looking for the “coolest” place, that we look not just for low temperatures but for lower temperatures within warmer surroundings. This is because it’s harder to maintain a lower temperature in such settings – for the same reason we use thermos flasks to keep liquids hot: if the liquid is exposed to the ambient atmosphere, heat will flow from the liquid to the air until the two achieve a thermal equilibrium.

An object is said to be cold if its temperature is lower than that of its surroundings. Vladivostok in Russia is cold relative to most of the world’s other cities, but if Vladivostok were the sole human settlement, beyond which no one had ever ventured, the human idea of cold would have to be recalibrated from, say, 10° C to -20° C. The temperature required to achieve a Bose-Einstein condensate is the temperature at which ordinary thermal effects are so stilled that they stop interfering with the much weaker quantum-mechanical effects; it is given by a formula but is typically lower than 1 K.
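
For the record, that formula – for an idealised, uniform gas of bosons; real experiments in traps modify it somewhat – is

$$T_c = \frac{2\pi\hbar^2}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3}$$

where m is the mass of each atom, n is their number density and ζ(3/2) ≈ 2.612. For the dilute atomic clouds used in such experiments, it works out to a microkelvin or less – which is why the cooling has to be so extreme.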

The deep nothingness of space itself has a temperature of 2.7 K (-270.45° C); when all the stars in the universe die and there are no more sources of energy, all hot objects – like neutron stars, colliding gas clouds or molten rain over an exoplanet – will eventually have to cool to 2.7 K to achieve equilibrium (notwithstanding other eschatological events).

This brings us, figuratively, to the Boomerang Nebula – in my opinion the real coolest lab in the universe because it maintains a very low temperature across a very large volume, i.e. its coolness density is significantly higher. This is a protoplanetary nebula, which is a phase in the lives of stars within a certain mass range. In this phase, the star sheds some of its mass that expands outwards in the form of a gas cloud, lit by the star’s light. The gas in the Boomerang Nebula, from a dying red giant star changing to a white dwarf at the centre, is expanding outward at a little over 160 km/s (576,000 km/hr), and has been for the last 1,500 years or so. This rapid expansion leaves the nebula with a temperature of 1 K. Astronomers discovered this cold mass in late 1995.

(“When gas expands, the decrease in pressure causes the molecules to slow down. This makes the gas cold”: source.)
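
In idealised terms – treating the outflowing gas as an ideal gas expanding adiabatically, which is a simplification – this is the familiar relation

$$T V^{\gamma - 1} = \text{constant}$$

where γ is the ratio of the gas’s specific heats: as the volume V of the expanding cloud grows, its temperature T has to fall.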

The experiment to create a Bose-Einstein condensate in space – or for that matter anywhere on Earth – transpired in a well-insulated container that, apart from the atoms to be cooled, was a vacuum. So as such, to the atoms, the container was their universe, their Vladivostok. They were not at risk of the container’s coldness inviting heat from its surroundings and destroying the condensate. The Boomerang Nebula doesn’t have this luxury: as a nebula, it’s exposed to the vast emptiness, and 2.7 K, of space at all times. So even though the temperature difference between it and space is only 1.7 K, the nebula has to constantly contend with the equilibrating ‘pressure’ imposed by space.

Further, according to Raghavendra Sahai (as quoted by NASA), one of the scientists who discovered the nebula’s extreme cold, it’s “even colder than most other expanding nebulae because it is losing its mass about 100-times faster than other similar dying stars and 100-billion-times faster than Earth’s Sun.” This implies there is a great mass of gas – and so a great number of atoms – whose temperature is around 1 K.

Altogether, the fact that the nebula has maintained a temperature of 1 K for around 1,500 years (plus a 5,000-year offset, to compensate for the distance to the nebula) and over 3.14 trillion km makes it a far cooler ‘coolest’ place, lab, whatever.