Boron nitride, tougher than it looks

During World War I, a British aeronautical engineer named A.A. Griffith noticed something odd about glass. The atomic bonds in glass needed about 10,000 megapascals of stress to break apart – yet a macroscopic mass of glass could be broken by a stress of just 100 megapascals. Something about glass changed between the atomic level and the bulk, making it far weaker than its atomic bonds suggested.

Griffith attributed this difference to small imperfections in the bulk, like cracks and notches. He also realised that brittle materials like glass needed a new theory of fracture, since their atomic properties alone couldn't explain it. He drew on thermodynamics to derive an equation balancing two forms of energy: elastic energy and surface energy. The elastic energy is the energy stored in a material when it is deformed – like the potential energy of a stretched rubber band. The surface energy is the energy of molecules at the surface, which is always greater than that of molecules in the bulk. The greater the surface area of an object, the more surface energy it has.

Griffith took a block of glass, subjected it to a tensile load (i.e. a load that stretches the material without breaking it) and then etched a small crack in it. He found that introducing this flaw released some of the material's elastic energy but increased its surface energy. He also found that the free energy – the surface energy minus the released elastic energy – increased with crack length up to a point, then fell as the crack grew longer still. A material fractures, i.e. breaks, once the crack grows past this peak – equivalently, once the applied stress exceeds the critical value corresponding to it.
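Griffith's balance can be written compactly. What follows is a textbook reconstruction, not a quotation from his paper: for a through-crack of length 2a in a thin plate under tensile stress σ, the free energy per unit thickness is

```latex
% Free energy (per unit thickness) of a plate with a centre crack of
% length 2a under remote tensile stress \sigma; E is Young's modulus
% and \gamma_s the surface energy per unit area.
U(a) = 4a\gamma_s - \frac{\pi\sigma^2 a^2}{E}

% U peaks where dU/da = 0; beyond that, crack growth lowers the free
% energy and the crack runs away. Setting the derivative to zero gives
% the Griffith fracture stress:
\sigma_f = \sqrt{\frac{2E\gamma_s}{\pi a}}
```

The surface term grows linearly with crack length while the released elastic energy grows quadratically, which is exactly why the free energy rises and then falls as the crack lengthens.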

Through experiments, engineers have also been able to measure the fracture toughness of materials – a number essentially denoting a material's ability to resist the propagation of surface cracks. Brittle materials usually have higher strength but lower fracture toughness. That is, they can withstand high loads without breaking or deforming, but when they do fail, they fail in catastrophic fashion. No half-measures.
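Toughness and strength connect through a critical flaw size. A minimal sketch in Python, assuming the common fracture criterion K_IC = σ√(πa) (the exact geometry factor varies with crack shape, so treat the numbers as order-of-magnitude estimates):

```python
import math

def critical_flaw_size(fracture_toughness_mpa_sqrt_m: float,
                       stress_mpa: float) -> float:
    """Largest tolerable crack length (in metres) at a given applied
    stress, from the criterion K_IC = sigma * sqrt(pi * a)."""
    return (fracture_toughness_mpa_sqrt_m / stress_mpa) ** 2 / math.pi

# Glass: toughness ~0.7 MPa*sqrt(m), loaded at its ~7 MPa practical strength
a_glass = critical_flaw_size(0.7, 7.0)
print(f"glass: {a_glass * 1e3:.1f} mm")  # prints: glass: 3.2 mm
```

A flaw of a few millimetres is enough to doom a pane of glass at everyday stresses – which is why tiny imperfections, not atomic bonds, set the practical strength.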

If a material’s fracture characteristics are in line with Griffith’s theory, it’s usually brittle. For example, glass has a practical strength of 7 megapascals (against a theoretical upper limit of 17,000 megapascals) – but a fracture toughness of only 0.6-0.8 megapascal square-root metres (MPa·√m).

Graphene is a 2D material, composed of a sheet of carbon atoms arranged in a hexagonal pattern. And like glass, it is far weaker in the bulk than its bonds suggest: its strength is 130,000 megapascals, but its fracture toughness is only 4 MPa·√m – the difference arising, as with glass, from small flaws in the bulk material. Many people have posited graphene as a material of the future for its wondrous properties. Recently, scientists have been excited about the weird behaviour of electrons in graphene and the so-called ‘magic angle’. However, the fact that it is brittle automatically limits graphene’s applications to environments in which material failure can’t be catastrophic.

Another up-and-coming material is hexagonal boron nitride (h-BN). As its name indicates, h-BN is a grid of boron and nitrogen atoms arranged in a hexagonal pattern. (Boron nitride has two other forms: sphalerite and wurtzite.) h-BN is already used as a lubricant because it is very soft. It can also withstand high temperatures before losing its structural integrity, making it useful in applications related to spaceflight. However, since monolayer h-BN’s atomic structure is similar to that of graphene, scientists expected it to be brittle as well – with small flaws in the bulk material compromising the strength arising from its atomic bonds.

But a new study, published on June 2, has found that h-BN is not brittle. Scientists from China, Singapore and the US have reported that cracks in “single-crystal monolayer h-BN” don’t propagate the way Griffith’s theory predicts; instead they grow in a more stable way, making the material tougher.

Even though h-BN is sometimes called ‘white graphene’, many of its properties are different. Aside from being able to withstand up to 300° C more in air before oxidising, h-BN is an electrical insulator (graphene is a semimetal) and is more chemically inert. In 2017, scientists from Australia, China, Japan, South Korea, the UK and the US also reported that while graphene’s strength dropped by 30% as the number of stacked layers increased from one to eight, that of h-BN remained pretty much constant. This suggested, the scientists wrote, “that BN nanosheets are one of the strongest insulating materials, and more importantly, the strong interlayer interaction in BN nanosheets, along with their thermal stability, make them ideal for mechanical reinforcement applications.”

The new study further cements this reputation, and in fact lends itself to the conclusion that h-BN is one of the thermally, chemically and mechanically toughest insulators that we know.

Here, the scientists found that when a crack is introduced in monolayer h-BN, the resulting release of energy is dissipated more effectively than is observed in graphene. And as the crack grows, they found that unlike in graphene, it gets deflected instead of proceeding along a straight path, and also sprouts branches. This way, monolayer h-BN redistributes the elastic energy released in a way that allows the crack length to increase without fracturing the material (i.e. without causing catastrophic failure).

According to their paper, this behaviour is the result of h-BN being composed of two different types of atoms, boron and nitrogen, whereas graphene is composed solely of carbon atoms. As a result, when a bond between boron and nitrogen breaks, two types of crack-edges can form: those with boron at the edge (B-edges) and those with nitrogen at the edge (N-edges). The scientists write that based on their calculations, “the amplitude of edge stress [along N-edges] is more than twice of that [along B-edges]”. Every time a crack branches or is deflected, the direction in which it propagates is determined by the relative positions of B-edges and N-edges around the crack tip. And as the crack propagates, the asymmetric stress along these two edges causes it to turn and branch at different times.

The scientists summarise this in their paper: h-BN dissipates more energy by introducing “more local damage” – as opposed to global damage, i.e. fracturing – “which in turn induces a toughening effect”. “If the crack is branched, that means it is turning,” Jun Lou, one of the paper’s authors and a materials scientist at Rice University, Texas, told Nanowerk. “If you have this turning crack, it basically costs additional energy to drive the crack further. So you’ve effectively toughened your material by making it much harder for the crack to propagate.” The paper continues:

[These two mechanisms] contribute significantly to the one-order of magnitude increase in effective energy release rate compared with its Griffith’s energy release rate. This finding that the asymmetric characteristic of 2D lattice structures can intrinsically generate more energy dissipation through repeated crack deflection and branching, demonstrates a very important new toughening mechanism for brittle materials at the 2D limit.

To quote from Physics World:

The discovery that h-BN is also surprisingly tough means that it could be used to add tear resistance to flexible electronics, which Lou observes is one of the niche application areas for 2D-based materials. For flexible devices, he explains, the material needs to be mechanically robust before you can bend it around something. “That h-BN is so fracture-resistant is great news for the 2D electronics community,” he adds.

The team’s findings may also point to a new way of fabricating tough mechanical metamaterials through engineered structural asymmetry. “Under extreme loading, fracture may be inevitable, but its catastrophic effects can be mitigated through structural design,” [Huajian Gao, also at Rice University and another member of the study], says.

Featured image: A representation of hexagonal boron nitride. Credit: Benjah-bmm27/Wikimedia Commons, public domain.

The ‘could’ve, should’ve, would’ve’ of R&D

ISRO’s Moon rover, which will move around the lunar surface come September (if all goes well), will live and die in a span of 14 days because that’s how long the lithium-ion cells it’s equipped with can survive the −160° C nights at the Moon’s south pole, among other reasons. This illustrates an easily understood connection between fundamental research and its apparent uselessness on the one hand, and applied science and its apparent superiority on the other.

Neither position is entirely correct, of course, but this hierarchy of priorities is very real, at least in India, because it closely parallels the practices of a populist politics that privileges short-term gains over benefits in the longer run.

In this scenario, it may not seem worthwhile to fund a solid-state physicist who has, based on detailed physicochemical analyses, fashioned, for example, a new carbon-based material that can store lithium ions in its atomic lattice and has better thermal characteristics than graphite. It may seem even less worthwhile to fund researchers probing the seemingly obscure electronic properties of materials like graphene and silicene, writing papers steeped in abstract math and unable to propose a single viable application for the near future.

But give it twenty years and a measure of success in the otherwise-unpredictable translational part of the R&D pipeline, and suddenly you’re holding the batteries that are supposed to be installed on a Moon rover, trying to determine how many instruments you can pack on there while ensuring the whole ensemble stays powered for as long as each needs to conduct its tests. Just as suddenly, you’re also thinking about what else you could’ve installed on the little machine so it could’ve lived longer, and what else it could’ve potentially discovered in this bonus time.

Maybe you’re just happy, knowing how things have been for research in the country in the last two decades and based on the spaceflight organisation’s goals (a part of which the government has a say in), that the batteries can even last for two weeks. Maybe you’re just sad because you think it could’ve been better. But one way or another, it’s an inescapably tangible reminder that investments in research determine what you’re going to get to take out of the technology in the future. Put differently: it’s ridiculous to expect to know which water molecules are going to end up in which plant, but unless you water the soil, the plants are going to start wilting.

Chandrayaan 2 itself may be lined up to be a great success but who knows, there could come along a future mission where a groundbreaking instrument developed by an inspired student at a state university has to be left out of an interplanetary satellite because we didn’t have access to the right low-density, high-strength materials. Or where a bunch of Indians are on a decade-long interstellar voyage and the captain realises crew morale is dangerously low because the government couldn’t give two whits about social psychology.

Graphene the Ubiquitous

Every once in a while, a (revolutionary-in-hindsight) scientific discovery is made that’s at first treated as an anomaly, and then verified. Once established as a credible find, it goes through a period of great curiosity and intriguing reality checks – whether it was a one-time thing, whether it can actually be reproduced under different circumstances at different locations, whether it has properties that can be tracked through different electrical, mechanical and chemical conditions.

After surviving such tests, the once-discovery then enters a period of dormancy: while researchers look for ways to apply their find’s properties to solve real-world problems, science must go on and it does. What starts as a gentle trickle of academic papers soon cascades into a shower, and suddenly, one finds an explosion of interest on the subject against a background of “old” research. Everybody starts to recognize the find’s importance and realize its impending ubiquity – inside laboratories as well as outside. Eventually, this accumulating interest and the growing conviction of the possibility of a better, “enhanced” world of engineering drives investment, first private, then public, then more private again.

Enter graphene. Personally, I am very excited by graphene because of its extremely simple structure: it’s a planar arrangement of carbon atoms, a single layer thick, positioned in a honeycomb lattice. That’s it; yet the wonderful capabilities it has stacked up in the eyes of engineers and physicists worldwide since 2004, the year of its experimental discovery, are mind-blowing. In the fields of electronics, metrology, superconductivity, biochemistry and condensed-matter physics, the attention it currently draws is at a historic high.

Graphene’s star power, so to speak, lies in its electronic and crystalline quality. More than 70 years ago, the physicist Lev Landau had argued that lower-dimensional crystal lattices, such as that of graphene, are thermodynamically unstable: at any finite temperature, the distances through which the energetic atoms vibrate would exceed the interatomic distance, causing the lattice to break down into islands. Graphene defied this argument: its interatomic distances are extremely small, which translates into strong covalent bonds, formed by shared electrons, that don’t break even at elevated temperatures.
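The instability Landau described is usually stated in terms of thermal fluctuations. A standard textbook form (my reconstruction, not drawn from the articles discussed here) is:

```latex
% Mean-square thermal displacement of atoms in a 2D harmonic crystal
% of linear size L and lattice constant a, at temperature T. The
% logarithmic divergence with L destroys long-range crystalline order.
\langle u^2 \rangle \;\propto\; \frac{k_B T}{\kappa}\,\ln\frac{L}{a}
```

Here κ stands for an effective 2D elastic stiffness: as the sample size L grows, the displacement ⟨u²⟩ eventually becomes comparable to the interatomic spacing, which is the breakdown criterion described above.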

As Andre Geim and Konstantin Novoselov, experimental discoverers of graphene and joint winners of the 2010 Nobel Prize in physics, wrote in 2007:

The relativistic-like description of electron waves on honeycomb lattices has been known theoretically for many years, never failing to attract attention, and the experimental discovery of graphene now provides a way to probe quantum electrodynamics (QED) phenomena by measuring graphene’s electronic properties.

(On a tabletop for cryin’ out loud.)

What’s more, because graphene localises electrons faster than conventional devices can, using lasers to trigger the photoelectric effect in graphene resulted in electric currents (i.e., moving electrons) forming within picoseconds (photons in the laser pulse knocked out electrons, which then travelled to the nearest locations in the lattice where they could settle, each leaving a “hole” in its wake that would pull in the next electron, and so forth). Because of this alone, graphene could make for an excellent photodetector, capable of quickly picking up on small “amounts” of EM radiation.

An enhanced current-generation rate could also be read as a better electron-transfer rate, with big implications for artificial photosynthesis. The conversion of carbon dioxide to formic acid requires a catalyst that operates in the visible range to provide electrons to an enzyme it is coupled with. The enzyme then reacts with the carbon dioxide to yield the acid. Graphene, a team of South Korean scientists observed in early July, played the role of that catalyst with higher efficiency than its peers in the visible range of the EM spectrum, as well as offering a higher surface area over which electron transfer could occur.

Another potential area of application is in the design and development of non-volatile magnetic memories for higher-efficiency computers. A computer usually has two kinds of memory: a faster, volatile memory that can store data only when connected to a power source, and a non-volatile memory that retains data even when power is switched off. A lot of the power consumed by computers is spent transferring data between these two memories during operation, leading to an undesirable gap between a computer’s optimum efficiency and its operational efficiency. To address this, a Singaporean team of scientists hit upon the use of two electrically conducting films separated by an insulating layer, developing a magnetic resistance between them on application of a spin-polarised current.

The resistance is highest when the magnetisations of the two films are anti-parallel (i.e., pointing in opposite directions) and lowest when they are parallel. This sandwich is subsequently divided into cells, with each cell storing data in its magnetic resistance. For maximal contrast between the stored states, the films’ material would also need a high spin-polarisability. Here again, graphene was found to be a suitable material. In fact, in much the same vein, this wonder of an allotrope could also have some role to play in replacing existing tunnel-junction materials such as aluminium oxide and magnesium oxide, because of its lower electrical resistance per unit area, absence of surface defects, prohibition of interdiffusion at interfaces, and uniform thickness.
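The link between spin-polarisability and resistance contrast is often captured by Julliere’s model – my addition for illustration, not something the study above invokes – which expresses the tunnelling magnetoresistance (TMR) ratio of the two states in terms of the films’ spin polarisations:

```python
def julliere_tmr(p1: float, p2: float) -> float:
    """Tunnelling magnetoresistance ratio,
    (R_antiparallel - R_parallel) / R_parallel, for two ferromagnetic
    films with spin polarisations p1 and p2, per Julliere's model."""
    return 2 * p1 * p2 / (1 - p1 * p2)

# Higher polarisation -> starker contrast between the parallel and
# anti-parallel states, i.e. a more readable 0/1 in each memory cell.
print(f"P = 0.4 -> TMR = {julliere_tmr(0.4, 0.4):.2f}")  # prints: P = 0.4 -> TMR = 0.38
print(f"P = 0.7 -> TMR = {julliere_tmr(0.7, 0.7):.2f}")  # prints: P = 0.7 -> TMR = 1.92
```

The model makes concrete why high spin-polarisability matters: the TMR, and hence the readability of each cell, grows rapidly as the polarisation approaches 1.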

In essence, graphene doesn’t only replace existing materials to enhance a product’s (or process’s) mechanical and electrical properties; it also brings an opportunity to redefine what the product can do and what it could evolve into in the future. In this regard, it far surpasses existing results of research in materials engineering: instead of forging swords, scientists working with graphene can now forge the battle itself. This isn’t surprising considering graphene’s properties are most effective in nano-electromechanical applications (there has been talk of a graphene-based room-temperature superconductor). More precise measurements of these properties should open up a trove of new fields – and point to where similar materials might be hiding.