My heart of physics

Every July 4, I have occasion to remember two things: the discovery of the Higgs boson, and my first published byline for an article about the discovery of the Higgs boson. I have no trouble believing it’s been eight years since we discovered this particle, using the Large Hadron Collider (LHC) and its ATLAS and CMS detectors, in Geneva. I’ve greatly enjoyed writing about particle physics in this time, principally because closely engaging with new research and the scientists who produced it allowed me to learn more about a subject that high school and college had let me down on: physics.

In 2020, I haven’t been able to focus much on the physical sciences in my writing, thanks to the pandemic, the lockdown, their combined effects and one other reason. This has been made doubly sad by the fact that the particle physics community at large is at an interesting crossroads.

In 2012, the LHC fulfilled the principal task it had been built for: finding the Higgs boson. After that, physicists imagined the collider would discover other unknown particles, allowing theorists to expand their theories and answer hitherto unanswered questions. However, the LHC has since done the opposite: it has narrowed the possibilities of finding new particles that physicists had argued should exist according to their theories (specifically supersymmetric partners), forcing them to look harder for mistakes they might’ve made in their calculations. But thus far, physicists have neither found mistakes nor made new findings, leaving them stuck in an unsettling knowledge space from which it seems there might be no escape (okay, this is sensationalised, but it’s also kinda true).

Right now, the world’s particle physicists are mulling building a collider larger and more powerful than the LHC, at a cost of billions of dollars, in the hopes that it will find the particles they’re looking for. Not all physicists agree, of course. If you’re interested in reading more, I’d recommend articles by Sabine Hossenfelder and Nirmalya Kajuri and spiralling out from there. But notwithstanding the opposition, CERN – which coordinates the LHC’s operations with tens of thousands of personnel from scores of countries – recently updated its strategy vision to recommend the construction of such a machine, with the ability to produce copious amounts of Higgs bosons in collisions between electrons and positrons (a.k.a. ‘Higgs factories’). China has also announced plans of its own to build something similar.

Meanwhile, scientists and engineers are busy upgrading the LHC itself to a ‘high luminosity’ version, where luminosity represents the number of collisions, and so the number of interesting events, the machine can deliver for further study. This version will operate until 2038. That isn’t as far away as it sounds: it took more than a decade to build the LHC, and it will certainly take longer to plan for, convince lawmakers of, secure the funds for and build something bigger and more complicated.

There have been some other developments connected to the current occasion that indicate other ways to discover ‘new physics’, the collective name for phenomena that would violate our existing theories’ predictions and show us where we’ve gone wrong in our calculations.

The most recent one, I think, was the ‘XENON excess’: a moderately strong signal recorded by the XENON1T detector in Italy that physicists think could be evidence of a class of particles called axions. I say ‘moderately strong’ because the statistical significance of the signal is just barely above the threshold used to denote evidence, and nowhere near the threshold that denotes a discovery proper.

It’s evoked a fair bit of excitement because axions count as new physics – but when I asked two physicists (one after the other) to write an article explaining this development, they refused on similar grounds: that the significance makes it likely the signal will eventually be accounted for by some other, well-known process. I was disappointed of course, but I wasn’t surprised either: in the last eight years, I can count at least four instances in which a seemingly inexplicable particle-physics development turned out to be a dud.

The most prominent one was the ‘750 GeV excess’ at the LHC in December 2015, which seemed to be a sign of a new particle about six-times heavier than a Higgs boson and 800-times heavier than a proton (at rest). But when physicists analysed more data, the signal vanished – a.k.a. it wasn’t there in the first place and what physicists had seen was likely a statistical fluke of some sort. Another popular anomaly that went the same way was the one at Atomki.

But while all of this is so very interesting, today – July 4 – also seems like a good time to admit I don’t feel as invested in the future of particle physics anymore (the ‘other reason’). Some might say, and have said, that I’m abandoning ship just as the field’s central animus is moving away from the physics and more towards sociology and politics, and some might be right. I get enough of the latter subjects when I work on the non-physics topics that interest me, like research misconduct and science policy. My heart of physics itself is currently tending towards quantum mechanics and thermodynamics (although not quantum thermodynamics).

One peer had also recommended along the way that I familiarise myself with quantum computing, while another had suggested climate-change-related mitigation technologies, which only makes me wonder now if I’m delving into those branches of physics that promise to take me farther away from what I’m supposed to do. And truth be told, I’m perfectly okay with that. 🙂 This does speak to my privileges – modest as they are on this particular count – but when it feels like there’s less stuff to be happy about in the world with every new day, it’s time to adopt a new hedonism and find joy where it lies.

Where is the coolest lab in the universe?

The Large Hadron Collider (LHC) performs an impressive feat every time it accelerates billions of protons to nearly the speed of light – and not in terms of the energy alone. For example, you release more energy when you clap your palms together once than the LHC imparts to any single proton. The impressiveness arises from the fact that the energy of your clap is distributed among billions and billions of atoms, whereas the proton’s energy resides in that one particle alone. It’s impressive because of the energy density.
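To put rough numbers on that comparison, here’s a back-of-the-envelope sketch. The 6.5 TeV per-proton figure is from the LHC’s recent runs; the clap energy is my own loose order-of-magnitude guess, not a measured value.

```python
# Rough comparison of a single LHC proton's energy with a hand clap.
# 6.5 TeV is the per-proton beam energy of the LHC's recent runs; the
# clap figure is an order-of-magnitude assumption, not a measured value.
EV_TO_JOULE = 1.602e-19

proton_energy = 6.5e12 * EV_TO_JOULE   # about 1e-6 J for one proton
clap_energy = 0.1                      # J, assumed

print(f"one proton: {proton_energy:.1e} J")
print(f"one clap:   ~{clap_energy} J, i.e. roughly {clap_energy / proton_energy:,.0f}x more")
```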

A proton accelerated like this has a very high kinetic energy. When lots of protons with this much energy come together to form a macroscopic object, the object will have a high temperature. This is the relationship between subatomic particles and the temperature of the object they make up. The outermost layer of a star is so hot because its constituent particles have very high kinetic energies. Blue hypergiant stars like Eta Carinae, thought to be among the hottest stars in the universe, have a surface temperature of 36,000 K and a surface 57,600-times larger than that of the Sun. This is impressive not on the temperature scale alone but on the energy density scale as well: Eta Carinae ‘maintains’ a higher temperature over a much larger area.
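The link between a particle’s kinetic energy and the temperature of the object it is part of can be written down directly. For an ideal monatomic gas (a simplification, but it captures the idea), the average kinetic energy per particle and the temperature are related as:

\[ \langle E_k \rangle = \tfrac{3}{2} k_B T \]

where k_B is Boltzmann’s constant: the hotter the object, the more kinetic energy each of its constituent particles has on average. Real stellar atmospheres are messier than an ideal gas, but the proportionality is the point.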

Now, the following headline and variations thereof have been doing the rounds of late, and they piqued me because I’m quite reluctant to believe they’re true:

This headline, as you may have guessed by the fonts, is from Nature News. To be sure, I’m not doubting the veracity of any of the claims. Instead, my dispute is with the “coolest lab” claim and on entirely qualitative grounds.

The feat mentioned in the headline involves physicists using lasers to cool a tightly controlled group of atoms to near-absolute-zero, causing quantum mechanical effects to become visible on the macroscopic scale – the feature that Bose-Einstein condensates are celebrated for. Most, if not all, atomic cooling techniques endeavour in different ways to extract as much of an atom’s kinetic energy as possible. The more energy they remove, the cooler the indicated temperature.

The reason the headline piqued me was that it trumpets a place in the universe called the “universe’s coolest lab”. Be that as it may (though it may not technically be so; the physicist Wolfgang Ketterle has achieved lower temperatures before), lowering the temperature of an object to a remarkable sliver of a kelvin above absolute zero is one thing; lowering the temperature over a very large area or volume must be quite another. For example, an extremely cold object inside a tight container the size of a shoebox (I presume) must be lacking much less energy than a not-so-extremely cold volume the size of, say, a star.

This is the source of my reluctance to acknowledge that the International Space Station could be the “coolest lab in the universe”.

While we regularly equate heat with temperature without much consequence to our judgment, the latter can be described by a single number pertaining to a single object whereas the former – heat – is energy flowing from a hotter to a colder region of space (or the other way with the help of a heat pump). In essence, the amount of heat is a function of two differing temperatures. In turn it could matter, when looking for the “coolest” place, that we look not just for low temperatures but for lower temperatures within warmer surroundings. This is because it’s harder to maintain a lower temperature in such settings – for the same reason we use thermos flasks to keep liquids hot: if the liquid is exposed to the ambient atmosphere, heat will flow from the liquid to the air until the two achieve a thermal equilibrium.

An object is said to be cold if its temperature is lower than that of its surroundings. Vladivostok in Russia is cold relative to most of the world’s other cities, but if Vladivostok were the sole human settlement, beyond which no one had ever ventured, the human idea of cold would have to be recalibrated from, say, 10º C to -20º C. The temperature required to achieve a Bose-Einstein condensate is that at which the atoms’ thermal motion is so stilled that it stops swamping the much weaker quantum-mechanical effects; it is given by a formula but is typically lower than 1 K.
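For what it’s worth, the formula alluded to, in its textbook form for a uniform gas of non-interacting bosons (real trapped-atom experiments modify it), is:

\[ T_c = \frac{2\pi\hbar^2}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3} \]

where m is the mass of each atom, n their number density, k_B Boltzmann’s constant and ζ(3/2) ≈ 2.612. For the dilute clouds of heavy atoms used in these experiments, this typically works out to temperatures of the order of a microkelvin or less.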

The deep nothingness of space itself has a temperature of 2.7 K (-270.45º C); when all the stars in the universe die and there are no more sources of energy, all hot objects – like neutron stars, colliding gas clouds or molten rain over an exoplanet – will eventually have to cool to 2.7 K to achieve equilibrium (notwithstanding other eschatological events).

This brings us, figuratively, to the Boomerang Nebula – in my opinion the real coolest lab in the universe because it maintains a very low temperature across a very large volume, i.e. its coolness density is significantly higher. This is a protoplanetary nebula, which is a phase in the lives of stars within a certain mass range. In this phase, the star sheds some of its mass that expands outwards in the form of a gas cloud, lit by the star’s light. The gas in the Boomerang Nebula, from a dying red giant star changing to a white dwarf at the centre, is expanding outward at a little over 160 km/s (576,000 km/hr), and has been for the last 1,500 years or so. This rapid expansion leaves the nebula with a temperature of 1 K. Astronomers discovered this cold mass in late 1995.

(“When gas expands, the decrease in pressure causes the molecules to slow down. This makes the gas cold”: source.)
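In idealised form this is adiabatic cooling: a parcel of gas that expands without exchanging heat with its surroundings obeys

\[ T\,V^{\gamma - 1} = \text{constant} \]

where γ is the ratio of the gas’s specific heats, so the more the volume grows, the lower the temperature falls. The nebula’s outflow is of course far messier than this textbook relation, but the basic mechanism is the same.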

The experiment to create a Bose-Einstein condensate in space – or for that matter anywhere on Earth – transpired in a well-insulated container that, apart from the atoms to be cooled, was a vacuum. As such, to the atoms, the container was their universe, their Vladivostok. They were not at risk of the container’s coldness inviting heat from its surroundings and destroying the condensate. The Boomerang Nebula doesn’t have this luxury: as a nebula, it’s exposed to the vast emptiness, and 2.7 K, of space at all times. So even though the temperature difference between itself and space is only 1.7 K, the nebula also has to constantly contend with the equilibrating ‘pressure’ imposed by space.

Further, according to Raghavendra Sahai (as quoted by NASA), one of the discoverers of the nebula’s cold spot, it’s “even colder than most other expanding nebulae because it is losing its mass about 100-times faster than other similar dying stars and 100-billion-times faster than Earth’s Sun.” This implies there is a great mass of gas, and so of atoms, whose temperature is around 1 K.

Altogether, the fact that the nebula has maintained a temperature of 1 K for around 1,500 years (plus a 5,000-year offset, to compensate for the distance to the nebula) and over 3.14 trillion km makes it a far cooler “coolest” place, lab, whatever.

Journalistic entropy

Say you need to store a square image 1,000 pixels to a side with the smallest filesize (setting aside compression techniques). The image begins with the colour #009900 on the left side and, as you move towards the right, gradually blends into #1e1e1e on the rightmost edge. Two simple storage methods come to mind: you could either encode the colour-information of every pixel in a file and store that file, or you could determine a mathematical function that, given the inputs #009900 and #1e1e1e, generates the image in question.

The latter method seems more appealing, especially for larger canvases of patterns that are composed by a single underlying function. In such cases, it should obviously be more advantageous to store the image as an output of a function to achieve the smallest filesize.

Now, in information theory (as in thermodynamics), there is an entity called entropy: it describes the amount of information you don’t have about a system. In our example, imagine that the colour #009900 blends to #1e1e1e from left to right save for a strip along the right edge, say, 50 pixels wide. Each pixel in this strip can assume a random colour. To store this image, you’d have to save it as an addition of two functions: ƒ(x, y), where x = #009900 and y = #1e1e1e, plus one function to colour the pixels lying in the 50-px strip on the right side. Obviously this will increase the filesize of the stored function.

Going further, imagine if you were told that 200,000 of the 1,000,000 pixels in the image would assume random colours. The underlying function becomes even clumsier: an addition of ƒ(x, y) and a function R that randomly selects 200,000 pixels and then randomly colours them. The outputs of this function R stand for the information about the image that you can’t have beforehand; the more such information you lack, the more entropy the image is said to have.
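Here’s a minimal sketch of the two parts in Python, assuming a simple left-to-right linear blend between the two colours (the post doesn’t specify the blending rule, so that’s my assumption): the gradient part is reproducible from a handful of numbers, while the random strip has to be stored pixel by pixel.

```python
import numpy as np

WIDTH = HEIGHT = 1000
LEFT = np.array([0x00, 0x99, 0x00], dtype=float)    # #009900
RIGHT = np.array([0x1e, 0x1e, 0x1e], dtype=float)   # #1e1e1e

def gradient(width=WIDTH, height=HEIGHT):
    """Deterministic part: every pixel follows from two colours and a rule."""
    t = np.linspace(0.0, 1.0, width)                 # 0 at the left edge, 1 at the right
    row = LEFT[None, :] * (1 - t)[:, None] + RIGHT[None, :] * t[:, None]
    return np.tile(row[None, :, :], (height, 1, 1)).astype(np.uint8)

img = gradient()

# Entropic part: a 50-px strip of random colours. No function regenerates these
# exact values; the only way to keep them is to store them outright.
rng = np.random.default_rng()
noise = rng.integers(0, 256, size=(HEIGHT, 50, 3), dtype=np.uint8)
img[:, -50:, :] = noise

# Rough bookkeeping: the deterministic part compresses to a handful of numbers,
# the random strip costs its full 1000 x 50 x 3 bytes.
print("image shape:        ", img.shape)
print("function parameters:", 2 * 3, "bytes (two RGB triples)")
print("random strip:       ", noise.nbytes, "bytes")
```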

The example of the image was simple but sufficiently illustrative. In thermodynamics, entropy is similar to this randomness vis-à-vis information: it’s the amount of thermal energy a system contains that can’t be used to perform work. From the point of view of work, it’s useless thermal energy (including heat) – something that can’t contribute to moving a turbine blade, powering a motor or motivating a system of pulleys to lift weights. Instead, it is thermal energy caught up in the system’s own disorder, unavailable for any directed purpose.

As it happens, this picture could help clarify, or at least make more sense of, a contemporary situation in science journalism. Earlier this week, health journalist Priyanka Pulla discovered that the Indian Council of Medical Research (ICMR) had published a press release last month, about the serological testing kit the government had developed, with the wrong specificity and sensitivity data. Two individuals she spoke to, one from ICMR and another from the National Institute of Virology, Pune, which actually developed the kit, admitted the mistake when she contacted them. Until then, neither organisation had issued a clarification even though both individuals were likely to have known of the mistake at the time the release was published.

Assuming for a moment that this mistake was an accident (my current epistemic state is ‘don’t know’), it would indicate ICMR has been inefficient in the performance of its duties, forcing journalists to respond to it in some way instead of focusing on other, more important matters.

The reason I’m inclined to think of such work as entropy, and not work per se, is that such instances, in which journalists are forced to respond to an event or action that has a trivial resolution, seem to be becoming more common.

It’s of course easier to argue that what I consider trivial may be nontrivial to someone else, and that these events and actions matter to a greater extent than I’m willing to acknowledge. However, I’m personally unable to see beyond the fact that an organisation with the resources and, currently, the importance of ICMR shouldn’t have had a hard time proof-reading a press release that was going to land in the inboxes of hundreds of journalists. The consequences of the mistake are nontrivial but the solution is quite trivial.

(There is another feature in some cases: of the absence of official backing or endorsement of any kind.)

As such, it required work on the part of journalists that could easily have been spared, work they could instead have directed at more meaningful, more productive endeavours. Here are four more examples of such events/actions, wherein the non-triviality is significantly and characteristically lower than that attached to formal announcements, policies, reports, etc.:

  1. Withholding data in papers – In the most recent example, ICMR researchers published the results of a seroprevalence survey of 26,000 people in 65 districts around India, and concluded that the prevalence of the novel coronavirus was 0.73% in this population. However, in their paper, the researchers include neither a district-wise breakdown of the data nor the confidence intervals for each available data-point even though they had this information (it’s impossible to compute the results the researchers did without these details). As a result, it’s hard for journalists to determine how reliable the results are, and whether they really support the official policies regarding epidemic-control interventions that will soon follow.
  2. Publishing faff – On June 2, two senior members of the Directorate General of Health Services, within India’s Union health ministry, published a paper (in a journal they edited) that, by all counts, made nonsensical claims about India’s COVID-19 epidemic becoming “extinguished” sometime in September 2020. Either the pair of authors wasn’t aware of their collective irresponsibility or they intended to refocus (putting it benevolently) the attention of various people towards their work, turning it away from whatever the duo deemed embarrassing. Either way, the claims in the paper wound their way into two news syndication services, PTI and IANS, and eventually onto the pages of a dozen widely-read news publications in the country. In effect, there were two levels of irresponsibility at play: one as embodied by the paper and the other, by the syndication services’ and final publishers’ lack of due diligence.
  3. Making BS announcements – This one is fairly common: a minister or senior party official will say something silly, such as that ancient Indians invented the internet, and ride the waves of polarising debate, rapidly devolving into acrimonious flamewars on Twitter, that follow. I recently read (in The Washington Post I think, but I can’t find the link now) that it might be worthwhile for journalists to try and spend less time on fact-checking a claim than it took someone to come up with that claim. Obviously there’s no easy way to measure the time some claims took to mature into their present forms, but even so, I’m sure most journalists would agree that fact-checking often takes much longer than bullshitting (and then broadcasting). But what makes this enterprise even more grating is that it is orders of magnitude easier to not spew bullshit in the first place.
  4. Conspiracy theories – This is the most frustrating example of the lot because, today, many of the originators of conspiracy theories are television journalists, especially those backed by government support or vice versa. While fully acknowledging the deep-seated issues underlying both media independence and the politics-business-media nexus, numerous pronouncements by so many news anchors have only been akin to shooting ourselves in the foot. Exhibit A: shortly after Prime Minister Narendra Modi announced the start of demonetisation, a beaming news anchor told her viewers that the new 2,000-rupee notes would be embedded with chips to transmit the notes’ location real-time, via satellite, to operators in Delhi.

Perhaps this entropy – i.e. the amount of journalistic work not available to deal with more important stories – is not only the result of a mischievous actor attempting to keep journalists, and the people who read those journalists, distracted but also a manifestation of a whole industry’s inability to cope with the mechanisms of a new political order.

Science journalism itself has already experienced a symptom of this change when pseudoscientific ideas became more mainstream, even entering the discourse of conservative political groups, including that of the BJP. In a previous era, if a minister said something, a reporter was to drum up a short piece whose entire purpose was to record “this happened”. And such reports were the norm and in fact one of the purported roots of many journalistic establishments’ claims to objectivity, an attribute they found not just desirable but entirely virtuous: those who couldn’t be objective were derided as sub-par.

However, if a reporter were to simply report today that a minister said something, she places herself at risk of amplifying bullshit to a large audience if what the minister said was “bullshit bullshit bullshit”. So just as politicians’ willingness to indulge in populism and majoritarianism to the detriment of society and its people has changed, so also must science journalism change – as it already has with many publications, especially in the west – to ensure each news report fact-checks a claim it contains, especially if it is pseudoscientific.

In the same vein, it’s not hard to imagine that journalists are often forced to scatter by the compulsions of an older way of doing journalism, and that they should regroup on the foundations of a new agreement that lets them ignore some events so that they can better dedicate themselves to the coverage of others.

Featured image credit: Татьяна Чернышова/Pexels.

The symmetry incarnations

This post was originally published on October 6, 2012. I recently rediscovered it and decided to republish it with a few updates.

Geometric symmetry in nature is often a sign of unperturbedness, as if nothing has interfered with a natural process and its effects at each step are simply scaled-up or scaled-down versions of each other. For this reason, symmetry is aesthetically pleasing, and often beautiful. Consider, for instance, faces. Symmetry of facial features about the central vertical axis is often translated as the face being beautiful, not just by humans but also by monkeys.

This is only one example of one of the many forms of symmetry’s manifestation. When it involves geometric features, it’s a case of geometrical symmetry. When a process occurs similarly both forward and backward in time, it is temporal symmetry. If two entities that don’t seem geometrically congruent at first sight rotate, move or scale with similar effects on their forms, it is transformational symmetry. A similar definition applies to all theoretical models, musical progressions, knowledge and many other fields besides.

Symmetry-breaking

One of the first (postulated) instances of symmetry is said to have occurred during the Big Bang, when the universe was born. A sea of particles was perturbed 13.75 billion years ago by a high-temperature event, setting up anecdotal ripples in their system, eventually breaking their distribution in such a way that some particles got mass, some charge, some spin, some all of them, and some none of them. This event is known as electroweak symmetry-breaking. Because of the asymmetric properties of the resultant particles, matter as we know it was conceived.

Many invented scientific systems exhibit symmetry in that they allow for the conception of symmetry in the things they make possible. A good example is mathematics. On the real-number line, 0 marks the median. On either side of 0, 1 and -1 are equidistant from 0; 5,000 and -5,000 are equidistant from 0; possibly, ∞ and -∞ are equidistant from 0. Numerically speaking, 1 marks the same amount of something that -1 marks on the other side of 0. Characterless functions built on this system also behave symmetrically on either side of 0.

To many people, symmetry evokes the image of an object that, when cut in half along a specific axis, results in two objects that are mirror-images of each other. Cover one side of your face and place the other side against a mirror, and what a person hopes to see is the other side of the face – despite it being a reflection. Interestingly, this technique was used by neuroscientist V.S. Ramachandran to “cure” the pain of amputees when they tried to move a limb that wasn’t there.

An illustration of V.S. Ramachandran’s mirror-box technique: Lynn Boulanger, an occupational therapy assistant and certified hand therapist, uses mirror therapy to help address phantom pain for Marine Cpl. Anthony McDaniel. Caption and credit: US Navy

Natural symmetry

Symmetry at its best, however, is observed in nature. Consider germination: when a seed grows into a small plant and then into a tree, the seed doesn’t experiment with designs. The plant is not designed differently from the small tree, and the small tree is not designed differently from the big tree. If a leaf is given to sprout from the node richest in minerals on the stem, then it will. If a branch is given to sprout from the node richest in minerals on the trunk, then it will. So is mineral-deposition in the arbor symmetric? It should be if their transportation out of the soil and into the tree is radially symmetric. And so forth…

At times, repeated gusts of wind may push the tree to lean one way or another, shielding the leaves from the force and keeping them from shedding off. The symmetry is then broken, but no matter. The sprouting of branches from branches, and branches from those branches, and leaves from those branches, all follow the same pattern. This tendency to display an internal symmetry is characterised as fractalisation. A well-known example of a fractal geometry is the Mandelbrot set, shown below.

An illustration of recursive self-similarity in the Mandelbrot set. Credit: Cuddlyable3/Wikimedia Commons

If you want to interact with a Mandelbrot set, check out this magnificent visualisation by Paul Neave (defunct now 🙁 ). You can keep zooming in, but at each step, you’ll only see more and more Mandelbrot sets. This set is one of a few exceptional sets that are geometric fractals.

Meta-geometry and Mulliken symbols

It seems like geometric symmetry is the most ubiquitous and accessible example to us. Let’s take it one step further and look at the meta-geometry at play when one symmetrical shape is given an extra dimension. For instance, a circle exists in two dimensions; its three-dimensional correspondent is the sphere. Through such an up-scaling, we are ensuring that all the properties of a circle in two dimensions stay intact in three dimensions, and then we are observing what the three-dimensional shape is.

A circle, thus, becomes a sphere. A square becomes a cube. A triangle becomes a tetrahedron. In each case, the 3D shape is said to have been generated by a 2D shape, and each 2D shape is said to be the degenerate of the 3D shape. Further, such a relationship holds between corresponding shapes across many dimensions, with doubly and triply degenerate surfaces also having been defined.

The three-dimensional cube generates the four-dimensional hypercube, a.k.a. a tesseract. Credit: Vitaly Ostrosablin/Wikimedia Commons, CC BY-SA 3.0

Obviously, there are different kinds of degeneracy, 10 of which the physicist Robert S. Mulliken identified and laid out. These symbols are important because each one defines a degree of freedom that nature possesses while creating entities and this includes symmetrical entities as well. So if a natural phenomenon is symmetrical in n dimensions, then the only way it can be symmetrical in n+1 dimensions also is by transforming through one or many of the degrees of freedom defined by Mulliken.


Symbol and what it denotes:
A: symmetric with respect to rotation around the principal rotational axis (one-dimensional representations)
B: anti-symmetric with respect to rotation around the principal rotational axis (one-dimensional representations)
E: degenerate
Subscript 1: symmetric with respect to a vertical mirror plane perpendicular to the principal axis
Subscript 2: anti-symmetric with respect to a vertical mirror plane perpendicular to the principal axis
Subscript g: symmetric with respect to a center of symmetry
Subscript u: anti-symmetric with respect to a center of symmetry
Prime (‘): symmetric with respect to a mirror plane horizontal to the principal rotational axis
Double prime (”): anti-symmetric with respect to a mirror plane horizontal to the principal rotational axis

Source: LMU Munich


Apart from regulating the perpetuation of symmetry across dimensions, the Mulliken symbols also hint at nature wanting to keep things simple and straightforward. The symbols don’t behave differently for processes moving in different directions, through different dimensions, in different time-periods or in the presence of other objects, etc. The preservation of symmetry by nature is not coincidental. Rather, it is very well-defined.

Anastomosis

Now, if nature desires symmetry, if it is not a haphazard occurrence but one that is well orchestrated if given a chance to be, why don’t we see symmetry everywhere? Why is natural symmetry broken? One answer to this is that it is broken only insofar as it attempts to preserve other symmetries that we cannot observe with the naked eye.

For example, symmetry in the natural order is exemplified by a geological process called anastomosis. This property, commonly of quartz crystals in metamorphic regions of Earth’s crust, allows for mineral veins to form that lead to shearing stresses between layers of rock, resulting in fracturing and faulting. In other terms, geological anastomosis allows materials to be displaced from one location and become deposited in another, offsetting large-scale symmetry in favour of the prosperity of microstructures.

More generally, anastomosis is defined as the splitting of a stream of anything only to reunify sometime later. It sounds simple but it is an exceedingly versatile phenomenon, if only because it happens in a variety of environments and for a variety of purposes. For example, consider Gilbreath’s conjecture. It states that each sequence obtained by repeatedly applying the forward difference operator – i.e. taking the unsigned difference between successive numbers – to the prime numbers always starts with 1. To illustrate:

2 3 5 7 11 13 17 19 23 29 … (prime numbers)

Applying the operator once: 1 2 2 4 2 4 2 4 6 …
Applying the operator twice: 1 0 2 2 2 2 2 2 …
Applying the operator thrice: 1 2 0 0 0 0 0 …
Applying the operator for the fourth time: 1 2 0 0 0 0 0 …

And so forth.
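A quick way to see the conjecture in action is a few lines of Python that hard-code the first several primes and repeatedly apply the unsigned difference operator:

```python
def abs_diffs(seq):
    """One application of the forward-difference operator, taken unsigned."""
    return [abs(b - a) for a, b in zip(seq, seq[1:])]

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
row = primes
for i in range(5):
    row = abs_diffs(row)
    print(f"after {i + 1} application(s):", row)
# Gilbreath's conjecture: every one of these rows begins with 1.
```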

If each line of numbers were to be plotted on a graph, moving upwards each time the operator is applied, then a pattern for the zeros emerges, shown below.

The forest of stunted trees, used to gain more insights into Gilbreath’s conjecture. Credit: David Eppstein/Wikimedia Commons

This pattern is called the forest of stunted trees, as if it were an area populated by growing trees with clearings that are always well-bounded triangles. The numbers from one sequence to the next are anastomosing, parting ways only to come close together after every five lines.

Another example is the vein skeleton on a hydrangea leaf. Both the stunted trees and the hydrangea veins patterns can be simulated using the rule-90 simple cellular automaton that uses the exclusive-or (XOR) function.

Bud and leaves of Hydrangea macrophylla. Credit: Alvesgaspar/Wikimedia Commons, CC BY-SA 3.0
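For the curious, here’s a minimal sketch of that rule-90 automaton in Python: each cell’s next state is the XOR of its two neighbours, and a single live cell grows into the nested, well-bounded triangles that the stunted trees and the hydrangea veins echo.

```python
def rule90(row):
    """Next generation of the rule-90 automaton: each cell becomes the XOR of its two neighbours."""
    padded = [0] + row + [0]
    return [padded[i - 1] ^ padded[i + 1] for i in range(1, len(padded) - 1)]

# Start from a single live cell and print a few generations;
# the live cells trace out the triangular pattern described above.
width = 31
row = [0] * width
row[width // 2] = 1
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = rule90(row)
```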

Nambu-Goldstone bosons

While anastomosis may not have a direct relation with symmetry and only a tenuous one with fractals, its presence indicates a source of perturbation in the system. Why else would the streamlined flow of something split off and then have the tributaries unify, unless possibly to reach out to richer lands? Anastomosis is a sign of the system acquiring a new degree of freedom. By splitting a stream with x degrees of freedom into two new streams each with x degrees of freedom, there are now more avenues through which change can occur.

Particle physics simplifies this scenario by assigning all forces and amounts of energy a particle. Thus, a force is said to be acting when a force-carrying particle is being exchanged between two bodies. Since each degree of freedom also implies a new force acting on the system, it wins itself a particle from a class of particles called the Nambu-Goldstone (NG) bosons. Named for Yoichiro Nambu and Jeffrey Goldstone, the presence of n NG bosons in a system means that, broadly speaking, the system has n degrees of freedom.

How and when an NG boson is introduced into a system is not yet well-understood. In fact, it was only recently that a theoretical physicist named Haruki Watanabe developed a mathematical model that could predict the number of degrees of freedom a complex system could have, given the presence of a certain number of NG bosons. At the most fundamental level, it is understood that when symmetry breaks, an NG boson is born.

The asymmetry of symmetry

That is, when asymmetry is introduced in a system, so is a degree of freedom. This seems intuitive. But at the same time, you would think the reverse is also true: that when an asymmetric system is made symmetric, it loses a degree of freedom. However, this isn’t always the case because it could violate the third law of thermodynamics (specifically, the Lewis-Randall version of its statement).

Therefore, there is an inherent irreversibility, an asymmetry of the system itself: it works fine one way, it doesn’t work fine another. This is just like the split-off streams, but this time, they are unable to reunify properly. Of course, there is the possibility of partial unification: in the case of the hydrangea leaf, symmetry is not restored upon anastomosis but there is, evidently, an asymptotic attempt.

However, it is possible that in some special frames, such as in outer space, where the influence of gravitational forces is very weak, the restoration of symmetry may be complete. Even though the third law of thermodynamics is still applicable here, it comes into effect only with the transfer of energy into or out of the system. In the absence of gravity and other retarding factors, such as distribution of minerals in the soil for acquisition, etc., it is theoretically possible for symmetry to be broken and reestablished without any transfer of energy.

The simplest example of this is of a water droplet floating around. If a small globule of water breaks away from a bigger one, the bigger one becomes spherical quickly. When the seditious droplet joins with another globule, that globule also quickly reestablishes its spherical shape.

Thermodynamically speaking, there is mass transfer but at (almost) 100% efficiency, resulting in no additional degrees of freedom. Also, the force at play that establishes sphericality is surface tension, through which a water body seeks to occupy the shape with the lowest surface area for a given volume. Notice how this shape – the sphere – is incidentally also the one with the most axes of symmetry and the fewest redundant degrees of freedom? Manufacturing such spheres is very hard.

An omnipotent impetus

Perhaps the explanation of the roles symmetry assumes seems regressive: every consequence of it is no consequence but itself all over again (i.e., self-symmetry – and it happened again). Indeed, why would nature deviate from itself? And as it recreates itself with different resources, it lends itself and its characteristics to different forms.

A mountain will be a mountain to its smallest constituents, and an electron will be an electron no matter how many of them you bring together at a location (except when quasiparticles show up). But put together mountains and you have ranges, sub-surface tectonic consequences, a reshaping of volcanic activity because of changes in the crust’s thickness, and a long-lasting alteration of wind and irrigation patterns. Bring together an unusual number of electrons to make up a high-density charge, and you have a high-temperature, high-voltage volume from which violent, permeating discharges of particles could occur – i.e., lightning.

Why should stars, music, light, radioactivity, politics, engineering or knowledge be any different?

Why a pump to move molten metal is awesome

The conversion of one form of energy into another is more efficient at higher temperatures [1]. For example, one of the most widely used components of any system that involves the transfer of heat from one part of the system to another is a device called a heat exchanger. When it’s transferring heat from one fluid to another, the heat exchanger must facilitate the efficient movement of heat between the two media without allowing them to mix.

There are many designs of heat exchangers for a variety of applications but the basic principle is the same. However, they’re all limited by the condition that a given amount of heat carries more entropy – more of “the measure of disorder” – at lower temperatures. In other words, the lower the temperatures at which the exchanger operates, the less of the transferred heat is available to do useful work. This is why it’s desirable to have a medium that can carry a lot of heat per unit volume.
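The textbook way to see why higher temperatures help is the Carnot bound on converting heat into work:

\[ \eta_{\text{max}} = 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}} \]

with both temperatures in kelvin. Heat delivered at 1,200º C (about 1,470 K) and rejected at room temperature (about 300 K) can at best be converted at roughly 80% efficiency; the same heat delivered at 300º C manages only about 48%. This is an idealised ceiling, not the performance of any real exchanger.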

But this is not always possible, for two reasons. First: there must exist a pump that can move such a hot medium from one point in the system to another. This pump must be made of materials that can withstand high temperatures during operation and that don’t react with the medium at those temperatures. Second: one of the more efficient media that can carry a lot of heat is liquid metal, but liquid metals are difficult to pump because of their corrosive nature and high density. Together, these two reasons are why medium temperatures have been limited to around 1,000º C.

Now, an invention by engineers from the US has proposed a solution. They’ve constructed a pump using ceramics. This is really interesting because ceramics have a good reputation for being able to withstand extreme heat (they were part of the US Space Shuttle’s heat shield exposed during atmospheric reentry) but an equally bad reputation for being very brittle [2]. So this means that a ceramic composition of the pump material accords it a natural ability to withstand heat.

In other words, the bigger problem the engineers would have had to solve was keeping the pump from breaking during operation.

Their system consists of a motor, a gearbox, pipes and a reservoir of liquid tin. When the motor is turned on, the pump receives liquid tin from the bottom of the reservoir. Two interlocking gears inside the pump rotate. As the tin flows between the blades, it is compressed into the space between them, creating a pressure difference that sucks in more tin from the reservoir. After the tin moves through the blades, it is let out into another pipe that takes it back to the reservoir.

The blades are made of Shapal, an aluminium nitride ceramic made by the Tokuyama Corporation in Japan with the unique property of being machinable. The pump seals and the pipes are made of graphite. High-temperature pumps usually have pipes made of polymers. Graphite and such polymers are similar in that they’re both very difficult to corrode. But graphite has an upper hand in this context because it can also withstand higher temperatures before it loses its consistency.

Using this setup, the engineers were able to operate the pump continuously for 72 hours at an average temperature of 1,200º C. For the first 60 hours of operation, the flow rate varied between 28 and 108 grams per second (at an rpm in the lower hundreds). According to the engineers’ paper, this corresponds to an energy transfer of 5-20 kW for a vat of liquid tin heated from 300º C to 1,200º C. They extrapolate these numbers to suggest that if the gear diameter and thickness were enlarged from 3.8 cm to 17.1 cm and 1.3 cm to 5.85 cm (resp.) and operated at 1,800 rpm, the resulting heat transfer rate would be 100 MW – a jump of 5,000x from 20 kW and close to the requirements of a utility-scale power plant.
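As a sanity check on those numbers, here’s a rough back-of-the-envelope in Python; the specific heat of liquid tin (about 0.24 J/g·K) is my assumption, not a figure taken from the paper.

```python
# Rough check: heat carried per second = mass flow x specific heat x temperature rise.
cp_tin = 0.24        # J/(g.K), approximate specific heat of liquid tin (assumed)
dT = 1200 - 300      # temperature rise in kelvin (a 1 deg C step equals a 1 K step)

for flow in (28, 108):                    # g/s, the reported range of flow rates
    power_kw = flow * cp_tin * dT / 1000  # W -> kW
    print(f"{flow} g/s -> about {power_kw:.0f} kW")
# prints roughly 6 kW and 23 kW, in the neighbourhood of the paper's 5-20 kW figure
```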

And all of this would be on a tabletop setup. This is the kind of difference having a medium with a high energy density makes.

The engineers say that their choice of temperature at which to run the pump – about 1,200º C – was limited by whatever heaters they had available in their lab. So future versions of this pump could run more cheaply and at higher temperatures by using, say, molten silicon and higher-grade ceramics than Shapal. Such improvements could have an outsize effect in our world because of the energy capacity and transfer improvements they stand to bring to renewable energy storage.

[1] I can attest from personal experience that learning the principles of thermodynamics is easier through application than theory – an idea that my college professors utterly failed to grasp.

[2] The ceramics used to pave the floor of your house and the ceramics used to pad the underbelly of the Space Shuttle are very different. For one, the latter had a foamy internal structure and wasn’t brittle. They were designed and manufactured this way because the ceramics of the Space Shuttle wouldn’t just have to withstand high heat – they would also have to be able to withstand the sudden temperature change as the shuttle dived from the -270º C of space into the 1,500º C of hypersonic shock.

Featured image credit: Erdenebayar/pixabay.

Thanksgiving turkey and drinking water

Saw this 20-something-second long video going around on Facebook and Twitter:

By itself, the video, produced by the US Consumer Product Safety Commission, tells me nothing apart from what I shouldn’t be doing: no reasons or explanations. Versions of the video were also carried by The Guardian, USA Today, Reuters, NBC News, Washington Post and Scroll. (Disclosure: I work for The Wire, which competes with Scroll.) So I went looking – and found the answer on mental_floss:

The instant the frozen food hits the oil, the ice crystals melt, then momentarily sink. This exerts an upwards force on the oil. An instant later, these small sinking bubbles of water boil, expanding as they heat up and adding further force to the oil. This bubbling and forcing the oil upwards creates an aerosol of boiling oil and air violently shooting up out of the pan and towards the other parts of the kitchen.

I believe this also has a technical term, though it seems forced: ‘boiling liquid expanding vapour explosion’. The explosiveness arises from a process called flash evaporation (*junior year thermodynamics memories*) – when the pressure surrounding a liquid is suddenly and significantly reduced, some of the liquid instantly turns, or ‘flashes’, into vapour (Business Insider has the video explainer). Because gases occupy more volume than liquids, flashing can also be interpreted as an explosive expansion.

One useful application: In most desalination plants around the world, salty water is passed through a throttling valve that converts some of it into salt-free vapour, which is condensed into potable water. The remaining salty water is then sent through another throttling valve at a lower pressure to repeat the process. This is called multi-stage flash distillation. Other applications: fire extinguishers and pressure cookers being able to let off steam. Some unfortunate ‘applications’: boiler explosions and rapidly worsening accidents involving tankers.
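To get a feel for the numbers per stage, here’s a small illustrative calculation; the property values are rounded textbook figures for water and the temperatures are made up for the example, so treat it as a sketch rather than plant data.

```python
# Fraction of a brine stream that flashes to vapour when it enters a stage whose
# pressure corresponds to a lower saturation temperature: the heat released by the
# liquid cooling down goes into vaporising a small fraction of it.
cp_water = 4.2     # kJ/(kg.K), specific heat of water, approximate
h_fg = 2260.0      # kJ/kg, latent heat of vaporisation, approximate

def flash_fraction(t_in_c, t_stage_c):
    """Approximate fraction of the incoming stream that flashes in one stage."""
    return cp_water * (t_in_c - t_stage_c) / h_fg

print(flash_fraction(90, 80))  # ~0.019: a 10 deg C drop flashes only ~2% of the stream,
                               # which is why the process needs many stages in series
```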

Relevant to the case of the frozen turkey: water boils at 100º C and oil boils at 450º C – both at atmospheric pressure. So when the turkey is dipped into a vat of very hot oil, the ice crystals falling off into the container sink beneath the oil but begin to boil on the way because of the oil’s temperature. Because water is denser, the boiling water remains trapped under its hot oil ceiling, although it’s also expanding because of the heat. At one point, the ceiling ruptures and the water flashes out, carrying some oil with it. The real problem begins when the oil is splashed into the fire below the container. Then, as mental_floss writes, “in a matter of seconds after putting your dinner on, your dinner has destroyed a large amount of your property and a significant portion of your, well, life.”

Featured image credit: toasty/Flickr, CC BY 2.0.

The Symmetry Incarnations – Part I

Symmetry in nature is a sign of unperturbedness. It means nothing has interfered with a natural process, and that its effects at each step are simply scaled-up or scaled-down versions of each other. For this reason, symmetry is aesthetically pleasing, and often beautiful. Consider, for instance, faces. Symmetry of facial features about the central vertical axis is often translated as the face being beautiful – not just by humans but also monkeys.

However, this is just an example of one of the many forms of symmetry’s manifestation. When it involves geometric features, it’s a case of geometrical symmetry. When a process occurs similarly both forward and backward in time, it is temporal symmetry. If two entities that don’t seem geometrically congruent at first sight rotate, move or scale with similar effects on their forms, it is transformational symmetry. A similar definition applies to all theoretical models, musical progressions, knowledge, and many other fields besides.

Symmetry-breaking

One of the first (postulated) instances of symmetry is said to have occurred during the Big Bang, when the observable universe was born. A sea of particles was perturbed 13.75 billion years ago by a high-temperature event, setting up anecdotal ripples in their system, eventually breaking their distribution in such a way that some particles got mass, some charge, some a spin, some all of them, and some none of them. In physics, this event is called spontaneous, or electroweak, symmetry-breaking. Because of the asymmetric properties of the resultant particles, matter as we know it was conceived.

Many invented scientific systems exhibit symmetry in that they allow for the conception of symmetry in the things they make possible. A good example is mathematics – yes, mathematics! On the real-number line, 0 marks the median. On either side of 0, 1 and -1 are equidistant from 0, 5,000 and -5,000 are equidistant from 0; possibly, ∞ and -∞ are equidistant from 0. Numerically speaking, 1 marks the same amount of something that -1 marks on the other side of 0. Not just that: characterless functions built on this system also behave symmetrically on either side of 0.

To many people, symmetry evokes an image of an object that, when cut in half along a specific axis, results in two objects that are mirror-images of each other. Cover one side of your face and place the other side against a mirror, and what a person hopes to see is the other side of the face – despite it being a reflection (interestingly, this technique was used by neuroscientist V.S. Ramachandran to “cure” the pain of amputees when they tried to move a limb that wasn’t there). Like this, there are symmetric tables, chairs, bottles, houses, trees (although uncommon), basic geometric shapes, etc.

A demonstration of V.S. Ramachandran’s mirror-technique

Natural symmetry

Symmetry at its best, however, is observed in nature. Consider germination: when a seed grows into a small plant and then into a tree, the seed doesn’t experiment with designs. The plant is not designed differently from the small tree, and the small tree is not designed differently from the big tree. If a leaf is given to sprout from the mineral-richest node on the stem then it will; if a branch is given to sprout from the mineral-richest node on the trunk then it will. So, is mineral-deposition in the arbor symmetric? It should be if their transportation out of the soil and into the tree is radially symmetric. And so forth…

At times, repeated gusts of wind may push the tree to lean one way or another, shielding the leaves from the force and keeping them from shedding off. The symmetry is then broken, but no matter. The sprouting of branches from branches, and branches from those branches, and leaves from those branches all follow the same pattern. This tendency to display an internal symmetry is characterized as fractalization. A well-known example of a fractal geometry is the Mandelbrot set, shown below.

If you want to interact with a Mandelbrot set, check out this magnificent visualization by Paul Neave. You can keep zooming in, but at each step, you’ll only see more and more Mandelbrot sets. Unfortunately, this set is one of a few exceptional sets that are geometric fractals.

Meta-geometry & Mulliken symbols

Now, it seems like geometric symmetry is the most ubiquitous and accessible example to us. Let’s take it one step further and look at the “meta-geometry” at play when one symmetrical shape is given an extra dimension. For instance, a circle exists in two dimensions; its three-dimensional correspondent is the sphere. Through such an up-scaling, we’re ensuring that all the properties of a circle in two dimensions stay intact in three dimensions, and then we’re observing what the three-dimensional shape is.

A circle, thus, becomes a sphere; a square becomes a cube; a triangle becomes a tetrahedron (For those interested in higher-order geometry, the tesseract, or hypercube, may be of special interest!). In each case, the 3D shape is said to have been generated by a 2D shape, and each 2D shape is said to be the degenerate of the 3D shape. Further, such a relationship holds between corresponding shapes across many dimensions, with doubly and triply degenerate surfaces also having been defined.

The tesseract (a.k.a. hypercube)

Obviously, there are different kinds of degeneracy, 10 of which the physicist Robert S. Mulliken identified and laid out. These symbols are important because each one defines a degree of freedom that nature possesses while creating entities, and this includes symmetrical entities as well. In other words, if a natural phenomenon is symmetrical in n dimensions, then the only way it can be symmetrical in n+1 dimensions also is by transforming through one or many of the degrees of freedom defined by Mulliken.

Robert S. Mulliken (1896-1986)

Apart from regulating the perpetuation of symmetry across dimensions, the Mulliken symbols also hint at nature wanting to keep things simple and straightforward. The symbols don’t behave differently for processes moving in different directions, through different dimensions, in different time-periods or in the presence of other objects, etc. The preservation of symmetry by nature is not a coincidental design; rather, it’s very well-defined.

Anastomosis

Now, if that’s the case – if symmetry is held desirable by nature, if it is not a haphazard occurrence but one that is well orchestrated if given a chance to be – why don’t we see symmetry everywhere? Why is natural symmetry broken? Is all of the asymmetry that we’re seeing today the consequence of that electro-weak symmetry-breaking phenomenon? It can’t be because natural symmetry is still prevalent. Is it then implied that what symmetry we’re observing today exists in the “loopholes” of that symmetry-breaking? Or is it all part of the natural order of things, a restoration of past capabilities?

One of the earliest symptoms of symmetry-breaking was the appearance of the Higgs mechanism, which gave mass to some particles but not to others. The hunt for its residual particle, the Higgs boson, was spearheaded by the Large Hadron Collider (LHC) at CERN.

The last point – of natural order – is allegorical with, as well as is exemplified by, a geological process called anastomosis. This property, commonly of quartz crystals in metamorphic regions of Earth’s crust, allows for mineral veins to form that lead to shearing stresses between layers of rock, resulting in fracturing and faulting. Philosophically speaking, geological anastomosis allows for the displacement of materials from one location and their deposition in another, thereby offsetting large-scale symmetry in favor of the prosperity of microstructures.

Anastomosis, in a general context, is defined as the splitting of a stream of anything only to rejoin sometime later. It sounds really simple but it is an exceedingly versatile phenomenon, if only because it happens in a variety of environments and for an equally large variety of purposes. For example, consider Gilbreath’s conjecture. It states that each series of prime numbers to which the forward difference operator has been applied always starts with 1. To illustrate:

2 3 5 7 11 13 17 19 23 29 … (prime numbers)

Applying the operator once: 1 2 2 4 2 4 2 4 6 … (successive differences between numbers)
Applying the operator twice: 1 0 2 2 2 2 2 2 …
Applying the operator thrice: 1 2 0 0 0 0 0 …
Applying the operator for the fourth time: 1 2 0 0 0 0 0 …

And so forth.

If each line of numbers were to be plotted on a graph, moving upwards each time the operator is applied, then a pattern for the zeros emerges, shown below.

This pattern is called that of the stunted trees, as if it were a forest populated by growing trees with clearings that are always well-bounded triangles. The numbers from one sequence to the next are anastomosing, only to come close together after every five lines! Another example is the vein skeleton on a hydrangea leaf. Both the stunted trees and the hydrangea veins patterns can be simulated using the rule-90 simple cellular automaton that uses the exclusive-or (XOR) function.

Nambu-Goldstone bosons

Now, what does this have to do with symmetry, you ask? While anastomosis may not have a direct relation with symmetry and only a tenuous one with fractals, its presence indicates a source of perturbation in the system. Why else would the streamlined flow of something split off and then have the tributaries unify, unless possibly to reach out to richer lands? Either way, anastomosis is a sign of the system acquiring a new degree of freedom. By splitting a stream with x degrees of freedom into two new streams each with x degrees of freedom, there are now more avenues through which change can occur.

Water entrainment in an estuary is an example of a natural asymptote or, in other words, a system’s “yearning” for symmetry

Particle physics simplifies this scenario by assigning all forces and amounts of energy a particle. Thus, a force is said to be acting when a force-carrying particle is being exchanged between two bodies. Since each degree of freedom also implies a new force acting on the system, it wins itself a particle, actually a class of particles called the Nambu-Goldstone (NG) bosons. Named for Yoichiro Nambu and Jeffrey Goldstone, who hypothesised their existence, the presence of n NG bosons in a system means that, broadly speaking, the system has n degrees of freedom.

Jeffrey Goldstone (L) & Yoichiro Nambu

How and when an NG boson is introduced into a system is not yet a well-understood phenomenon theoretically, let alone experimentally! In fact, it was only recently that a mathematical model was developed by a theoretical physicist at UC Berkeley, Haruki Watanabe, capable of predicting how many degrees of freedom a complex system could have given the presence of a certain number of NG bosons. However, at the most basic level, it is understood that when symmetry breaks, an NG boson is born!

The asymmetry of symmetry

In other words, when asymmetry is introduced into a system, so is a degree of freedom. This seems only intuitive. At the same time, you’d think the converse is also true: that when an asymmetric system is made symmetric, it loses a degree of freedom – but is this always the case? I don’t think so, because then it would violate the third law of thermodynamics (specifically, the Lewis-Randall statement of it). Therefore, there is an inherent irreversibility, an asymmetry of the system itself: it works fine one way, but not the other – just like the split-off streams, except that this time they are unable to reunify properly. Of course, there is the possibility of partial unification: in the case of the hydrangea leaf, symmetry is not restored upon anastomosis, but there is, evidently, an asymptotic attempt.

Each piece of a broken mirror-glass reflects an object entirely, shedding all pretensions of continuity. The most intriguing mathematical analogue of this phenomenon is the Banach-Tarski paradox, which, simply put, takes symmetry to another level.

However, it is possible that in some special settings, such as outer space, where the influence of gravitational forces is weak if not entirely absent, the restoration of symmetry may be complete. Even though the third law of thermodynamics still applies here, it comes into effect only with the transfer of energy into or out of the system. In the absence of gravity (and, thus, friction) and other retarding factors, such as the distribution of minerals available in the soil, symmetry may be broken and reestablished without any transfer of energy.

The simplest example of this is a water droplet floating around. If a small globule of water breaks away from a bigger one, the bigger one quickly becomes spherical again; when the seditious droplet joins with another globule, that globule also reestablishes its spherical shape. Thermodynamically speaking, there is mass transfer, but at (almost) 100% efficiency, resulting in no additional degrees of freedom. Also, the force at play that establishes sphericality is surface tension, through which a water body seeks to occupy the shape with the lowest surface area for a given volume (notice how that shape is incidentally also the one with the most axes of symmetry – or, put another way, no redundant degrees of freedom? Creating such spheres is hard!).
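
To make that claim concrete, here is a quick back-of-the-envelope comparison (my own illustration, not drawn from any source cited here). For a sphere of volume V,

V = \frac{4}{3}\pi r^3 \quad\Rightarrow\quad A_{\text{sphere}} = 4\pi r^2 = (36\pi)^{1/3}\, V^{2/3},

while a cube of the same volume has A_{\text{cube}} = 6\, V^{2/3} \approx 1.24\, A_{\text{sphere}}. By the isoperimetric inequality, the sphere is the shape with the least surface area for a given volume – exactly the configuration surface tension favours.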

A godless, omnipotent impetus

Perhaps the explanation of the roles symmetry assumes seems regressive: every consequence of it is no consequence but itself all over again (self-symmetry – there, it happened again). This only seems like a natural consequence of anything that is… well, naturally conceived. Why would nature deviate from itself? Nature, it seems, isn’t a deity in that it doesn’t create. It only recreates itself with different resources, lending itself and its characteristics to different forms.

A mountain will be a mountain to its smallest constituents, and an electron will be an electron no matter how many of them you bring together at a location. But put mountains together and you have ranges, sub-surface tectonic consequences, a reshaping of volcanic activity because of changes in the crust’s thickness, and a long-lasting alteration of wind and irrigation patterns. Bring together an unusually large number of electrons to make up a high-density charge, and you have a high-temperature, high-voltage volume from which violent, permeating discharges of particles can occur – i.e., lightning. Why should stars, music, light, radioactivity, politics, manufacturing or knowledge be any different?

With this, the introduction to symmetry concludes. Yes, there is more, much more…

xkcd #849

Graphene the Ubiquitous

Every once in a while, a (revolutionary-in-hindsight) scientific discovery is made that’s at first treated as an anomaly, and then verified. Once established as a credible find, it goes through a period where it is subject to great curiosity and intriguing reality checks – whether it was a one-time thing, whether it can actually be reproduced under different circumstances at different locations, and whether it has properties that can be tracked through different electrical, mechanical and chemical conditions.

After surviving such tests, the discovery enters a period of dormancy: while researchers look for ways to apply their find’s properties to solve real-world problems, science must go on, and it does. What starts as a gentle trickle of academic papers soon cascades into a shower, and suddenly one finds an explosion of interest in the subject against a background of “old” research. Everybody starts to recognize the find’s importance and realize its impending ubiquity – inside laboratories as well as outside. Eventually, this accumulating interest, and the growing conviction that a better, “enhanced” world of engineering is possible, drives investment – first private, then public, then more private again.

Enter graphene. Personally, I am very excited by graphene because of its extremely simple structure: it’s a planar, one-atom-thick arrangement of carbon atoms in a honeycomb lattice. That’s it; however, the wonderful capabilities it has stacked up in the eyes of engineers and physicists worldwide since 2004, the year of its experimental discovery, are mind-blowing. In the fields of electronics, mensuration, superconductivity, biochemistry and condensed-matter physics, the attention it currently draws is at a historic high.

Graphene’s star power, so to speak, lies in its electronic and crystalline quality. More than 70 years ago, the physicist Lev Landau argued that lower-dimensional crystal lattices, such as graphene’s, are thermodynamically unstable: at some fixed temperature, the distances through which the energetic atoms vibrate would exceed the interatomic distance, and the lattice would break down into islands, a process called “dissolving”. Graphene defied this argument: its extremely small interatomic distances translate into improved electron-sharing and strong covalent bonds that don’t break even at elevated temperatures.

As Andre Geim and Konstantin Novoselov, experimental discoverers of graphene and joint winners of the 2010 Nobel Prize in physics, wrote in 2007:

The relativistic-like description of electron waves on honeycomb lattices has been known theoretically for many years, never failing to attract attention, and the experimental discovery of graphene now provides a way to probe quantum electrodynamics (QED) phenomena by measuring graphene’s electronic properties.

(On a tabletop for cryin’ out loud.)

What’s more, because graphene localizes electrons faster than conventional devices can, using lasers to activate the photoelectric effect in it resulted in electric currents (i.e., moving electrons) forming within picoseconds: photons in the laser pulse knocked out electrons, each of which then traveled to the nearest location in the lattice where it could settle down, leaving a “hole” in its wake that would pull in the next electron, and so forth. Because of this alone, graphene could make for an excellent photodetector, capable of picking up on small “amounts” of electromagnetic radiation quickly.

An enhanced current-generation rate can also be read as a better electron-transfer rate, with big implications for artificial photosynthesis. The conversion of carbon dioxide to formic acid requires a catalyst that operates in the visible range to provide electrons to an enzyme it is coupled with; the enzyme then reacts with the carbon dioxide to yield the acid. A team of South Korean scientists observed in early July that graphene played the role of that catalyst more efficiently than its peers in the visible range of the electromagnetic spectrum, while also offering a larger surface area over which electron transfer could occur.

Another potential area of application is in the design and development of non-volatile magnetic memories for more efficient computers. A computer usually has two kinds of memory: a faster, volatile memory that can store data only when connected to a power source, and a non-volatile memory that retains data even when the power is switched off. A lot of the power consumed by computers is spent transferring data between these two memories during operation, which leads to an undesirable gap between a computer’s optimum efficiency and its operational efficiency. To address this, a Singaporean team of scientists hit upon using two electrically conducting films separated by an insulating layer, developing a magnetic resistance between them when a spin-polarized electric field is applied.

The resistance is highest when the magnetic fields in the two films are anti-parallel (i.e., pointing in opposite directions) and lowest when they are parallel. This sandwich arrangement is subsequently divided into cells, with each cell storing data in its magnetic resistance. For maximal data storage, the fields would have to be anti-parallel and the films’ material highly spin-polarizable. Here again, graphene was found to be a suitable material. In much the same vein, this wonder of an allotrope could also have a role to play in replacing existing tunnel-junction materials such as aluminium oxide and magnesium oxide, thanks to its lower electrical resistance per unit area, absence of surface defects, suppression of interdiffusion at interfaces, and uniform thickness.

In essence, graphene doesn’t just replace existing materials to enhance a product’s (or process’s) mechanical and electrical properties; it also brings an opportunity to redefine what the product can do and what it could evolve into. In this regard, it far surpasses existing results in materials-engineering research: instead of forging swords, scientists working with graphene can now forge the battle itself. This isn’t surprising at all considering graphene’s properties are most effective in nano-electromechanical applications (there has even been talk of a graphene-based room-temperature superconductor). More precise measurements of these properties should open up a trove of new fields, and point to possible hiding places of similar materials.