How does a fan work?

Everywhere I turn, all the talk is about the coronavirus, and it’s exhausting because I already deal with news of the coronavirus as part of my day-job. It’s impossible to catch people having conversations about anything else at all. I don’t blame them, of course, but it’s frustrating nonetheless.

I must admit I relished the opportunity to discuss some electrical engineering and power-plant management when Prime Minister Narendra Modi announced the nine-minute power shutdown event on April 5. So now, to take a break from public health and epidemiology, as well as to remember that a world beyond the coronavirus – and climate change and all the other Other Problems out there – exists, I’ve decided to make sense of how a fan works.

Yes, a household fan, of the kind commonly found in houses in the tropics that have electricity supply and whose members have been able to afford the few thousand rupees for the device. The fan’s ubiquity is a testament to how well we have understood two distinct parts of nature: electromagnetic interactions and fluid flow.

When you flick the switch, a fan comes on, turning about faster and faster until it has reached the speed you’ve set on the regulator, and a few seconds later, you feel cooler. This simple description reveals four distinct parts: the motor inside the fan, the regulator, the blades and the air. Let’s take them one at a time.

The motor inside the fan is an induction motor. It has two major components: the rotor, which is the part that rotates, and the stator, which is the part that remains stationary. All induction motors use alternating current to power themselves, but the roles of the rotor and the stator are easier to grasp in a direct-current (DC) motor, simply because DC motors are simpler: you can understand a lot about the underlying principles just by looking at one.

Consider an AA battery with a wire connecting its cathode to its anode. A small current will flow through the wire due to the voltage provided by the battery. Now, make a small break in this wire and attach another piece of wire there, bent in the shape of a rectangle, like so:

Next, place the rectangular loop in a magnetic field, such as by placing a magnet’s north pole to one side and a south pole to another:

When a current flows through the loop, it develops a magnetic field around itself. The idea that ‘like charges repel’ applies to magnetic charges as well (borne out through Lenz’s law; we’ll come to that in a bit), so if you orient the external magnetic field right, the loop’s magnetic field could repel it and exert a force on the wire to flip over. And once it has flipped over, the repelling force goes away and the loop doesn’t have to flip anymore.

But we can’t have that. We want the loop to keep flipping over, since that’s how we get rotational motion. We also don’t want the loop to lose contact with the circuit as it flips. To fix both these issues, we add a component called a split-ring commutator at the junction between the circuit and the rectangular loop.

Credit: http://hyperphysics.phy-astr.gsu.edu/hbase/magnetic/comtat.html

The commutator consists of two separate pieces of copper attached to the loop. Each piece brushes against a block of graphite connected to the circuit. When the loop has flipped over once, the commutator ensures that it’s still connected to the circuit. However, the difference is that the loop’s endpoints now carry current in the opposite direction, producing a magnetic field oriented the other way. But because the loop has flipped over, the new field is still opposed to the external field, and the loop is forced to flip once more. This way, the loop keeps rotating.

Our DC motor is now complete. The stator is the external magnetic field, because it does not move; the rotor is the rectangular loop, because it rotates.

A DC electric motor with a split-ring commutator (light blue). The green arrows depict the direction of force exerted by the magnetic field on the current carrying loop. Credit: Lookang/Wikimedia Commons, CC BY-SA 3.0

In an induction motor, like in a DC motor, the stator is a magnet. When a direct current is passed through it, the stator generates a steady magnetic field around itself. When an alternating current is passed through it, however, the magnetic field itself rotates because the current is constantly changing direction.

Alternating current is typically generated and supplied in three phases. The stator of an induction motor is really a ring of electromagnets, divided into three groups, each subtending an angle of 120º around the stator. When the three-phase current is passed through the stator, each phase magnetises one group of magnets in sequence. So as the phases alternate, one set of magnets is magnetised at a time, going round and round. This gives rise to the rotating magnetic field. (In a DC motor, the direction of the direct current is mechanically flipped – i.e. reoriented through space by 180º – so the field flips by 180º at a time.)
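
To see how the three phases conspire to produce one smoothly rotating field, here is a minimal numerical sketch in Python. The coil orientations, frequency and amplitudes are illustrative, not taken from any real motor; the point is only that the three coil groups' field vectors always sum to a vector of constant strength whose direction sweeps around the stator.

```python
import math

# Three stator coil groups, oriented 120 degrees apart in space.
coil_angles = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]

def net_field(t, f=50.0):
    """Sum the field vectors of three coils fed with three-phase AC.

    Each coil produces a field along its own axis whose strength varies
    sinusoidally in time; the three currents are 120 degrees out of phase.
    """
    omega = 2 * math.pi * f
    bx = by = 0.0
    for k, theta in enumerate(coil_angles):
        strength = math.cos(omega * t - 2 * math.pi * k / 3)  # current in phase k
        bx += strength * math.cos(theta)
        by += strength * math.sin(theta)
    return bx, by

# Sample a few instants within one 50 Hz cycle: the magnitude stays constant
# (1.5x a single coil's peak) while the direction sweeps around -- a rotating field.
for t in [0.000, 0.005, 0.010, 0.015]:
    bx, by = net_field(t)
    print(f"t = {t:.3f} s  |B| = {math.hypot(bx, by):.2f}  "
          f"angle = {math.degrees(math.atan2(by, bx)):7.1f} deg")
```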

A stator of ringed electromagnets produces a rotating magnetic field as a sum of magnetic vectors from three phase coils. Caption and credit: Mtodorov_69/Wikimedia Commons, CC BY-SA 3.0

The rotor in an induction motor consists of electrical wire coiled around a ring of steel. As the stator’s magnetic field comes alive and begins to rotate, the field ‘cuts’ across the coiled wire and induces a current in it. This current in turn produces a magnetic field coiled around the wires, called the rotor’s magnetic field.

In 1834, a Russian physicist named Heinrich Emil Lenz found that magnetic fields also have their own version of Newton’s third law. Called Lenz’s law, it states that if a magnetic field ‘M’ induces a current in a wire, and the current creates a secondary magnetic field ‘F’, M and F will oppose each other.

Similarly, the stator’s magnetic field will repel the rotor’s magnetic field, causing the former to push on the latter. This force in turn causes the rotor to rotate.

We’ve got the fan started, but the induction motor has more to offer.

The alternating current passing through the stator will constantly push the rotor to rotate faster. However, we often need the fan to spin at a specific speed. To balance the two, we use a regulator. The simplest regulator is a length of wire of suitably high resistance that reduces the voltage between the source and the stator, reducing the amount of power reaching the stator. However, if the fan needs to spin very slowly, such a regulator must have very high resistance, and in turn will produce a lot of heat. To overcome this problem, modern regulators are capacitors, not resistors. Because an ideal capacitor drops voltage without dissipating real power, capacitor regulators run much cooler, and they also allow for finer speed control.
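
Here is a rough sketch of why the capacitor runs cooler: it drops voltage through reactance rather than resistance, so an ideal capacitor dissipates no real power at all. The supply voltage, the motor impedance and the component values below are made-up, illustrative figures (the motor is treated crudely as a pure resistance).

```python
import math

# Illustrative single-phase supply and fan-motor figures (not from a real fan).
V_supply = 230.0       # volts (RMS)
f = 50.0               # hertz
fan_impedance = 300.0  # ohms, the motor treated crudely as a pure resistance

def series_resistor(R):
    """Current delivered and heat dissipated when a resistor drops part of the voltage."""
    current = V_supply / (R + fan_impedance)
    heat = current ** 2 * R          # real power burnt in the regulator itself
    return current, heat

def series_capacitor(C):
    """Current delivered and heat dissipated when a capacitor's reactance drops the voltage."""
    Xc = 1.0 / (2 * math.pi * f * C)               # capacitive reactance, ohms
    current = V_supply / math.hypot(fan_impedance, Xc)
    heat = 0.0                                     # an ideal capacitor dissipates nothing
    return current, heat

# Compare two regulators that deliver roughly the same current to the motor.
i_r, q_r = series_resistor(300.0)
i_c, q_c = series_capacitor(6.1e-6)  # ~520 ohm of reactance at 50 Hz, chosen to match the current
print(f"resistor : {i_r*1000:.0f} mA to the fan, {q_r:.1f} W wasted as heat")
print(f"capacitor: {i_c*1000:.0f} mA to the fan, {q_c:.1f} W wasted as heat")
```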

There is another constraint on the speed. At no point can the rotor develop so much momentum that the stator’s magnetic field no longer induces a useful current in the rotor’s coils. (This is what happens in a generator: the rotor becomes the ‘pusher’, imparting energy to the stator that then feeds power into the grid.) That is, in an induction motor, the rotor must rotate slower than the stator’s rotating magnetic field.
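
The gap between the two speeds is called slip, and it is a one-line calculation. The mains frequency, pole count and rotor speed below are assumed, typical figures rather than measurements from any specific fan.

```python
# Synchronous speed of the stator's rotating field and the rotor's 'slip'.
# Figures below are illustrative (50 Hz mains, a 4-pole motor, an assumed rotor speed).
supply_frequency = 50      # Hz
poles = 4                  # number of magnetic poles on the stator

sync_speed = 120 * supply_frequency / poles      # rpm of the rotating field
rotor_speed = 1440                               # assumed rotor speed, rpm
slip = (sync_speed - rotor_speed) / sync_speed   # fraction by which the rotor lags

print(f"field rotates at {sync_speed:.0f} rpm, rotor at {rotor_speed} rpm")
print(f"slip = {slip:.1%}  (must stay above zero for the motor to keep inducing current)")
```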

Finally, the rotor itself – made of steel – does not start out magnetised. If the steel is not at all magnetised when the stator’s magnetic field comes on, the rotor’s coils will first need to generate enough of a magnetic field to penetrate the steel. Only then can the steel rotor begin to move. This requirement gives rise to the biggest downside of induction motors: each motor spends about a fifth of the current it draws just on magnetising the rotor.

Thus, we come to the third part: the blades. You’ve probably noticed that your fan’s blades accumulate dust more along one edge than the other. This is because the blades are slightly curved down, in a shape that aerodynamics engineers call an aerofoil or airfoil. When air flows onto a surface, like the side of a building, some of the air ‘bounces’ off, and the surface experiences an equal and opposite reaction – a push. The rest of the air drags on the surface, akin to friction.

Airfoils are surfaces specifically designed to be ‘attacked’ by air such that they maximise lift and minimise drag. The most obvious example is an airplane wing. An engine attached to the wing provides thrust, motoring the vehicle forward. As the wing cuts through the air, air flows over and under it, generating both lift and drag. But the wing’s shape is optimised to extract as much lift as possible, to push the airplane up into the air.

Examples of airfoils. ULM stands for ultralight motorised aircraft. Credit: Oliver Cleynen/Wikimedia Commons

Engineers derive this shape using two equations. The first – the continuity equation – states that if a fluid passes through a wider cross-section at some speed, it will subsequently move faster through a narrower cross-section. The second – known as Bernoulli’s principle – stipulates that at all times, the sum of a fluid’s kinetic energy (speed), potential energy (pressure) and internal energy (the energy of the random motion of the fluid’s constituent molecules) must be constant. So if a fluid speeds up, it will compensate by, say, exerting lower pressure.
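
Here is a small worked example of the two equations together, with made-up numbers for the cross-sections and the incoming speed.

```python
# A worked example of the continuity equation plus Bernoulli's principle
# for an incompressible stream of air (illustrative numbers only).
rho = 1.2           # air density, kg/m^3
A1, v1 = 0.10, 3.0  # wide cross-section (m^2) and the speed there (m/s)
A2 = 0.05           # narrower cross-section, m^2

# Continuity: the same volume of fluid per second crosses both sections.
v2 = v1 * A1 / A2

# Bernoulli (no height change): p + 0.5*rho*v^2 is the same at both sections,
# so the faster-moving air must be at lower pressure.
pressure_drop = 0.5 * rho * (v2**2 - v1**2)

print(f"speed rises from {v1} m/s to {v2} m/s through the narrower section")
print(f"pressure there is lower by {pressure_drop:.1f} Pa")
```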

So if an airfoil’s leading edge – the part that sweeps into the air – is broader than its trailing edge – the part from which the air leaves – the air will leave faster while exerting more lift. A fan’s blades, of course, are fixed in place and can’t lift off, so to conserve momentum the air exits with greater velocity instead.

When you flick a switch, you effectively set this ingenious combination of electromagnetic and aerodynamic engineering in motion, whipping the air about in your room. However, the fan doesn’t cool the air. The reason you feel cooler is that the fan circulates the air through your room, motivating more and more air particles to come in contact with your warm skin and carry away a little bit of heat. That is, you just lose heat by convection.

All of this takes only 10 seconds – but it took humankind over a century of research, numerous advancements in engineering, millions of dollars in capital and operational expenses, an efficient, productive and equitable research culture, and price regulation by the state as well as market forces to make it happen. Such is the price of ubiquity and convenience.

Chromodynamics: Gluons are just gonzo

One of the more fascinating bits of high-energy physics is the branch called quantum chromodynamics (QCD). Don’t let the big name throw you off: it deals with a bunch of elementary particles that have a property called colour charge. And one of these particles makes a mess of this branch of physics because of its colour charge – so much so that it participates in the very story it is trying to shape. What could be more gonzo than this? Hunter S. Thompson would have been proud.

Just as electrons have an electric charge, the particles studied by QCD have a colour charge. It doesn’t correspond to a colour of any kind; it’s just a funky name.

(Richard Feynman wrote about this naming convention in his book, QED: The Strange Theory of Light and Matter (p. 163, 1985): “The idiot physicists, unable to come up with any wonderful Greek words anymore, call this type of polarization by the unfortunate name of ‘color,’ which has nothing to do with color in the normal sense.”)

The fascinating thing about these QCD particles is that they exhibit a property called colour confinement. It means that particles with colour charge can’t ever be isolated. They’re always to be found only in pairs or bigger clumps. They can be isolated in theory if the clumps are heated past the Hagedorn temperature – around 2 trillion kelvin – but the bigness of this number has kept that route far out of everyday reach. They can also be isolated in a quark-gluon plasma, a superhot, superdense state of matter that has been created fleetingly in particle physics experiments like the Large Hadron Collider. The particles in this plasma quickly collapse to form bigger particles, restoring colour confinement.

There are two kinds of particles that are colour-confined: quarks and gluons. Quarks come together to form bigger particles called mesons and baryons. The aptly named gluons are the particles that ‘glue’ the quarks together.

The force that acts between quarks is called the strong nuclear force. But saying the force simply ‘acts’ is a little misleading: the gluons actually mediate the strong nuclear force. A physicist would say that when two quarks exchange gluons, the quarks are being acted on by the strong nuclear force.

Because protons and neutrons are also made up of quarks and gluons, the strong nuclear force holds the nucleus together in all the atoms in the universe. Breaking this force releases enormous amounts of energy – like in the nuclear fission that powers atomic bombs and the nuclear fusion that powers the Sun. In fact, 99% of a proton’s mass comes from the energy of the strong nuclear force. The quarks contribute the remaining 1%; gluons are massless.
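
The arithmetic behind that 99% figure is simple enough to check, using ballpark quark masses (the exact values depend on the scheme used to define them; the figures below are rough).

```python
# Rough mass budget of a proton (two up quarks + one down quark), in MeV/c^2.
# Quark masses depend on how they are defined; these are ballpark 'current' masses.
m_up, m_down = 2.2, 4.7       # MeV/c^2
m_proton = 938.3              # MeV/c^2

quark_share = 2 * m_up + m_down
binding_share = m_proton - quark_share   # the rest is energy of the strong interaction

print(f"quarks contribute ~{quark_share:.1f} MeV, i.e. {quark_share/m_proton:.1%} of the proton")
print(f"the remaining {binding_share/m_proton:.1%} is energy of the strong interaction")
```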

When you pull two quarks apart, you’d think the force between them will reduce. It doesn’t; it actually increases. This is very counterintuitive. For example, the gravitational force exerted by Earth drops off the farther you get away from it. The electromagnetic force between an electron and a proton decreases the more they move apart. But with the strong nuclear force, the force between the two particles it acts on actually increases as they move apart. Frank Wilczek called this a “self-reinforcing, runaway process”. This behaviour of the force is what makes colour confinement possible.

However, in 1973, Wilczek, David Gross and David Politzer found that this build-up of strength happens only as the separation approaches around 1 fermi (0.000000000000001 metres, roughly the size of a proton). At separations much smaller than a fermi, the force between the quarks drops off drastically, though not completely. This is called asymptotic freedom: as the distance between the quarks shrinks towards zero, they asymptotically approach behaving like free particles. Gross, Politzer and Wilczek won the Nobel Prize for physics in 2004 for their work.
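
The textbook way to express this is the ‘running’ of the strong coupling with the energy scale being probed (higher energies correspond to shorter distances). Below is the standard one-loop formula, evaluated with an assumed QCD scale of about 0.2 GeV and five quark flavours; the numbers are indicative rather than precision values.

```python
import math

def alpha_s(Q, n_f=5, Lambda=0.2):
    """One-loop running of the strong coupling with energy scale Q (in GeV).

    Higher Q probes shorter distances; the coupling shrinks there, which is
    asymptotic freedom in the energy picture. Lambda (the QCD scale, ~0.2 GeV)
    and n_f (number of active quark flavours) are rough, assumed values.
    """
    return 12 * math.pi / ((33 - 2 * n_f) * math.log(Q**2 / Lambda**2))

for Q in [1.0, 10.0, 100.0, 1000.0]:   # GeV; larger Q = shorter distance probed
    print(f"Q = {Q:7.1f} GeV  ->  alpha_s ~ {alpha_s(Q):.3f}")
```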

In the parlance of particle physics, what makes asymptotic freedom – and the force’s growth at larger separations – possible is the fact that gluons emit other gluons. How else would you explain the strong nuclear force becoming stronger as the quarks move apart, if not for the gluons that the quarks are exchanging becoming more numerous as the distance increases?

This is the crazy phenomenon that you’re fighting against when you’re trying to set off a nuclear bomb. This is also the crazy phenomenon that will one day lead to the Sun’s death.

The first question anyone would ask now is – doesn’t asymptotic freedom violate the law of conservation of energy?

The answer lies in the nothingness all around us.

The vacuum of deep space in the universe is not really a vacuum. It’s got some energy of itself, which astrophysicists call ‘dark energy’. This energy manifests itself in the form of virtual particles: particles that pop in and out of existence, living for far shorter than a second before dissipating into energy. When a charged particle pops into being, its charge attracts other particles of opposite charge towards itself and repels particles of the same charge away. This is high-school physics.

But when a colour-charged gluon pops into being, something strange happens. An electron has one kind of charge, the positive/negative electric charge. But a gluon contains a ‘colour’ charge and an ‘anti-colour’ charge, each of which can take one of three values. So the virtual gluon will attract other virtual gluons depending on their colour charges and intensify the colour charge field around it, and also change its colour according to whichever particles are present. If this had been an electron, its electric charge and the opposite charge of the particle it attracted would cancel the field out.

This multiplication is what leads to the build up of energy when we’re talking about asymptotic freedom.

Physicists refer to the three values of the colour charge as blue, green and red. (This is more idiocy – you might as well call them ‘baboon’, ‘lion’ and ‘giraffe’.) If a blue quark, a green quark and a red quark come together to form a hadron (a class of particles that includes protons and neutrons), then the hadron will have a colour charge of ‘white’, becoming colour-neutral. Anti-quarks have anti-colour charges: antiblue, antigreen, antired. When a red quark and an antired anti-quark meet, they will annihilate each other – but not so when a red quark and an antiblue anti-quark meet.

Gluons complicate this picture further because, in experiments, physicists have found that gluons behave as if they have both a colour and an anti-colour. In physical terms this doesn’t make much sense, but it does in mathematical terms (which we won’t get into). Let’s say a proton is made of one red quark, one blue quark and one green quark. The quarks are held together by gluons, which also have a colour charge. So when two quarks exchange a gluon, the colours of the quarks change. If the blue quark emits a blue-antigreen gluon, it turns green, whereas the green quark that receives the gluon turns blue. Ultimately, if the proton is ‘white’ overall, then the three quarks inside are responsible for maintaining that whiteness. This is the law of conservation of colour charge.
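
That bookkeeping can be captured in a few lines. The following is a toy sketch, not a physical simulation: it just tracks colour and anti-colour counts to show that the emitting quark and the absorbing quark swap colours exactly as described above.

```python
from collections import Counter

def emit(quark_colour, gluon):
    """Colour left on a quark after it emits a gluon carrying (colour, anticolour)."""
    charge = Counter({quark_colour: 1})
    charge.subtract(Counter({gluon[0]: 1}))   # the gluon carries this colour away...
    charge.update(Counter({gluon[1]: 1}))     # ...and taking away an anticolour leaves that colour behind
    # whatever single colour remains with a +1 count is the quark's new colour
    return next(c for c, n in charge.items() if n == 1)

def absorb(quark_colour, gluon):
    """Colour of a quark after it absorbs the same gluon."""
    charge = Counter({quark_colour: 1})
    charge.update(Counter({gluon[0]: 1}))     # the gluon's colour is added...
    charge.subtract(Counter({gluon[1]: 1}))   # ...and its anticolour cancels the matching colour
    return next(c for c, n in charge.items() if n == 1)

gluon = ("blue", "green")  # a blue-antigreen gluon
print("emitter :", "blue", "->", emit("blue", gluon))      # the blue quark turns green
print("absorber:", "green", "->", absorb("green", gluon))  # the green quark turns blue
```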

Gluons emit gluons because of their colour charges. When quarks exchange gluons, the quarks’ colour charges also change. In effect, the gluons are responsible for quarks getting their colours. And because the gluons participate in the evolution of the force that they also mediate, they’re just gonzo: they can interact with themselves to give rise to new particles.

A gluon can split up into two gluons or into a quark-antiquark pair. Say a quark and an antiquark are joined together. If you try to pull them apart by supplying some energy, the gluon between them will ‘swallow’ that energy and split up into one antiquark and one quark, giving rise to two quark-antiquark pairs (and also preserving colour-confinement). If you supply even more energy, more quark-antiquark pairs will be generated.

For these reasons, the strong nuclear force is called a ‘colour force’: it manifests in the movement of colour charge between quarks.

In an atomic nucleus, say there is one proton and one neutron. Each particle is made up of three quarks. The quarks in the proton and the quarks in the neutron interact with each other because they are close enough to be colour-confined: the proton-quarks’ gluons and the neutron-quarks’ gluons interact with each other. So the nucleus is effectively one ball of quarks and gluons. However, one nucleus doesn’t interact with that of a nearby atom in the same way because they’re too far apart for gluons to be exchanged.

Clearly, this is quite complicated – not just for you and me but also for scientists, and for supercomputers that perform these calculations for large experiments in which billions of protons are smashed into each other to see how the particles interact. Imagine: there are six types, or ‘flavours’, of quarks, each carrying one of three colour charges. Then there are the gluons, which can carry combinations of colour and anti-colour charges – eight independent ones, out of the nine you might naively count.

The Wire
September 20, 2017

Featured image credit: Alexas_Fotos/pixabay.

Getting started on superconductivity

After the hoopla surrounding particle physics subsided, I realized that I’d been riding a speeding wagon all the time. All I’d done is use the lead-up to the search for the Higgs boson, and the climax itself, to teach myself something. Now, it’s left me really excited! Learning about particle physics, I’ve come to understand, is not a single-track course: all the way from making theoretical predictions to having them experimentally verified, particle physics is an amalgamation of far-reaching advancements in a host of other subjects.

One such is superconductivity. Philosophically, it’s a state of existence so far removed from the naturally occurring one that it’s a veritable “freak”. It is common knowledge that everything that’s naturally occurring resists change that energizes it, returning whenever possible to a state of lower energy. Symmetry and surface tension are great examples of this tendency. Superconductivity, on the other hand, is a system ceasing to resist the passage of an electric current through it. As a phenomenon that doesn’t yet manifest under naturally occurring conditions, I can’t really opine on its phenomenological “naturalness”.

In particle physics, superconductivity plays a significant role in building powerful particle accelerators. In the presence of a magnetic field, a charged particle moves in a curved trajectory because of the Lorentz force acting on it; this fact is used to guide the protons in the Large Hadron Collider (LHC) at CERN around a ring 27 km long. The bending keeps the protons on their circular path, while radio-frequency cavities give them a push on every lap, so each “swing” around the ring happens faster than the last, eventually resulting in the particles travelling at close to the speed of light.
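
As a back-of-the-envelope check on the bending, the Lorentz-force relation gives a radius of curvature r = p/(qB). Plugging in the LHC’s nominal figures (protons of about 7 TeV, dipole fields of about 8.3 T, both rounded) lands close to the machine’s actual bending radius of roughly 2.8 km.

```python
# Radius of curvature of a charged particle in a magnetic field: r = p / (qB).
# Nominal LHC-ish figures, used here only as a rough check.
e = 1.602e-19            # elementary charge, coulombs
c = 2.998e8              # speed of light, m/s

proton_energy_eV = 7e12  # 7 TeV; at this energy, momentum p ~ E/c to excellent accuracy
B = 8.3                  # dipole field, tesla

p = proton_energy_eV * e / c        # momentum in SI units (kg*m/s)
r = p / (e * B)                     # bending radius, metres

print(f"bending radius ~ {r/1000:.1f} km")  # comparable to the LHC's ~2.8 km bending radius
```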

A set of superconducting quadrupole-electromagnets installed at the LHC with the cryogenic cooling system visible in the background

In order to generate these extremely powerful magnetic fields – powerful because of the minuteness of each charge and the velocity required to be achieved – superconducting magnets are used that generate fields of the order of 8 T (to compare: the earth’s magnetic field is 25-60 μT, i.e. over 100,000-times weaker)! Furthermore, the strength of the magnetic field is ramped up as the particles gain energy, to keep them from being swung off into the inner wall of the collider at any point!

To understand the role the phenomenon of superconductivity plays in building these magnets, let’s understand how electromagnets work. In a standard iron-core electromagnet, insulated wire is wound around an iron cylinder, and when a current is passed through the wire, a magnetic field is generated around it. Because of the coiling, the fields of all the turns add up along the axis of the cylinder, whose magnetic permeability magnifies the field by a factor of thousands, the iron itself becoming magnetic.
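
The textbook formula for the field inside such a coil, B = μ0·μr·n·I, makes the core’s role explicit. A quick sketch with illustrative winding and current figures (a relative permeability of a couple of thousand is a typical order of magnitude for soft iron):

```python
import math

# Field inside a long current-carrying coil: B = mu0 * mu_r * n * I,
# where n is turns per metre of coil. Numbers below are illustrative.
mu0 = 4 * math.pi * 1e-7     # permeability of free space, T*m/A
turns_per_metre = 1000       # winding density
current = 1.0                # amperes

B_air_core = mu0 * 1 * turns_per_metre * current
B_iron_core = mu0 * 2000 * turns_per_metre * current   # relative permeability ~ a few thousand

print(f"air core : {B_air_core*1000:.2f} mT")
print(f"iron core: {B_iron_core:.1f} T  (naive formula; real iron saturates near 2 T,"
      " which is partly why still-stronger fields need superconducting coils)")
```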

When the current is turned off, the magnetic field instantaneously disappears. When the number of coils is increased, the strength of the magnetic field increases. When the strength of the current is increased, the strength of the magnetic field increases. However, beyond a point, the heat dissipated due to the wire’s electric resistance reduces the amount of current flowing through it, consequently resulting in a weakening of the core’s magnetic field over time.

It is Ohm’s law that establishes proportionality between voltage (V) and electric current (I), calling the proportionality-constant the material’s electrical resistance: R = V/I. To overcome heating due to resistance, resistance itself must be brought down to zero. According to Ohm’s law, that would mean sustaining a large current through the wire even as the voltage across its ends drops to zero. Pulling off such a feat with conventional conductors is impossible: how does one quickly pass a large volume of water through a pipe across which the pressure difference is minuscule?!

Heike Kamerlingh Onnes

The solution to this unique problem, therefore, lay in a new class of materials that humankind had to prepare, a class of materials that could “instigate” an alternate form of electrical conduction such that an electrical current could pass through it in the absence of a voltage difference. In other words, the material should be able to carry large amounts of current without offering up any resistance to it. This class of materials came to be known as superconductors – after Heike Kamerlingh Onnes discovered the phenomenon in 1911.

In a conducting material, the electrons that essentially effect the flow of electric current could be thought of as a charged fluid flowing through and around an ionic 3D grid, an arrangement of positively charged nuclei that all together make up the crystal lattice. When a voltage-drop is established, the fluid begins to get excited and moves around, an action called conducting. However, the electrons constantly collide with the ions. The ions, then, absorb some of the energy of the current, start vibrating, and gradually dissipate it as heat. This manifests as the resistance. In a superconductor, however, the fluid exists as a superfluid, and flows such that the electrons never collide into the ions.

In (a classical understanding of) the superfluid state, each electron repels every other electron because of their charge likeness, and attracts the positively charged nuclei. As a result, the nucleus moves very slightly toward the electron, causing an equally slight distortion of the crystal lattice. Because of the newly increased positive-charge density in the vicinity, some more electrons are attracted by the nucleus.

This attraction, which, across the entirety of the lattice, can cause a long-range but weak “draw” of electrons, results in pairs of electrons overcoming their mutual hatred and tending toward one nucleus (or the resultant charge-centre of some nuclei). Effectively, this is a pairing of electrons whose total energy was shown by Leon Cooper in 1956 to be less than the energy the electrons would have had if they’d remained unpaired in the material. Subsequently, these pairs came to be called Cooper pairs, and a fluid composed of Cooper pairs, a superfluid (thermodynamically, a superfluid is defined as a fluid that can flow without dissipating any energy).

Although the sea of electrons in the new superconducting class of materials could condense into a superfluid, the fluid itself can’t be expected to flow naturally. Earlier, the application of an electric current imparted enough energy to all the electrons in the metal (via a voltage difference) to move around and to scatter against nuclei to yield resistance. Now, however, upon Cooper-pairing, the superfluid had to be given an environment in which there’d be no vibrating nuclei. And so: enter cryogenics.

The International Linear Collider – Test Area’s (ILCTA) cryogenic refrigerator room

The thermal energy of a crystal lattice is given by E = kT, where ‘k’ is Boltzmann’s constant and T, the temperature. To quieten the vibrations of the nuclei in the lattice, then, the crystal has to be cooled to close to absolute zero (0 kelvin). This could be achieved by cryogenic cooling techniques. For instance, at the LHC, the superconducting magnets are electromagnets wherein the coiled wire is made of a superconducting material. When cooled to a really low temperature using a two-stage heat-exchanger composed of liquid helium jacketed with liquid nitrogen, the wires can carry extremely large amounts of current to generate very intense magnetic fields.
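
For a sense of scale, here is that E = kT evaluated at a few temperatures. The 1.9 K entry is the operating temperature usually quoted for the LHC’s magnet coolant; the others are standard cryogen boiling points and room temperature.

```python
# Thermal energy scale E = k*T at a few temperatures, expressed in (milli)electronvolts.
k_B = 8.617e-5   # Boltzmann constant in eV/K

for label, T in [("room temperature", 300.0),
                 ("liquid nitrogen", 77.0),
                 ("liquid helium", 4.2),
                 ("LHC magnet coolant", 1.9)]:
    print(f"{label:20s} {T:6.1f} K  ->  kT ~ {k_B * T * 1000:.3f} meV")
```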

At the same time, however, if the energy of the superfluid itself surpassed the thermal energy of the lattice, then it could flow without the lattice having to be cooled down. Because the thermal energy is different for different crystals at different ambient temperatures, the challenge now lies in identifying materials that could permit superconductivity at temperatures approaching room-temperature. Now that would be (even more) exciting!

P.S. A lot of the related topics have not been covered in this post, such as the Meissner effect, electron-phonon interactions, properties of cuprates and lanthanides, and Mott insulators. They will be taken up in the future as they’re topics that require in-depth detailing, quite unlike this post, which has been constructed as a superficial introduction only.

Graphene the Ubiquitous

Every once in a while, a (revolutionary-in-hindsight) scientific discovery is made that’s at first treated as an anomaly, and then verified. Once established as a credible find, it goes through a period where it is subject to great curiosity and intriguing reality checks – whether it was a one-time thing, if it can actually be reproduced under different circumstances at different locations, if it has properties that can be tracked through different electrical, mechanical and chemical circumstances.

After surviving such tests, the once-discovery then enters a period of dormancy: while researchers look for ways to apply their find’s properties to solve real-world problems, science must go on and it does. What starts as a gentle trickle of academic papers soon cascades into a shower, and suddenly, one finds an explosion of interest on the subject against a background of “old” research. Everybody starts to recognize the find’s importance and realize its impending ubiquity – inside laboratories as well as outside. Eventually, this accumulating interest and the growing conviction of the possibility of a better, “enhanced” world of engineering drives investment, first private, then public, then more private again.

Enter graphene. Personally, I am very excited by graphene because of its extremely simple structure: it’s a planar arrangement of carbon atoms, a single layer thick, positioned in a honeycomb lattice. That’s it; however, the wonderful capabilities it has stacked up in the eyes of engineers and physicists worldwide since 2004, the year of its experimental discovery, are mind-blowing. In the fields of electronics, mensuration, superconductivity, biochemistry and condensed-matter physics, the attention it currently draws is at a historic high.
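
That honeycomb arrangement is simple enough to generate in a few lines. The sketch below lays out atom positions using the roughly 1.42-ångström carbon-carbon bond length; it is purely illustrative of the geometry.

```python
import math

# Generate atom positions for a small patch of graphene's honeycomb lattice.
# Bond length ~1.42 angstrom; the lattice is two interpenetrating triangular sublattices.
bond = 1.42                       # angstroms
a = bond * math.sqrt(3)           # lattice constant (~2.46 angstroms)

# Primitive lattice vectors and the two-atom basis of the honeycomb cell.
a1 = (a, 0.0)
a2 = (a / 2, a * math.sqrt(3) / 2)
basis = [(0.0, 0.0), (0.0, bond)]

atoms = []
for i in range(3):
    for j in range(3):
        for bx, by in basis:
            atoms.append((i * a1[0] + j * a2[0] + bx,
                          i * a1[1] + j * a2[1] + by))

print(f"{len(atoms)} carbon atoms in a 3x3 patch, one layer thick:")
for x, y in atoms[:6]:
    print(f"  ({x:5.2f}, {y:5.2f}) angstrom")
```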

Graphene’s star-power, so to speak, lies in its electronic and crystalline quality. More than 70 years ago, the physicist Lev Landau had argued that lower-dimensional crystal lattices, such as that of graphene, are thermodynamically unstable: at any finite temperature, the distances through which the energetic atoms vibrated would cross the length of the interatomic distance, resulting in the lattice breaking down into islands, a process called “dissolving”. Graphene broke this argument by displaying extremely small interatomic distances, which translated into improved electron-sharing and strong covalent bonds that didn’t break even at elevated temperatures.

As Andre Geim and Konstantin Novoselov, experimental discoverers of graphene and joint winners of the 2010 Nobel Prize in physics, wrote in 2007:

The relativistic-like description of electron waves on honeycomb lattices has been known theoretically for many years, never failing to attract attention, and the experimental discovery of graphene now provides a way to probe quantum electrodynamics (QED) phenomena by measuring graphene’s electronic properties.

(On a tabletop for cryin’ out loud.)

What’s more, because of a tendency to localize electrons faster than conventional devices could, using lasers to activate the photoelectric effect in graphene resulted in electric currents (i.e., moving electrons) forming within picoseconds (photons in the laser pulse knocked out electrons, each of which then traveled to the nearest location in the lattice where it could settle down, leaving a “hole” in its wake that would pull in the next electron, and so forth). Just because of this, graphene could make for an excellent photodetector, capable of picking up on small “amounts” of EM radiation quickly.

An enhanced current generation rate could also be read as a better electron-transfer rate, with big implications for artificial photosynthesis. The conversion of carbon dioxide to formic acid requires a catalyst that operates in the visible range to provide electrons to an enzyme that it’s coupled with. The enzyme then reacts with the carbon dioxide to yield the acid. Graphene, a team of South Korean scientists observed in early July, played the role of that catalyst with higher efficiency than its peers in the visible range of the EM spectrum, as well as offering up a higher surface area over which electron-transfer could occur.

Another potential area of application is in the design and development of non-volatile magnetic memories for higher-efficiency computers. A computer usually has two kinds of memories: a faster, volatile memory that can store data only when connected to a power source, and a non-volatile memory that stores data even when power to it is switched off. A lot of the power consumed by computers is spent in transferring data between these two memories during operation. This leads to an undesirable difference arising between a computer’s optimum efficiency and its operational efficiency. To solve for this, a Singaporean team of scientists hit upon the use of two electrically conducting films separated by an insulating layer, developing a magnetic resistance between the films when a spin-polarized current is applied to them.

The resistance is highest when the directions of the magnetic fields in the two films are anti-parallel (i.e., pointing in opposite directions), and lowest when they are parallel. This sandwiching arrangement is subsequently divided into cells, with each cell possessing some magnetic resistance in which data is stored. For maximal data storage, the fields would have to be anti-parallel and the films’ spin-polarizability high. Here again, graphene was found to be a suitable material. In fact, in much the same vein, this wonder of an allotrope could also have some role to play in replacing existing tunnel-junction materials such as aluminium oxide and magnesium oxide because of its lower electrical resistance per unit area, absence of surface defects, prohibition of interdiffusion at interfaces, and uniform thickness.
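
A crude way to quantify that parallel/anti-parallel contrast is Jullière’s formula for tunnel magnetoresistance, TMR = 2·P1·P2/(1 − P1·P2), where P1 and P2 are the spin polarizations of the two films. The polarization values below are made up for illustration; the point is only that higher polarization means a starker resistance contrast between the two states.

```python
def julliere_tmr(p1, p2):
    """Tunnel magnetoresistance ratio from the two electrodes' spin polarizations."""
    return 2 * p1 * p2 / (1 - p1 * p2)

# Illustrative spin polarizations (0 = unpolarized, 1 = fully polarized).
for p in [0.4, 0.6, 0.8]:
    print(f"P = {p:.1f} in both films  ->  TMR ~ {julliere_tmr(p, p):.0%}")
```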

In essence, graphene doesn’t only replace existing materials to enhance a product’s (or process’s) mechanical and electrical properties, but also brings along an opportunity to redefine what the product can do and what it could evolve into in the future. In this regard, it far surpasses existing results of research in materials engineering: instead of forging swords, scientists working with graphene can now forge the battle itself. This isn’t surprising at all considering graphene’s properties are most effective for nano-electromechanical applications (there has been talk of a graphene-based room-temperature superconductor). More precise measurements of its properties should open up a trove of new fields, and possible hiding places of similar materials, altogether.