MIT develops thermo-PV cell with 40% efficiency

Researchers at MIT have developed a heat engine that can convert heat to electricity with 40% efficiency. Unlike traditional heat engines – a common example is the internal combustion engine inside a car – this device doesn’t have any moving parts. In addition, it has been designed to work with a heat source that has a temperature of 1,900º to 2,400º C. Effectively, it’s like a solar cell that has been optimised to work with photons from vastly hotter sources – although its efficiency still sets it apart. If you know the history, you’ll understand why 40% is a big deal. And if you know a bit of optics and some materials science, you’ll understand how this device could be an important part of the world’s efforts to decarbonise its power sources. But first the history.

We’ve known how to build heat engines for almost two millennia. They were first built to convert heat, generated by burning a fuel, into mechanical energy – so they’ve typically had moving parts. For example, the internal combustion engine combusts petrol or diesel and harnesses the energy produced to move a piston. However, the engine can only extract mechanical work from the fuel – it can’t put the heat back. If it did, it would have to ‘give back’ the work it just extracted, nullifying the engine’s purpose. So once the piston has been moved, the engine dumps the heat and begins the next cycle of heat extraction from more fuel. (In the parlance of thermodynamics, the origin of the heat is called the source and its eventual resting place is called the sink.)

The inevitability of this waste heat keeps the heat engine’s efficiency from ever reaching 100% – and the efficiency is dragged down further by the mechanical energy losses implicit in the moving parts (the piston, in this case). In 1824, the French engineer Nicolas Sadi Carnot derived the formula to calculate the maximum possible efficiency of a heat engine that works in this way. (The formula also assumes that the engine is reversible – i.e. that it can pump heat from a colder source to a hotter sink.) The number spit out by this formula is called the Carnot efficiency. No heat engine can have an energy efficiency that’s greater than its Carnot efficiency. The internal combustion engines of today achieve an actual efficiency of around 37% – well short of their Carnot limit. A steam generator at a large power plant can go up to 51%. Against this background, the heat engine that the MIT team has developed has a celebration-worthy efficiency of 40%.
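To put these numbers in context, the Carnot efficiency is easy to compute: it’s 1 minus the ratio of the sink and source temperatures, both in kelvin. A minimal sketch (the temperatures below are illustrative, not taken from the MIT paper):

```python
# Carnot efficiency: the maximum fraction of heat a reversible engine can
# convert to work, eta = 1 - T_cold / T_hot (temperatures in kelvin).

def carnot_efficiency(t_hot_c, t_cold_c):
    """Carnot efficiency for source/sink temperatures given in Celsius."""
    return 1 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

# A cell facing a 2,400 deg C emitter and rejecting heat at 25 deg C:
print(round(carnot_efficiency(2400, 25), 2))  # 0.89
```

The point is that 40% is measured against a theoretical ceiling that, at these emitter temperatures, is very high – and every percentage point of the ceiling captured in practice is hard-won.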

The other notable thing about it is the temperature range in which it operates. There are two potential applications of the new device that come immediately to mind: to use the waste heat from something that operates at 1,900-2,400º C and to take the heat from something that stores energy at those temperatures. There aren’t many entities in the world that maintain a temperature of 1,900-2,400º C as well as dump waste heat. Work on the device caught my attention after I spotted a press release from MIT. The release described one application that combined both possibilities in the form of a thermal battery system. Here, heat from the Sun is concentrated in graphite blocks (using lenses and mirrors) that are located in a highly insulated chamber. When the need arises, the insulation can be removed to a suitable extent for the graphite to lose some heat, which the new device then converts to electricity.

On Twitter, user Scott Leibrand (@ScottLeibrand) also pointed me to a similar technology called FIRES – short for ‘Firebrick Resistance-Heated Energy Storage’ – proposed by MIT researchers in 2018. According to a paper they wrote, it “stores electricity as … high-temperature heat (1000–1700 °C) in ceramic firebrick, and discharges it as a hot airstream to either heat industrial plants in place of fossil fuels, or regenerate electricity in a power plant.” They add that “traditional insulation” could limit heat leakage from the firebricks to less than 3% per day and estimate a storage cost of $10/kWh – “substantially less expensive than batteries”. This is where the new device could shine, or better yet enable a complete power-production system: by converting heat deliberately leaked from the graphite blocks or firebricks to electricity, at 40% efficiency. Even allowing for the fact that heat transfer is more efficient at higher temperatures, this is impressive – more so since such energy storage options are also geared for the long term.

Let’s also take a peek at how the device works. It’s called a thermophotovoltaic (TPV) cell. The “photovoltaic” in the name indicates that it uses the photovoltaic effect to create an electric current. It’s closely related to the photoelectric effect. In both cases, an incoming photon knocks an electron loose in the material, creating a voltage that then supports an electric current. In the photoelectric effect, the electron is ejected from the material altogether. In the photovoltaic effect, the electron stays within the material and can be recaptured. Next, in order to achieve the high efficiency, the research team wrote in its paper that it did three things. They’re described with a bunch of big words, but the words have straightforward implications, as I explain, so don’t be put off.

1. “The usage of higher bandgap materials in combination with emitter temperatures between 1,900 and 2,400 °C” – Band gap refers to the energy difference between a material’s valence band and its conduction band. In semiconductors, for example, when electrons in the valence band are imparted enough energy, they can jump across the band gap into the conduction band, where they can flow around the material, conducting electricity. The same thing happens in the TPV cell, where incoming photons can ‘kick’ electrons into the material’s conduction band if they have the right amount of energy. Because the photon source is a very hot object, the photons are bound to lie in the near-infrared part of the spectrum – each carrying around 1-1.5 electron-volts, or eV. So the corresponding TPV material also needs to have a bandgap of 1-1.5 eV. This brings us to the second point.

2. “High-performance multi-junction architectures with bandgap tunability enabled by high-quality metamorphic epitaxy” – Architecture refers to the configuration of the cell’s physical, electrical and chemical components, and epitaxy refers to the way in which the cell is made. In the new TPV cell, the MIT team used a multi-junction architecture that allowed the device to ‘accept’ photons of a range of wavelengths (corresponding to the temperature range). This is important because the incoming photons can have one of two effects: either kick out an electron or heat up the material. The latter is undesirable and should be avoided, so the multi-junction setup helps the cell usefully absorb as many photons as possible. A related issue is that the power output per unit area of an object radiating heat scales according to the fourth power of its absolute temperature. That is, if its temperature increases by a factor of x, its power output per unit area will increase by a factor of x^4. Since the heat source of the TPV cell is so hot, it will have a high power output, thus again privileging the multi-junction architecture. The epitaxy is not interesting to me, so I’m skipping it. But I should note that cells like this one aren’t ubiquitous because making them is a highly intricate process.

3. “The integration of a highly reflective back surface reflector (BSR) for band-edge filtering” – The MIT press release explains this part clearly: “The cell is fabricated from three main regions: a high-bandgap alloy, which sits over a slightly lower-bandgap alloy, underneath which is a mirror-like layer of gold” – the BSR. “The first layer captures a heat source’s highest-energy photons and converts them into electricity, while lower-energy photons that pass through the first layer are captured by the second and converted to add to the generated voltage. Any photons that pass through this second layer are then reflected by the mirror, back to the heat source, rather than being absorbed as wasted heat.”
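The numbers in points 1 and 2 can be sanity-checked with two textbook formulas: Wien’s displacement law for the peak emission wavelength of a hot body (converted to a photon energy via E = hc/λ) and the Stefan-Boltzmann law for the radiated power per unit area. A rough sketch using standard constants – the figures are illustrative, not from the MIT paper:

```python
# Wien's law: peak wavelength of a hot body's emission; E = hc/lambda gives
# the corresponding photon energy. Stefan-Boltzmann: radiated power per unit
# area scales as the fourth power of the absolute temperature.

WIEN_B = 2.898e-3    # Wien's displacement constant, m*K
HC_EV_NM = 1239.84   # h*c, in eV*nm
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4

def peak_photon_energy_ev(temp_celsius):
    temp_k = temp_celsius + 273.15
    peak_wavelength_nm = WIEN_B / temp_k * 1e9
    return HC_EV_NM / peak_wavelength_nm

def radiated_power_mw_per_m2(temp_celsius):
    return SIGMA * (temp_celsius + 273.15) ** 4 / 1e6

for t in (1900, 2400):
    print(t, round(peak_photon_energy_ev(t), 2), round(radiated_power_mw_per_m2(t), 2))
# -> peak photon energies of roughly 0.9-1.1 eV (near-infrared), consistent
#    with 1-1.5 eV bandgaps, and megawatts of radiated power per square metre
```

This makes concrete both why near-infrared bandgaps are the right choice and why the cell has to cope with a torrent of incoming power.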

While it seems obvious that technology like this will play an important part in humankind’s future, particularly given the attractiveness of maintaining a long-term energy store as well as the use of a higher-efficiency heat engine, the economics matter greatly. I don’t know how much the new TPV cell will cost, especially since it isn’t being mass-produced yet; in addition, the design of the thermal battery system will determine how many square feet of TPV cells will be required, which in turn will affect the cells’ design as well as the economics of the overall facility. This said, the fact that the system as a whole will have so few moving parts, together with the availability of both sunlight and graphite or firebricks – or even molten silicon, which has a high heat capacity – keeps the allure of MIT’s high-temperature TPVs alive.

Featured image: A thermophotovoltaic cell (size 1 cm x 1 cm) mounted on a heat sink designed to measure the TPV cell efficiency. To measure the efficiency, the cell is exposed to an emitter and simultaneous measurements of electric power and heat flow through the device are taken. Caption and credit: Felice Frankel/MIT, CC BY-NC-ND.

How does a fan work?

Everywhere I turn, all the talk is about the coronavirus, and it’s exhausting because I already deal with news of the coronavirus as part of my day-job. It’s impossible to catch people having conversations about anything else at all. I don’t blame them, of course, but it’s frustrating nonetheless.

I must admit I relished the opportunity to discuss some electrical engineering and power-plant management when Prime Minister Narendra Modi announced the nine-minute power shutdown event on April 5. So now, to take a break from public health and epidemiology, as well as to remember that a world beyond the coronavirus – and climate change and all the other Other Problems out there – exists, I’ve decided to make sense of how a fan works.

Yes, a household fan, of the kind commonly found in houses in the tropics that have electricity supply and whose members have been able to afford the few thousand rupees for the device. The fan’s ubiquity is a testament to how well we have understood two distinct parts of nature: electromagnetic interactions and fluid flow.

When you flick the switch, a fan comes on, turning about faster and faster until it has reached the speed you’ve set on the regulator, and a few seconds later, you feel cooler. This simple description reveals four distinct parts: the motor inside the fan, the regulator, the blades and the air. Let’s take them one at a time.

The motor inside the fan is an induction motor. It has two major components: the rotor, which is the part that rotates, and the stator, which is the part that remains stationary. All induction motors use alternating current to power themselves, but the roles of the rotor and the stator are easier to understand in a direct-current (DC) motor, which is a much simpler machine – so let’s start there.

Consider an AA battery with a wire connecting its cathode to its anode. A small current will flow through the wire due to the voltage provided by the battery. Now, make a small break in this wire and attach another piece of wire there, bent in the shape of a rectangle, like so:

Next, place the rectangular loop in a magnetic field, such as by placing a magnet’s north pole to one side and a south pole to another:

When a current flows through the loop, it develops a magnetic field around itself. The idea that ‘like poles repel’ applies to magnetic fields as well (borne out through Lenz’s law; we’ll come to that in a bit), so if you orient the external magnetic field right, it will repel the loop’s magnetic field and exert a force on the loop, making it flip over. And once it has flipped over, the repelling force goes away and the loop doesn’t have to flip anymore.

But we can’t have that. We want the loop to keep flipping over, since that’s how we get rotational motion. We also don’t want the loop to lose contact with the circuit as it flips. To fix both these issues, we add a component called a split-ring commutator at the junction between the circuit and the rectangular loop.

Credit: http://hyperphysics.phy-astr.gsu.edu/hbase/magnetic/comtat.html

The commutator consists of two separate pieces of copper attached to the loop. Each piece brushes against a block of graphite connected to the circuit. When the loop has flipped over once, the commutator ensures that it’s still connected to the circuit. However, the difference is that the loop’s endpoints now carry current in the opposite direction, producing a magnetic field oriented the other way. But because the loop has flipped over, the new field is still opposed to the external field, and the loop is forced to flip once more. This way, the loop keeps rotating.

Our DC motor is now complete. The stator is the external magnetic field, because it does not move; the rotor is the rectangular loop, because it rotates.

A DC electric motor with a split-ring commutator (light blue). The green arrows depict the direction of force exerted by the magnetic field on the current carrying loop. Credit: Lookang/Wikimedia Commons, CC BY-SA 3.0

In an induction motor, as in a DC motor, the stator is a magnet – specifically, an electromagnet. Pass a direct current through it and it generates a steady magnetic field around itself. Pass an alternating current through it, however, and the magnetic field can be made to rotate, because the current is constantly changing direction.

Mains alternating current is generated and distributed in three phases. The stator of an induction motor is really a ring of electromagnets, divided into three groups, each subtending an angle of 120º around the stator. When the three-phase current is passed through the stator, each phase magnetises one group of magnets in sequence. So as the phases alternate, one set of magnets is magnetised at a time, going round and round. This gives rise to the rotating magnetic field. (In a DC motor, the direction of the direct current is mechanically flipped – i.e. reoriented through space by 180º – so the flip is also of 180º at a time.)

A stator of ringed electromagnets produces a rotating magnetic field as a sum of magnetic vectors from three phase coils. Caption and credit: Mtodorov_69/Wikimedia Commons, CC BY-SA 3.0
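This rotating field is straightforward to verify numerically: add up the fields of three coils spaced 120º apart in space, each carrying a current 120º out of phase with the next. The resultant is a vector of constant magnitude that rotates at the supply frequency. A minimal sketch:

```python
import math

# Field of three stator coils spaced 120 degrees apart in space, carrying
# three-phase currents 120 degrees apart in time. Their vector sum has a
# constant magnitude (1.5x one coil's peak) and rotates at the supply
# frequency.

def stator_field(omega_t):
    bx = by = 0.0
    for k in range(3):
        coil_angle = 2 * math.pi * k / 3          # coil orientation in space
        current = math.cos(omega_t - coil_angle)  # phase-shifted current
        bx += current * math.cos(coil_angle)
        by += current * math.sin(coil_angle)
    return bx, by

for omega_t in (0.0, 1.0, 2.0, 3.0):
    bx, by = stator_field(omega_t)
    print(round(math.hypot(bx, by), 3))  # 1.5 every time
```

However you sample the supply cycle, the field’s strength never wavers – only its direction sweeps around, which is exactly what drags the rotor along.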

The rotor in an induction motor consists of electrical wire coiled around a ring of steel. As the stator’s magnetic field comes alive and begins to rotate, the field ‘cuts’ across the coiled wire and induces a current in it. This current in turn produces a magnetic field around the wire, called the rotor’s magnetic field.

In 1834, the physicist Heinrich Emil Lenz, working in Russia, found that magnetic fields also have their own version of Newton’s third law. Called Lenz’s law, it states that if a magnetic field ‘M’ induces a current in a wire, and the current creates a secondary magnetic field ‘F’, M and F will oppose each other.

So the stator’s magnetic field and the rotor’s magnetic field push against each other – and since the stator is fixed in place, it is the rotor that gives way, dragged around after the rotating field. This is what sets the rotor spinning.

We’ve got the fan started, but the induction motor has more to offer.

The alternating current passing through the stator will constantly push the rotor to rotate faster. However, we often need the fan to spin at a specific speed. To balance the two, we use a regulator. The simplest regulator is a length of wire of suitably high resistance that reduces the voltage between the source and the stator, reducing the amount of power reaching the stator. However, if the fan needs to spin very slowly, such a regulator will have to have very high resistance, and in turn will dissipate a lot of heat. To overcome this problem, modern regulators are capacitors, not resistors: a capacitor limits the voltage reaching the motor without wasting the excess as heat, so capacitor regulators run cooler and squander far less electricity.

There is another constraint on the speed. At no point can the rotor develop so much momentum that the stator’s magnetic field no longer induces a useful current in the rotor’s coils. (This is what happens in a generator: the rotor becomes the ‘pusher’, imparting energy to the stator that then feeds power into the grid.) That is, in an induction motor, the rotor must always rotate a little slower than the stator’s rotating magnetic field – a difference called the slip.
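The speed of the rotating field – the synchronous speed – and the rotor’s lag behind it are both easy to compute. A small sketch; the 50 Hz, 4-pole, 1,440 rpm figures are hypothetical, chosen only for illustration:

```python
# Synchronous speed of the stator's rotating field: n_sync = 120 * f / p rpm,
# where f is the supply frequency in Hz and p the number of magnetic poles.
# The rotor always turns a little slower; the fractional lag is the slip.

def synchronous_speed_rpm(freq_hz, poles):
    return 120.0 * freq_hz / poles

def slip(n_sync, n_rotor):
    return (n_sync - n_rotor) / n_sync

# A hypothetical 50 Hz, 4-pole fan motor spinning at 1,440 rpm:
n_sync = synchronous_speed_rpm(50, 4)   # 1500.0 rpm
print(n_sync, slip(n_sync, 1440))       # 1500.0 0.04
```

A slip of a few percent is typical: it is precisely this lag that keeps current being induced in the rotor and torque being produced.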

Finally, the rotor itself – made of steel – does not start out magnetised. That is, if the steel is not at all magnetised when the stator’s magnetic field comes on, the rotor’s coils will first need to generate enough of a magnetic field to magnetise the steel. Only then can the rotor begin to move. This requirement gives rise to the biggest downside of induction motors: around a fifth of the current each motor draws goes into magnetising the rotor.

Thus, we come to the third part. You’ve probably noticed that your fan’s blades accumulate dust more along one edge than the other. This is because the blades are slightly curved and tilted, in a shape that aerodynamics engineers call an aerofoil, or airfoil. When air flows onto a surface, like the side of a building, some of the air ‘bounces’ off, and the surface experiences an equal and opposite reaction that literally pushes on it. The rest of the air drags on the surface, akin to friction.

Airfoils are surfaces specifically designed to be ‘attacked’ by air such that they maximise lift and minimise drag. The most obvious example is an airplane wing. An engine attached to the wing provides thrust, motoring the vehicle forward. As the wing cuts through the air, the air flows over and under it, generating both lift and drag. But the wing’s shape is optimised to extract as much lift as possible, to push the airplane up into the air.

Examples of airfoils. ULM stands for ultralight motorised aircraft. Credit: Oliver Cleynen/Wikimedia Commons

Engineers derive this shape using two equations. The first – the continuity equation – states that if a fluid passes through a wider cross-section at some speed, it will subsequently move faster through a narrower cross-section. The second – known as Bernoulli’s principle – stipulates that, at all times, the sum of a fluid’s kinetic energy (related to its speed), pressure energy and potential energy (related to its height) must be constant along its flow. So if a fluid speeds up, it will compensate by, say, exerting lower pressure.
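The two equations can be put to work in a few lines. The numbers below are illustrative only:

```python
# Continuity: A1*v1 = A2*v2 for an incompressible fluid, so a narrower
# cross-section means faster flow. Bernoulli (ignoring height changes):
# p + 0.5*rho*v**2 is constant, so faster flow means lower pressure.

RHO_AIR = 1.2  # kg/m^3, approximate density of air at room temperature

def speed_after_narrowing(area1, v1, area2):
    return area1 * v1 / area2

def pressure_drop(v1, v2, rho=RHO_AIR):
    # p1 - p2 from Bernoulli's principle
    return 0.5 * rho * (v2**2 - v1**2)

# Air at 3 m/s squeezed from a 2 m^2 section into a 1 m^2 section:
v2 = speed_after_narrowing(area1=2.0, v1=3.0, area2=1.0)  # 6.0 m/s
print(v2, round(pressure_drop(3.0, v2), 1))  # 6.0 and about 16.2 Pa
```

Halve the cross-section and the air doubles its speed, paying for it with a drop in pressure – the asymmetry an airfoil exploits.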

So if an airfoil’s leading edge, the part that sweeps into the air, is broader than its trailing edge, the part from which the air leaves off, the air will leave off faster while exerting more lift. A fan’s blades, of course, are fixed in place and can’t rise, so to conserve momentum it is the air that moves instead, exiting with greater velocity.

When you flick a switch, you effectively set this ingenious combination of electromagnetic and aerodynamic engineering in motion, whipping the air about in your room. However, the fan doesn’t cool the air. The reason you feel cooler is that the fan circulates the air through your room, motivating more and more air particles to come in contact with your warm skin and carry away a little bit of heat – and speeding up the evaporation of your sweat. That is, you lose heat mostly by forced convection.

All of this takes only 10 seconds – but it took humankind over a century of research, numerous advancements in engineering, millions of dollars in capital and operational expenses, an efficient, productive and equitable research culture, and price regulation by the state as well as market forces to make it happen. Such is the price of ubiquity and convenience.

The ‘could’ve, should’ve, would’ve’ of R&D

ISRO’s Moon rover, which will move around the lunar surface come September (if all goes well), will live and die in a span of 14 days because that’s how long the lithium-ion cells it’s equipped with can survive the -160º C nights at the Moon’s south pole, among other reasons. This here illustrates an easily understood connection between fundamental research and its apparent uselessness on the one hand, and applied science and its apparent superiority on the other.

Neither position is entirely and absolutely correct, of course, but this hierarchy of priorities is very real, at least in India, because it closely parallels the practices of populist politics, which privileges short-term gains over benefits in the longer run.

In this scenario, it may not seem worthwhile to fund a solid-state physicist who has, based on detailed physicochemical analyses, fashioned for example a new carbon-based material that can store lithium ions in its atomic lattice and has better thermal characteristics than graphite. It may seem even less worthwhile to fund researchers probing the seemingly obscure electronic properties of materials like graphene and silicene, writing papers steeped in abstract math and unable to propose a single viable application for the near-future.

But give it twenty years and a measure of success in the otherwise-unpredictable translational research part of the R&D pipeline, and suddenly, you’re holding the batteries that’re supposed to be installed on a Moon rover and need to determine how many instruments you can pack on there to ensure the whole ensemble is powered for the whole time they’ll need to conduct each of their tests. Just as suddenly, you’re also thinking about what else you could’ve installed on the little machine so it could’ve lived longer, and what else it could’ve potentially discovered in this bonus time.

Maybe you’re just happy, knowing how things have been for research in the country in the last two decades and based on the spaceflight organisation’s goals (a part of which the government has a say in), that the batteries can even last for two weeks. Maybe you’re just sad because you think it could’ve been better. But one way or another, it’s an inescapably tangible reminder that investments in research determine what you’re going to get to take out of the technology in the future. Put differently: it’s ridiculous to expect to know which water molecules are going to end up in which plant, but unless you water the soil, the plants are going to start wilting.

Chandrayaan 2 itself may be lined up to be a great success but who knows, there could come along a future mission where a groundbreaking instrument developed by an inspired student at a state university has to be left out of an interplanetary satellite because we didn’t have access to the right low-density, high-strength materials. Or where a bunch of Indians are on a decade-long interstellar voyage and the captain realises crew morale is dangerously low because the government couldn’t give two whits about social psychology.

Why a pump to move molten metal is awesome

The conversion of one form of energy into another is more efficient at higher temperatures.1 For example, one of the most widely used components of any system that involves the transfer of heat from one part of the system to another is a device called a heat exchanger. When it’s transferring heat from one fluid to another, for example, the heat exchanger must facilitate the efficient movement of heat between the two media without allowing them to mix.

There are many designs of heat exchangers for a variety of applications but the basic principle is the same. However, they’re all limited by a basic thermodynamic fact: the same quantity of heat carries more entropy – “the measure of disorder” – at lower temperatures, because the entropy transferred is the heat divided by the temperature. In other words, the lower the temperature at which heat moves through the exchanger, the less of that heat can eventually be converted into useful work. This is why it’s desirable to have a medium that can carry a lot of heat per unit volume at a high temperature.
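This can be made concrete with the relation ΔS = Q/T (temperature in kelvin): the same quantity of heat carries more entropy when moved at a lower temperature. A quick sketch with illustrative numbers:

```python
# Entropy carried by a quantity of heat: dS = Q / T, with T in kelvin.
# The same heat moved at a lower temperature carries more entropy, which is
# why high-temperature heat is more useful for generating work.

def entropy_of_heat(q_joules, temp_k):
    return q_joules / temp_k

# 1 kJ of heat at 1,473 K (1,200 deg C) vs at 373 K (100 deg C):
hot = entropy_of_heat(1000, 1473)
cold = entropy_of_heat(1000, 373)
print(round(hot, 3), round(cold, 3))  # 0.679 2.681 (J/K)
```

The kilojoule delivered at 1,200º C arrives with roughly a quarter of the entropy of the one delivered at 100º C – that’s the thermodynamic case for hot media in a nutshell.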

But this is not always possible, for two reasons. First: there must exist a pump that can move such a hot medium from one point to another in the system. This pump must be made of materials that can withstand high temperatures during operation as well as not react with the medium at those temperatures. Second: among the most efficient media for carrying a lot of heat are liquid metals – but they’re difficult to pump because of their corrosive nature and high density. Together, these two reasons are why medium temperatures have been limited to around 1,000º C.

Now, engineers from the US have proposed a solution: they’ve constructed a pump using ceramics. This is really interesting because ceramics have a good reputation for being able to withstand extreme heat (they were part of the US Space Shuttle’s heat shield, exposed during atmospheric reentry) but an equally bad reputation for being very brittle.2 A ceramic composition thus gives the pump a natural ability to withstand heat.

In other words, the bigger problem the engineers had to solve was keeping the pump from breaking during operation.

Their system consists of a motor, a gearbox, pipes and a reservoir of liquid tin. When the motor is turned on, the pump receives liquid tin from the bottom of the reservoir. Two interlocking gears inside the pump rotate. As the tin flows in, it is trapped in the spaces between the gear teeth and carried around, creating a pressure difference that sucks in more tin from the reservoir. After the tin moves through the gears, it is let out into another pipe that takes it back to the reservoir.

The gears are made of Shapal, an aluminium nitride ceramic made by the Tokuyama Corporation in Japan with the unique property of being machinable. The pump seals and the pipes are made of graphite. High-temperature pumps usually have pipes made of polymers. Graphite and such polymers are similar in that they’re both very difficult to corrode. But graphite has the upper hand in this context because it can withstand higher temperatures before it loses its structural integrity.

Using this setup, the engineers were able to operate the pump continuously for 72 hours at an average temperature of 1,200º C. For the first 60 hours of operation, the flow rate varied between 28 and 108 grams per second (at an rpm in the lower hundreds). According to the engineers’ paper, this corresponds to an energy transfer of 5-20 kW for a vat of liquid tin heated from 300º C to 1,200º C. They extrapolate these numbers to suggest that if the gear diameter and thickness were enlarged from 3.8 cm to 17.1 cm and 1.3 cm to 5.85 cm (resp.) and operated at 1,800 rpm, the resulting heat transfer rate would be 100 MW – a jump of 5,000x from 20 kW and close to the requirements of a utility-scale power plant.
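Those figures can be roughly cross-checked with Q = ṁ·c·ΔT. The specific heat of liquid tin used below is an approximate handbook value, not taken from the paper:

```python
# Sanity check of the reported 5-20 kW figure: heat transfer rate
# Q = m_dot * c * delta_T, for tin heated from 300 to 1,200 deg C.

C_TIN = 0.24          # J/(g*K), approximate specific heat of liquid tin
DELTA_T = 1200 - 300  # K, the temperature rise quoted in the paper

def heat_rate_kw(flow_g_per_s):
    return flow_g_per_s * C_TIN * DELTA_T / 1000.0

# The reported flow-rate extremes of 28 and 108 grams per second:
print(round(heat_rate_kw(28), 1), round(heat_rate_kw(108), 1))
# about 6.0 and 23.3 kW - in the ballpark of the reported 5-20 kW
```

That the back-of-the-envelope range straddles the paper’s numbers is reassuring: the energy figures follow directly from the flow rate and the temperature rise.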

And all of this would be on a tabletop setup. This is the kind of difference having a medium with a high energy density makes.

The engineers say that their choice of temperature at which to run the pump – about 1,200º C – was limited by the heaters they had available in their lab. So future versions of this pump could run more cheaply and at higher temperatures by using, say, molten silicon and higher-grade ceramics than Shapal. Such improvements could have an outsize effect in our world because of the energy capacity and transfer improvements they stand to bring to renewable energy storage.

1. I can attest from personal experience that learning the principles of thermodynamics is easier through application than theory – an idea that my college professors utterly failed to grasp.

2. The ceramics used to pave the floor of your house and the ceramics used to pad the underbelly of the Space Shuttle are very different. For one, the latter had a foamy internal structure and wasn’t brittle. They were designed and manufactured this way because the ceramics of the Space Shuttle wouldn’t just have to withstand high heat – they would also have to be able to withstand the sudden temperature change as the shuttle dived from the -270º C of space into the 1,500º C of hypersonic shock.

Featured image credit: Erdenebayar/pixabay.