A closet of hidden phenomena

An apparatus to study quantum entanglement, with superconducting channels placed millimeters apart. Photo: Softpedia

Science has rarely been counter-intuitive to our understanding of reality, and its elegant rationalism at every step of the way has been reassuring. This is why Bell’s theorem has been one of the strangest concepts of reality scientists have come across: it is hardly intuitive, hardly rational, and hardly reassuring.

To someone interested in the bigger picture, the theorem is the line before which quantum mechanics ends and after which classical mechanics begins. It’s the line in the sand between the Max Planck and the Albert Einstein weltanschauungen.

Einstein, and many others before him, worked with gravity, finding a way to explain the macrocosm and its large-scale dance of birth and destruction. Planck, and many others after him, have helped describe the world of the atom and its innards using extremely small packets of energy called particles, swimming around in a pool of exotic forces.

At the nexus of a crisis

Over time, however, as physicists studied the work of both men and of others, it started to become clear that the two fields were mutually exclusive, never coming together to apply to the same idea. At this tenuous nexus, the Irish physicist John Stewart Bell cleared his throat.

Bell’s theorem states, in simple terms, that for quantum mechanics to be a complete theory – applicable everywhere and always – either locality or realism must be untrue. Locality is the idea that instantaneous or superluminal communication is impossible. Realism is the idea that even if an object cannot be detected at some times, its existence cannot be disputed – like the moon in the morning.

The paradox is obvious. Classical mechanics is applicable everywhere, even with subatomic particles that are billionths of nanometers across. That it’s not is only because its dominant player, the gravitational force, is overshadowed by other stronger forces. Quantum mechanics, on the other hand, is not so straightforward with its offering. It could be applied in the macroscopic world – but its theory has trouble dealing with gravity and the strong nuclear force, both of which have something to do with mass.

This means if quantum mechanics is to have a smooth transition at some scale into a classical reality… it can’t. At that scale, one of locality or realism must snap back to life. This is why confronting the idea that one of them isn’t true is unsettling. They are both fundamental hypotheses of physics.

The newcomer

A few days ago, I found a paper on arXiv titled Violation of Bell’s inequality in fluid mechanics (May 28, 2013). Its abstract stated that “… a classical fluid mechanical system can violate Bell’s inequality because the fluid motion is correlated over very large distances”. Given that Bell stands between Planck’s individuated notion of quantum mechanics and Einstein’s waltz-like continuum of the cosmos, it was intriguing to see scientists attempting to describe a quantum mechanical phenomenon in a classical system.

The correlation that the paper’s authors talk about implies fluid flow in one region of space-time is somehow correlated with fluid flow in another region of space-time. This is a violation of locality. However, fluid mechanics has been, and still is, a purely classical occurrence: its behaviour can be traced to Newton’s ideas from the 17th century. This means all flow events are, rather have to be, decidedly real and local.

To make their point, the authors use the mathematical equations modelling fluid flow conceived by Leonhard Euler in the 18th century, and show how they could explain vortices – regions of a fluid where the flow is mostly a spinning motion about an axis.



Assigning fictitious particles to different parts of the equation, the scientists demonstrate how the particles in one region of flow could continuously and instantaneously affect particles in another region of fluid flow. In quantum mechanics, this phenomenon is called entanglement. It has no classical counterpart because it violates the principle of locality.

Coincidental correlation

However, there is nothing quantum about fluid flow, much less about Euler’s equations. Then again, if the paper is right, would that mean flowing fluids are a quantum mechanical system? Occam’s razor comes to the rescue: Because fluid flow is classical but still shows signs of nonlocality, there is a possibility that purely local interactions could explain quantum mechanical phenomena.

Think about it. A purely classical system also shows signs of quantum mechanical behaviour. This meant that some phenomena in the fluid could be explained by both classical and quantum mechanical models, i.e. the two models correspond.

There is a stumbling block, however. Occam’s razor only provides evidence of a classical solution for nonlocality, not a direct correspondence between micro- and macroscopic physics. In other words, it could easily be a post hoc ergo propter hoc inference: Because nonlocality came after application of local mathematics, local mathematics must have caused nonlocality.

“Not quite,” said Robert Brady, one of the authors on the paper. “Bell’s hypothesis is often said to be about ‘locality’, and so it is common to say that quantum mechanical systems are ‘nonlocal’ because Bell’s hypothesis does not apply to them. If you choose this description, then fluid mechanics is also ‘non-local’, since Bell’s hypothesis does not apply to them either.”

“However, in fluid mechanics it is usual to look at this from a different angle, since Bell’s hypothesis would not be thought reasonable in that field.”

Brady’s clarification brings up an important point: Even though the lines don’t exactly blur between the two domains, knowing, more than choosing, where to apply which model makes a large difference. If you misstep, classical fluid flow could become quantum fluid flow simply because it displays some pseudo-effects.

In fact, experiments to test Bell’s hypothesis have been riddled with such small yet nagging stumbling blocks. Even if a suitable domain of applicability has been chosen, an efficient experiment has to be designed that fully exploits the domain’s properties to arrive at a conclusion – and this has proved very difficult. Inspired by the purely theoretical EPR paradox put forth in 1935, Bell stated his theorem in 1964. It is now 2013 and no experiment has successfully been able to decide if Bell was right or wrong.

Three musketeers

The three most prevalent problems such experiments face are called the failure of rotational invariance, the no-communication loophole, and the fair sampling assumption.

In any Bell experiment, two particles are allowed to interact in some way – such as being born from the same source – and separated across a large distance. Scientists then measure the particles’ properties using detectors. This happens again and again until any patterns among paired particles can be found or denied.
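To make the sought-after “pattern” concrete, here is a small sketch – my own illustration, not drawn from the fluid-mechanics paper or any particular experiment – of the most common form of the test, the CHSH inequality. Each detector is set to one of two angles, the measured correlations are combined into a single number S, and any local-realist model keeps |S| at or below 2, while quantum mechanics predicts up to 2√2 ≈ 2.83 for entangled pairs:

```python
# Sketch of the CHSH form of Bell's inequality (illustrative only).
# E(a, b) is the quantum-mechanical correlation for two spin-1/2 particles
# in a singlet state measured along directions a and b; any local-realist
# model must keep |S| <= 2.
import math

def E(a, b):
    return -math.cos(a - b)

# Detector settings (radians) that maximise the quantum violation.
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(abs(S))   # ~2.83, beyond the local-realist bound of 2
```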

Whatever properties the scientists are going to measure, the different values that that property can take must be equally likely. For example, if I have a bag filled with 200 blue balls, 300 red balls and 100 yellow balls, I shouldn’t think something quantum mechanical was at play if one in two balls pulled out was red. That’s just probability at work. And when probability can’t be completely excluded from the results, it’s called a failure of rotational invariance.

For the experiment to measure only the particles’ properties, the detectors must not be allowed to communicate with each other. If they were allowed to communicate, scientists wouldn’t know if a detection arose due to the particles or due to glitches in the detectors. Unfortunately, in a perfect setup, the detectors wouldn’t communicate at all and be decidedly local – putting them in no position to reveal any violation of locality! This problem is called the no-communication loophole.

The final problem – fair sampling – is a statistical issue. If an experiment involves 1,000 pairs of particles, and if only 800 pairs have been picked up by the detector and studied, the experiment cannot be counted as successful. Why? Because results from the other 200 could have distorted the results had they been picked up. There is a chance. Thus, the detectors would have to be 100 per cent efficient in a successful experiment.

In fact, the example was a gross exaggeration: detectors are only 5-30 per cent efficient.
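A toy simulation – my own, not connected to any actual Bell test – shows why this matters: if the detectors are even slightly more likely to register some outcomes than others, the correlation computed from the detected pairs alone can drift far from the true one.

```python
# Illustrative only: uncorrelated pairs plus a hypothetical biased detector
# that registers matching outcomes twice as often as mismatched ones.
import random

random.seed(1)
detected = []
for _ in range(100_000):                   # pairs emitted by the source
    a = random.choice([-1, +1])            # outcome at detector A
    b = random.choice([-1, +1])            # outcome at detector B, independent of A
    efficiency = 0.20 if a == b else 0.10  # low, biased detection efficiency
    if random.random() < efficiency:
        detected.append(a * b)

print(sum(detected) / len(detected))   # ~ +0.33, though the true correlation is 0
```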

One (step) at a time

Resolution for the no-communication problem came in 1998 from scientists in Austria, who also closed the rotational invariance loophole. The fair sampling assumption was resolved by a team of scientists from the USA in 2001, one of whom was David Wineland, physics Nobel Laureate, 2012. However, they used only two ions to make the measurements. A more thorough experiment’s results were announced just last month.

Researchers from the Institute for Quantum Optics and Quantum Communication, Austria, had used detectors called transition-edge sensors that could pick up individual photons for detection with a 98 per cent efficiency. These sensors were developed by the National Institute of Standards and Technology, Maryland, USA. In keeping with tradition, the experiment admitted the no-communication loophole.

Unfortunately, for an experiment to be a successful Bell-experiment, it must get rid of all three problems at the same time. This hasn’t been possible to date, which is why a conclusive Bell’s test, and the key to quantum mechanics’ closet of hidden phenomena, eludes us. It is as if nature uses one loophole or the other to deceive the experimenters.*

The silver lining is that the photon has become the first particle for which all three loopholes have been closed, albeit in different experiments. We’re probably getting there, loopholes relenting. The reward, of course, could be the greatest of all: We will finally know if nature is described by quantum mechanics, with its deceptive trove of exotic phenomena, or by classical mechanics and general relativity, with its reassuring embrace of locality and realism.

(*In 1974, John Clauser and Michael Horne found a curious workaround for the fair-sampling problem that they realised could be used to look for new physics. They called this the no-enhancement problem. They had calculated that if some method was found to amplify the photons’ signals in the experiment and circumvent the low detection efficiency, the method would also become a part of the result. Therefore, if the result came out that quantum mechanics was nonlocal, then the method would be a nonlocal entity. So, using different methods, scientists distinguish between previously unknown local and nonlocal processes.)

(This blog post first appeared at The Copernican on June 15, 2013.)

Can science and philosophy mix constructively?

‘The School of Athens’, painted by Raphael during the Renaissance in 1509-1511, shows philosophers, mathematicians and scientists of ancient Greece gathered together. Photo: Wikimedia Commons

Quantum mechanics can sometimes be very hard to understand, so much so that even thinking about it becomes difficult. This could be because its foundations lay in an action-centric depiction of reality that slowly rejected its origins and assumed a thought-centric garb.

In his 1925 paper on the topic, physicist Werner Heisenberg used only observable quantities to denote physical phenomena. He also pulled up Niels Bohr in that great paper, saying, “It is well known that the formal rules which are used [in Bohr’s 1913 quantum theory] for calculating observable quantities such as the energy of the hydrogen atom may be seriously criticized on the grounds that they contain, as basic elements, relationships between quantities that are apparently unobservable in principle, e.g., position and speed of revolution of the electron.”

A true theory

Because of the uncertainty principle, and other principles like it, quantum mechanics started to develop into a set of theories that could be tested against observations, and that, to physicists, left very little to thought experiments. Put another way, there was nothing a quantum-physicist could think up that couldn’t be proved or disproved experimentally. This way of looking at the world – in philosophy – is called logical positivism.

This made quantum mechanics a true theory of reality, as opposed to a hypothetical, unverifiable one.

However, even before Heisenberg’s paper was published, positivism was starting to be rejected, especially by chemists. An important example was the advent of statistical mechanics and atomism in the early 19th century. Both of them inferred, without actual physical observations, that if two volumes of hydrogen and one volume of oxygen combined to form water vapor, then a water molecule would have to comprise two atoms of hydrogen and one atom of oxygen.

A logical positivist would have insisted on actually observing the molecule individually, but that was impossible at the time. This insistence on submitting physical proof, thus, played an adverse role in the progress of science by delaying/denying success its due.

As time passed, the failures of positivism started to take hold on quantum mechanics. In a 1926 conversation with Albert Einstein, Heisenberg said, “… we cannot, in fact, observe such a path [of an electron in an atom]; what we actually record are the frequencies of the light radiated by the atom, intensities and transition probabilities, but no actual path.” And since he held that any theory ought only to be a true theory, he concluded that these parameters must feature in the theory, and what it projected, as themselves instead of the unobservable electron path. This wasn’t the case.

Gaps in our knowledge

Heisenberg’s probe of the granularity of nature led to his distancing from the theory of logical positivism. And Steven Weinberg, physicist and Nobel Laureate, uses just this distancing to harshly argue in a 1994 essay, titled Against Philosophy, that physics has never benefited from the advice of philosophers, and when it does, it’s only to negate the advice of another philosopher – almost suggesting that ‘science is all there is’ by dismissing the aesthetic in favor of the rational.

In doing so, Weinberg doesn’t acknowledge the fact that science and philosophy go hand in hand; what he has done is simply to outline the failure of logical positivism in the advancement of science.

At the simplest, philosophy in various forms guides human thought toward ideals like objective truth and is able to establish their superiority over subjective truths. Philosophy also provides the framework within which we can conceptualize unobservables and contextualize them in observable space-time.

In fact, Weinberg’s conclusion brings to mind an article in Nature News & Comment by Daniel Sarewitz. In the piece, Sarewitz argued that for someone who didn’t really know the physics supporting the Higgs boson, its existence would have to be a matter of faith rather than one of knowledge. Similarly, for someone who couldn’t translate electronic radiation to ‘mean’ the electron’s path, the latter would have to be a matter of faith or hope, not a bit of knowledge.

Efficient descriptions

A more well-defined example is the theory of quarks and gluons, both of which are particles that haven’t been spotted yet but are believed to exist by the scientific community. The equipment to spot them is yet to be built and will cost hundreds of billions of dollars, and be orders of magnitude more sophisticated than the LHC.

In the meantime, unlike what Weinberg and like what Sarewitz would have you believe, we do rely on philosophical principles, like that of sufficient reason (Spinoza 1663; Leibniz 1686), to fill up space-time at levels we can’t yet probe, to guide us toward a direction that we ought to probe after investing money in it.

This is actually no different from a layman going from understanding electric fields to supposedly understanding the Higgs field. At the end of the day, efficient descriptions make the difference.

Exchange of knowledge

This sort of dependence also implies that philosophy draws a lot from science, and uses it to define its own prophecies and shortcomings. We must remember that, while the rise of logical positivism may have shielded physicists from atomism, scientific verification through its hallowed method also did push positivism toward its eventual rejection. There was human agency in both these timelines, both motivated by either the support for or the rejection of scientific and philosophical ideas.

The moral is that scientists must not reject philosophy for its passage through crests and troughs of credence because science also suffers the same passage. What more proof of this do we need than Popper’s and Kuhn’s arguments – irrespective of either of them being true?

Yes, we can’t figure things out with pure thought, and yes, the laws of physics underlying the experiences of our everyday lives are completely known. However, in the search for objective truth – whatever that is – we can’t neglect pure thought until, as Weinberg’s Heisenberg-example itself seems to suggest, we know everything there is to know, until science and philosophy, rather verification-by-observation and conceptualization-by-ideation, have completely and absolutely converged toward the same reality.

Until, in short, we can describe nature continuously instead of discretely.

Liberation of philosophical reasoning

By separating scientific advance from contributions from philosophical knowledge, we are advocating for the ‘professionalization’ of scientific investigation, that it must decidedly lack the attitude-born depth of intuition, which is aesthetic and not rational.

It is against such advocacy that the Austrian-born philosopher Paul Feyerabend protested vehemently: “The withdrawal of philosophy into a ‘professional’ shell of its own has had disastrous consequences.” He means, in other words, that scientists have become too specialized and are rejecting the useful bits of philosophy.

In his seminal work Against Method (1975), Feyerabend suggested that scientists occasionally subject themselves to methodological anarchism so that they may come up with new ideas, unrestricted by the constraints imposed by the scientific method, freed in fact by the liberation of philosophical reasoning.

These new ideas, he suggests, can then be reformulated again and again according to where and how observations fit into them. In the meantime, the ideas are not born from observations but from pure thought that is aided by scientific knowledge from the past. As Wikipedia puts it neatly: “Feyerabend was critical of any guideline that aimed to judge the quality of scientific theories by comparing them to known facts.” These ‘known facts’ are akin to Weinberg’s observables.

So, until the day we can fully resolve nature’s granularity, and assume the objective truth of no reality before that, Pierre-Simon Laplace’s two-century-old words should show the way: “We may regard the present state of the universe as the effect of its past and the cause of its future” (A Philosophical Essay on Probabilities, 1814).

(This blog post first appeared at The Copernican on June 6, 2013.)

Bitcoins and the landscape of internet commerce

In a previous post, I’d laid out the technical details of what goes into mining and transacting with bitcoins (BTC). My original idea was to talk about why they are an important invention, but I also felt that the technology mattered enough to merit a post of its own.

BTCs are not a fiat currency. That means they’re a kind of money that has not acquired its value from a government, a government-backed organization or a law. Instead, BTCs acquire their value by assuring security, anonymity and translatability, and are most suited for performing transactions that could do without incompetent interference from banks and public bodies. In short, BTCs are not government-backed.

They’re ‘produced’ by users who have a piece of open source code using which they perform multiple encryptions to ‘mine’ a coin. The rest of the network then checks the validity of the coin using similar encryptions. There’s a mediating regulatory system that’s purely algorithmic, and it automatically and continuously adjusts the difficulty of mining a coin until all 21 million have been mined.

The technical intricacies behind the currency have built up to provide the virtual currency with some critical features that have, in the recent months, made BTCs both a currency and a commodity.

Bitcoins are different because they’re made differently

The commodity value rests in its currency value (and some speculative value), which isn’t just a number but a number and some implications. The number, of course, is somewhere around $124 for 1 BTC today (June 2, 2013). The implications are that you don’t have to reveal your identity if you’re an owner of BTCs. This is partly because the currency has no central issuing authority that’s regulating its flow, therefore no body that wants to know how the coins are being used and by whom.

Think World of Warcraft and in-game money: You play the game, you make some, you hoard and safeguard it. Now, imagine if you could use this money in the real world. That’s what BTCs are.

As a collateral benefit, you also get to transfer coins anonymously. You do this between what are called wallets, each of which contains ‘addresses’ to locations on the web; each address contains some bitcoins.

The technical architecture is such that once coins are sent, they stay sent; there’s no way to reverse the transaction other than by initiating a new one. There are also far fewer security concerns than those tagging along with offline currencies, such as forgery and material damage. BTCs simply exist as a string of numbers and characters on the internet. Their veracity is established by the mining network.

They are also invisible to banks and taxmen. Why are they invisible to banks? Because the implied authority of BTCs arises from its ‘democratically’ secured birth and distribution, and doesn’t need an institution like the bank to verify its validity, nor, as a result, will it be subject to a processing fee (the ‘democracy’ is ironic because there’s no one to take the blame when something about the coins goes wrong). Why are they invisible to taxmen? Because they are not issued by a government.

Thus, on the upside, there is no authority that can debase the currency, mishandle it out of greed or just plain incompetence, nor lend it out in waves with no thought spared for the reserves. As Warren Buffett wrote in 2012: “Governments determine the ultimate value of money, and systemic forces will sometimes cause them to gravitate to policies that produce inflation. From time to time such policies spin out of control.”

The threat of deflation

On the downside, because of the anonymity and irreversibility of transactions, if your system is left vulnerable to a hack while you take a nap, your BTCs can be stolen from your wallet forever, with no way to find out who took them. However, this is only a minor glitch in the bitcoin system; an even larger one is the threat of deflation.

At the moment, there are some 11 million BTCs already mined, with the remaining 10 million to be mined by 2025. Even by about 2020-2022, the supply of BTCs as regulated by the network will become so low as to be, for all practical purposes, considered constant. By then, price discovery would’ve matured, and speculations diminished, so that each coin will then have an almost fixed, instead of constantly increasing, price-tag. This process will also be aided by scale.

Unfortunately, if, by then, millions use BTCs, the absence of anyone to issue new units would lead to spiking demand and, thus, value, resulting in an enormous deflation of commodities. And in a deflationary environment, economies don’t grow; this is where a primitive crypto-currency differs from government-issued notations of currency (this is also what happened in 1636). So, a widespread adoption of BTCs is not a good thing, but it’s a good place to start thinking what about currencies needs to be fixed.

An ideal currency, for example, might transcend borders and appeal to things other than nationality to be held valuable – the way BTCs can be converted into a host of other currencies, and even be used to denote value in different countries simultaneously.

For instance, the exchange Mt. Gox lets you convert dollars into BTCs. However, after the FBI decided to crack down on the system – on the grounds that it is a violation of federal law for individuals to create private currency systems – Mt. Gox began requiring photographic identification from its users on May 30, which defeats the central purpose. Of course, Gox’s faulty policies that made it harder to obtain coins were also to blame. The moral is that bitcoins are attracting the wrong kind of attention, and that makes them an even less attractive asset.

Lighting the way ahead

At the end of the day, BTCs offer a lot of promise about refining future payments. Extensions to it like Zerocoin assist in the preservation of anonymity even if it has been violated by government interference. In the future, BTCs might even tear down paywalls and boost trade. They could even fight spam (by making you pay a thousandth of a BTC to a receiver every time you send out a mail; if you sent out a billion, you’d have paid up – all without the hassle of using a credit card)!

At the moment, though, bitcoins are assailed by important flaws as well as heady speculation that’s driving their mining, but they’re showing the way ahead well enough.

This post first appeared, as written by me, on The Copernican science blog on June 2, 2013.

Trying to understand bitcoins

In a 2008 paper, a programmer writing under the name Satoshi Nakamoto introduced an alternate form of currency that he called bitcoins. His justifications were the problems plaguing contemporary digital commerce. In Nakamoto’s words:

Completely non-reversible transactions are not really possible, since financial institutions cannot avoid mediating disputes. The cost of mediation increases transaction costs, limiting the minimum practical transaction size and cutting off the possibility for small casual transactions, and there is a broader cost in the loss of ability to make non-reversible payments for nonreversible services.

With the possibility of reversal, the need for trust spreads.

Merchants must be wary of their customers, hassling them for more information than they would otherwise need. A certain percentage of fraud is accepted as unavoidable. These costs and payment uncertainties can be avoided in person by using physical currency, but no mechanism exists to make payments over a communications channel without a trusted party.

Nakamoto’s solution was a purely digital currency – the bitcoin – that would let transacting parties remain anonymous, keep transactions very secure, and eliminate redundant fees. Unlike conventional currencies such as the rupee or the dollar, it would also be impervious to government interference. And it would accomplish all this by “being material” only on the world wide web.

Contrary to popular opinion, bitcoins don’t already exist, waiting to be found, etc. Bitcoins are created when a particular kind of transaction happens – not between two people, but between two people and a system that can be thought of as a bitcoin client. It exists on the world wide web, too.

When you log in through your client and start looking for a bitcoin, you’re given a bit of information – like your location on the web, a time, a date, an index number, etc. – called a mandatory string. You then proceed to encrypt the string using an algorithm called SHA-256. Doing this requires a computer or processor called the miner.

A legacy in the string

On the miner, an encryption algorithm performs mathematical and logical operations on the string that distort all information that would’ve been visible at first glance. For instance, if the mandatory string reads like thecopernican.28052013.1921681011, post-encryption with SHA-256 it would read 2aa003e47246e54f439873516cb1b2d61af8def752fe883c22886c39ce430563.
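Strictly speaking, SHA-256 is a one-way hash rather than an encryption – there is no key and no way to run it backwards – and computing it takes only a few lines of Python’s standard library. This is my own snippet, using the example string above; the output is simply whatever 64-character digest SHA-256 maps that string to:

```python
# Hashing the example "mandatory string" with SHA-256 (hashlib is in
# Python's standard library). The digest is always 64 hex characters,
# no matter how long the input is.
import hashlib

mandatory_string = "thecopernican.28052013.1921681011"
digest = hashlib.sha256(mandatory_string.encode()).hexdigest()
print(digest)
```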

In the case of bitcoins, the mandatory string consists of a collection of all the mandatory strings that have been used by users before it. So, encrypting it would mean you’re encrypting the attempts of all those who have come before you, maintaining a sort of legacy called the blockchain.

After this first step, when you manage to encrypt the mandatory string in a specific way – such that the first four digits are zero, say – as determined by the system, you’ve hit your jackpot… almost.

This jackpot is a block of 50 bitcoins, and you can’t immediately own it. Because you’ve performed an encryption that could just as well have been staged, you’ve to wait for confirmation. That is, another user who’s out there looking for bitcoins must have encrypted another bit of mandatory string the exact same way. The odds are against you, but you’ve to wait for it to happen or you won’t get your bitcoins.

Once another user lands up on your block, then your block is confirmed and it’s split between you – the miner – and the confirmers, with you getting the lion’s share.

Proof of work, and its denial

This establishes proof of work in getting to the coins, and implies a consensus among miners that your discovery was legitimate. And you don’t even need to reveal your identity for the grant of legitimacy. But of course, the number of confirmations necessary to consummate a “dig” varies – from six to some appropriate number.

If, somehow, you possess more than 50 per cent of the bitcoin-mining community’s encrypting power, then you can perform the mining as well as the confirmation. That is, you will be able to establish your own blockchain as you are the consensus, and generate blocks faster than the rest of the network. Over time, your legacy will be longer than the original, making it the dominant chain for the system.

Similarly, if you have transferred your bitcoins to another person, you will also be able to reverse the transaction. As stated in a paper by Meni Rosenfeld: “… if the sender [of coins] would be able, after receiving [a] product, to broadcast a conflicting transaction sending the same coin back to himself,” the concept of bitcoins will be undermined.

Greed is accounted for

Even after you’ve landed your first block, you’re going to keep looking for more blocks. And because there are only 21 million bitcoins that the system has been programmed to allow, finding each block must increase the difficulty of finding subsequent blocks.

Why must it? Because if all the 21 million were equally difficult to find, then they’d all have been found by now. The currency would neither have had time to accrue a community of its users nor the time needed to attain a stable value that can be useful when transacting. Another way to look at it is that, because bitcoins have no central issuing authority – like the RBI for the rupee – regulating the value of the currency after letting it become monopolised would be difficult.
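For what it’s worth, the real network keys its adjustment to how quickly blocks are being found rather than to how many coins remain: every 2016 blocks, the difficulty is rescaled so that blocks keep arriving roughly ten minutes apart, with each step clamped to a factor of four. A rough sketch of that feedback rule – my own simplification; the protocol actually works against a 256-bit target – looks like this:

```python
# A simplified version of bitcoin's difficulty retargeting rule.
def retarget(old_difficulty, actual_minutes, expected_minutes=2016 * 10):
    """Rescale difficulty so the next 2016 blocks take ~2 weeks (10 min/block)."""
    ratio = expected_minutes / actual_minutes
    ratio = max(0.25, min(4.0, ratio))   # the network clamps each step to 4x either way
    return old_difficulty * ratio

# If the last 2016 blocks took one week instead of two, difficulty roughly doubles.
print(retarget(1_000_000, actual_minutes=2016 * 5))   # -> 2000000.0
```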

The coin doesn’t have an intrinsic value but provides value to transactions. The only other form of currency – the one issued by governments – represents value that can be ascertained by government-approved institutions like banks. This shows itself as a processing fee when you’re wiring money between two accounts, for instance.

A bitcoin’s veracity, however, is proven just like its mining: by user confirmation.

What goes around comes around

If A wants to transfer bitcoins to B, the process is:

  1. A informs B.
  2. B creates a block that comes with a cryptographic key pair: a private key that is retained by B and a public key that everyone knows.
  3. A tells the bitcoin client, software that mediates the transaction, that he’d like to transfer 10 bitcoins to B’s block.
  4. The client transfers 10 bitcoins to the new block.
  5. The block can be accessed only with the private key, which now rests with B, and the public key, which other miners use to verify the transaction.

Since there is no intervening ‘authority’ like a bank that ratifies the transaction but other miners themselves, the processing fee is eliminated. Moreover, because of the minimal resources necessary to start and finish a transaction, there is no minimum size of transaction for it to be economically feasible. And last: a transaction is always (remarkably!) secure.
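As a toy illustration of the key pair in step 2 – using the third-party python-ecdsa package, and leaving out everything else about real bitcoin transactions (scripts, serialisation, fees) – the private key signs a message and anyone holding only the public key can check that signature:

```python
# Illustrative only: an ECDSA key pair over the secp256k1 curve (the curve
# bitcoin uses), via the third-party `ecdsa` package. B keeps the signing
# (private) key; the verifying (public) key is shared with everyone.
from ecdsa import SigningKey, SECP256k1

private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key()

message = b"A transfers 10 bitcoins to B"
signature = private_key.sign(message)

# Other miners, holding only the public key, can confirm the message
# was authorised by the private key's owner, with no bank in the middle.
assert public_key.verify(signature, message)
```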

God in the machine

While the bitcoin client can be used on any computer, special hardware is necessary for a machine to repeatedly encrypt – a.k.a. hash – a given string until it arrives at a block. Every time an unsatisfactory hash is generated and rejected by the system, a random number – called a nonce – is affixed to the mandatory string, and the whole thing is hashed again for a different result.

Because only a hash of a specific form – such as one starting with a few zeroes – is acceptable, the mining rig must be able to hash at least millions of times each second in order to yield any considerable results. Commercially available rigs hash much faster than this, though.
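Put together, the whole “dig” can be sketched in a few lines. This is my own toy version: real mining double-hashes an 80-byte block header against a full 256-bit target rather than a text string against “four leading zeroes”, but the loop is the same in spirit:

```python
# A toy proof-of-work search: keep appending a fresh nonce to the mandatory
# string and hashing until the digest starts with the required zeroes.
import hashlib

def mine(mandatory_string, prefix="0000"):
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{mandatory_string}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("thecopernican.28052013.1921681011")
print(nonce, digest)   # the winning nonce and a hash beginning with "0000"
```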

The Avalon ASIC miner costs $9,750 for an at-least-60 billion hashes per second (GH/s) unit; the BFL Jalapeno 50-GH/s miner comes at $2,499. Note, however, that Avalon accepts only bitcoins as payment these days, and BFL haven’t shipped their product for quite some time now.

The electronic architecture behind such miners is either the application-specific integrated circuit (ASIC) or the field-programmable gate array (FPGA), both of which are made to run the SHA-256 algorithm. ASICs are integrated circuits customised for a particular application. FPGAs are similar circuits that remain customisable even after manufacturing.

Because of the tremendous interest in bitcoins, and crypto-currencies in general, their economic impact is best measured not just by the present value – a whopping $130 per bitcoin – but also by the mining-rig industry, its power consumption, ‘bitcoin bubbles’, and the rise of other crypto-currencies that take an even more sophisticated approach to mitigating the pains of internet commerce.

This post first appeared, as written by me, in The Copernican science blog on May 31, 2013.

Bohr and the breakaway from classical mechanics

One hundred years ago, Niels Bohr developed the Bohr model of the atom, where electrons go around a nucleus at the center like planets in the Solar System. The model and its implications brought a lot of clarity to the field of physics at a time when physicists didn’t know what was inside an atom, and how that influenced the things around it. For his work, Bohr was awarded the physics Nobel Prize in 1922.

The Bohr model marked a transition from the world of Isaac Newton’s classical mechanics, where gravity was the dominant force and values like mass and velocity were accurately measurable, to that of quantum mechanics, where objects were too small to be seen even with powerful instruments and their exact position didn’t matter.

Even though modern quantum mechanics is still under development, its origins can be traced to humanity’s first thinking of energy as being quantized and not randomly strewn about in nature, and the Bohr model was an important part of this thinking.

The Bohr model

According to the Dane, electrons orbiting the nucleus at different distances were at different energies, and an electron inside an atom – any atom – could only have specific energies. Thus, electrons could ascend or descend through these orbits by gaining or losing a certain quantum of energy, respectively. By allowing for such transitions, the model acknowledged a more discrete energy conservation policy in physics, and used it to explain many aspects of chemistry and chemical reactions.
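The numbers behind this are easy to check (my own back-of-the-envelope illustration, not from the original post): in Bohr’s model the hydrogen levels sit at −13.6 eV/n², so an electron dropping from the third orbit to the second should emit a photon of about 1.9 eV, which is the red Balmer line near 656 nm.

```python
# Energy levels of hydrogen in the Bohr model, and the photon emitted
# when an electron drops from n = 3 to n = 2.
def bohr_energy_eV(n):
    return -13.6 / n**2            # energy of the nth orbit, in electron-volts

delta_E = bohr_energy_eV(3) - bohr_energy_eV(2)   # ~1.89 eV carried away by the photon
wavelength_nm = 1239.84 / delta_E                 # hc ~ 1239.84 eV*nm

print(round(delta_E, 2), "eV")     # 1.89 eV
print(round(wavelength_nm), "nm")  # 656 nm, the H-alpha line
```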

Unfortunately, this model couldn’t evolve continuously to become its modern equivalent because it could properly explain only the hydrogen atom, and it couldn’t account for the Zeeman effect.

What is the Zeeman effect? When an electron jumps from a higher to a lower energy-level, it loses some energy. This can be charted using a “map” of energies like the electromagnetic spectrum, showing if the energy has been lost as infrared, UV, visible, radio, etc., radiation. In 1896, Dutch physicist Pieter Zeeman found that this map could be distorted when the energy was emitted in the presence of a magnetic field, leading to the effect named after him.

It was only in 1925 that the cause of this behavior was found (by Wolfgang Pauli, George Uhlenbeck and Samuel Goudsmit), attributed to a property of electrons called spin.

The Bohr model couldn’t explain spin or its effects. It wasn’t discarded for this shortcoming, however, because it had succeeded in explaining a lot more, such as the emission of light in lasers, an application developed on the basis of Bohr’s theories and still in use today.

The model was also important for being a tangible breakaway from the principles of classical mechanics, which were useless at explaining quantum mechanical effects in atoms. Physicists recognized this and insisted on building on what they had.

A way ahead

To this end, a German named Arnold Sommerfeld provided a generalization of Bohr’s model – a correction – to let it explain the Zeeman effect in ionized helium (a hydrogen-like ion, with a single electron orbiting a heavier nucleus).

In 1924, Louis de Broglie introduced particle-wave duality into quantum mechanics, invoking that matter at its simplest could be both particulate and wave-like. As such, he was able to verify Bohr’s model mathematically from a wave perspective. Before him, in 1905, Albert Einstein had postulated the existence of light-particles called photons but couldn’t explain how they could be related to heat waves emanating from a gas, a problem he solved using de Broglie’s logic.

All these developments reinforced the apparent validity of Bohr’s model. Simultaneously, new discoveries were emerging that continuously challenged its authority (and classical mechanics’, too): molecular rotation, ground-state energy, Heisenberg’s uncertainty principle, Bose-Einstein statistics, etc. One option was to fall back to classical mechanics and rework quantum theory thereon. Another was to keep moving ahead in search of a solution.

However, this decision didn’t have to be taken because the field of physics itself had started to move ahead in different ways, ways which would become ultimately unified.

Leaps of faith

Between 1900 and 1925, there were a handful of people responsible for opening this floodgate to tide over the centuries-old Newtonian laws. Perhaps the last among them was Niels Bohr; the first was Max Planck, who originated quantum theory when he was working on making light bulbs glow brighter. He found that the smallest bits of energy to be found in nature weren’t random, but actually came in specific amounts that he called quanta.

It is notable that when either of these men began working on their respective contributions to quantum mechanics, they took a leap of faith that couldn’t be spanned by purely scientific reasoning, as is the dominant process today, but by faith in philosophical reasoning and, simply, hope.

For example, Planck wasn’t fond of a class of mechanics he used to establish quantum mechanics. When asked about it, he said it was an “act of despair”, that he was “ready to sacrifice any of [his] previous convictions about physics”. Bohr, on the other hand, had relied on the intuitive philosophy of correspondence to conceive of his model. In fact, only a few years after he had received his Nobel in 1922, Bohr had begun to deviate from his most eminent finding because it disagreed with what he thought were more important, and to be preserved, foundational ideas.

It was also through this philosophy of correspondence that the many theories were able to be unified over the course of time. According to it, a new theory should replicate the results of an older, well-established one in the domain where it worked.

Coming a full circle

Since humankind’s investigation into the nature of physics has proceeded from the large to the small, new attempts to investigate from the small to the large were likely to run into old theories. And when multiple new quantum theories were found to replicate the results of one classical theory, they could be translated between each other by corresponding through the old theory (thus the name).

Because the Bohr model could successfully explain how and why energy was emitted by electrons jumping orbits in the hydrogen atom, it had a domain of applicability. So, it couldn’t be entirely wrong and would have to correspond in some way with another, possibly more successful, theory.

Earlier, in 1924, de Broglie’s formulation was suffering from its own inability to explain certain wave-like phenomena in particulate matter. Then, in 1926, Erwin Schrodinger built on it and, like Sommerfeld did with Bohr’s ideas, generalized them so that they could apply in experimental quantum mechanics. The end result was the famous Schrodinger’s equation.

The Sommerfeld-Bohr theory corresponds with the equation, and this is where it comes “full circle”. After the equation became well known, the Bohr model was finally understood as being a semi-classical approximation of the Schrodinger equation. In other words, the model represented some of the simplest corrections to be made to classical mechanics for it to become quantum in any way.

An ingenious span

After this, the Bohr model was, rather became, a fully integrable part of the foundational ancestry of modern quantum mechanics. While its significance in the field today is great yet still one of many like it, by itself it had a special place in history: a bridge, between the older classical thinking and the newer quantum thinking.

Even philosophically speaking, Niels Bohr and his pathbreaking work were important because they planted the seeds of ingenuity in our minds, and led us to think outside of convention.

This article, as written by me, originally appeared in The Copernican science blog on May 19, 2013.


The Last Temptation

Today, I bought The Last Temptation by Nikos Kazantzakis. When I handed the Rs. 450 it cost over at the counter, it was a significant moment for me because for the last three years, after my reading habit had fallen off but before I had realized that it had, I was rejecting books that “wouldn’t appeal to the man I wanted to become”.

I wouldn’t read books that had strong religious elements (because I wanted to be an atheist), that hadn’t good reviews (because I wanted to spend time “well”), that attended to morals and values I considered irrelevant, that hosted plots drawing upon cultural memories that were simply American or simply European but surely not global, etc. I would find the smallest of excuses to avoid masterpieces.

At the same time, I would read other books – especially non-fiction and works of fantasy fiction. To this day, I don’t know whence that part of me arose that judged literary agency before it was agent, but I do know it turned me into this pontificator who thought he’d read enough books to start judging others without having to read them. A part of me has liked to think nobody can do that. And by buying a copy of The Last Temptation (and intending to read it), I think I am out of mine.

Of course, I’m also assuming the solution is something so simple…

Choices.

The Verge paid Paul Miller to stay away from the internet for a year.


We have this urge to think of the internet as something that wasn’t produced by human agency, like an alien sewerage network whose filth has infected us and our lives to the point of disease. If someone has problems and they tell you about it, don’t tell me you haven’t thought about blaming the internet. I have, too. We think it is a constantly refilled dump that spills over onto our computer screens (while also hypocritically engaging in the rhetoric of how many opportunities “the social media” hold). And then, we realize that the internet is one massive improbably impressionable relay of emotions, propped up on infrastructure that simplifies access a hundredfold. There’s nothing leaving it behind will do to you because it’s always been your choice whether or not to access it.

In fact, that’s what you rediscover.

(Hat-tip to Dhiya Kuriakose)

Which way does antimatter swing?

In our universe, matter is king: it makes up everything. Its constituents are incredibly tiny particles – smaller than even the protons and neutrons they constitute – and they work together with nature’s forces to make up… everything.

There was also another form of particle once, called antimatter. It is extinct today, but when the universe was born 13.82 billion years ago, there were equal amounts of both kinds.

Nobody really knows where all the antimatter disappeared to or how, but they are looking. Some others, however, are asking another question: did antimatter, while it lasted, fall downward or upward in response to gravity?

Joel Fajans, a professor at the University of California, Berkeley, is one of the physicists doing the asking. “It is the general consensus that the interaction of matter with antimatter is the same as gravitational interaction of matter,” he told this correspondent.

But he wants to be sure, because what he finds could revolutionize the world of physics. Over the years, studying particles and their antimatter counterparts has revealed most of what we know today about the universe. In the future, physicists will explore their minuscule world, called the quantum world, further to see if answers to some unsolved problems are found. If, somewhere, an anomaly is spotted, it could pave the way for new explanations to take over.

“Much of our basic understanding of the evolution of the early universe might change. Concepts like dark energy and dark matter might have to be revised,” Fajans said.

Along with his colleague Jonathan Wurtele, Fajans will work with the ALPHA experiment at CERN to run an elegant experiment that could directly reveal gravity’s effect on antimatter. ALPHA stands for Anti-hydrogen Laser Physics Apparatus.

We know gravity acts on a ball by watching it fall when dropped. On Earth, the ball will fall toward the source of the gravitational pull, a direction called ‘down’. Fajans and Wurtele will study if down is in the same place for antimatter as for matter.

An instrument at CERN called the anti-proton decelerator (AD) synthesizes the antimatter counterpart of protons for study in the lab at a low energy. Fajans and co. will then use the ALPHA experiment’s setup to guide them into the presence of anti-electrons derived from another source using carefully directed magnetic fields.

When an anti-proton and an anti-electron come close enough, their charges will trap each other to form an anti-hydrogen atom.

Because antimatter and matter annihilate each other in a flash of energy, they couldn’t be let near each other during the experiment. Instead, the team used strong magnetic fields to form a force-field around the antimatter, “bottling” it in space.

Once this was done, the experiment was ready to go. Like fingers holding a ball unclench, the magnetic fields were turned off – but not instantaneously. They were allowed to go from ‘on’ to ‘off’ over 30 milliseconds. In this period, the magnetic force wears off and lets gravitational force take its place.

And in this state, Fajans and his team studied which way the little things moved: up or down.

The results

The first set of results from the experiment have allowed no firm conclusions to be drawn. Why? Fajans answered, “Relatively speaking, gravity has little effect on the energetic anti-atoms. They are already moving so fast that they are barely affected by the gravitational forces.” According to Wurtele, about 411 out of 434 anti-atoms in the trap were so energetic that the way they escaped from the trap couldn’t be attributed to gravity’s pull or push on them.

Among them, they observed roughly equal numbers of anti-atoms falling out at the bottom of the trap as at the top (and sides, for that matter).

They shared this data with their ALPHA colleagues and two people from the University of California, lecturer Andrew Charman and postdoc Andre Zhmoginov. They ran statistical tests to separate results due to gravity from results due to the magnetic field. Again, much statistical uncertainty remained.

The team has no reason to give up, though. For now, they know that gravity would have to be 100 times stronger than it is for them to see any of its effects on anti-hydrogen atoms. They have a lower limit.

Moreover, the ALPHA experiment is also undergoing upgrades to become ALPHA-2. With this avatar, Fajans’s team also hopes to incorporate laser-cooling, a method of further slowing the anti-atoms, so that the effects of gravity are enhanced. Michael Doser, however, is cautious.

The future

As a physicist working with antimatter at CERN, Doser says, “I would be surprised if laser cooling of antihydrogen atoms, something that hasn’t been attempted to date, would turn out to be straightforward.” The challenge lies in bringing the systematics down to the point at which one can trust that any observation would be due to gravity, rather than due to the magnetic trap or the detectors being used.

Fajans and co. also plan to turn off the magnets more slowly in the future to enhance the effects of gravity on the anti-atom trajectories. “We hope to be able to definitively answer the question of whether or not antimatter falls down or up with these improvements,” Fajans concluded.

Like its larger sibling, the Large Hadron Collider, the AD is also undergoing maintenance and repair in 2013, so until the next batch of anti-protons are available in mid-2014, Fajans and Wurtele will be running tests at their university, checking if their experiment can be improved in any way.

They will also be taking heart from there being two other experiments at CERN, both working with antimatter and gravity, that can verify their results if they come up with something anomalous. They are the Anti-matter Experiment: Gravity, Interferometry, Spectroscopy (AEGIS), for which Doser is the spokesperson, and the Gravitational Behaviour of Anti-hydrogen at Rest (GBAR).

Together, they carry the potential benefit of an independent cross-check between techniques and results. “This is less important in case no difference to the behaviour of normal matter is found,” Doser said, “but would be crucial in the contrary case. With three experiments chasing this up, the coming years look to be interesting!”

This post, as written by me, originally appeared in The Copernican science blog at The Hindu on May 1, 2013.

The pain is gone.

Reading some pages of fiction touched off old memories that I’d forgotten existed, bringing back to life words and, with them, sensations. Words were between words, ideas between ideas, color underneath hue.

Earlier, I wrote not to remember or document, I wrote because I knew of no other way to digest the world; when I wrote, I grew up. Every phrase I pushed back into the inspiration whence it had come, like a bullet pressed back into the wound, I’d bleed, but the blood would be blood, just there, undigested like a colored liquid I could see, feel it crawling, but not speak about. So I wrote relentlessly, good or bad, profound or – as often was the case – meaningless.

And then I’d read myself, I’d grow up just a little, and there’d be a little more to think about life. I’m not much of a traveller, a mover even, so over time, what I wrote about would have become mundane, featureless, like a barren tract of land that lay rasping, unable to breathe air and already alien to water because it had eaten and suckled on itself, if not for books. I grew up on the minutes of lives very different from my own – or whatever lay beneath all the pages of my ink – and soon couldn’t think for myself without even the gentlest consideration of another character’s opinion.

As the years passed, I began to frighten me, I was not comfortable with the decisions I made for myself. It wasn’t that I feared that I’d be the only one to blame; in fact, that thought had never struck. No, it was simply the lack of awareness of the self, a full man beneath the patina of literature, of scientific intellect and philosophical leanings, built upon all the uncertainties and failures that the litterateur above had thwarted. A part of me had gambled me away for knowledge of the desires of other men and women, while another waited, rather cowered, in its weakening shadow.

Finally, one day, the world arrived, and robbed me away: from books, from stories, from oh-so-important The Others. What was left of me emerged, looking upon the world as a continuous litany of disappointment, the pain and the shock of humiliation – much of it in my own eyes – still evident, and took its first few steps. It tottered. It fell. It stood up, and it fell again. When it learned to stand up and straight, it refused to fall ever again.

The child was man, the writer was gone, the learner was robbed, and the world was upon me, smothering me, it smothers me still… and then I found books once more. I long to return to my shell but the emergence seems irreversible. Now, when I look upon the words, I see words: I see that they are red, viscous, flowing only with steep gradient, still and even tending to crenellate. I know that it is blood, but the nerves are deadened. The pain is gone. It is difficult to grow up when the pain is gone.