Thinking quantum

In quantum physics, every measurable quantity is conceived as a vector (in an abstract state-space). But that’s where its relation with classical physics ends, and that disconnect makes teaching it a pain.

Teaching classical mechanics is easy because we engage with it every day in many ways, and enough successful visualization tools exist to support that engagement.

Just wondering why quantum mechanics has to be so hard. All I need is to find a smart way to make visualizing it easier.

Analogizing quantum physics with classical physics creates more problems than it solves. More than anything, the practice creates a need to nip cognitive inconsistencies in the bud.

If quantum mechanics is the way the world works at its most fundamental levels, why is it taught in continuation of classical physics?

Is or isn’t it easier to teach mathematics and experiments relating to quantum mechanics and then present the classical scenario as an idealized, macroscopic state?

After all, isn’t that the real physics of the times? We completely understand classical mechanics; we need more people who can “think quantum” today.

Getting started on superconductivity

After the hoopla surrounding and attention on particle physics subsided, I realized that I’d been riding a speeding wagon all along. All I’d done was use the lead-up to the search for the Higgs boson, and the climax itself, to teach myself something. Now, it’s left me really excited! Learning about particle physics, I’ve come to understand, is not a single-track course: all the way from making theoretical predictions to having them experimentally verified, particle physics is an amalgamation of far-reaching advancements in a host of other subjects.

One such is superconductivity. Philosophically, it’s a state of existence so far removed from its naturally occurring one that it’s a veritable “freak”. It is common knowledge that everything that’s naturally occurring is equipped to resist change that energizes, to return whenever possible to a state of lower energy. Symmetry and surface tension are great examples of this tendency. Superconductivity, on the other hand, is the complete cessation of a system’s resistance to the passage of an electric current through it. As a phenomenon that doesn’t (yet) manifest in naturally occurring substances under ordinary conditions, I can’t really opine on its phenomenological “naturalness”.

In particle physics, superconductivity plays a significant role in building powerful particle accelerators. In the presence of a magnetic field, a charged particle moves in a curved trajectory because of the Lorentz force acting on it; this fact is used to guide the protons in the Large Hadron Collider (LHC) at CERN around a ring 27 km long. The bending magnets only steer the protons, though; the actual speeding-up is done by radio-frequency cavities that give the protons a push on every lap, eventually resulting in the particles traveling at close to the speed of light.
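To get a sense of the numbers involved, the radius of that curved trajectory follows from balancing the Lorentz force against the centripetal force: r = p/(qB). Here’s a minimal Python sketch, assuming for illustration the LHC’s design figures of 7 TeV protons and 8.33 T dipole magnets:

```python
# Bending radius of a charged particle in a magnetic field: r = p / (qB).
# Illustrative numbers based on the LHC's design parameters.

E_eV = 7e12            # proton energy in eV (ultrarelativistic, so E ≈ pc)
e = 1.602176634e-19    # elementary charge, C
c = 299_792_458.0      # speed of light, m/s
B = 8.33               # dipole field, T

p = E_eV * e / c       # momentum in SI units (kg·m/s), since p ≈ E/c
r = p / (e * B)        # bending radius, m

print(f"bending radius ≈ {r / 1000:.2f} km")  # ≈ 2.80 km
```

The roughly 2.8 km that falls out agrees with the LHC’s actual bending radius; the rest of the 27-km circumference is made up of straight sections.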

A set of superconducting quadrupole-electromagnets installed at the LHC with the cryogenic cooling system visible in the background

In order to generate these extremely powerful magnetic fields – powerful because of the minuteness of each charge and the velocity required to be achieved – superconducting magnets are used that generate fields of about 8.3 T (to compare: the earth’s magnetic field is 25-60 μT, i.e. over 100,000-times weaker)! Furthermore, the strength of the magnetic field is ramped up in step with the protons’ energy to hold them on the same circular path, keeping them from being swung off into the inner wall of the collider at any point!

To understand the role the phenomenon of superconductivity plays in building these magnets, let’s understand how electromagnets work. In a standard iron-core electromagnet, insulated wire is wound around an iron cylinder, and when a current is passed through the wire, a magnetic field is generated around the wire’s cross-section. Because of the coiling, the fields of the individual turns add up along the axis of the cylinder, and the iron core, whose magnetic permeability magnifies the field by a factor of thousands, itself becomes magnetic.

When the current is turned off, the magnetic field disappears almost instantaneously. Increasing the number of coils increases the strength of the magnetic field, and so does increasing the current. However, beyond a point, the heat dissipated due to the wire’s electric resistance reduces the amount of current flowing through it, consequently weakening the core’s magnetic field over time.
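Those two knobs, the number of turns and the current, appear directly in the textbook formula for the field inside a long solenoid, B = μ0·n·I (with n the turns per unit length), further multiplied by the core’s relative permeability. A quick sketch with made-up, purely illustrative numbers:

```python
import math

# Field inside a long solenoid: B = mu0 * mu_r * n * I.
# The numbers below are illustrative, not those of any particular magnet.
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T·m/A
n = 1000                   # turns per metre
I = 10.0                   # current, A

B_air = mu0 * n * I        # field with an air core
B_iron = 4000 * B_air      # an iron core (mu_r of a few thousand) multiplies the field

print(f"air core:  {B_air * 1000:.1f} mT")
print(f"iron core: {B_iron:.1f} T")
```

The iron-core figure is fictional: real iron saturates at around 2 T, which is partly why the strongest magnets rely on superconducting coils carrying enormous currents rather than on bigger cores.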

It is Ohm’s law that establishes proportionality between voltage (V) and electric current (I), calling the proportionality constant the material’s electrical resistance: R = V/I. To overcome heating due to resistance, resistance itself must be brought down to zero – which, per Ohm’s law, means sustaining a current with no voltage difference across the wire’s ends. Performing such a feat with conventional conductors is impossible: how does one quickly pass a large volume of water through a pipe across which the pressure difference is minuscule?!
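The heating in question is Joule heating, P = I²R: it grows with the square of the current, so only R = 0 eliminates it entirely. A toy comparison:

```python
# Joule heating in a wire: P = I^2 * R.
# Doubling the current quadruples the heat; zero resistance means zero heat.
def joule_heating(current_A, resistance_ohm):
    return current_A ** 2 * resistance_ohm

print(joule_heating(100, 0.5))   # 100 A through 0.5 ohm -> 5000.0 W
print(joule_heating(200, 0.5))   # double the current -> 20000.0 W
print(joule_heating(200, 0.0))   # a superconductor -> 0.0 W
```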

Heike Kamerlingh Onnes

The solution to this unique problem, therefore, lay in a new class of materials that humankind had to prepare, a class of materials that could “instigate” an alternate form of electrical conduction such that an electrical current could pass through it in the absence of a voltage difference. In other words, the material should be able to carry large amounts of current without offering up any resistance to it. This class of materials came to be known as superconductors – after Heike Kamerlingh Onnes discovered the phenomenon in 1911.

In a conducting material, the electrons that essentially effect the flow of electric current could be thought of as a charged fluid flowing through and around an ionic 3D grid, an arrangement of positively charged nuclei that all together make up the crystal lattice. When a voltage-drop is established, the fluid begins to get excited and moves around, an action called conducting. However, the electrons constantly collide with the ions. The ions, then, absorb some of the energy of the current, start vibrating, and gradually dissipate it as heat. This manifests as the resistance. In a superconductor, however, the fluid exists as a superfluid, and flows such that the electrons never collide into the ions.

In (a classical understanding of) the superfluid state, each electron repels every other electron because of their charge likeness, and attracts the positively charged nuclei. As a result, the nucleus moves very slightly toward the electron, causing an equally slight distortion of the crystal lattice. Because of the newly increased positive-charge density in the vicinity, some more electrons are attracted by the nucleus.

This attraction, which, across the entirety of the lattice, can cause a long-range but weak “draw” of electrons, results in pairs of electrons overcoming their mutual hatred and tending toward one nucleus (or the resultant charge-centre of some nuclei). Effectively, this is a pairing of electrons whose total energy was shown by Leon Cooper in 1956 to be less than the energy of the most energetic electron had it existed unpaired in the material. Subsequently, these pairs came to be called Cooper pairs, and a fluid composed of Cooper pairs, a superfluid (thermodynamically, a superfluid is defined as a fluid that can flow without dissipating any energy).

Although the sea of electrons in the new superconducting class of materials could condense into a superfluid, the fluid itself can’t be expected to flow naturally. Earlier, the application of an electric current imparted enough energy to all the electrons in the metal (via a voltage difference) to move around and to scatter against nuclei to yield resistance. Now, however, upon Cooper-pairing, the superfluid had to be given an environment in which there’d be no vibrating nuclei. And so: enter cryogenics.

The International Linear Collider – Test Area’s (ILCTA) cryogenic refrigerator room

The characteristic thermal energy of a crystal lattice is given by E = kT, where ‘k’ is Boltzmann’s constant and T, the temperature. To quieten the lattice vibrations enough for the superfluid to flow unimpeded, the crystal doesn’t have to reach absolute zero (0 kelvin, which is unattainable anyway) but does have to be cooled below the material’s critical temperature, often just a few kelvin. This is achieved by cryogenic cooling techniques. For instance, at the LHC, the superconducting magnets are electromagnets wherein the coiled wire is made of a superconducting material. When cooled to about 1.9 K using liquid helium (pre-cooled with liquid nitrogen), the wires can carry extremely large amounts of current to generate very intense magnetic fields.
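For a sense of scale, E = kT can be evaluated at the magnets’ operating temperature versus room temperature (bearing in mind kT is only an order-of-magnitude proxy for the lattice’s vibrational energy):

```python
k = 1.380649e-23   # Boltzmann constant, J/K

# Characteristic thermal energy E = kT at two temperatures.
E_room = k * 300.0   # room temperature
E_lhc = k * 1.9      # LHC magnet operating temperature (superfluid helium)

print(f"room temperature: {E_room:.2e} J")
print(f"1.9 K:            {E_lhc:.2e} J")
print(f"ratio: ~{E_room / E_lhc:.0f}x")   # ~158x less thermal jostling
```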

At the same time, however, if the energy of the superfluid itself surpassed the thermal energy of the lattice, then it could flow without the lattice having to be cooled down. Because the thermal energy is different for different crystals at different ambient temperatures, the challenge now lies in identifying materials that could permit superconductivity at temperatures approaching room-temperature. Now that would be (even more) exciting!

P.S. A lot of related topics have not been covered in this post, such as the Meissner effect, electron-phonon interactions, properties of cuprates and lanthanides, and Mott insulators. They will be taken up in the future as they’re topics that require in-depth detailing, quite unlike this post, which has been constructed as a superficial introduction only.

Yes, I had $50.

Last week, I paid $50 to sign up for entrepreneur Dalton Caldwell’s new start-up App.net. I wouldn’t have found the service by myself until it’d have been too late for me to get on their bandwagon early – and getting early on a promising bandwagon is something I’ve always missed out on. So, on a friend’s advice, I signed up for the alpha as a paying member (the other tier being paying developer), and went about finding out what I’d really signed up for. I know, it sounds stupid.

From where I was looking, App.net – by leaving out advertisers – provided access to more definition for developers to work with. Sure, it looks like Twitter for now, but I’m hoping that in the near future, it could yield a unified service through which I could manage my entire web-based social graph in real-time.

I’m not a developer. Sure, I can navigate the world of developers, but I’d be a tourist at most. What I am at heart is an information-collator and -distributor. I read almost 50 articles on various topics every day, and that’s just opinion/analysis pieces; news is separate. More than anything, I’d be thrilled if I had some way to represent myself through these commodities (like I’m doing now by sharing links and blurbs on Facebook and short quips on Twitter) in a more tractable manner.

Not to mention: I’d also like it if I was able to customize what I had to offer and serve it differently. For instance, Twitter-lists is a concept that comes closest to tracking, in real-time, news on my favorite subjects from my favorite commentators. However, Twitter’s social infrastructure has left the possibilities arising out of that fragmented. Imagine, instead, how great it’d be if I could set up one platform from atop which multiple authors could share their favorite reads in real-time, which readers could then customize and consume.

Perhaps I’m going too far, perhaps I’m imagining things, but I’d like to think such things will become possible, and that App.net will have a role to play in it. Sure, I had $50, and I could just be trying to salvage the sense in my decision right now. However, if I hadn’t thought these and such things would be possible, I wouldn’t have spent the money I’d saved up to buy some hosting space for this blog.

The marching columns

Every day is a swing between highs and lows, and in the last two months that I’ve experienced them, they’ve never been periodic. Setting off to work, the mood depends on the weather: cloudy is good, buoyant; rain is more than welcome; but a clear, blue sky and a blazing fireball in the empyrean is a dampener on my spirits, if not on anyone else’s. How will I work if I’m sweating all the time? Hmm.

The traffic in my erstwhile small city has grown to draconian proportions. Some argue that it’s a good sign, a sign of the city turning into a metropolis. I don’t like it. It not only places more minutes, more hours between work and home, home and work, between the factories and the beach, between the railway stations and the travel-shops, but also turns nice auto-drivers into pissed-off tyrants whom you simply don’t want to run into.

It takes nothing to precipitate all this but the clock striking 6. Areas and wards transform from familiar crenelations of microscopic economies, communities of traders, sweatshop toilers, and flower-braiders to hotbeds of rage, of exodus and maddened intra-urban migration… Suddenly, friends want to leave, fathers want to be left alone, mothers want to vent, and sisters want only to know what the hell’s going on.

If you’re in Chennai and traveling by auto in the evenings, I suggest you carry a book, or a Kindle, or a smartphone with which to kill time. It’s a time-warp, absolute and unrelenting chronostasis, with a profanity-drenched metronome ticking away like a time-bomb in the seat in front of yours. Of course, there are also people pushing, people shoving their way through the maze of vehicles. For every mile, I suppose it’s 10 points, and for every deceptively shallow pothole surmounted, 50.

In this crazy, demented rush, the only place anyone wants to be is on the other side of the road, the Place Where There Is Space, a vacuum on the far side that sucks the journeymen and journeywomen of Chennai into a few seconds of a non-Chennai space. When I ride in an auto on such days, I just don’t mind waiting for everyone to pass by. I don’t want to make enemies of my fellows. At the same time, I might never know them better than their mumbled gratitude when I wave them ahead.

The driver gets pissed off, though. He starts to charge more, calls me “soft”, says I don’t have what it takes to live and survive in the city. I tell him I can live and survive in the city alright; it’s just that the city isn’t the city anymore. Sometimes, the driver laughs; most times, it’s a frown. In that instant, I’m reckoned to be an intellectual, and auto-drivers seem to think intellectuals have buttloads of money.

The only thing these days that intellectuals have buttloads of is tolerance.

Tolerance to let the world pass by without doing anything about it, tolerance to letting passersby jeer at you and make you feel guilty, tolerance to the rivers that must flow and the columns that must march, tolerance to peers and idols who insist something must be done, tolerance to their mundane introspection and insistence that there’s more to doing things than just hoping that that’s a purpose in itself.

It’s circular logic, unbreakable without a sudden and overwhelming injection of a dose of chaos. When the ants scurry, the mosquitoes take off, and the elephants stampede, all to wade through an influx of uncertainty and incomprehension and unadulterated freedom, real purpose will be forged. When children grow up, they are introduced to this cycle, cajoled into adopting it. Eventually, the children are killed to make way for adults.

With penises and vaginas, the adults must rule this world. But why must they rule? They don’t know. Why must they serve? They don’t know. Yeah, sitting in an auto moving at 1 mile an hour, these questions weigh you down like lodestones, like anchors tugging at the seafloor, fastening your wayward and seemingly productive mind to an epiphany. You must surely have watched Nolan’s Inception: doesn’t the paradox of pitch circularity come to mind?

The grass is always greener on the other side, the staircase forever leads to heaven, the triangle is an infinite Möbius spiral, each twist a jump into the few-seconds-from-now future. Somewhere, however, there is a rupture. Somewhere inside my city, there is a road at the other end of which there is my city in chronostasis, stuck in a few-hours-from-now past.

Where auto-drivers aren’t pissed off because the clock struck 6, where fathers and mothers realize nothing’s slowed down but just that their clocks have been on fast-forward of late, where snaking ribbons of smoke don’t compete for space but simply let it go, no longer covet it, only join in the collective sorrow of our city’s adolescence.

Building the researcher’s best friend

One of the most pressing problems for someone conducting any research on personal initiative has to be information storage, access, and reproduction. Even if you’re someone who’s just going through interesting papers in pre-print servers and journals and want to quickly store text, excerpts, images, videos, diagrams, and/or graphs on the fly, you’ll notice that a multitude of storage options exist that are still not academically intelligent.

For instance, for starters, I could use an offline notepad with a toggle-equipped LaTeX interpreter that I could use to quickly key in equations.

So, when I stumbled across this paper written by Joshi et al. at Purdue University in 1994, I was glad someone had taken the time and trouble to think up the software architecture of an all-encompassing system that would handle information in all media and provide options for cross-referencing, modality, multiple authors, subject-wise categorization, cataloguing, data mining, etc. Here’s an excerpt from the paper.

The electronic notebook concept is an attempt to emulate the physical notebook that we use ubiquitously. It provides an unrestricted editing environment where users can record their problem and solution specifications, computed solutions, results of various analyses, commentary text as well as handwritten comments.

The notebook interface is multimodal and synergetic, it integrates text, handwriting, graphics, audio and video in its input and output modes. It functions not only as a central recording mechanism, it also acts as the access mechanism for all the tools that support the user’s problem solving activities.

(I’d like to take a moment to stress good data-mining because it plays an instrumental role in effecting serendipitous discoveries within my finite corpus of data, i.e. (and as a matter of definition) if the system is smart enough to show me something that it knows is related to what I’m working on and that I don’t know is related to what I’m working on, then it’s an awesome system.)

The Purdue team went on to implement a prototype, but you’ll see it was limited to being an interactive PDE-solver. If you’re looking for something along the same lines, then the Wolfram Mathematica framework has to be your best bet: its highly intuitive UI makes visualizing the task at hand a breeze, and lets you focus on designing practical mathematical/physical systems while it takes care of getting problems out of the way.

However, that misses the point. Every time I come across an interesting paper, some sections of which could fit into a corpus of knowledge I’m assimilating at the time, I fall back on a fragile customization of the WordPress CMS that “works” with certain folders on my hard drive. And by “works”, I mean I’m the go-between semantic interpreter – and that’s exactly what I need an automaton for. On one of my other blogs – unnamed here because it’s an online index of sorts for me – I have tagged and properly categorized posts that are actually bits and pieces of different research paths.

For products that offer the functionalities I’m looking for, I’m willing to pay, and I’m sure many others would be too, given how much handier such tools are becoming by the day. Better yet if they’re hosted on the cloud: I wouldn’t have to bother about backing up too much and could also enjoy the added benefit of “anywhere-access”.

For now, however, I’m going to get back to installing the California Digital Library’s eXtensible Text Framework (CDL-XTF) – a solution that seems to be a promising offline variant.

Assuming this universe…

Accomplished physicists I have met or spoken with in the last four months professed little agreement over which parts of physics were set in stone and which were simply largely-corroborated hypotheses. Here are some of those parts, each with a short description of the dispute.

  1. Bosons – Could be an emergent phenomenon arising out of fermion-fermion interaction; current definition could be a local encapsulation of special fermionic properties
  2. Colour-confinement – ‘Tis held that gluons, mediators of the colour force, cannot exist in isolation outside hadrons (which are composed of quarks held together by gluons); while there is experimental evidence that the energy required to pull a quark free is much greater than the energy needed to pull a quark-antiquark pair out of the vacuum, confinement itself hasn’t yet been rigorously proven (ref: lattice formulation of gauge theory)
  3. Massive gluons – The existence of a mass gap in Yang–Mills theory is a Millennium Prize problem
  4. Gravity – Again, could be an emergent phenomenon arising out of energy-corrections of hidden, underlying quantum fields
  5. Compactified extra-dimensions & string theory – There are still many who dispute the “magical” mathematical framework that string theory provides because it is a perturbative theory (i.e., background-dependent); a non-perturbative definition would make its currently divergent approximations convergent

If you ever get the opportunity to listen to a physicist ruminate on the philosophy of nature, don’t miss it. What lay-people dispute daily are the macro-physical implications of a quantum world; the result is the all-important subjective clarification that lets us think better. What physicists dispute is the constitution of the quantum world itself; the result is the more objective set of phenomenological implications for everyone everywhere. We could use both debates.

“God is a mathematician.”

The more advanced the topics I deal with in physics, the starker I observe the divergence between philosophy and mathematics to be. While one seems to drill right down to the bedrock of all things existential, the other assumes disturbingly abstract overtones, often requiring multiple interpretations to possess any semblance of meaningfulness.

This is where the strength of the mind is tested: an ability to make sense of fundamental concepts in various contexts and to recall all of them at will so that complex associations don’t remain complex but instead break down under the gaze of the mind’s eye to numerous simple associations.

While computation theory would have us hold that a reasonable measure of any computing mechanism’s strength is the number of calculations it can perform per second, when it comes to high-energy physics, the strength lies in the quickness with which new associations are established where old ones existed. In other words, where unlearning is just as important as learning, we require adaptation and readjustment more than faster calculation.

In fact, the mathematics is such: at the fringe, unstable, flitting between virtuality and a reality that may or may not be this one.

One could contend that the definition of mathematics in its simplest form – number theory, the fundamental theorems of algebra, etc. – is antithetical to the kind of universe we seem to be unraveling. If we consider the example of physics, and the divergence of philosophy from theoretical physics, then this contention unfortunately holds.

However, at the same time, it seems to be outside the reach of human intelligence to conceive a new mathematical system that becomes simpler as we move closer to the truth and is ridiculously more complex as one strays from it toward simpler logic – not to mention outside the reach of reasoning! How would we then educate our children?

However, it is still unfortunate that only “greater” minds can comprehend the nature of the truth – what it comprises, what it necessitates, what it subsumes.

With this in mind: we also face the risk of submitting to broader and broader terms of explanation in order to make things simpler and simpler; we throw away important aspects of the nature of reality from our textbooks because people may not understand them, or may be disturbed by such clarity, and the search somehow ends up seeming less relevant to daily life. That is an outcome we must keep any activity in the name of, and for the sake of, science from precipitating.

On Monday, I attended a short lecture by the eminent Indian particle physicist Dr. G. Rajasekaran, or Rajaji as he is referred to by his colleagues, on the Standard Model of high-energy physics and its future in the context of the CERN announcement of July 4, 2012. While his talk itself straightened a few important creases in my superficial understanding of the subject, two of its sections continue to nag at me.

The first was his attitude toward string theory, which was laudatory to say the least and stifling to say the most. When asked by a colleague of his from the Institute of Mathematical Sciences about constraints placed on string theory by theoretical physics, Rajaji dismissed the question as a political “move” to discredit something as exotic as the mathematical framework that string theory introduced.

After a few short, stunted sniggers rippled through the audience, there was silence as everyone realised Rajaji was serious in his allegation: he had dismissed the question as some political comment! Upon some prodding by the questioner, Rajaji proceeded to answer in deliberately uncertain terms about the reasons for the supertheory’s existence and its hypotheses.

Now, I must mention that earlier in his lecture, he had mentioned that researchers, especially of high-energy/particle physics, tended to dismiss new findings just as quickly as they were ready to defend their own propositions because the subject they worked with was such: a faceless foe, constantly shifting form, one moment yielding to one whim, one serendipity, and the next moment, to the other (ref: Kuhn’s thesis). And here he was, living his words!

The second section was his conviction that the future of all kinds of physics lay in the hands of accelerator physics. His position that experimental proof is the sole arbiter of all things physical he summarised in a memorable statement:

God is a mathematician, but even he/she/it will wait for experimental proof before being right.

This observation arose when Rajaji decided to speculate aloud on the future of experimental particle physics, especially on what an observable proof of string theory might look like.

He finished by ruing that accelerator physics is an oft-ignored subject in many research centres and universities; now that we have sufficiently explored the limits and capabilities of SM-physics, the physics to follow (SUSY, GUT, string theory, etc.) necessitates collision energies of the order of 10^19 GeV (the “upgraded” run of the LHC leading up to July 2012 delivered a collision energy of 8,000 GeV).

These are energies well outside the ambit of current human capability. It may well be admitted at this point that an ultimate explanation of the universe and all it contains is not going to be simple, and definitely not elegant. Every step of the way, we seem to encounter two kinds of problems: one cardinal (particle-kinds and their properties), the other metaphysical (why three families of particles and not two or four?).

While the mathematics is “reconfigured” to include such new findings, the philosophy acquires a rupture, a break in derivability, and implications become apparent ex post facto.

First, there are two simple axioms: that a particle can exist in two states at the same time (superposition), i.e. both 1 and 0, and that the information carried by a particle is disturbed, effectively destroyed, the moment it is read. Braid these principles with the exotic phenomenon called entanglement, where two particles yield perfectly correlated results upon observation even though they may be miles apart, and you get quantum computing and networking.
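That “same answer at both ends” behaviour can be sketched with a toy statevector simulation. This is hypothetical illustration code, not any real quantum-computing or networking API: it prepares the Bell state (|00⟩ + |11⟩)/√2 and samples measurements of it; the two “distant” qubits agree every single time.

```python
import math
import random

# Toy statevector sketch over the basis states 00, 01, 10, 11:
# the Bell state (|00> + |11>)/sqrt(2).
amp = 1 / math.sqrt(2)
bell = {"00": amp, "01": 0.0, "10": 0.0, "11": amp}

def measure(state, rng):
    """Sample a basis state with probability |amplitude|^2 (Born rule)."""
    outcomes = list(state)
    weights = [state[o] ** 2 for o in outcomes]
    return rng.choices(outcomes, weights)[0]

rng = random.Random(0)
samples = [measure(bell, rng) for _ in range(1000)]

# Each qubit alone looks like a fair coin, but the two always agree:
assert all(s in ("00", "11") for s in samples)
print("both qubits agreed in every one of", len(samples), "runs")
```

The correlation falls straight out of the state having zero amplitude on the disagreeing outcomes 01 and 10; no signal travels between the qubits in the sketch.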

And here’s the router for it.


How many of you are familiar with the concept of a Google Hangout?

Better yet, how many of you are web developers and are familiar with the concept of a Google Hangout?

I assure you, the number in answer to this question is low in India. At least, that’s the conclusion I’m forced to reach after having liaised with a large number of such developers for a news-daily in south India. Just today, I’d asked Mr. SH, who heads a small web-dev firm in Chennai, if we could have a Hangout in the evening to iron out some creases. His response: “OK… What’s a Hangout? Do I need to be in front of a computer for it?”

Others didn’t answer all that differently, either. Sure, it could be that the idea of a Hangout isn’t as ubiquitous as some would like it to be, but I can’t see how a developer can’t have heard of it.

What smells like a harmful thought?

A lot about our biology is intertwined with our culture. When such an association is encountered for the first time, it could sound interesting, intriguing even, but with time, it becomes an evident relationship because our culture plays an important role in our upbringing. For example, think about how the body’s olfactory wiring affects what we think about cow-dung.

Broadly speaking, scatological odors snub any inclination on the individual’s part to approach the source, or even think about it for prolonged periods. Talk of excrement at the dinner table quells the taste buds, drawing an immediate response in the form of revulsion.

Blech!

However, what could cause such a response is not a closed question. One answer that makes sense is that the brain interprets that excrement will be poisonous if consumed in any amount, and so turns the mind away from engaging with it in any fashion so as to minimize the risk of poisoning.

At the same time, in India, cow-dung is regularly used as fuel for cooking-fires and also for a multitude of ritualistic purposes. There is no revulsion of any sort amongst those who handle it; if there ever was any in the past, it has since been subdued.

Only if some test existed to exclude familiarity and false conviction, and to establish that the excrement-poison association really has been eliminated in the individual’s mind, could this argument be verified.