A ‘quantum consciousness’ absurdity at IIT Mandi

As you go downward, inward, smaller and smaller, you get more vast conscious experience. This is the idea in Indian knowledge systems. At the bottom of it, or the very base of it, at the source, is Brahman. My point is that this is actually very consistent with what we’re learning about how consciousness may be produced in our brain due to quantum effects.

The person who made this statement is Stuart Hameroff, at the 10th convocation of IIT Mandi on December 5. Hameroff is a neuroscientist and anaesthesiologist. Since 1975, he has been at the University of Arizona, where, in 1999, he became a professor in the departments of anaesthesiology and psychology and the director of the Center for Consciousness Studies. He became an emeritus professor in 2003. He is famous for his part in the Orch-OR hypothesis of how consciousness originates in the brain. Hameroff and Roger Penrose (who won the Nobel Prize for physics in 2020 for unrelated work) collaborated on the idea and published a few papers detailing their assumptions and conclusions.

‘Orch-OR’ stands for ‘orchestrated objective reduction’. Broadly speaking, the hypothesis is that the states of microtubules, which are cellular structures inside neurons, enter into a quantum superposition – becoming like the cat inside the unopened box in the Schrödinger’s cat thought-experiment. The superposition is then forced to collapse (‘the box is opened’) in favour of one state by gravity. This is taken to be the moment when consciousness comes ‘on’. One scheme of the hypothesis says that when the superposition collapses, the process should release some electromagnetic radiation – a testable prediction.

Experiments looking for this radiation have come up empty. One experiment, whose results were published in May 2022, found that the brain would need 1,000 times as many of the proteins that make up microtubules as it actually has, disfavouring the hypothesis across a range of space- and time-scales, albeit not entirely. The hypothesis also remains controversial and doesn’t find favour among most physicists, who have been critical of Penrose’s calculations of how the gravity-mediated superposition collapse happens.

To be sure, there has been some evidence that quantum phenomena might be going on in the human brain; what we don’t have evidence for is sophisticated hypotheses like Orch-OR that claim a precise origin of consciousness.

So Hameroff’s statement at IIT Mandi that the concept of the ‘Brahman’ “is actually very consistent with what we’re learning about how consciousness may be produced in our brain due to quantum effects” is at best disingenuous. What we are learning is that Orch-OR is quite unlikely to be a valid explanation of the emergence of consciousness in the brain; and we are certainly not finding support for Orch-OR by shoehorning the concept of the ‘Brahman’ and spiritual ideas of consciousness into gravity’s effects on hypothetical forms of spacetime.

But his talk gets worse. A few minutes later, Hameroff says that consciousness appears to straddle the shared border between the quantum and the classical worlds, that the phenomenon of quantum state reduction (a.k.a. superposition collapse) is akin to the emergence of the “Atman from the Brahman”, and that he wondered “whether the many faces of Krishna were a quantum superposition”. He continues:

Under normal circumstances, the people back then didn’t see Krishna in superposition but as one, but knew in different ways that there could be this apparent superposition of many possible faces of Krishna.

So here we have a scientist who once helped develop and support a difficult hypothesis to explain a famously intractable problem using the methods of science, but who has now gone so far as to claim that a) a Hindu deity was a real person, b) he had many heads, and c) people didn’t see these heads but d) knew that he could have many faces, and e) understood them to be in an “apparent superposition”. What are we to make of this waterfall of nonsense?

His claims aren’t even false: they’re unfalsifiable. Consider just one: Hameroff is postulating that a macroscopic superposition could have been real (of the face of a human being, no less, but let’s set that aside).

Quantum computers work by manipulating qubits, the smallest units of information in these machines, using quantum phenomena like superposition and entanglement. The computer’s result is the state to which the qubits’ superposition collapses. But unlike classical bits, which are implemented with robust semiconductor circuits, qubits are very fragile systems and must be protected against external disturbances, even small amounts of electromagnetic radiation. If a qubit is disturbed, it loses its quantum character, and with it the ability to participate in the computation, in a process called decoherence.
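The collapse step can be sketched with the Born rule, which assigns each measurement outcome a probability equal to the squared magnitude of its amplitude. A minimal simulation, assuming an ideal, noise-free qubit (the function and amplitudes are toy choices of mine):

```python
import math
import random

# A qubit in the equal superposition (|0> + |1>)/sqrt(2). Measurement
# collapses the state to 0 or 1 with probabilities given by the Born
# rule: |amplitude|^2. This is a toy simulation, not a real device.
def measure(amp0: complex, amp1: complex) -> int:
    p0 = abs(amp0) ** 2
    return 0 if random.random() < p0 else 1

amp = 1 / math.sqrt(2)   # equal amplitudes, so a 50/50 collapse
random.seed(42)
counts = [0, 0]
for _ in range(10_000):
    counts[measure(amp, amp)] += 1

print(counts)   # roughly 5,000 collapses to each state
```

A real qubit never behaves this cleanly; it departs from the ideal the moment the environment intrudes.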

To date, despite researchers’ best efforts, no one has built a quantum computer that completely eliminates errors arising from decoherence. Nor has anyone built a quantum computer that can solve practical problems, like modelling the stresses on a bridge or synthesising a drug with specific components, and more complex machines will face even steeper decoherence barriers. And quantum computers use subatomic (microscopic) particles as qubits.

There is no evidence to date of macroscopic objects being in perfect superposition, let alone an entity as complicated and ‘noisy’ as a human face or body. Hameroff’s other claims require less explanation as to their absurdity.

There is a good chance he was invited to IIT Mandi with full knowledge of his views. Since 1994, Hameroff has organised an annual conference called ‘Science of Consciousness’, where some consciousness researchers present notable ideas and results while others… Let me quote Tom Bartlett, writing for The Guardian in 2018:

While the Science of Consciousness event has, technically, three programme chairs and an advisory committee, it is more or less The Stuart Show. He decides who will and who will not present. And, to put it nicely, not everyone is in love with the choices he makes. To put it less nicely: some consciousness researchers believe that the whole shindig has gone off the rails, that it is seriously damaging the field of consciousness studies, and that it should be shut down.

In 2012, Hameroff said,

“Let’s say the heart stops beating, the blood stops flowing, the microtubules lose their quantum state. The quantum information within the microtubules is not destroyed, it can’t be destroyed, it just distributes and dissipates to the universe at large. … If the patient is resuscitated, revived, this quantum information can go back into the microtubules and the patient says ‘I had a near death experience’. If they’re not revived, and the patient dies, it’s possible that this quantum information can exist outside the body, perhaps indefinitely, as a soul.”

In the Copenhagen interpretation of quantum mechanics, a superposition of states collapses into a single state when an observation is made on the system. What counts as an act of observation has been refined over the years: today, physicists understand that the collapse happens when the information required to describe the superposition is no longer locally available. In this sense, Hameroff’s delineation above seems to hew close to reality – but we have absolutely no way of saying that quantum information is the same as a soul!
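That picture of collapse – phase information leaking away until the superposition is gone – can be sketched with a toy dephasing model; the per-step damping factor below is an arbitrary assumption of mine:

```python
# Density-matrix picture of the superposition (|0>+|1>)/sqrt(2):
# populations sit on the diagonal, and the off-diagonal "coherences"
# carry the phase information that makes it a superposition at all.
rho = [[0.5, 0.5],
       [0.5, 0.5]]

# Pure dephasing: every brush with the environment leaks phase
# information away, shrinking the coherences. gamma is an assumed
# per-step damping factor, not a measured quantity.
gamma = 0.5
for _ in range(20):
    rho[0][1] *= gamma
    rho[1][0] *= gamma

# The populations survive, but the coherences are effectively zero:
# what remains is a classical 50/50 mixture, not a superposition.
print(rho)
```

Once the off-diagonal terms vanish, no local measurement can distinguish the state from a classical coin flip, which is the sense in which the information is ‘no longer locally available’.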

As Jim Al-Khalili, an expert in quantum biology, said in 2020 about Orch-OR:

“There was some brief excitement about this idea initially, but I think very quickly most scientists said: ‘No, hang on a minute, just because quantum mechanics is mysterious and we don’t understand it and consciousness is mysterious and we don’t understand it, it doesn’t mean that the two have to be connected’.”

So Hameroff has been making dubious claims for a long time. For at least a decade, his claims have overlooked the differences between a knowledge system concerned with verifiable truths and the elimination of bias and one concerned with harmonising reality and perception regardless of tests – and the pitfalls of claiming that a ‘fact’ in one system could be wholly equivalent to a ‘fact’ in the other. To quote philosophy scholar S.K. Arun Murthi:

While [ancient Indian] systems of thought are called philosophical systems, they are unified in their aim: salvation and liberation of the soul. One question that has frequently been the topic of discussion in scholarly circles is whether Indian culture and civilisation really recognised an independent discipline called ‘philosophy’ as a discursive analytic tradition. The question arises because all its schools have been restricted to theological and soteriological concerns.

Surendranath Dasgupta even begins his A History of Indian Philosophy with a note of caution, that Indian thought always manifested itself “in an yearning after the Infinite” and that “Hindus never busied themselves about the investigation of the laws of nature except in so far as it was connected with the general philosophical speculations”.

The other knowledge system, science, requires evidence, but Hameroff has none. Yet in May 2022 Hameroff again wrote:

“Light is the part of the electromagnetic spectrum that can be seen by the eyes of humans and animals – visible light. … Ancient traditions characterized consciousness as light. Religious figures were often depicted with luminous ‘halos’, and/or auras. Hindu deities are portrayed with luminous blue skin. And people who have ‘near death’ and ‘out of body’ experiences described being attracted toward a ‘white light’. In many cultures, those who have ‘awakened to the truth about reality’ are ‘enlightened’.”

(A few months earlier, Laxmidhar Behera, the director of IIT Mandi, had said that he believed in, and had experienced, exorcisms, and that students should rid their friends’ parents of evil spirits using chants.)

Armchair logicians and social-media loudmouths will now interpret Hameroff’s talk as ‘evidence’ of ancient India’s intellectual supremacy and as his tacit endorsement of the physical reality of Hindu deities and of the sophisticated ideas that sages and scholars of the time contemplated. Hameroff also says that “therapies aimed at microtubule resonance e.g. with painless, safe and pleasant brain ultrasound can treat mental and cognitive disorders”, handing a ready tool to the many quacks already using quantum-physics gobbledygook to con unsuspecting care-seekers.

There may be some who truly believe such statements, while others will wield them to further an agenda while knowing well that the claims are less than flimsy. Either way, the task in front of the debunker is much more demanding: to set out not just the laws of nature and the methods of science but also the work of Hameroff and Penrose, the criticism against it, why expertise is not carte blanche, the incommensurability of the knowledge systems involved, and the fine line between ‘absence of evidence’ and ‘evidence of absence’.

A quantum theory of consciousness

We seldom have occasion to think about science and religion at the same time, but the most interesting experience I have had doing that came in October 2018, when I attended a conference called ‘Science for Monks’* in Gangtok, Sikkim. More precisely, it was one edition of a series of conferences by that name, organised every year between scientists and science communicators from around the world and Tibetan Buddhist monks in the Indian subcontinent. Let me quote from the article I wrote after the conference to illustrate why such engagement could be useful:

“When most people think about the meditative element of the practice of Buddhism, … they think only about single-point meditation, which is when a practitioner closes their eyes and focuses their mind’s eye on a single object. The less well known second kind is analytical meditation: when two monks engage in debate and question each other about their ideas, confronting them with impossibilities and contradictions in an effort to challenge their beliefs. This is also a louder form of meditation. [One monk] said that sometimes, people walk into his monastery expecting it to be a quiet environment and are surprised when they chance upon an argument. Analytical meditation is considered to be a form of evidence-sharpening and a part of proof-building.”

As interesting as the concept of the conference is, the 2018 edition was particularly so because the field of science on the table that year was quantum physics. That quantum physics is counter-intuitive is a banal statement; it is chock-full of twists in the tale, interpretations, uncertainties and open questions. Even a conference among scientists was bound to be confusing – imagine the scope of opportunities for confusion in one between scientists and monks. As if in response to this risk, the views of the scientists and the monks were very cleanly divided throughout the event, with neither side wanting to tread on the toes of the other, and this in turn dulled the proceedings. And while this was a sensible thing to do, I was disappointed.

This said, there were some interesting conversations outside the event halls: in the corridors, over lunch and dinner, and at the hotel where we were put up (where speakers in the common areas played ‘Om Mani Padme Hum’ 24/7). One of them centred on the rare (possibly) legitimate idea in quantum physics in which Buddhist monks, and monks of every denomination for that matter, have considerable interest: the origin of consciousness. While any sort of exposition or conversation involving the science of consciousness has more often than not been replete with bad science, this idea may be an honourable exception.

Four years later, I only remember that there was a vigorous back-and-forth between two monks and a physicist, not the precise contents of the dialogue or who participated. The subject was the Orch OR hypothesis advanced by the physicist Roger Penrose and the quantum-consciousness theorist Stuart Hameroff. According to a 2014 paper authored by the pair, “Orch OR links consciousness to processes in fundamental space-time geometry.” It traces the origin of consciousness to cellular structures inside neurons, called microtubules, entering a superposition of states that then collapses into a single state in a process induced by gravity.

In the famous Schrödinger’s cat thought-experiment, the cat exists in a superposition of ‘alive’ and ‘dead’ states while the box is closed. When an observer opens the box and observes the cat, its state collapses into either a ‘dead’ or an ‘alive’ state. Few scientists subscribe to the Orch OR view of self-awareness; the vast majority believe that consciousness originates not within neurons but in the interactions between neurons, happening at a large scale.

‘Orch OR’ stands for ‘orchestrated objective reduction’, with Penrose being credited with the ‘OR’ part. That is also the part at which mathematicians and physicists have directed much of their criticism.

It begins with Penrose’s idea of spacetime blisters. According to him, at the Planck scale (around 10^-35 m), spacetime is discrete, not continuous, and each quantum superposition occupies a distinct piece of the spacetime fabric. These pieces are called blisters. Penrose postulated that gravity acts on each of these blisters and destabilises them, causing the superposed states to collapse into a single state.

A quantum computer performs calculations using qubits as the fundamental units of information. The qubits interact with each other in quantum-mechanical processes like superposition and entanglement. At some point, the superposition of these qubits is forced to collapse by making an observation, and the state to which it collapses is recorded as the computer’s result. In 1989, Penrose proposed that there could be a quantum-computer-like mechanism operating in the human brain and that the OR mechanism could be the act of observation that forces its superposition to collapse.

One refinement of the OR hypothesis is the Diósi-Penrose scheme, with contributions from Hungarian physicist Lajos Diósi. In this scheme, spacetime blisters are unstable and the superposition collapses when the mass of the superposed states exceeds a fixed value. In the course of his calculations, Diósi found that at the moment of collapse, the system must emit some electromagnetic radiation (due to the motion of electrons).

Hameroff made his contribution by introducing microtubules as a candidate location for qubit-like objects, which could collectively set up a quantum-computer-like system within the brain.

There have been some experiments in the last two decades that have tested whether Orch OR could manifest in the brain, based on studies of electron activity. But a more recent study suggests that Orch OR may just be infeasible as an explanation for the origin of consciousness.

Here, a team of researchers – including Lajos Diósi – first looked for the electromagnetic radiation at the instant the superposition collapsed. The researchers didn’t find any, but the parameters of their experiment (including the masses involved) allowed them to set lower limits on the scale at which Orch OR might work. That is, they had a way to figure out how distance, time, and mass might be related in an Orch OR event.

They set these calculations out in a new paper, published in the journal Physics of Life Reviews on May 17. According to their paper, they fixed the time-scale of the collapse at 0.025 to 0.5 seconds, which is comparable to the amount of time in which our brain registers conscious experience. They found that at a spatial scale of 10^-15 m – which Penrose has expressed a preference for – a superposition that collapses in 0.025 seconds would require 1,000 times as many tubulins as there are in the brain (10^20), an impossibility. (Tubulins polymerise to form microtubules.) But at a scale of around 1 nm, the researchers worked out that the brain would need only 10^12 tubulins for their superposition to collapse in around 0.025 seconds. This is still a very large number of tubulins and a daunting task even for the human brain, but it isn’t impossible as with the collapse over 10^-15 m. According to the team’s paper,

The Orch OR based on the DP [Diósi-Penrose] theory is definitively ruled out for the case of [10^-15 m] separation, without needing to consider the impact of environmental decoherence; we also showed that the case of partial separation requires the brain to maintain coherent superpositions of tubulin of such mass, duration, and size that vastly exceed any of the coherent superposition states that have been achieved with state-of-the-art optomechanics and macromolecular interference experiments. We conclude that none of the scenarios we discuss … are plausible.
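The scale of these numbers is easy to check. In the Diósi-Penrose picture, the collapse time is roughly τ ≈ ħ/E_G, where E_G is the gravitational self-energy of the displaced superposition. A back-of-the-envelope sketch – treating each tubulin as a point mass, ignoring the geometric factors of the full treatment, and assuming the self-energy simply adds across tubulins (all simplifications mine):

```python
# Order-of-magnitude sketch of the Diosi-Penrose collapse time,
# tau ~ hbar / E_G. We ask how many tubulins, each displaced by ~1 nm,
# are needed for tau ~ 0.025 s if their self-energies add linearly.
hbar = 1.055e-34          # J s
G = 6.674e-11             # m^3 kg^-1 s^-2
m_tubulin = 1.8e-22       # kg (a ~110 kDa tubulin dimer)
d = 1e-9                  # m, assumed superposition separation

e_g_single = G * m_tubulin**2 / d      # self-energy per tubulin
tau_target = 0.025                     # s, the paper's collapse time
n_needed = hbar / (tau_target * e_g_single)

print(f"{n_needed:.1e}")   # ~1e12 tubulins
```

Under these assumptions the count lands near 10^12, the same order the paper reports for a 1-nm separation; the point is only that the orders of magnitude hang together, not that this reproduces the paper’s calculation.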

However, the team hasn’t eliminated Orch OR entirely; instead, they wrote that they intend to refine the Diósi-Penrose scheme into a more “sophisticated” version that, for example, may not entail the release of electromagnetic radiation or may provide a more feasible pathway for superposition collapse. So far, in their telling, they have used experimental results to learn where their theory should improve if it is to remain a plausible description of reality.

If and when the ‘Science for Monks’ conferences, or those like it, resume after the pandemic, it seems we may still be able to put Orch OR on the discussion table.

* I remember it was called ‘Science for Monks’ in 2018. Its name appears to have been changed since to ‘Science for Monks and Nuns’.

A latent monadology: An extended revisitation of the mind-body problem

Image by Genis Carreras

In an earlier post, I’d spoken about a certain class of mind-body interfacing problems (the way I’d identified it): evolution being a continuous process, can psychological changes effected in a certain class of people identified solely by cultural practices “spill over” as modifications of evolutionary goals? There were some interesting comments on the post, too. You may read them here.

However, the doubt was only the latest in a series of others like it. My interest in the subject was born with a paper I’d read quite a while ago that discussed two methods either of which humankind could possibly use to recreate the human brain as a machine. The first method, rather complexly laid down, was nothing but the ubiquitous recourse called reverse-engineering. Study the brain, understand what it’s made of, reverse all known cause-effect relationships associated with the organ, then attempt to recreate the cause using the effect in a laboratory with suitable materials to replace the original constituents.

The second method was much more interesting (this bias could explain the choice of words in the previous paragraph). Essentially, it described the construction of a machine that could perform all the known functions of the brain. This machine would then be subjected to a learning process, through which it would acquire new skills while retaining and using the skills it had already been endowed with. After some time, if the learnt skills, chosen to reflect real human skills, are deployed by the machine to recreate human endeavor, then the machine is the brain.

I like this method better than the reverse-engineered brain because it takes into account the ability to learn as a function of the brain, resulting in a more dynamic product. The notion of the brain as a static body is meaningless: conceiving of it as just a really powerful processor stops short of such Leibnizian monads as awareness and imagination. While these two “entities” evade comprehension, subtracting the ability to somehow recreate them doesn’t yield a convincing brain either. And this is where I believe the mind-body problem finds its solution. For the sake of argument, let’s discuss the issue differentially.

Spherical waves coming from a point source. The solution of the initial-value problem for the wave equation in three space dimensions can be obtained from the solution for a spherical wave through the use of partial differential equations. (Image by Oleg Alexandrov on Wikimedia, including MATLAB source code.)

Hold as constant: Awareness
Hold as variable: Imagination

The brain is aware, has been aware, and must be aware in the future. It is aware of the body, of the universe, of itself. In order to be able to imagine, therefore, it must concurrently trigger, receive, and manipulate different memory-based stimuli to construct different situations, analyze them, and arrive at a conclusion about the operational possibilities in each situation. Note: this process is predicated on the inability of the brain to birth entirely original ideas, an extension of the fact that a sleeping person cannot dream of something they have not interacted with in some way.

Hold as constant: Imagination
Hold as variable: Awareness

At this point, I need only prove that the brain can arrive at an awareness of itself, the body, and the universe, through a series of imaginative constructs, in order to hold my axiom as such. So, I’m going to assume that awareness came before imagination did. This leaves open the possibility that with some awareness, the human mind is able to come up with new ways to parse future stimuli, thereby facilitating understanding and increasing the sort of awareness of everything that better suits one’s needs and environment.

Now, let’s talk about the process of learning and how it sits with awareness, imagination, and consciousness, too. This is where I’d like to introduce the metaphor called Leibniz’s gap. In 1714, Gottfried Leibniz’s ‘Principes de la Nature et de la Grâce fondés en raison’ was published in the Netherlands. In the work, which would form the basis of modern analytic philosophy, the philosopher-mathematician argues that no physical processes can be recorded or tracked in any way that would point to corresponding changes in psychological processes.

… supposing that there were a mechanism so constructed as to think, feel and have perception, we might enter it as into a mill. And this granted, we should only find on visiting it, pieces which push one against another, but never anything by which to explain a perception. This must be sought, therefore, in the simple substance, and not in the composite or in the machine.

If any technique was found that could span the distance between these two concepts – the physical and the psychological – then Leibniz says the technique will effectively bridge Leibniz’s gap: the symbolic distance between the mind and the body.

Now it must be remembered that the German was one of the three greatest, and most fundamentalist, rationalists of the 17th century; the other two were Rene Descartes and Baruch Spinoza (L-D-S). More specifically, all three believed that reality was composed fully of phenomena that could be explained by applying principles of logic to a priori, or fundamental, knowledge, subsequently discarding empirical evidence. If you think about it, this approach seems flawless: if the basis of a hypothesis is logical, and if all the processes of development and experimentation on it are founded in logic, then the conclusion must also be logical.

(L to R) Gottfried Leibniz, Baruch Spinoza, and Rene Descartes

However, where this model does fall short is in describing an anomalous phenomenon that is demonstrably logical but otherwise inexplicable in terms of the dominant logical framework. This is akin to Thomas Kuhn’s philosophy of science: a revolution is necessitated when enough anomalies accumulate that defy the reign of an existing paradigm, but until then, the paradigm will deny the inclusion of any new relationships between existing bits of data that don’t conform to its principles.

When studying the brain (and when trying to recreate it in a lab), Leibniz’s gap, as understood by L-D-S, cannot be applied, for various reasons. First: the rationalist approach doesn’t work because, while we’re seeking logical conclusions that evolve from logical starts, we’re in a good position to easily disregard the phenomenon called emergence, prevalent in all simple systems of high multiplicity. In fact, ironically, the L-D-S approach might be better suited to grounding empirical observations in logical formulae, because it is only then that we run no risk of overlooking emergent paradigms.

“Some dynamical systems are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions will lead to orbits that converge to this chaotic region.” – Wikipedia

Second: it is important not to disregard that humans do not know much about the brain. As elucidated in the less favored of the two methods I’ve described above, were we to reverse-engineer the brain, we could still only make the new brain do what we already know the original does. The L-D-S approach takes complete knowledge of the brain for granted, and works post hoc ergo propter hoc (“after this, therefore because of this”) to explain it.

[youtube http://www.youtube.com/watch?v=MygelNl8fy4?rel=0]

Therefore, in order to understand the brain outside the ambit of rationalism (but still definitely within the ambit of empiricism), introspection need not be the only way. We don’t always have to scrutinize our thoughts to understand how we assimilated them in the first place, and then move on from there, when we can think of the brain itself as the organ bridging Leibniz’s gap. At this juncture, I’d like to reintroduce the importance of learning as a function of the brain.

To think of the brain as residing at a nexus, the most helpful logical frameworks are the computational theory of the mind (CTM) and the Copenhagen interpretation of quantum mechanics (QM).

xkcd #45 (depicting the Copenhagen interpretation)

In the CTM-framework, the brain is a processor, and the mind is the program that it’s running. Accordingly, the organ works on a set of logical inputs, each of which is necessarily deterministic and non-semantic; the output, by extension, is the consequence of an algorithm, and each step of the algorithm is a mental state. These mental states are thought to be more occurrent than dispositional, i.e., more tractable and measurable than the psychological emergence that they effect. This is the break from Leibniz’s gap that I was looking for.

That the inputs are non-semantic, i.e., interpreted with no regard for what they mean, doesn’t mean the brain is incapable of processing meaning or conceiving of it in any other way in the CTM-framework. The solution is a technical notion called formalization, which the Stanford Encyclopedia of Philosophy describes thus:

… formalization shows us how semantic properties of symbols can (sometimes) be encoded in syntactically-based derivation rules, allowing for the possibility of inferences that respect semantic value to be carried out in a fashion that is sensitive only to the syntax, and bypassing the need for the reasoner to employ semantic intuitions. In short, formalization shows us how to tie semantics to syntax.
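The quote’s point – that inference can respect meaning while operating only on the shapes of symbols – can be sketched in a few lines. The rule (modus ponens) and the string encoding below are my own toy choices:

```python
# Formalization in miniature: an inference rule that operates purely on
# the syntax of sentences, yet preserves their truth.
def modus_ponens(premises: set[str]) -> set[str]:
    """Derive q whenever both p and 'p -> q' are present, looking only
    at the shape of the strings, never at what they mean."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for s in list(derived):
            if " -> " in s:
                p, q = s.split(" -> ", 1)
                if p in derived and q not in derived:
                    derived.add(q)
                    changed = True
    return derived

facts = {"socrates_is_human", "socrates_is_human -> socrates_is_mortal"}
print(modus_ponens(facts))
```

Nothing in the function ‘knows’ what socrates_is_mortal means; the truth-preserving step falls out of string manipulation alone, which is exactly what formalization promises.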

A corresponding theory of networks that goes with such a philosophy of the brain is connectionism. It was developed by Walter Pitts and Warren McCulloch in 1943, and subsequently popularized by Frank Rosenblatt (in his 1957 conceptualization of the Perceptron, the simplest feedforward neural network), and by James McClelland and David Rumelhart (‘Learning the past tenses of English verbs: Implicit rules or parallel distributed processing’, in B. MacWhinney (ed.), Mechanisms of Language Acquisition (pp. 194-248), Mahwah, NJ: Erlbaum) in 1987.

(L to R) Walter Pitts (L-top), Warren McCulloch (L-bottom), David Rumelhart, and James McClelland
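Connectionism’s basic unit is easy to make concrete. A sketch of Rosenblatt’s perceptron learning rule, with the task (logical AND), the learning rate, and the epoch count chosen arbitrarily for illustration:

```python
# Rosenblatt's perceptron: a single unit that learns a linear decision
# rule from examples by nudging its weights after each mistake.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # 0 when correct; +/-1 otherwise
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # the perceptron reproduces AND: [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the weights settle on a correct rule; a single unit famously cannot learn XOR, which is part of why connectionism needed multi-layer networks.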

As described, the L-D-S rationalist contention was that fundamental entities, or monads or entelechies, couldn’t be looked for in terms of physiological changes in brain tissue but in terms of psychological manifestations. The CTM, while it didn’t set out to contest this, does provide a framework in which inputs and outputs are correlated consistently through an algorithm, with a neural network for an architecture and a Turing/Church machine for an algorithmic process. Moreover, this framework’s insistence on occurrent processes is not the defier of Leibniz: the occurrent is presented as antithetical to the dispositional.

Jerry Fodor

The defier of Leibniz is the CTM itself: if all of the brain’s workings can be elucidated in terms of an algorithm, inputs, a formalization module, and outputs, then there is no need to relegate any thoughts to a purely introspectionist level. (The domain of the CTM, interestingly, ranges all the way from the infraconscious to the set of all modular mental processes; global mental processes, as described by Jerry Fodor in 2000, are excluded, however.)

Where does quantum mechanics (QM) come in, then? Good question. The brain is a processor. The mind is a program. The architecture is a neural network. The process is that of a Turing machine. But how is the information received and transmitted? Since we were speaking of QM, more specifically the Copenhagen interpretation of it, I suppose it’s obvious that I’m talking about electrons, and electronic and electrochemical signals being transmitted through sensory, motor, and interneurons. While we’re assuming that the brain is definable by a specific processual framework, we still don’t know if the interaction between the algorithm and the information is classical or quantum.

While the classical outlook is more favorable because almost all other parts of the body are fully understood in terms of classical biology, there could be quantum mechanical forces at work in the brain because – as I’ve said before – we’re in no position to confirm or deny whether its operation is purely classical or purely non-classical. However, assuming that QM is at work, associated aspects of the mind, such as awareness, consciousness, and imagination, can be described by quantum mechanical notions such as wavefunction collapse and Heisenberg’s uncertainty principle – more specifically, by strong and weak observations on quantum systems.

The wavefunction can be understood as an avatar of the state-function in the context of QM. However, while the state-function is constantly observable in the classical sense, the wavefunction, when subjected to an observation, collapses. When this happens, what was earlier a superposition of multiple eigenstates, metaphorical to physical realities, becomes resolved, in a manner of speaking, into one. This counter-intuitive principle was best summarized by Erwin Schrödinger in 1935 as a thought experiment titled…

[youtube http://www.youtube.com/watch?v=IOYyCHGWJq4?rel=0]

This aspect of observation, as is succinctly explained in the video, is what forces nature’s hand. Now, we pull in Werner Heisenberg and his notoriously annoying uncertainty principle: if either of two conjugate parameters of a particle is measured, the value of the other is disturbed. However, when Heisenberg formulated the principle heuristically in 1927, he also, thankfully, formulated a limit of uncertainty. If a measurement could be performed within the minuscule leeway offered by this limit, then the values of the conjugate parameters could be measured simultaneously without appreciable disturbance. Such a measurement is called a “weak” measurement.
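The contrast can be caricatured numerically. A toy model of strong versus “weak” measurement of a qubit in an equal superposition, with the pointer-noise width and trial count my own arbitrary choices (this illustrates the statistics, not real quantum dynamics):

```python
import random

# A qubit in (|0>+|1>)/sqrt(2) has expectation value 0.5.
random.seed(7)
EXPECTATION = 0.5

def strong_measurement():
    """Full collapse: the outcome is 0 or 1, never anything in between."""
    return 0.0 if random.random() < 0.5 else 1.0

def weak_measurement(noise=5.0):
    """Barely couples to the system: returns the expectation value buried
    in large pointer noise, disturbing the state only slightly."""
    return EXPECTATION + random.gauss(0.0, noise)

# Strong measurements only ever return the eigenvalues 0 or 1.
assert {strong_measurement() for _ in range(1000)} <= {0.0, 1.0}

# A single weak outcome says almost nothing, but averaging many of them
# recovers the expectation value without ever forcing a full collapse.
avg = sum(weak_measurement() for _ in range(100_000)) / 100_000
print(round(avg, 2))   # close to 0.5
```

This is the trade the author is gesturing at: each weak probe buys a sliver of information at the cost of a sliver of disturbance.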

Now, in the brain, if our ability to imagine could be ascribed – figuratively, at least – to our ability to “weakly” measure the properties of a quantum system via its wavefunction, then our brain would be able to comprehend different information-states and eventually arrive at one to act upon. By extension, I may not be implying that our brain could be capable of force-collapsing a wavefunction into a particular state… but what if I am? After all, the CTM does require inputs to be deterministic.

How hard is it to freely commit to a causal chain?

By moving upward from the infraconscious domain of applicability of the CTM to the more complex cognitive functions, we are constantly teaching ourselves how to perform different kinds of tasks. By inculcating a vast and intricately interconnected network of simple memories and meanings, we are engendering the emergence of complexity and complex systems. In this teaching process, we also inculcate the notion of free-will, which is simply a heady combination of traditionalism and rationalism.

While we could be, with the utmost conviction, dreaming up nonsensical images in our heads, those images could just as easily be the result of parsing different memories and meanings (that we already know), simulating them, “weakly” observing them, forcing successive collapses into reality according to our traditional preferences and current environmental stimuli, and then storing them as more memories accompanied by more semantic connotations.