
Tuesday, March 10, 2009

Quantum Computing and Mind Simulation

At the end of a recent discussion (in the comments to this post), Allen and I started to discuss whether quantum computing theory had something to say about the potential for simulating the human mind on a computer. I then read a couple of review articles on quantum computing he referenced. Before getting to what I learned (below), I wanted to explain my prior philosophical view on the simulation question.

The Russellian Stance, Functionalism, and Simulation

I endorse a form of the Russellian approach to solving the hard problem of consciousness. Russell described the world as a causal network of events, and he noted that physical theory only describes the extrinsic or dispositional nature of these events. These events also have an intrinsic nature which ultimately grounds the qualitative and experiential character of consciousness. I think quantum mechanics provides support for this view: quantum measurements seem to fit perfectly into this Russellian picture as the base-level events which ultimately underpin both the physical and experiential facts.

Functionalism is the thesis that the mind can be described as an abstract causal system. As a practical matter, the functionalist’s description is taken at a coarse-grained level – i.e. there is some minimal scale below which the actual physical details of the brain/body system are assumed to be irrelevant to its function. It follows that such a functional model could be realized in any number of physical ways, including via computer. Computationalism is the variety of functionalism which pursues the computer modeling approach.

Now, I think the Russellian stance on functionalism and the potential for simulation is nuanced. On the one hand, functionalism is seen as misguided because it only considers extrinsic causal structure. On the other hand, unlike an old fashioned dualist, the Russellian shouldn’t rule out the possibility that the mind could be simulated. The mind, after all, is a product of a natural system – we don’t need extra immaterial stuff to explain it. Perhaps a simulation can get the functional structure right and the correct intrinsic experiential character will come along “for free.”

The problems come with the coarse-graining. In every functionalist account I’ve seen, this takes place at a scale where quantum mechanics is assumed to be safely irrelevant. But every process in the body ultimately is grounded in molecular, atomic and sub-atomic activity which must be described quantum mechanically. So, a coarse-grained, approximate simulation of the brain/body’s causal structure on some physical device would likely miss crucial details which lie at the quantum level (details I think are simultaneously essential to both extrinsic function and conscious experience).

How would a functionalist respond to the quantum question? First, many believe distinctive quantum phenomena effectively “wash out” in a macroscopic system like the human brain/body. This belief is often based on the presumed impact of environmental decoherence. I’m not going to pursue this issue in this post (I discuss this in some of my posts on quantum biology). Another response to the quantum question is an appeal to a commonly believed thesis that any physical system (including a quantum system) can be modeled by a classical computer -- so the traditional functionalist/computationalist approach wouldn’t be missing anything distinctive anyway. This is the view that I wanted to explore by reading up on quantum computing theory.

[Please note the discussion that follows may suffer even more than usual from my ignorance of the subject matter.]

Simulation and the Church-Turing thesis

So, is it true that any physical system, including a quantum system, can be simulated by a classical computer? Well, this idea has been defended by appeals to versions of the Church-Turing thesis. The original Church-Turing thesis states (in a formulation from this article) that any effectively calculable function can be computed using a Turing machine. (For a description of a Turing machine, see here.) Now it seems that what this thesis really meant can probably only be appreciated by studying its original logical/mathematical context. In his SEP article on the C-T thesis, Jack Copeland first traces the development of the ideas associated with the thesis in the pioneering work on calculation and computing by Turing, Church and others beginning in the 1930’s. Then, Copeland spends much of the rest of the article objecting to how the thesis has been misunderstood and misused by philosophers and computing theorists. The C-T thesis did not purport to say that all physical systems, or even all machines regardless of architecture, could be simulated by a Turing machine. It certainly did not prove anything of the sort. (This discussion reminded me of similar debates over the philosophical applicability of Gödel’s incompleteness theorems beyond their original context – see old posts here and here.)
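
As a concrete reminder of what a Turing machine actually is, here is a minimal sketch in Python (my own toy illustration, not something drawn from Copeland’s article or the original papers): a finite table of rules drives a read/write head back and forth over a tape. The particular rule table below just increments a binary number.

```python
# A minimal Turing machine simulator (illustrative toy, not from the sources
# discussed above). The rule table below increments a binary number.

def run_turing_machine(rules, tape, state="right", blank="_", max_steps=1000):
    """Run a one-tape Turing machine until it reaches the 'halt' state."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    output = [cells.get(i, blank) for i in range(min(cells), max(cells) + 1)]
    return "".join(output).strip(blank)

# Rules for binary increment: scan right to the end, then carry leftward.
rules = {
    ("right", "0"): ("right", "0", "R"),
    ("right", "1"): ("right", "1", "R"),
    ("right", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt",  "1", "L"),
    ("carry", "_"): ("halt",  "1", "L"),
}

print(run_turing_machine(rules, "1011"))   # -> "1100"
```

The point of the toy is just that everything the machine does is fixed by a finite rule table; the C-T thesis concerns which functions such rule tables can compute, not which physical systems they can imitate.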

Nevertheless, as long as one is careful not to inappropriately invoke the authority of the original C-T thesis, one can explore more expansive versions and try to evaluate their validity.

Physical Versions of the C-T thesis

In her review article on Quantum Computing, Dorit Aharonov presents two versions of the thesis. First, (p.3), she presents a simple “physical” version: “A Turing machine can compute any function computable by a reasonable physical device.” She says this is something which cannot be proven, but that no known counterexamples exist. In particular, quantum computers are not believed capable of computing functions non-computable by a Turing machine.

She then quickly notes that: “However, in the theory of computation, we are interested not only in the question of which functions can be computed, but mainly in the cost of computing these functions.” [Emphasis original] The cost is evaluated by asking whether the computational resources needed grow as a polynomial or an exponential function of the input size; only the former class of computations counts as tractable.
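
To make the polynomial/exponential distinction vivid, here is a quick back-of-the-envelope comparison (my own illustrative numbers, not Aharonov’s), assuming each elementary step takes a nanosecond:

```python
# Illustrative comparison of polynomial vs. exponential cost growth
# (toy numbers; assume one computational step takes a nanosecond).
for n in (20, 40, 60, 80):
    poly_seconds = n ** 3 * 1e-9
    expo_seconds = 2 ** n * 1e-9
    print(f"n={n:3d}:  n^3 -> {poly_seconds:.1e} s   2^n -> {expo_seconds:.1e} s")
```

At n = 80 the polynomial cost is still a fraction of a millisecond, while the exponential cost runs to tens of millions of years.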

After also discussing the superior efficiency of a probabilistic version of the Turing machine, she presents another thesis to consider (p.4): “The modern Church thesis: A probabilistic Turing machine can simulate any reasonable physical device in polynomial cost.” We have a great deal of evidence, though not proof, that this thesis is contradicted by quantum computers.

The Advantages of Quantum Algorithms

Aharonov explains, first, that quantum computers can simulate classical computers with little loss in efficiency. On the flip side, it appears classical computers can simulate quantum computers, but only at exponential cost. What we really want, though, is a positive demonstration of how far quantum computers can outperform their classical counterparts.

The a priori expectation might be that the ability to manipulate qubits, which can exist in superpositions rather than being limited to just two states, would lead to great increases in computing power. Because a measurement must be performed to extract results (collapsing the superpositions), however, the power of this idea is muted. Other, more subtle sources of limitation on quantum computing are discussed later in the paper.
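
As a small illustration of why measurement mutes the advantage, here is a toy numpy sketch (mine, not anything from the paper): a qubit can sit in an equal superposition of 0 and 1, but each measurement hands back only a single classical bit, sampled according to the squared amplitudes.

```python
# Sketch: a qubit in superposition yields only one classical bit per
# measurement (illustrative toy, not from Aharonov's paper).
import numpy as np

ket0 = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate

psi = H @ ket0                                    # equal superposition of |0> and |1>
probs = np.abs(psi) ** 2
probs = probs / probs.sum()                       # guard against rounding error

rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)  # simulated measurements
print(probs, samples.mean())                      # ~[0.5 0.5], mean near 0.5
```

All the structure of the superposition lives in psi; what you can actually read out is just the stream of 0s and 1s.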

Despite this, many investigations into quantum computing over the years have found quantum algorithms which improve efficiency. One of these, Shor’s algorithm, factors integers in polynomial time, where all known classical algorithms have exponential cost, thus crossing the crucial boundary. It must be pointed out that there is as yet no proof that a classical polynomial-time factoring algorithm is impossible.
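
To give a flavor of the factoring story, here is the standard classical skeleton of Shor’s reduction from factoring to period finding (a textbook reduction sketched in Python; the quantum speedup lives entirely in the period-finding subroutine, which is done here by brute force and is therefore exponential in the number of digits):

```python
# Classical skeleton of Shor's reduction: factoring N reduces to finding the
# period r of a**x mod N. The brute-force find_period below stands in for the
# quantum subroutine, which is where the polynomial-time speedup would come from.
from math import gcd

def find_period(a, N):
    """Smallest r > 0 with a**r % N == 1 (requires gcd(a, N) == 1)."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def try_to_factor(N, a):
    g = gcd(a, N)
    if g != 1:
        return g                          # lucky: a already shares a factor with N
    r = find_period(a, N)
    if r % 2 == 1:
        return None                       # bad choice of a; pick another
    candidate = gcd(pow(a, r // 2) - 1, N)
    return candidate if 1 < candidate < N else None

print(try_to_factor(15, 7))               # -> 3 (7 has period 4 mod 15)
```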

Conclusions

Most of what I’ve discussed in Aharonov’s article above comes from the introduction. In the ensuing 60 pages she goes into more detail about the nature of computers and computing, various models for quantum computing algorithms, the issues of noise correction and fault tolerance, and some of her own ideas of what quantum computing theory says about the boundary between quantum and classical regimes in physics.

On the key question of what quantum computers can do better than classical ones, one is left with the impression that the question is much more subtle than might first be imagined. We have some exciting theoretical results, but perhaps fewer than might have been anticipated on a naïve expectation. At the same time, it seems we’re still in the early phase of growth in our knowledge of the field. A lot of interesting work and new developments lie ahead. (The engineering efforts underway toward building quantum computers, and the challenges they face, are another interesting topic.)

Let me return to the question of what all this might mean for the philosophy of mind, assuming (as I do) that the quantum-level grounding of biology contributes meaningfully to the mind’s function. I think a modest conclusion is called for. The fact that a classical computer can simulate a quantum computer only at exponential cost suggests that the project of simulating a human mind is impractical, though not blocked in principle. This conclusion is broadly consistent with my philosophical stance regarding the simulation project.

[P.S: after drafting this I recalled there was a good debate (thanks to Tanasije and Mike) on the simulation topic in the comments to this May 2008 post on Russellian theory. My memory is awful.]

Tuesday, May 15, 2007

In the Beginning was the Qubit

So, how did this party get started? In Programming the Universe (see also prior posts on this topic), Seth Lloyd would like to retell the cosmological story with qubits instead of elementary particles. However, the section of the book (chapter 3) where he does this doesn’t really add much to the standard account. He interprets fluctuations in quantum fields as superpositions of bits whose possible outcomes "0" and "1" represent low and high energy density. The collapse (or decoherence, following Lloyd’s preferred interpretation) of these superpositions creates pockets of high density which can then be the target of gravitational attraction. If you take out the references to bits, his story seems to be the standard cosmological model. If the universe is really a quantum computer, then matter-energy fields (and space-time) would be derived from qubits.

In other words, the real innovation would come if the computational model helps point us toward a theory of quantum gravity. And here, Lloyd does have some ideas. The book has just a few pages on this, but more detail is found in his paper, “A theory of quantum gravity based on quantum computation”. Some impressions from this paper are below (with the caveat that as usual I can’t understand large portions of it).


The idea is that the metric structure of space-time and the behavior of quantum matter fields “are derived from and arise out of an underlying quantum computer. (p.2)”. One starts with the fact that a quantum computer can be thought of as a universal theory for discrete quantum mechanics. Quantum computers represent a causal network (=computational history) of interactions – actually superpositions of such networks. These can be represented as a graph, similar to those in causal set theory. Now, for the matter side of things, note that at each vertex of the graph (=logic gate), qubits can be transformed or not. When they are transformed, this is a scattering event. Each computation is a superposition of different computational histories, one for each pattern of scattering events. The events are the matter.

On the gravity side of things, the superpositions of these computational histories will be seen to correspond to a fluctuation of space-time geometry. Lloyd’s strategy is to “embed the computational graph in a space-time manifold by mapping [the computational graph] C into R4 via an embedding mapping E. (pp.6-7)”. He says that if you do this, then general covariance will follow from the fact that the informational flow through the network is independent of the way the computation is embedded in space-time. The next step (which seems to be the key part of the paper) makes some additional assumptions so that the geometries derived from the computation explicitly obey the Einstein equations (in their discrete Regge calculus form).

Now I can’t follow all the steps here, but what I think he is doing amounts to a demonstration of how a quantum computation could be consistent with the emergence of general relativistic space-time, rather than a demonstration that it would actually do so as a matter of course. He ends up being at least partially circular in invoking our knowledge of the Einstein equations to achieve his explicit results (if someone would like to correct me on this, please do). In contrast, Fotini Markopoulou’s ambition (see here and here) is to show that the emergence of space-time is a general consequence of an underlying quantum micro-theory (likewise Olaf Dreyer).

The paper finishes with some ideas on how such a theory would impact a variety of topics in cosmology: for instance, singularities correspond to bits entering or leaving the universe, black holes do lose information, and the model can handle different stages of cosmological evolution. This is intriguing stuff, and I’ll be watching to see whether these ideas are developed further.

Something which intrigues me is how one is supposed to think about this new proposed atom of the universe, the qubit. A practical quantum computer uses properties of familiar particles (spin of an electron or polarization of a photon) as qubits. But if these particles (as well as space-time itself) are derived from these postulated elementary qubits, what are they? Is the superposed atomic qubit just a pure possibility of existence?

[UPDATE: 25 May, 2007. My comments in the first paragraph of this post are a bit unfair, since later in the book (Ch.8 p.196) Lloyd revisits the history of the universe incorporating some of the ideas from his sections on quantum gravity and complexity. In that later discussion, the computation does indeed have priority status over matter and gravity.]

Monday, April 30, 2007

Physical Systems Process Information: So What?

Seth Lloyd’s book (see prior post) has a nice passage in a chapter subsection entitled “So What?” (p. 168). If the universe can indeed be viewed as a quantum computer, why should we care? He poses this further question: “Do we really need a whole new paradigm for thinking about how the universe operates?” Lloyd says (and it would seem difficult to disagree) that the dominant paradigm of the age of science has been that of universe as mechanism. He proposes a new paradigm: “I suggest thinking about the world not simply as a machine, but as a machine that processes information (p.169 – emphasis original).” In my opinion, however, Lloyd’s discussion, while often suggestive, doesn't really answer the "so what" question. Actually, he underplays how radical and interesting a notion this new paradigm really could be.

Unfortunately, in the section quoted from above, Lloyd doesn’t follow through in offering a philosophically compelling interpretation of this new paradigm. He goes on to discuss how the view might better (technically) account for complexity and how it could help in the quest for a theory of quantum gravity – both topics of subsequent sections. Other statements of this sort sprinkled throughout the book are neutral in tone and vague in terms of what they really mean. Here’s a typical quote: “All physical systems register information, and when they evolve dynamically in time, they transform and process that information. (Prologue, p. xi.)”.

I became frustrated at this: what does it really mean to say physical systems process information? In my own (perhaps uninformed) view of classical computing, the only true information processors are the human beings who provide the input, write the program, and interpret the output. The semantics of information processing are provided by humans exclusively; the rest is syntax. This issue is discussed in one subsection of Lloyd’s book, entitled “Meaning” (p.24), where Lloyd relates being asked by a student: “’But doesn’t information have to mean something?’” The response: “’You’re right that when we think of information we normally associate it with meaning,’ I answered. ‘But the meaning of ‘meaning’ is not clear.’” In the rest of the section (written presumably after some reflection on this), he fails to improve on this answer. He discusses how bits can represent information, and then says “the interpreter must provide the meaning.” Note there is nothing innovative or even quantum mechanical about this discussion.

Here’s the unstated radical interpretation of Lloyd’s theory: if physical interactions can ubiquitously be described in terms of information processing, this implies that something we think belongs uniquely to human (and some animal) agents is also a feature of more elementary physical systems: namely, the possession of semantic properties, or intentionality. If one is unwilling to take this step, that’s fine, but then there is no important difference between the new and the old paradigm when it comes to interpreting how human life and mind can fit into the picture of an otherwise lifeless mechanistic universe.

Postscript:
It’s not a coincidence that Lloyd’s approach to the measurement problem of QM is conservative. He believes the decoherent-histories approach is practical and useful enough to de-emphasize worries about foundational interpretation.

Tuesday, April 24, 2007

Living and Computing in Lloyd's Universe

I recently read Seth Lloyd’s Programming the Universe. This is a thought-provoking (if a bit meandering) book which explains why we should envision the universe as a quantum computer and how doing so may illuminate our understanding of some difficult questions (it is out in paperback – page references below are to this edition). In addition it offers a useful summary of quantum computing for the general reader, along with discussions of cosmology, thermodynamics and introductory quantum mechanics (all with a computing “gloss”). In this post and one or two to follow, I’ll discuss a couple of Lloyd’s ideas. (For a general review, the NYT’s is here.)

As a layperson who had read explanatory books and articles about quantum physics for many years before I ever heard about quantum computers, the first theme the book hammered home for me was that quantum computing in an important sense just is quantum physics. A classical computer can be instantiated in a variety of physical set-ups; a quantum computer is itself a quantum system. While you can try to model a quantum system on a classical computer, you will quickly overwhelm its computational resources. So, quantum computing, in addition to its potential for practical acceleration of computing power generally, gives us a useful and appropriate logical framework to analyze the physics of our world.
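
A quick back-of-the-envelope sketch of how fast those resources are overwhelmed (my own numbers, not Lloyd’s): just storing the state vector of n qubits takes 2^n complex amplitudes, before any dynamics are even applied.

```python
# Memory needed merely to store an n-qubit state vector on a classical machine
# (back-of-the-envelope: 2**n complex amplitudes at 16 bytes each).
for n in (10, 30, 50, 80):
    gib = (2 ** n) * 16 / 2 ** 30
    print(f"{n:3d} qubits: {gib:.2e} GiB")
```

Around 50 qubits the requirement is already 16 pebibytes; at 80 qubits it is hopeless.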

The next step is to explore the implications of the ability to perform this kind of “quantum simulation”. Here’s a thumbnail sketch of how the simulation is done (p.149): “Every part of the quantum system to be simulated is mapped onto a collection of qubits in the quantum computer, and interactions between these parts become a sequence of quantum logic operations.” In fact: “…quantum computers could function as universal quantum simulators, whose dynamics could be the analog of any desired physical dynamics. (p.151)” At this point, Lloyd makes the conceptual case that, logically, there is no reason to distinguish between what’s happening in the simulation and the original system.
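
Here is a minimal numerical sketch of that idea (my own toy example in numpy, not Lloyd’s construction): the continuous evolution of a small two-qubit system under a simple Hamiltonian is approximated by a long sequence of elementary unitary steps, each playing the role of a quantum logic operation.

```python
# Toy quantum simulation: evolve a two-qubit state under H = X⊗I + I⊗X + Z⊗Z
# by splitting time into many small steps, each a product of simple "gates"
# (illustrative sketch; not Lloyd's own construction).
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

terms = [np.kron(X, I), np.kron(I, X), np.kron(Z, Z)]     # pieces of the Hamiltonian

def pauli_step(P, dt):
    """exp(-i * dt * P) for a Pauli string P (uses P @ P = identity)."""
    return np.cos(dt) * np.eye(4) - 1j * np.sin(dt) * P

psi = np.zeros(4, dtype=complex)
psi[0] = 1.0                                              # start in |00>

t, n_steps = 1.0, 200
dt = t / n_steps
for _ in range(n_steps):                                  # Trotter-style evolution
    for P in terms:
        psi = pauli_step(P, dt) @ psi

print(np.abs(psi) ** 2)        # probabilities of |00>, |01>, |10>, |11> at time t
```

The shape of the mapping mirrors the quote above: the system’s parts become qubits, and its interactions become a sequence of logic operations applied step by step to the state.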

Now, the step which motivates the book title: while we can’t do it yet, in principle the universe (the accessible part, anyway) is finite in extent, and hypothetically could be simulated in a quantum computer. But, following the point above, since the computer has the same number of qubits as the universe, and since the operations on the qubits simulate the universe’s dynamics, we can say: “Such a quantum computation would constitute a complete description of nature, and so would be indistinguishable from nature. Thus, at bottom, the universe can be thought of as performing a quantum computation. (p.154, emphasis original).”

So what does it mean? What can this view do for us? I think there are two possible answers, one concrete and one more intangible. First, ideas from quantum computing may help in the quest for a theory of quantum gravity. Second, it may offer an improved paradigm for interpreting and understanding the physical world. I’ll follow up on these in future posts.