PHYS771 Lecture 10.5: Penrose
So, you guys finally finished reading Roger Penrose's The Emperor's New Mind? What did you think of it? (Since I forgot to record this lecture, the class responses are tragically lost to history. But if I recall correctly, the entire class turned out to consist of -- YAWN -- straitlaced, clear-thinking materialistic reductionists who correctly pointed out the glaring holes in Penrose's arguments. No one took Penrose's side, even just for sport.)
Alright, so let me try a new tack: who can summarize Penrose's argument (or more correctly, a half-century-old argument adapted by Penrose) in a few sentences? How about this: Gödel's First Incompleteness Theorem tells us that no computer, working within a fixed formal system F such as Zermelo-Fraenkel set theory, can prove the sentence: G(F) = "This sentence cannot be proved in F." But we humans can just "see" the truth of G(F) -- since if G(F) were false, then it would be provable, which is absurd! Therefore the human mind can do something that no present-day computer can do. Therefore consciousness can't be reducible to computation.
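(For reference, the theorem as actually proved carries hypotheses that this popularized version quietly drops. A sketch in standard notation, constants and fine print suppressed:)

```latex
% First Incompleteness Theorem, standard form: for any consistent,
% recursively axiomatizable F extending basic arithmetic,
F \nvdash G(F),
% and, under a slightly stronger hypothesis (\omega-consistency, or
% Rosser's variant of the construction),
F \nvdash \neg G(F).
% Notice that "seeing" that G(F) is true already presupposes that F is
% consistent -- a premise we'll return to shortly.
```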
Alright, class: problems with this argument?
Yeah, there are two rather immediate ones:

1. Why does the computer have to work within a fixed formal system F?
2. Can humans "see" the truth of G(F)?
Actually, the response I prefer encapsulates both of the above responses as "limiting cases." Recall from Lecture 3 that, by the Second Incompleteness Theorem, G(F) is equivalent to Con(F): the statement that F is consistent. Furthermore, this equivalence can be proved in F itself for any reasonable F. This has two important implications.
First, it means that when Penrose claims that humans can "see" the truth of G(F), really he's just claiming that humans can see the consistency of F! When you put it that way, the problems become more apparent: how can humans see the consistency of F? Exactly which F's are we talking about: Peano Arithmetic? ZF? ZFC? ZFC with large cardinal axioms? Can all humans see the consistency of all these systems, or do you have to be a Penrose-caliber mathematician to see the consistency of the stronger ones? What about the systems that people thought were consistent, but that turned out not to be? And even if you did see the consistency of (say) ZF, how would you convince someone else that you'd seen it? How would the other person know you weren't just pretending? (Models of Zermelo-Fraenkel set theory are like those 3D dot pictures: sometimes you really have to squint...)
The second implication is that, if we grant a computer the same freedom that Penrose effectively grants to humans -- namely, the freedom to assume the consistency of the underlying formal system -- then the computer can prove G(F). So the question boils down to this: can the human mind somehow peer into the Platonic heavens, in order to directly perceive (let's say) the consistency of ZF set theory? If the answer is no -- if we can only approach mathematical truth with the same unreliable, savannah-optimized tools that we use for doing the laundry, ordering Chinese takeout, etc. -- then it seems we ought to grant computers the same liberty of being fallible. But in that case, the claimed distinction between humans and machines would seem to evaporate. (Perhaps Turing himself said it best: "If we want a machine to be intelligent, it can't also be infallible. There are theorems that say almost exactly that.")
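To spell out the bootstrapping step (a sketch in standard provability notation, using nothing beyond what the Second Incompleteness Theorem already gives us):

```latex
% For any reasonable F, the equivalence from Lecture 3 is provable in F itself:
F \vdash \mathrm{Con}(F) \leftrightarrow G(F).
% So a machine handed Con(F) as an extra axiom immediately proves G(F):
F + \mathrm{Con}(F) \vdash G(F).
% Of course, the machine now works in a new system F' = F + Con(F), which
% has its own unprovable sentence G(F') -- and so on up the hierarchy.
```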
In my opinion, then, Penrose doesn't need to be talking about Gödel's theorem at all. The Gödel argument turns out to be just a mathematical restatement of the oldest argument against reductionism in the book: "sure a computer could say it perceives G(F), but it'd just be shuffling symbols around! When I say I perceive G(F), I really mean it! There's something it feels like to be me!" The obvious response is equally old: "what makes you so sure that it doesn't feel like anything to be a computer?"...
Opening the Black Box:
Alright, look: Roger Penrose is one of the greatest mathematical physicists on Earth. Is it possible that we've misconstrued his thinking? To my mind, the most plausible-ish versions of Penrose's argument are the ones based on an "asymmetry of understanding": namely that, while we know the internal workings of a computer, we don't yet know the internal workings of the brain. How can one exploit this asymmetry? Well, given any known Turing machine M, it's certainly possible to construct a sentence that stumps M: S(M) = "Machine M will never output this sentence." There are two cases: either M outputs S(M), in which case it utters a falsehood, or else M doesn't output S(M), in which case there's a mathematical truth to which it can never assent.
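Here's a toy Python sketch of the diagonal move (the function name and string format are mine, purely for illustration; a real construction would quote M's full state diagram rather than an informal description):

```python
def stumping_sentence(machine_description: str) -> str:
    """Build the sentence S(M) for a machine M given by its description.

    Since the sentence mentions M explicitly, every machine gets its own
    personalized stumper -- that's the whole trick.
    """
    return (f'The machine described by "{machine_description}" '
            'will never output this sentence.')

# The case analysis, which no machine escapes:
#   - If M ever outputs stumping_sentence(M's description), the sentence
#     is false, so M has uttered a falsehood.
#   - If M never outputs it, the sentence is true, and there's a
#     mathematical truth to which M can never assent.
```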
The obvious response is, why can't we play the same game with humans? "Roger Penrose will never output this sentence." Well, conceivably there's an answer: because we can formalize what it means for M to output something, by examining its inner workings. (Indeed, "M" is really just shorthand for the appropriate Turing machine state diagram.) But can we formalize what it means for Penrose to output something? The answer depends on what we believe about the internal workings of the brain (or more precisely, Penrose's brain)! And this leads to Penrose's view of the brain as "non-computational." A common misconception is that Penrose thinks the brain is a quantum computer. In reality, a quantum computer would be much weaker than he wants! As we saw before, quantum computers don't even seem able to solve NP-complete problems in polynomial time. Penrose, by contrast, wants the brain to solve uncomputable problems, by exploiting hypothetical collapse effects from a yet-to-be-discovered quantum theory of gravity....
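(For context, the claim about NP-complete problems rests on a known black-box limit, not just intuition:)

```latex
% Bennett-Bernstein-Brassard-Vazirani (1997): any quantum algorithm for
% unstructured search over N items needs
\Omega(\sqrt{N})
% queries, matching Grover's O(\sqrt{N}) upper bound. A quadratic speedup
% is a long way from putting NP-complete problems in BQP -- and further
% still from the uncomputable problems Penrose has in mind.
```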
In Shadows, Penrose offers the following classification of views on consciousness:
A. Consciousness is reducible to computation (the view of strong-AI proponents)
B. Sure, consciousness can be simulated by a computer, but the simulation couldn't produce "real understanding" (John Searle's view)
C. Consciousness can't even be simulated by computer, but nevertheless has a scientific explanation (Penrose's own view, according to Shadows)
D. Consciousness doesn't have a scientific explanation at all (the view of 99% of everyone who ever lived)
Now it seems to me that... Penrose is retreating from view C to view B. For as soon as we say that passing the Turing Test isn't good enough -- that one needs to "pry open the box" and examine a machine's internal workings to know whether it thinks or not -- what could possibly be the content of view C that would distinguish it from view B?... I want to bend over backwards to see if I can figure out what Penrose might be saying. In science, you can always cook up a theory to "explain" the data you've seen so far: just list all the data you've got, and call that your "theory"! The obvious problem here is overfitting. Since your theory doesn't achieve any compression of the original data -- i.e., since it takes as many bits to write down your theory as to write down the data itself -- there's no reason to expect your theory to predict future data. In other words, your theory is a useless piece of shit....
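As a toy illustration of the compression point (zlib here is just a crude stand-in for "finding the pattern"; the data is a made-up example with an obvious rule):

```python
import zlib

# Data generated by a simple rule -- so there IS a pattern to be found.
data = bytes(i % 7 for i in range(10_000))

# "Theory" 1: just list all the data you've got. It's exactly as long as
# the data itself, achieves no compression, and predicts nothing new.
lookup_table_theory = data

# "Theory" 2: exploit the regularity. A general-purpose compressor is a
# rough proxy for a short description that actually captures the rule.
compressed = zlib.compress(data, 9)

print(len(lookup_table_theory))  # 10000 bytes: zero compression
print(len(compressed))           # vastly smaller: the pattern is real
```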
Now, here's the point I keep coming back to: if this is what Penrose means, then he's left the world of Gödel and Turing far behind, and entered my stomping grounds -- the Kingdom of Computational Complexity. How does Penrose, or anyone else, know that there's no small Boolean circuit to simulate Winston Churchill? Presumably we wouldn't be able to prove such a thing, even supposing (for the sake of argument) that we knew what a Churchill simulator meant! All ye who would claim the intractability of finite problems: that way lieth the P versus NP beast, from whose 2^n jaws no mortal hath yet escaped....
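(A back-of-the-envelope count, constants suppressed, shows why such claims resist even brute force:)

```latex
% The number of Boolean circuits with s gates over n inputs is roughly
2^{O(s \log s)},
% and checking a single candidate against every input costs another
2^{n}
% evaluations. Ruling out ALL small circuits for a given function is
% exactly the kind of lower-bound question tied up with P versus NP.
```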
Let's set aside the specifics of Penrose's ideas, and ask a more general question. Should quantum mechanics have any effect on how we think about the brain?... When people try to make the question more concrete, they often end up asking: "is the brain a quantum computer?" Well, it might be, but I can think of at least four good arguments against this possibility:
1. The problems for which quantum computers are believed to offer dramatic speedups -- factoring integers, solving Pell's equation, simulating quark-gluon plasmas, approximating the Jones polynomial, etc. -- just don't seem like the sorts of things that would have increased Oog the Caveman's reproductive success relative to his fellow cavemen.
2. Even if humans could benefit from quantum computing speedups, I don't see any evidence that they're actually doing so. (It's said that Gauss could immediately factor large integers in his head -- but if so, that only proves that Gauss's brain was a quantum computer, not that anyone else's is!)
3. The brain is a hot, wet environment, and it's hard to understand how long-range coherence could be maintained there. (With today's understanding of quantum error-correction, this is no longer a knock-down argument, but it's still an extremely strong one.)
4. As I mentioned earlier, even if we suppose the brain is a quantum computer, it doesn't seem to get us anywhere in explaining consciousness, which is the usual problem that these sorts of speculations are invoked to solve!...