Should-Read: Scott Aaronson (2013): Quantum Computing since Democritus: "[Alan Turing's] main idea... as I read it... is a plea against meat chauvinism...

...Sure, Turing makes some scientific arguments, some mathematical arguments, some epistemological arguments. But beneath everything else is a moral argument. Namely: if a computer interacted with us in a way that was indistinguishable from a human, then of course we could say the computer wasn't “really” thinking, that it was just a simulation. But on the same grounds, we could also say that other people aren't really thinking, that they merely act as if they're thinking. So what entitles us to go through such intellectual acrobatics in the one case but not the other?

If you'll allow me to editorialize (as if I ever do otherwise...), this moral question, this question of double standards, is really where Searle, Penrose, and every other “strong AI skeptic” comes up empty for me. One can indeed give weighty and compelling arguments against the possibility of thinking machines. The only problem with these arguments is that they're also arguments against the possibility of thinking brains!... One popular argument is that, if a computer appears to be intelligent, that's merely a reflection of the intelligence of the humans who programmed it. But what if humans’ intelligence is just a reflection of the billion-year evolutionary process that gave rise to it? What frustrates me every time I read the AI skeptics is their failure to consider these parallels honestly. The “qualia” and “aboutness” of other people is simply taken for granted. It's only the qualia of machines that's in question....

There's another thing we appreciate now that people in Turing's time didn't really appreciate. This is that, in trying to write programs to simulate human intelligence, we're competing against a billion years of evolution. And that's damn hard. One counterintuitive consequence is that it's much easier to program a computer to beat Garry Kasparov at chess than to program a computer to recognize faces under varied lighting conditions. Often the hardest tasks for AI are the ones that are trivial for a five-year-old – since those are the ones that are so hardwired by evolution that we don't even think about them.

In the last 60 years, have there been any new insights about the Turing Test itself?... There has... been a famous “attempted” insight, which is called Searle's Chinese Room.... You... answer... questions... in Chinese just by consulting a rule book... carrying out an intelligent Chinese conversation, yet by assumption, you don't understand a word of Chinese! Therefore, symbol-manipulation can't produce understanding.... Considered as an argument... several aspects... have always annoyed me.... The unselfconscious appeal to intuition–“it's just a rule book, for crying out loud!”–on precisely the sort of question where we should expect our intuitions to be least reliable.... The double standard: the idea that a bundle of nerve cells can understand Chinese is taken as, not merely obvious, but so unproblematic that it doesn't even raise the question of why a rule book couldn't understand Chinese as well....

The way it gets so much mileage from... trying to sidestep... computational complexity purely through clever framing. We're invited to imagine someone pushing around slips of paper with zero understanding or insight... But how many slips of paper are we talking about? How big would the rule book have to be, and how quickly would you have to consult it, to carry out an intelligent Chinese conversation in anything resembling real time?

If each page of the rule book corresponded to one neuron of a native speaker’s brain, then probably we’d be talking about a “rule book” at least the size of the Earth, its pages searchable by a swarm of robots traveling at close to the speed of light. When you put it that way, maybe it’s not so hard to imagine that this enormous Chinese-speaking entity that we’ve brought into being might have something we’d be prepared to call understanding or insight..."
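
The "size of the Earth" image can be checked with a quick back-of-envelope calculation. The sketch below is not from the book: it takes the one-page-per-neuron assumption literally and plugs in rough figures of my own (about 8.6 × 10^10 neurons, 0.1 mm per page, a three-second reply budget) to see how tall the book gets and how few strictly sequential lookups a light-speed messenger could manage per reply.

```python
# Back-of-envelope scale of the Chinese Room "rule book", taking the
# quoted passage's one-page-per-neuron assumption literally.
# Every constant below is my own rough figure, not Aaronson's.

NEURONS        = 8.6e10   # approx. neurons in a human brain (assumed)
PAGE_THICKNESS = 1e-4     # metres, ~0.1 mm per page (assumed)
LIGHT_SPEED    = 3.0e8    # metres per second
REPLY_BUDGET   = 3.0      # seconds allowed for a conversational reply (assumed)
EARTH_RADIUS   = 6.371e6  # metres

pages = NEURONS
stack_height_m = pages * PAGE_THICKNESS   # the whole book as a single stack of pages

# One strictly sequential lookup = a signal out to the far end of the stack and back.
lookup_round_trip_s = 2 * stack_height_m / LIGHT_SPEED
sequential_lookups  = REPLY_BUDGET / lookup_round_trip_s

print(f"Stack height: {stack_height_m / 1e3:,.0f} km "
      f"(Earth's radius: {EARTH_RADIUS / 1e3:,.0f} km)")
print(f"Light-speed round trip per lookup: {lookup_round_trip_s * 1e3:.0f} ms")
print(f"Max sequential lookups inside a {REPLY_BUDGET:.0f} s reply: "
      f"{sequential_lookups:,.0f}")
```

Under those assumptions the stack of pages alone is taller than the Earth's radius, and a purely sequential reader gets only a few dozen consultations per reply before blowing the time budget, which is why the image needs a parallel swarm of light-speed robots rather than one clerk thumbing through a book.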
