Must-Watch: Joshua Gans: Danny Kahneman on AI versus Humans: "At our AI conference last week, Nobel Laureate Danny Kahneman was commenting on a paper by Colin Camerer:

Rough Transcript:

Those were my conclusions from yesterday—when I couldn't understand most of what was going on, and yet had the feeling I was learning a lot. So I will have some remarks about Colin [Camerer], and then some remarks about the few things I noticed yesterday that I could understand.

I certainly agree with Colin—and I think it's a lovely idea—that if you have a mass of data and you use deep learning, you will find out much more than your theory tells you. I would hope that machine learning can be a source of hypotheses: that is, that some of these variables that you identify are genuinely interesting.

At least in my field, the bar for successful, publishable science is very low. We consider theories "confirmed" even when they explain very little of the variance, as long as they yield statistically significant predictions; we treat the residual variance as noise. A deeper look into the residual variance—which machine learning is good at—is clearly an advantage. So, as an outsider here, I have actually been surprised not to hear more about the superiority of AI over what people can do. Perhaps as a psychologist this is what interests me most. I'm not sure that new signals will always be interesting, but I suppose that some may lead to new theory, and that would be useful.

I don't really fully agree with Colin's second idea: that it's useful to view human intelligence as a weak version of "artificial intelligence". There certainly are similarities. Certainly you can model some of human overconfidence in that way. But I think that the processes that occur in human judgment are really quite different from the processes that produce overconfidence.

I left myself time for some remarks of my own on what I learned yesterday. One of the recurrent issues, both in talks and in conversations, was whether AI can eventually do whatever people can do. Will there be anything that is reserved for human beings? Frankly, I don't see any reason to set limits on what AI can do. We have in our heads wonderful computers. They are made of meat, but they are computers. They are extremely noisy. They do parallel processing. They are extraordinarily efficient. There is no magic there. So it's very difficult to imagine that, with sufficient data, there will remain things that only humans can do.

The reason that we see so many limitations is that this field is really at its very beginning.

We are talking about developments—deep learning—that took off—I mean, the idea is old, but the development took off—eight years ago. That's the landmark date that people are mentioning. And that's nothing. You have to imagine what it might be like in 50 years. The one thing that I find extraordinarily surprising and interesting in what is happening in AI these days is that everything is happening faster than was expected. People were saying: "it will take ten years for AI to beat Go." And it took eight months. The speed at which the field is developing and accelerating is very remarkable. Setting limits is certainly premature.

One point made yesterday was the uniqueness of humans when it comes to evaluations. It was called "judgment". In my head it's "evaluation of outcomes": the utility side of the decision function. I really don't see why that should be reserved for humans.

I'd like to make the following argument:

  1. The main characteristic of people is that they are very "noisy".
  2. You show them the same stimulus twice, and they don't give you the same response twice.
  3. You show them the same choice twice—I mean, that's why we have stochastic choice theory: there is so much variability in people's choices given the same stimuli.
  4. Now, what can be done even without AI is a program that observes an individual; it will be better than the individual, and will make better choices for the individual, because it will be noise-free.
  5. We know an interesting tidbit from the literature on prediction that Colin cited:
  6. If you take clinicians and have them predict some criterion a large number of times, and then you develop a simple equation that predicts not the outcome but the clinician's judgment, that model does better at predicting the outcome than the clinician does.
  7. That is fundamental.
  8. This tells you that one of the major limitations on human performance is not bias; it is just noise.
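The clinician result above can be sketched in a small simulation. Everything here is illustrative: the cue weights, noise levels, and sample size are assumptions, not data from any study. But it shows the mechanism: a linear model fitted to the judge's own judgments strips out the judge's random noise, and so predicts the outcome better than the judge does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 cues predict an outcome; all weights are made up.
n, true_w = 5000, np.array([0.5, 0.3, 0.2])
X = rng.normal(size=(n, 3))
outcome = X @ true_w + rng.normal(scale=0.5, size=n)

# The "clinician": roughly the right weights, plus a lot of random noise.
judge_w = np.array([0.6, 0.2, 0.2])
judgment = X @ judge_w + rng.normal(scale=1.0, size=n)

# Model of the judge: least-squares fit predicting the *judgment* from the cues.
beta, *_ = np.linalg.lstsq(X, judgment, rcond=None)
model_pred = X @ beta  # a noise-free version of the judge

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"clinician vs outcome:          r = {corr(judgment, outcome):.2f}")
print(f"model-of-clinician vs outcome: r = {corr(model_pred, outcome):.2f}")
```

With these numbers the model's correlation with the outcome is substantially higher than the clinician's own, even though the model was never fitted to the outcome at all: removing the noise is enough.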

I'm maybe partly responsible for this, but when people now talk about error they tend to think of bias as the explanation: it is the first thing that comes to mind. Well, there is bias, and it is an error. But in fact most of the errors that people make are better viewed as random noise, and there's an awful lot of it. Recognizing the prevalence of noise has implications for practice. One implication is obvious: you should replace humans with algorithms whenever possible. This is really happening, even when the algorithms don't do very well. Humans do so poorly and are so noisy that just by removing the noise you can do better than people.

When you try to have humans simulate the algorithm (by enforcing regularity, process, and discipline on judgment and on choice), you reduce the noise and you improve performance, because noise is so poisonous.
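To see why noise is so costly, consider a minimal sketch (my own illustration, not from the talk, with made-up numbers): judges who are perfectly unbiased but noisy still make large errors, and any procedure that damps the noise, simulated crudely here by averaging several independent judgments, improves accuracy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical numbers: the true value is 100; judges are unbiased
# but noisy, with judgment standard deviation 10.
true_value, n_trials = 100.0, 10_000
single = true_value + rng.normal(scale=10.0, size=n_trials)
avg_of_5 = true_value + rng.normal(scale=10.0, size=(n_trials, 5)).mean(axis=1)

def rmse(est):
    return np.sqrt(np.mean((est - true_value) ** 2))

print(f"single judge RMSE: {rmse(single):.1f}")    # ~10
print(f"average of 5 RMSE: {rmse(avg_of_5):.1f}")  # ~10/sqrt(5), about 4.5
```

Averaging independent judgments cuts the noise by a factor of the square root of the panel size; enforcing a disciplined, regular process on a single judge works on the same principle, by shrinking the noise term directly.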

???? said yesterday that humans would always prefer emotional contact with other humans. That strikes me as probably wrong. It is extremely easy to develop stimuli to which people will respond emotionally. A face that changes expressions, especially if it's sort of baby-shaped, provides cues that will make people feel very emotional. Robots will have these cues. Furthermore, it is already the case that AI reads faces better than people do, and it can, and undoubtedly will, be able to predict emotions and their development far better than people can. I really can imagine that one of the major uses of robots will be taking care of the old. I can imagine that many old people will prefer to be taken care of by friendly robots that have a name and a personality that is always pleasant. They will prefer that to being taken care of by their children.

Now I want to end on a story. A well-known novelist—I'm not sure he would appreciate my giving his name—wrote me some time ago that he was planning a novel. The novel is about a love triangle between two humans and a robot. What he wanted to know was how the robot would be different from the individuals. I proposed three main differences:

  1. One is obvious: the robot will be much better at statistical reasoning, and less enamored of stories and narratives, than people are.
  2. The robot would have much higher emotional intelligence.
  3. The robot would be wiser.

Wisdom is breadth. Wisdom is not having too narrow a view. That's the essence of wisdom: broad framing. A robot will be endowed with broad framing, and I really do not see how, once it has learned enough, it will not be wiser than we are. We don't have broad framing. We're narrow thinkers. We're noisy thinkers. It's very easy to improve upon us. I don't think there is very much that we can do that computers will not eventually be programmed to do.

Thank you.