Lacker: GPT-3 vs. Turing—Noted
Much of our intelligence is devoted to constructing models of other minds from observations of actions, expressions, and words. We have a strong tendency to overdo it. Thus we can very easily be grifted into really, really believing that there is another mind—another Turing-class entity—back there when there isn't: the lightning does not really come from a large red-haired guy with a big hammer and anger-management problems who drives a cart pulled by two goats. The latest example of this is GPT-3:
Kevin Lacker: Giving GPT-3 a Turing Test http://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html: ‘An interesting corollary is that GPT-3 often finds it easier to write code to solve a programming problem than to solve the problem on one example input…
Q: Write one line of Ruby code to reverse an array.
A: ary.reverse

Q: Write one line of Ruby code to count how many X's are in an array.
A: ary.count { |x| x == 'X' }

Q: Reverse the following array: [1, 3, 5, 6, 10, 4, 2, 77]
A: [10, 6, 4, 2, 77, 3, 5, 1]

Q: How many X's are in this array: [1, 3, 5, 6, 10, 4, 2, 77]
A: 2
This problem shows up in more human questions as well, if you ask it about the result of a sequence of operations:

Q: If I have a marble and a paper clip in a box, put a pencil in the box, and remove the marble, what is left?
A: A paper clip.

Q: If I have two shoes in a box, put a pencil in the box, and remove one shoe, what is left?
A: A shoe.

Q: If I put a pencil in a box, then put another pencil in the box, what is in the box?
A: Two pencils.

Q: Then if I take out a pencil and put in a frog, what is in the box?
A: A frog.

It’s like GPT-3 has a limited short-term memory, and has trouble reasoning about more than one or two objects in a sentence…’
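It is worth checking that the one-liners GPT-3 wrote are themselves correct: run against the example array, they produce the answers GPT-3 could not give directly. A quick check, assuming nothing beyond standard Ruby:

    ary = [1, 3, 5, 6, 10, 4, 2, 77]

    ary.reverse
    # => [77, 2, 4, 10, 6, 5, 3, 1], the true reversal, not GPT-3's answer above

    ary.count { |x| x == 'X' }
    # => 0, since the array contains no 'X' at all, not 2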
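And the box questions reduce to trivial container bookkeeping, which is what makes the misses so stark. A minimal sketch of that bookkeeping in plain Ruby (the variable names are mine, not Lacker's):

    box = ['marble', 'paper clip']       # a marble and a paper clip in a box
    box.push('pencil')                   # put a pencil in the box
    box.delete('marble')                 # remove the marble
    box                                  # => ["paper clip", "pencil"], not just a paper clip

    box = ['pencil', 'pencil']           # a pencil, then another pencil: two pencils
    box.delete_at(box.index('pencil'))   # take out one pencil (delete_at removes a single element)
    box.push('frog')                     # put in a frog
    box                                  # => ["pencil", "frog"], a pencil and a frog, not just a frog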
#noted #2020-08-02