Must-Read: In Charlie Stross's view, the discourse and worries about an Artificial Intelligence "Singularity" are of primary interest to sociologists, psychologists, and theologians, not to economists or technologists. Economists and technologists, Stross says, should instead think of AI as purposeful, cognitively informed nonhuman behavior directed toward goals, and take the development, growth, and consequences of those slow AI entities called "business corporations" as our template for thinking about the coming of nonhuman, near-Turing-class cognition to our world: Charlie Stross: Dude, you broke the future!: "If it walks like a duck and quacks like a duck, it's probably a duck. And if it looks like a religion it's probably a religion...

...I don't see much evidence for human-like, self-directed artificial intelligences coming along any time now, and a fair bit of evidence that nobody except some freaks in university cognitive science departments even want it. What we're getting, instead, is self-optimizing tools that defy human comprehension but are not, in fact, any more like our kind of intelligence than a Boeing 737 is like a seagull. So I'm going to wash my hands of the singularity as an explanatory model... and try and come up with a better model.... History gives us the perspective to see what went wrong in the past, and to look for patterns, and check whether those patterns apply to the present and near future. And looking in particular at the history of the past 200-400 years—the age of increasingly rapid change—one glaringly obvious deviation from the norm of the preceding three thousand centuries—is the development of Artificial Intelligence, which happened no earlier than 1553 and no later than 1844. I'm talking about the very old, very slow AIs we call corporations, of course. What lessons from the history of the company can we draw that tell us about the likely behaviour of the type of artificial intelligence we are all interested in today?...
