AI will bring us many new understandings. And confusions, as Joseph Weizenbaum warned.
As we think our electronic world is becoming more human, often it’s becoming less.
In the past few years, Siri and Alexa and their kin have shot past the Turing test, proposed by British mathematician Alan Turing in 1950. We can't always tell whether there's a human or a machine on the other side of a conversation. The raw power of the underlying artificial intelligence keeps accelerating, especially for the type of AI known as deep learning, built on connections between layers of neural networks. Deep learning systems can already beat humans at making predictions from, say, medical images. And they can make findings that humans wouldn't attempt, such as tapping ECG data to predict a patient's sex and age.
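Mechanically, "deep" just means many stacked layers, each one re-transforming the output of the layer before it. A toy sketch in Python (random placeholder weights, not a trained model) gives the flavor:

```python
# Toy illustration of the "layers" idea behind deep learning: each layer
# applies a weight matrix and a nonlinearity, and a prediction emerges
# from stacking many such transformations. Weights here are random
# placeholders, not learned values.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer followed by a ReLU nonlinearity."""
    w = rng.normal(size=(x.shape[-1], n_out))  # placeholder weights
    return np.maximum(0.0, x @ w)              # ReLU activation

x = rng.normal(size=(1, 64))           # e.g., 64 features from an image
h = layer(layer(layer(x, 32), 16), 1)  # three stacked layers -> one score
print(h)                               # a single prediction-like number
```

In a real system the weights are tuned on millions of examples, and there can be hundreds of layers; nothing in the mechanism changes, only the scale.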
Although some of these models try valiantly to explain their decisions, more often than not it's a mistake to think we understand what's going on under the covers. "A full explanation might require looking at thousands or tens of thousands of variables and complex probabilistic relationships that connect things where we don't see any connections," says David Weinberger, author of Everyday Chaos. "You have to look at all of that, and in many instances we just can't."
It’s also a mistake to believe deep learning and other AI technologies actually understand our world. They don’t see a kitten or a tumor or your favorite Calvin and Hobbes collection. All they see are patterns of swirls in their oceans of data.
When chatting with Siri and Alexa and our other semi-loyal cloud servants, though, we tend to anthropomorphize these beasts. Seeing the world as human-like has been a common human trait for longer than we can track. We imagined supernatural beings based on the worst human patriarchs; now we teach our children that dolphins are happy to be enslaved so that they can entertain us. Back in the 1980s, as he introduced a crude personal robot, Nolan Bushnell remarked that the robot's bugs were what gave it personality. We're still there, looking for personality as we try to tease Siri.
So it's good to think carefully about the right roles for the strange computing power lurking in so many places. In medicine, better ways to calculate around-the-clock insulin dosing for people with type 1 diabetes would be great. Ditto a tool to predict whether someone in the ICU will shortly go into cardiac arrest. But forget any chatbot "therapist" that claims to understand us.
Back in 1966, Joseph Weizenbaum wrote the first chatbot, Eliza, with one variant, called Doctor, modeled on simple psychotherapy. Weizenbaum was horrified, first when his secretary didn't want him to see her conversation with the Doctor, and then when other computer scientists suggested building clinical versions.
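Doctor's whole trick was keyword spotting plus canned reflections. A minimal Python sketch (with made-up rules, not Weizenbaum's original script) shows how little machinery it takes; note that there is no understanding anywhere in it:

```python
# An Eliza/Doctor-style exchange: match a keyword pattern, echo the
# patient's words back with pronouns flipped. The rules are illustrative,
# not Weizenbaum's originals.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r".*"), "Please go on."),   # fallback when nothing matches
]

def reflect(phrase):
    """Flip first-person words to second person."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def respond(text):
    for pattern, template in RULES:
        m = pattern.match(text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I feel nobody listens to my problems"))
# -> Why do you feel nobody listens to your problems?
```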
“What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people,” he wrote in Computer Power and Human Reason, published in 1976. “Computers and men are not species of the same genus… However much intelligence computers may attain, now or in the future, theirs must always be an intelligence alien to genuine human problems and concerns.”