Is There Something Uncanny About Machines That Can Think?
Thinking machines are consistently in the news these days, and often a topic of discussion here at 13.7. Last week, Alva Noë came out as a singularity skeptic, and three of us contributed to Edge.org's annual question for 2015: What do you think about machines that think?
In response to the Edge.org question, I argued that we shouldn't be chauvinists when it comes to defining thinking — that is, we should resist the temptation to restrict what counts as thinking to "thinking like adult humans" or "thinking like contemporary computers." Marcelo Gleiser suggested that we're already living as transhumans, enhanced by our technogadgets and medical improvements. And Stuart Kauffman considered Turing machines, the quantum and human choice.
In addressing the relationship between humans and thinking machines, all three of our responses — and those by many others — raised questions about what (if anything) makes us uniquely human. Part of what's fascinating about the idea of thinking machines, after all, is that they seem to approach and encroach on a uniquely human niche, that of Homo sapiens — the wise.
Consider, for contrast, encountering "thinking" aliens, some alternative life form that rivals or exceeds our own intelligence. The experience would be strange, to be sure, but there may be something uniquely uncanny about thinking machines. While they can (or will some day) mirror us in capabilities, they are unlikely to do so in composition. My hypothetical aliens, at least, would have biological origins of some kind, whereas today's computers have such origins only in the sense that they are human artifacts and, therefore, have an origin that follows from our own.
When it comes to human-like robots and other artifacts, some have described an "uncanny valley": a level of similarity to natural beings that may be too close for comfort, compelling yet somehow off. We might be slightly revolted by a mechanical appendage, for instance, or made uneasy by a realistically human robot face.
Examples of the "uncanny valley" phenomenon are overwhelmingly visual and behavioral, but is there also an uncanny valley when it comes to thinking? In other words, is there something uncanny or uncomfortable about intelligence that's almost like ours — but not quite? For instance, is there something uncanny about a chatbot that effectively mimics human conversation, but through a process of keyword matching that bears little resemblance to human learning and language production?
My sense is that the valley of "uncanny thinking" is real, but that it elicits a more existential than visceral response. And if that's so, perhaps it's because we're threatened by the idea that human thinking isn't unique — that maybe it isn't so special after all.
Joichi Ito, the director of the MIT Media Lab, ends his response to the Edge.org annual question with this sobering thought: "Maybe we've done more damage by believing that humans are special than we possibly could by embracing a more humble relationship with the other creatures, objects and machines around us."
Tania Lombrozo is a psychology professor at the University of California, Berkeley. She writes about psychology, cognitive science and philosophy, with occasional forays into parenting and veganism. You can keep up with more of what she is thinking on Twitter: @TaniaLombrozo