Artificial Intelligence Enters Brave New World

The idea of what artificial intelligence should be has evolved over the past 50 years, from solving puzzles and playing chess to emulating the abilities of a child: walking and recognizing objects. A recent conference brought together those who invent the future.

LIANE HANSEN, host:

Okay. Here's one more question for you. What is intelligence? Is it the ability to solve a puzzle, or write a novel, or what? Well, this next story is about how scientists are grappling with what makes a machine intelligent - artificial intelligence, it's called; AI for short.

Rick Kleffel of member station KUSP filed this report after attending the Second Annual Singularity Summit, a gathering in San Francisco of scientists, entrepreneurs, students, authors, visionaries, futurists and, of course, computer geeks.

RICK KLEFFEL: In 1956, humanity began to design its own successor. When the forefathers of artificial intelligence gathered at Dartmouth College in Hanover, New Hampshire, coming up with a brain in a box seemed to be a straightforward problem with an achievable solution.

But at the 2007 Singularity Summit, Dr. Rodney Brooks, chief technology officer of the company iRobot, suggested that these bright young men were only looking at part of the problem.

Dr. RODNEY BROOKS (Co-founder; Chief Technology Officer, iRobot): And as they talked about intelligence, I think they got inspired by what made them more intelligent than the other people they knew and what they were good at - solving puzzles and doing mathematics - and that seemed the essence of intelligence to them.

KLEFFEL: Brooks says that this definition is incomplete. It dismisses skills the forefathers saw as unimportant.

Dr. BROOKS: You know, what a 2-year-old child could do - just walk around and climb on things, recognize objects, name them, talk about them - anyone could do that. That's not the essence of intelligence. So out of that 1956 meeting came, you know, playing chess, proving theorems. That was intelligence. But it seemed that only gets you so far.

KLEFFEL: At the 2007 Singularity Summit, the speakers are working from a newly informed definition of human intelligence.

Sam Adams is a distinguished engineer in IBM's research division who works on a project following up Deep Blue. It's called Joshua Blue, and it's an attempt to create a toddler in a box.

Mr. SAM ADAMS (Engineer, Research Division, IBM): The only proven pathway to human levels of intelligence we have is the one we walked ourselves. So let's go look at that and learn from it and see if we can build something like it. We found that following the child - looking at childhood development, taking it seriously - means you go across campus and talk to the psychology department. You know, talk to the neurophysiology department. Find out what they've been learning. And then use your engineering skill to try to build analogs of that in the system.

KLEFFEL: While Sam Adams and other speakers focused on how we might go about creating an artificial intelligence, Dr. Barney Pell suggested one way of determining if we had succeeded.

Dr. BARNEY PELL (CEO, Powerset): One of the definitions that I used was a high school replacement - a robot that can actually compete with high school graduates. We'll know that's been achieved when you start finding that robots are winning at least half the jobs, you know, in the job market competition with high school graduates. That's a very clear milestone.

KLEFFEL: When Deep Blue beat Garry Kasparov at chess, Kasparov famously commented that at least it couldn't enjoy its victory. But Deep Blue is an expert system that could only play chess, not a self-aware learning intelligence.

Jamais Cascio serves as the global future strategist for the Center for Responsible Nanotechnology. He suggests that there are now two types of artificial intelligence.

Mr. JAMAIS CASCIO (Global Future Strategist, Center for Responsible Nanotechnology): What AI, as it's understood today, means are these really narrow, specific forms of machine cognition. You're not going to get Deep Blue to start playing backgammon. It's a very dedicated system. Conversely, artificial general intelligence is the idea that you can, in fact, create machine cognition systems that, rather than being focused on particular narrow subjects, actually have the ability to learn something new.

KLEFFEL: Though artificial general intelligence, AGI, may be something we can create, it may not be something we can sell.

Dr. James Hughes is the executive director of the Institute for Ethics and Emerging Technologies.

Dr. JAMES HUGHES (Director, Institute for Ethics and Emerging Technologies): There are very few companies that actually want to do that. I mean, you don't really want to have an argument with your toaster about whether it's going to give you toast in the morning.

KLEFFEL: Still, there is a need for AGI and a need to understand potential problems before they get out of hand.

James Hughes.

Dr. HUGHES: We're deploying robots in the field in Iraq. And one of the questions that's come up is, do you want to empower a robot to go around killing humans? But then what if you give it the command that it can destroy materiel in the field but it can't destroy humans? Well, the materiel is sometimes held in the hands of the humans.

KLEFFEL: According to Hughes, the questions go beyond the ability of machines to distinguish a target.

Dr. HUGHES: There are many ethical questions of liability and how much autonomy we want to give to machines - who's responsible when a machine makes a mistake - and those kinds of questions, in addition to what kinds of ethics are going to drive these machines. I mean, we could imagine that there could be Sharia robots, or there could be utilitarian robots. There could be, you know, deontological Catholic ethics robots. So that's part of the debate as well.

KLEFFEL: At this year's Singularity Summit, speakers explored the consequences of creating a self-aware artificial general intelligence.

Paul Saffo is a futurist who teaches at Stanford University.

Professor PAUL SAFFO (Technology Forecaster; Consulting Associate, Stanford University): The optimistic scenario is they'll treat us like pets, and the pessimistic scenario is they'll treat us like food.

KLEFFEL: When Saffo looked for a positive, compelling vision of artificial intelligence, he quoted a forty-year-old poem by Richard Brautigan.

Prof. SAFFO: I like to think - and it has to be - of a cybernetic ecology where we are free of our labors and joined back to nature, returned to our mammal brothers and sisters, and all watched over by machines of loving grace.

I think that's a wonderfulsome(ph).

(Soundbite of applause)

KLEFFEL: The Singularity Summit takes place annually, but the work goes on year round as the conversations continue between those who imagine and those who invent the future.

For NPR News, I'm Rick Kleffel.

Copyright © 2007 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.