With Gift, Carnegie Mellon Scholars Seek To Better Define Artificial Intelligence Ethics : All Tech Considered

What happens when you make robots that are smart, independent thinkers, and then try to limit their autonomy? A $10 million gift is aimed at answering such questions at Carnegie Mellon University.

Scholars Delve Deeper Into The Ethics Of Artificial Intelligence

AUDIE CORNISH, HOST:

In 1941, science fiction writer Isaac Asimov stated the Three Laws of Robotics in a short story "Runaround." And those laws are the starting point for this week's All Tech Considered.

(SOUNDBITE OF MUSIC)

CORNISH: Asimov's first law...

COMPUTER-GENERATED VOICE: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

CORNISH: Law two...

COMPUTER-GENERATED VOICE: A robot must obey the orders given by human beings, except where such orders would conflict with the first law.

CORNISH: And law three...

COMPUTER-GENERATED VOICE: A robot must protect its own existence as long as such protection does not conflict with the first or second laws.

CORNISH: Now, these laws come from the world of science fiction, but the real world is catching up. This month, a law firm gave Pittsburgh's Carnegie Mellon University $10 million to explore the ethics of artificial intelligence, or AI. Peter Kalis is chairman of the law firm K&L Gates.

He says technology is dashing ahead of the law, leading to questions that were never taken seriously before, such as: what happens when you make robots that are smart, independent thinkers and then try to limit their autonomy?

PETER KALIS: One expert said we'll be at a fulcrum point when you give an instruction to your robot to go to work in the morning and it turns around and says, I'd rather go to the beach, or, more perilously, if we were to launch a robot on the battlefield and all of a sudden it took a more partial liking to the enemy than it did to its human sponsor.

CORNISH: He says that one day, we'll want laws to keep our freethinking robots from running wild. But we'll also have to weigh such laws against the U.S. Constitution.

KALIS: It says that every person should benefit from equal protection under the law. Well, I don't think anyone contemplated that person would include an artificially intelligent robot, yet I hear people seriously maintaining that artificially intelligent robots ought to replace judges. When we get to that point, it's a matter of profound constitutional and social consequence for any country, any nation which prizes the rule of law.

CORNISH: With the law firm's gift, Carnegie Mellon president Subra Suresh says the university will be able to dig into issues now emerging.

SUBRA SURESH: Take driverless cars. If there's an accident involving a driverless car, what policies do we have in place? What kind of insurance coverage do they have? And who needs to take insurance?

CORNISH: You're in Pittsburgh, and that's where Uber is testing self-driving taxis. Have you actually taken one?

SURESH: Yeah, I took - the mayor of Pittsburgh and I took the inaugural ride about a couple of months ago.

CORNISH: So it sounds like while you were on this ride, you had far more questions than the average person.

SURESH: We were talking about this, you know, if somebody came and hit us now, are we liable or is somebody else liable? The clarification is not there yet.

CORNISH: And those issues go beyond self-driving cars and renegade robots, to the next generation of smartphones, the chips embedded in home appliances and the ever-expanding collection of personal data being stored in the cloud. Questions about what's right and wrong are open to study.

I asked Carnegie Mellon's Subra Suresh if Isaac Asimov's Three Laws of Robotics were all we had to govern AI right now and if he wanted the university's ethics experts to come up with some sort of moral guideline that everyone understands.

SURESH: I think, putting all three laws into one, "do no harm" could be the very first one.

CORNISH: He says we're at, quote, "an interesting point in the intersection of humans and technology, one we don't have any prior experience with."

Copyright © 2016 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.