The Ethics Of The 'Singularity'

Philosophy

Futuristic being. (iStockphoto)

Some people argue that we will one day reach a point when our machines, having become smarter than us, will themselves be able to make machines that are smarter than them. Superintelligence — an intelligence far outstripping anything we are in a position even to imagine — will come on the scene. We will have attained what is known, in futurist circles, as the "singularity." The singularity is coming. So some people say.

There are singularity optimists and singularity pessimists. The optimists — I think we can rank Ray Kurzweil in this camp — envision a future in which real artificial intelligence helps rid the world of disease and extends our own lives beyond frail biological limitations.

It is the pessimists who are in the news lately. A group of leading thinkers and executives has recently signed a statement urging us to slow down and think through the safeguards necessary to protect us from a race of machines who will know more than us, think faster and farther and less fallibly than us, and who will no longer need us. Such artificial superiority will be able to control us the way we, as a species, have been able to dominate planet Earth and her many species.

According to this genre of anxiety about the future, the superior alien who will have it in its power to rule over us — as heartlessly as we have dominated other species — won't come by spaceship from a remote world, but will rather spring forth, life-like and stunningly adaptive, from our own midst.

I'm a singularity skeptic. The singularity is science fiction. It's AP (an Artificial Problem). We haven't yet made systems that are even a little bit intelligent. It is our intelligence, and our agency, and our interests and concern for solutions that are on display in Siri and in Watson and Deep Blue. As I have argued here before, we haven't managed to make something as intelligent as an amoeba. Until we do, let's try to keep our singularity fantasies in check.

But fantasies are very telling and significant even when, in fact precisely when, they float free of reality. They are our fantasies after all. Now the trouble with the singularity fantasy — at least when it pretends to be fact, not fiction — is that it betrays a shocking (and disturbingly outdated) amorality.

Consider what Nick Bostrom, the Oxford philosopher who's been spearheading this conversation as the head of the Future of Humanity Institute, said on the radio last week. (I rely on my memory of the interview.) We need to take steps, he explained, to be sure that the superintelligent machines that will one day walk the earth with us are given our values.

What? Did I hear right?

Either machines are capable of having values or they aren't. If they are not, well, then the whole question of value is misapplied. We're just talking about making safe appliances.

But if they are capable of having values, this is because, well, because they aren't just appliances. We are now supposing that they have needs, that they understand their own needs, and that they are capable of acting out of these interests. If it is even appropriate to say of our technological spawn that they have values, then this is because our spawn are, in their own right, valuable; that is, they are now persons like ourselves. They may be artificial, but they are now actors with minds of their own. In that case, to talk of installing or enforcing or imposing our values is nothing less than to advocate for slavery.

The futurists, it seems, are stuck in the past. They openly plead for 19th century style control and indoctrination of what, really, by their own lights, is a new race, or a new kind of being — a new kind of intelligence.

Do we not know that we have no more right to enslave or indoctrinate or control a new race of intelligent machines than we do to dominate other people?

If the singularity really comes to pass, then, it seems, we'll have to try to convince beings who are smarter than us and who are independent of us that they should not be indifferent to us and to our value.

But can we win this argument? It's a scary prospect.

The challenge raised by the singularity fantasy is this: Can the weak persuade the strong? Can we, without evil, persuade the machines that we even matter?

Perhaps it will depend on how intelligent these superintelligences are. If they are smarter than us, maybe they will also be more reasonable than us.


Alva Noë is a philosopher at the University of California, Berkeley, where he writes and teaches about perception, consciousness and art. You can keep up with more of what Alva is thinking on Facebook and on Twitter: @alvanoe
