Philosophy

Are We To Become Gods, The Destroyers Of Our World?

The debate about AI continues. (iStockphoto)

In the stylish new sci-fi thriller Ex Machina, Frankenstein's old theme re-emerges in a beautifully designed setting: Instead of the Gothic castle, we have a spectacular estate in a vast mountainous wilderness, home of a reclusive genius who wants to create the first true artificial intelligence.

As in Mary Shelley's classic, cutting-edge science serves as inspiration to a moral tale, one that explores the boundary between humans and gods.

(Spoiler alert: Although I will try not to reveal more of the story than necessary, this essay may compromise your viewing experience.)

For Shelley's "creature," the cutting-edge science was the effect of electricity on animal tissue, in particular Galvani's discovery in the late 1780s that electric currents make muscles twitch. Is electricity the secret of life? That was the question Galvani asked — and Shelley fictionalized.

Cut to 2015. What are the great monsters that science can create in our time? Genetic misfits and nuclear holocaust, for sure. But the greatest threat to our species is an invention that can surpass us in creativity: an artificial intelligence more powerful than we are. Picture a conscious machine, capable of feeling and vastly more intelligent than any human. Are we to co-exist with "them"? How are we to decide? It seems that, if they are more intelligent, the decision is not ours but theirs.

Stephen Hawking, for one, is concerned.

"The development of full artificial intelligence could spell the end of the human race," he claimed recently.

Elon Musk, the founder of Tesla Motors and SpaceX, agrees, stating that AI is probably "our biggest existential threat." He likened AI to "summoning the demon."

The quote in this essay's title is a variation on a line from the Bhagavad Gita, made famous by J. Robert Oppenheimer after the first successful nuclear bomb test in New Mexico.

"Now I am become Death, the destroyer of worlds," said Vishnu to the prince Arjuna. The quote reappears in Ex Machina, as does the notion that if man can create intelligent machines, then man becomes god.

Is that what we are trying to do?

In his thoughtful book Superintelligence: Paths, Dangers, Strategies, philosopher Nick Bostrom warns of the dangers of AI, comparing our fate to that of the gorillas: Just as we can decide whether gorillas live or die, an advanced AI could do the same to us.

Ex Machina explores the theme, coupling it to a variable we don't see in most philosophical or scientific analyses: It's not just the smarts of the machine that can lead to our demise; it can also have sex appeal, seducing us in ways that are more emotional than intellectual. We all have certain ideals of beauty. A smart machine can figure those out and make itself irresistible. Rationally, we may want to destroy it; emotionally, we can't.

Ex Machina is the movie Her on steroids. This is the big point of departure between Ava, the AI in Ex Machina, and the creature in Frankenstein. Ava makes herself beautiful and attractive, using seductive techniques to break free. Frankenstein's "hideous phantasm" has no such choice, wanting instead a partner as hideous as himself to hide away in some forgotten corner of the globe. The "creature" wants detached happiness; Ava wants freedom and power. Her biggest weapon is that she looks like us — without being one of us.

Are we far from strong AI? Yes. But the efforts are out there and, if some predictions are right (some are known to have been wrong), it will be here in a matter of decades. The question then becomes whether two intelligences can co-exist. If our history is any indication, and if the fate of the Neanderthals did indeed have something to do with the emergence of the Cro-Magnons (us) during the late Paleolithic, the future doesn't bode well.

We should indeed take Ex Machina and Nick Bostrom seriously and find pathways to ensure that any AI we create doesn't end up destroying us.


Marcelo Gleiser is a theoretical physicist and cosmologist — and professor of natural philosophy, physics and astronomy at Dartmouth College. He is the co-founder of 13.7, a prolific author of papers and essays, and active promoter of science to the general public. His latest book is The Island of Knowledge: The Limits of Science and the Search for Meaning. You can keep up with Marcelo on Facebook and Twitter: @mgleiser.
