GUY RAZ, HOST:
Could we just acknowledge, for a moment, that things like big data and AI are going to be revolutionary? I mean, they are going to change everything.
YUVAL NOAH HARARI: Yes, in almost every field of action, and the opportunities are amazing.
(SOUNDBITE OF MUSIC)
RAZ: This is Yuval Noah Harari. He's a historian and an author.
HARARI: So in 30 years, artificial intelligence and biometric sensors might provide even the poorest people in society with far better health care than the richest people get today. You can hardly think of a system, whether it's communication or traffic or electricity, which won't benefit from these kinds of developments.
RAZ: But Yuval thinks that, in the bigger picture, how AI and big data might affect political power could have really dangerous unintended consequences.
HARARI: More and more governments are reaching the conclusion that this is like the industrial revolution in the 19th century. Whoever leads the world in AI will dominate the entire world.
RAZ: And when we come back in just a moment, Yuval Harari will explain how AI might threaten to destroy liberal democracy. Stay with us. I'm Guy Raz, and you're listening to the TED Radio Hour from NPR.
(SOUNDBITE OF MUSIC)
RAZ: It's the TED Radio Hour from NPR. I'm Guy Raz. And on the show today, ideas about Unintended Consequences and the dark side of innovation. And just a moment ago, we were hearing from historian Yuval Noah Harari, who warns that AI and big data could present a real threat to liberal democracy.
HARARI: If you look at the clash between communism and liberalism, you can say it was a clash between two different systems for processing data and for making decisions. The liberal system is, in essence, a distributed system. It distributes information and the power to make decisions between many individuals and organizations. In contrast, communism and other dictatorial systems - they centralize. They concentrate all the information and power in one place, like in Moscow in the case of the Soviet Union.
HARARI: Now, given the technology of the 20th century, centralization was simply inefficient because nobody had the ability to process all the information fast enough and make good decisions - I mean, how many cabbages to grow in the Soviet Union, how many cars to manufacture, how much each car should cost. We tried to make all these decisions in one place when all you had was typewriters and filing cabinets and pen and paper and things like that. It just doesn't work.
(SOUNDBITE OF MUSIC)
HARARI: And this is one of the main reasons, if not the main reason, that the Soviet Union collapsed. So this was really a result of the prevailing technological conditions.
RAZ: But those technological conditions have obviously changed. Here's more from Yuval Noah Harari on the TED stage.
(SOUNDBITE OF TED TALK)
HARARI: In the 20th century, democracy and capitalism defeated fascism and communism because democracy was better at processing data and making decisions. But it is not a law of nature that centralized data processing is always less efficient than distributed data processing. With the rise of artificial intelligence and machine learning, it might become feasible to process enormous amounts of information very efficiently in one place, and then centralized data processing will be more efficient than distributed data processing.
The greatest danger that now faces liberal democracy is that the revolution in information technology will make dictatorships more efficient than democracies, and then the main handicap of authoritarian regimes in the 20th century - their attempt to concentrate all the information in one place - it will become their greatest advantage.
RAZ: So are you saying that the threat to liberal democracy increases as the ability of machines to process ever larger amounts of data improves?
HARARI: Yes, in many ways. So in the 20th century, the supporters of liberal democracy had a kind of relatively easy time because you did not have to choose between ethics and efficiency. The most ethical thing to do was also the most efficient thing to do. To give power to the people, to give freedom to individuals - all these things were good, both ethically and economically.
And most governments around the world that liberalized their societies in the last few decades - they thought, if we want a thriving economy like the U.S. economy or like the German economy, we need to liberalize our societies. So even if we don't like it very much, we have to do it. But what happens if, suddenly, this is no longer the case? It's still the best thing to do from an ethical perspective - to protect the privacy and the rights of individuals. But it's no longer the most efficient thing to do.
The most efficient thing to do is perhaps to build these giant databases, which completely ignore the privacy and the rights of individuals. And it's the most efficient thing to do to allow algorithms to make decisions on behalf of human beings. The algorithms will decide whom to accept to these universities. The algorithms will tell you what to study and where to live and even whom to marry. And if this is more efficient, what happens to the ideals of freedom and human rights and individualism? This becomes a much more problematic issue than in the 20th century.
(SOUNDBITE OF TED TALK)
HARARI: Another technological danger that threatens the future of democracy is the merger of information technology with biotechnology, which might result in the creation of algorithms that know me better than I know myself. And once you have such algorithms, an external system, like the government, can not only predict my decisions, it can also manipulate my feelings, my emotions. A dictator may not be able to provide me with good health care, but he will be able to make me love him and to make me hate the opposition.
The enemies of liberal democracy, they have a method. They hack our feelings - not our emails, not our bank accounts. They hack our feelings of fear and hate and vanity, and then use these feelings to polarize and destroy democracy from within because, in the end, democracy is not based on human rationality. It's based on human feelings. During elections and referendums, you're not being asked, what do you think? You're actually being asked, how do you feel? And if somebody can manipulate your emotions effectively, democracy will become an emotional puppet show.
RAZ: So your conclusion is, he who controls the data, controls the people.
HARARI: Yes. And you have to start with the understanding that, at least according to science, our feelings do not represent some mystical free will. They represent biochemical processes in our bodies and, of course, influences from the environment. What we also need to remember is that it is therefore technically possible to decipher, to hack, human beings and human feelings. In order to hack a human being, you need a lot of biological knowledge and you need a lot of computing power. And until today, nobody could do it.
HARARI: And therefore, people could believe that humans are unhackable - that human feelings reflect free will, and nobody can ever understand me and manipulate me. And this was true for the whole of history, but this is no longer true. Once you have a system that can decipher the human operating system, it can predict human decisions, and it can manipulate human desires and human feelings.
I mean, until today, no politician really had the ability to understand the human emotional system. By trial and error, they see what works, and it changes all the time. But if we reach a point when we can reliably decipher the human biochemical system and basically sell you anything, whether it's a product or a politician, then we have a completely new kind of politics.
RAZ: You know, I know that you probably come across, you know, people who have helped to create this technology - people who had this kind of utopian idea of how data and data processing could change the world in positive ways, you know? But those same people, you have to wonder whether they stop to think about the unintended consequences.
HARARI: When you develop this kind of technology, in most cases, you obviously focus on the positive implications. And until today, humankind has managed to avoid the worst consequences. The most obvious example is nuclear technology. All the doomsday prophecies from the 1950s and 1960s about a nuclear war that would destroy human civilization - they didn't happen. Humankind successfully rose to the challenge of nuclear technology. Whether we can do it again with AI and with biotechnology is an open question.
RAZ: Yuval, you will know this well as an Israeli - somebody who lives in the biblical lands. Prophets are rarely rewarded. In fact, they're usually disliked, even when they're right. And oftentimes, when they're right, it doesn't matter because their warnings are so dark, and we ignore them at our peril.
HARARI: I definitely don't see myself as a prophet. And I don't think that anybody can prophesy the future. Actually, it's pointless. Again, I define myself as a historian. And what I try to do is map different possibilities. There is always more than one way in which we can go from here. And the reason I think it's important to have this discussion is because it's not too late.
(SOUNDBITE OF MUSIC)
HARARI: I see my job as changing the discussion in the present. We can still influence the direction in which this technology is going. There are always different possibilities.
RAZ: Yuval Noah Harari teaches history at the Hebrew University of Jerusalem. His latest book is called "21 Lessons For The 21st Century." You can see his entire talk at ted.com.
NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.