History Of Our Time: Software's Power To Replace Humans Rachel Martin talks to Jaron Lanier, a Silicon Valley pioneer, who offers some solutions to the impact of technology on our livelihoods and our humanity.



RACHEL MARTIN, HOST: We've been talking about our disoriented time and the forces that got us here. It's a series of conversations we call the History Of Our Time. Last week, retired Admiral James Stavridis laid out an American case for keeping sea lanes free from single-country domination.


JAMES STAVRIDIS: The nation that profits the most from a peaceful global commons, from oceans upon which 50,000 ships can sail in a given day moving cargo, is the United States.

MARTIN: This week, we turn to a Silicon Valley pioneer and philosopher who worries about how technology is shaping our global commons. In books like "Who Owns The Future?" and "You Are Not A Gadget," Jaron Lanier questions technology's role in our lives and not just its impact on jobs.

In fact, he argues that technology, and the Internet in particular, needs the brainpower that comes from humans. Lanier points to one example - language translation programs versus human translators.

JARON LANIER: If you look really carefully at how those services work, they have to go grab translation examples every single day by the millions from those same translators without even asking them or telling them it's happening.

MARTIN: What do you mean? Why - how are people grabbing it from those individual human translators?

LANIER: Well, it's just like the way search engines like Google or Bing have to scrape the whole Internet to find the very latest links. In the same way, every single day, the machine language translation programs have to go through the whole net to find examples where people have translated between languages. So we're grabbing all this stuff.

And so this is the thing, those people who are translating, they're not actually obsolete. All we're really doing is we're taking their value, we're taking their work, it's just that we're pretending that the people who are the source of the value don't exist anymore.

MARTIN: So that's depressing because technology is supposed to make us freer. It was supposed to be the great democratizer.

LANIER: Yeah, well, you know, it shouldn't be depressing 'cause technology is something you can change. If you feel that the problem is, say, human nature, that's depressing (laughter) because then you're, like, really stuck with it.

I personally think that a lot of what's really steered us wrong is a stupid business plan, you know, where we put everybody in a barter pseudo economy and then we get money for manipulating what they can see.

MARTIN: A stupid business plan? What does that mean?

LANIER: All right, so the current approximate business plan goes like this. Let's say I'm a big social media company. So I have one of the biggest computers that can grab all your data and determine things about you that you haven't ever realized about yourself.

MARTIN: Yeah, that's creepy.

LANIER: So how do I make money with that? Well, the current way is that people come along and pay me to influence you, to affect your behavior. You don't have time to search through an infinite number of tweets or posts or news stories or whatever, so I select them for you so that I actually can predictably and measurably, in real time, tell how well I'm manipulating you. So it's a giant behavior manipulation scheme.

MARTIN: Is it immoral, and should that matter?

LANIER: I don't think it's moral. I think it's immoral to manipulate people without them understanding how and why they're being manipulated. Some people have called this the black-box society, where you have these algorithms making decisions about what you see, what you can do.

In many cases, especially if you're not in a favored position in our society, for instance, if you're poor or black, it affects what loans you can get, where you can end up living, what schools you might get into. There are all of these things being affected by these algorithms that we don't get to see. Nobody understands them. And I have a moral problem with that.

MARTIN: So Lanier says humans' contributions to some kinds of technology are being taken for free and often used against them. He says he and others are working on ways that humans, like those translators, can actually get paid for the value they add to technology. But that still leaves the present, a present where Lanier says technology and tech-driven events frequently haven't kept their promise.

So I want to switch gears a little bit and talk about the more philosophical part of this issue, which is how technology is changing us as people, as a culture, as a society.


MARTIN: Where do you see the most damage occurring?

LANIER: Well, I mean, you know, I have to look at results. And the ideology is still ringing so strong in Silicon Valley that inevitably it'll create more freedom and more fulfillment in all of this. And I just look at the actual world that we're creating, and so far, it just isn't doing that.

I got to say, I'm looking at a lot of failure here. I'm looking at an Arab Spring that didn't create jobs for people, and so it wasn't sustainable. I'm looking at a bunch of really noisy political media that just seems to be creating outcomes, at least in America and also around the world, that are not really directly in anyone's interest.

It feels degrading, you know? And I have to ask, at what point, as engineers, do we accept empirical results instead of ideology? And it just looks like it's not working.

MARTIN: Does that mean Silicon Valley - and we use that term broadly - does it mean that technology companies need to start prioritizing the social impact of the products they make and the services they provide?

LANIER: I'm a little unhappy with this idea that we have relegated more and more social responsibility to corporations that are inherently not tasked to be the central sources of social responsibility. When we say, oh, social media must be the enforcer of, you know, controlling hate speech or meanness or something like that, there's a point at which we're recreating some kind of nanny state that I think we don't want, or some kind of overbearing government-like entity that we have much less power over than one that's voted on democratically.

So, I mean, I think it's fine to talk about social responsibility for corporations, and I'm way into that. But we have to also look to larger society and government for those functions. We can't allow the tech companies to become the new government.


MARTIN: Jaron Lanier is a computer scientist, philosopher and serial entrepreneur. His current title is interdisciplinary scientist at Microsoft Research. And at the top of his personal home page, you will find these words, quote, "Jaron has no social media accounts at all and all purported ones are fake."


Copyright © 2017 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.