TERRY GROSS, HOST:
This is FRESH AIR. I'm Terry Gross. We're going to talk about some of the latest developments in robot technology, some pretty cool stuff, but we're also going to discuss the practical and ethical questions we face as robots play an increasingly large role in our lives. Some of them will help us. Some of them may replace us at work. In the new book, "Machines Of Loving Grace," my guest, John Markoff, writes about the past, present and future of robots and warns that this future will come with consequences we're not prepared for. He's been a science and technology reporter for The New York Times since 1988. In an earlier book, he wrote about how the 1960s counterculture shaped the personal computer.
John Markoff, welcome to FRESH AIR. So in writing about robots, you write about two different categories of robots. One is artificial intelligence. And you describe those robots as robots that are kind of designed to make it unnecessary for us to do something, so they replace us in that activity. And the other kind of robot is intelligence augmentation, where the robot actually enhances our ability to do something, it adds brainpower or physical strength or some kind of thing that will help us do what we do. Why is that an important distinction?
JOHN MARKOFF: It's an important distinction because the relationship between humans and robots is becoming increasingly more personal. And robots, as tools for humans, are becoming increasingly more powerful. And despite a lot of the sort of perspective in Silicon Valley that these things are evolving very rapidly on their own, in fact they're still designed by humans. And so what I've realized is that we have an increasing ability to design ourselves into the systems that we're building or design ourselves out of them. And that's a decision that's being made sometimes knowingly and sometimes unknowingly by the people who design these machines.
GROSS: So it in part is a question of will machines help us do our work or leave us unemployed?
MARKOFF: It is. It's an increasingly intense question that's being debated now in society. But it goes way back to the dawn of interactive computing. And I noticed this first in these two laboratories that were on either side of Stanford University in the mid-1960s. There were two pioneers of modern computing. One was John McCarthy, who was actually the person who coined the term artificial intelligence. And he was on one side of campus, and he created a laboratory in 1962 called the Stanford Artificial Intelligence Laboratory. And at that point, he thought it would take just a decade to design a working AI - something that was as intelligent and as capable as a human being. On the other side of campus, there was another engineer whose name was Douglas Engelbart. And we know him probably because you've heard that he invented the mouse, and he was one of the people who pioneered the idea of hypertext that led to the World Wide Web. And Engelbart really believed deeply that we should use machines to augment our senses and our intelligence and sort of bootstrap the collective intelligence of the human species to benefit mankind. And I realized those were two different philosophical stances. And it created two different communities within the computer science world. One was the AI community and the other was the IA community, which later came to be called the human-computer interface community. And since then it's been, you know, more than 40 years, 50 years, those communities have largely progressed without speaking to each other, in isolation. And it seems like now is a good time that they maybe should work on converging their powers.
GROSS: What's an example, in each category, of a robot that we're using now and that one we'll probably be able to use in the near future?
MARKOFF: Well, so an artificial intelligence robotic device that I think probably will become familiar to all people is a self-driving car. We don't have them yet, as things we can buy. But there are people who are working very hard on making that happen. And there's a huge debate about how quickly they will come. So cars are something that will probably surround us - autonomous vehicles are probably something that will surround us at some point in the future. And the other side, you know, I think in some senses what we already have is the personal computer - I mean, Steve Jobs described it in the most evocative way, as being a bicycle for the mind - and, you know, take it a step farther. And I have a very broad definition of what a robot is, and a robot can be something that's physically embodied, it can be a machine, it can walk around. Or it can be software that is a personal assistant, something like Siri or Cortana or Google Now - something that we interact with. And, of course, if you've seen the movie "Her," you can sort of speculate on where that might go in terms of systems that converse with humans and do things that we need to be done.
GROSS: Let's talk about vehicles 'cause I think the driverless car is an example of something that people are so conflicted about. On the one hand, it's a really exciting, kind of thrilling idea to think that that's possible. On the other hand, there's something, like, deeply terrifying about it because the thought of being hit by a driverless car, either as a driver or a passenger, is terrifying and there are so many subtle decisions that you have to make as a pedestrian and as a driver. And you rely on the consciousness of other drivers to interact with you in some way. Sometimes even when you're crossing the street, you're looking at the driver to see, are they looking at you or looking at their phone or looking at their passenger? So where are we now, in terms of the driverless car? What's the state of the art?
MARKOFF: Well, first thing, let me say that the bar for autonomous vehicles is incredibly low. Human drivers are terrible. We do an absolute miserable job.
GROSS: (Laughter) Speak for yourself.
MARKOFF: So - but your point about distraction is really remarkable. So where are we now? I have a car that I bought this year - first car in 10 years - that is able to recognize both pedestrians and bicyclists. And if I don't stop, it will. So that's - that's 2015. It also has something called adaptive cruise control. So I can - you know, I commute in Silicon Valley - I can drive without touching either the brake or accelerator, going from zero to 70 from San Francisco to Menlo Park without ever doing anything but steering the car now. And that's, you know, that's a very inexpensive add-on that you can get for almost any car on the market now.
So my point is, we're kind of like frogs in the pot. And to my AI versus IA sort of point of view, it would be really good. Let's forget about driverless cars for a second because that's a really hard problem. And it's a hard problem not because of sort of the basics, it's a hard problem because of the edge cases, the extraordinary things that only happen once in a while. But what if we could design, instead of a driverless car, a car that wouldn't crash? Which is sort of a different way of looking at the problem. What if we could protect ourselves from our foibles? And I think that the automobile industry is actually moving very quickly to that, and that cars will be safer and we'll be protected from our smartphones and whatever else we're doing in our car that we shouldn't be doing. And so it's going to happen gradually. So for example, Tesla and General Motors, next year, are introducing something that's known as super-cruise. The car will largely drive itself at freeway speeds on the freeway. Already in the market today, there is a technology called Traffic Jam Assist, which is a little bit more advanced than what I have in the Volvo station wagon that I bought. Traffic Jam Assist - from companies like Audi and BMW and Mercedes and others - allows the car to drive by itself in stop-and-go traffic, and it follows the car in front of it in traffic, and it stays in the lane and it goes fast and slow and it actually frees the driver up.
So my argument is that getting over that last hurdle to actually truly self-driving cars is going to be really, really tough. And I don't think it's going to happen in the next decade. But getting to the point where the driver supervises rather than actually manually drives is something we'll come to expect over the next half-decade.
GROSS: How secure are you in your car with the robotic functions?
MARKOFF: They haven't failed me yet. So how secure are you in your car with the mechanical functions? I think that there are many things that go wrong in cars. And, you know, both Toyota and GM have gone through incredibly painful litigation because they've made design errors. And so that won't go away. But I think sort of net, you know, sort of overall by adding these technologies to the car, they'll make both the driver safer and the people like you walking on the street with your smartphone safer. For example, in Ann Arbor right now, and in Europe and Japan, they're testing a technology called V2X. And that basically will give cars in the future, and bicyclists and pedestrians who are wearing these little beacons, sort of awareness of what's going on around them. And so, you know, the car - even if you don't, as a driver, see the pedestrian - the car will know the pedestrian or the bicyclist is there. And I think that's a technology you should push out as quickly as possible.
GROSS: So you mean the car would read these little devices that the pedestrian wears and knows that there is a pedestrian present?
MARKOFF: Yes. That will become standard over the next half-decade. And more to the point, cars approaching an intersection will know that there's another car approaching an intersection coming from the other direction. So that's the kind of car-that-won't-crash technology I think would be just a tremendously positive thing.
The deeper question - I mean, a very interesting thing happened in the Google self-driving car project that probably didn't get noticed enough earlier this year. You know, Google has been pushing ahead with this notion of, you know, a fleet of Priuses and then Lexuses that could drive themselves, and they've done a spectacular job. I mean, they've driven over a half-million miles without a robot-caused error. They've been in a number of crashes, as they recently admitted, but in each case the crash that the car's been involved in has been caused by a human. And that's just - that's an extraordinary feat. But if you noticed, about four months ago, they changed the nature of their project and they added this new kind of car that didn't have a steering wheel, didn't have a brake, it didn't have an accelerator. It was like an elevator. And it was made to be limited to 25 miles an hour. And the idea was that you wouldn't take it on the freeway, but maybe in a downtown area or on a campus. You could call a car from your smartphone and it would show up and it would take you to where you were going, and, you know, while you were going there you would be the victim of some sort of Google advertising campaign and then you'd get out and you'd be there.
GROSS: (Laughter) I like the way you snuck that in parenthetically.
MARKOFF: Well, there's a screen - there's no steering wheel, but there's a screen in the car. But here's why they did it - and they sort of got away without admitting this. So they went to the second stage - this is on their full, full-on Prius fleet and Lexus fleet - they went to the second stage. They started with a group of professional drivers who would sit there and just, like, sort of, you know, airline pilots, they'd have checklists and they'd be very attentive. And then they began giving their cars to Google workers to go home, you know, after working and commuting, and they wanted to see how they worked for regular people. And what they found is that there was all kinds of distracted behavior, up to and including falling asleep. If you've been working (laughter) all day at, you know, a Silicon Valley job, and at the end of the day, your car is driving itself, you might just nod out. And they decided there was no way they could solve that problem. It's called the handoff problem. If you're driving a car - if you're in a driverless car and it's taking care of things, the way it works now is if it encounters a problem it can't solve, it hands control back to you. And if you're playing, you know, "World Of Warcraft," or you're doing email and you're supposed to get back into what's called situational awareness in less than a second - you know, some fraction of a second - it's just not going to happen. And that's one of the problems that we don't know how to solve.
GROSS: Well, that's interesting because if I'm a driver, if I have to be awake, I want to be involved and interested. And the way to do that would be for me to actually be driving. To watch through the windshield as an automated car just drove me would be really, incredibly boring, I think.
MARKOFF: It's going to be an interesting challenge, isn't it? And I don't think we've solved that problem, which is why I think that maybe if we focus on cars that won't crash and keep the human in the loop for maybe a generation or two, it might be a better thing.
GROSS: If you're just joining us, my guest is John Markoff. He's a science and technology reporter for The New York Times, and he's the author of the new book, "Machines Of Loving Grace: The Quest For Common Ground Between Humans And Robots."
John, let's take a short break then we'll talk some more. This is FRESH AIR.
(SOUNDBITE OF MUSIC)
GROSS: This is FRESH AIR. And if you're just joining us, my guest is John Markoff. And we're talking about robots and their relationship to people and what's on the horizon. He's the author of the new book "Machines Of Loving Grace: The Quest For Common Ground Between Humans And Robots." He's the science and technology reporter for The New York Times.
So we've been talking about robotic vehicles. Some of the technology for robotic vehicles - and what's the word I should be using here?
MARKOFF: Well, self-driving cars or autonomous vehicles.
GROSS: Autonomous vehicles, self-driving cars, yes. Some of the technology for that comes out of DARPA, the Defense Advanced Research Projects Agency. DARPA is the Defense Department group that helped create the Internet. So does drone technology and the kind of driverless car that Google's working on basically come out of the same system, the same technology, the same research?
MARKOFF: Well, yeah, there's - I mean, DARPA, you know, it's been well-celebrated for the amount of sort of basic technology work they've underwritten and the impact it's had on the world. It's not just the Internet. It's personal computing. It's drone technology that was first developed by DARPA separately, not in Silicon Valley. And now there's a new wave of sort of AI-related technology. Of course, DARPA funded the first two or three waves of AI technology. Now the most important sort of new capability in the AI world is a software technology called machine learning, or deep learning. And that DARPA was deeply involved in as well.
GROSS: One of the things that DARPA has done is fund the competitions for autonomous vehicles. Can you give us - have you attended any of those competitions?
MARKOFF: I've been to all of them, and they're great fun.
GROSS: Oh, tell us - describe one of them for us. Describe a great moment in one of them, yeah.
MARKOFF: (Laughter) So - let's see - several great moments. The very first DARPA autonomous vehicle Grand Challenge was held in 2004, and it was an absolute failure. All of the entrants ended up strewn around the desert. The best - the sort of the car that got farthest, designed by some researchers at Carnegie Mellon, made it seven miles before it went off the road. There was an autonomous motorcycle that had to be gyroscopically stabilized, and the designer, who was a UC Berkeley student at the time, was so nervous he forgot to turn on the gyrostabilizer, so it fell over as soon as it started. They took down fences. It was a mess. But it was great fun. I flew over the scene afterwards and there were just robots all over the desert.
GROSS: (Laughter) So odd.
MARKOFF: So a year and a half later - and, you know, it was sort of the brainchild of this controversial DARPA director, Tony Tether, who had - you know, was trying to sort of respond to a congressional mandate that said that by 2015, a third of the U.S. military vehicles should be autonomous, and they weren't making good progress. And so he sort of opened the gates for hackers and college professors and corporations.
So a year and a half later, in Florence, Ariz., I found myself in a Stanford car, called Stanley, sitting in the passenger seat next to an AI researcher whose name is Sebastian Thrun, who would later start the Google Car Project. And we were in this Volkswagen Touareg, and it had decals all over it and it had sensors all over it. And we were driving about 20 to 25 miles an hour. It was a desert road so we're going up and down and bouncing around.
And at a certain point, the lidar - these laser sensors that were sort of painting a picture of the world for the car - swept over this branch. And to Sebastian's right, there was this big, red e-stop button. And he was supposed to be able to stop the car if something went wrong. But in this case, the lidar saw the branch and we were off the road before he had a chance to hit the button, and we ran into this giant desert bush. And Stanley, three or four months later, actually won the competition, won $2 million for the Stanford team.
GROSS: Wow, but this could have been a disaster - right? - if it was a real road?
MARKOFF: It could have been a disaster, and, in fact, you know, that's still possible. You know, these are not perfect machines. And so there's this big debate going on sort of among the philosophers of ethics about whether you - you know, I guess it's called the trolley problem - what happens if you can go down one path and if you go down this path, you'll kill five people, and if you go down another path, you'll only kill one person? And so how is the machine going to make this, you know, sort of gruesome, ethical decision? And so there's lots of back and forth on this.
This was not originally started as an ethical debate for trolleys or for cars. It was a - sort of a first principles debate about ethical decisions. And I don't really understand it. I think that sort of for the whole of society, if the vehicles are safer, even though there may be these individual sort of bad things that happen, that we should go in that direction because right now, cars are incredibly unsafe - actually, drivers are. Cars are not particularly unsafe, but drivers do crazy things all the time. So if we can be sort of wrapped around with a cocoon that will make us make better decisions, I think we should go that direction.
GROSS: My guest is John Markoff, a science and technology reporter for The New York Times and author of the new book "Machines Of Loving Grace." After we take a short break, we'll talk more about robots and about how Amazon is using them. Also our TV critic, David Bianculli, will review two new series - IFC's "Documentary Now!," featuring Bill Hader and Fred Armisen, and AMC's "Fear The Walking Dead." I'm Terry Gross, and this is FRESH AIR.
(SOUNDBITE OF MUSIC)
GROSS: This is FRESH AIR. I'm Terry Gross, back with John Markoff, a science and technology reporter for The New York Times and author of the new book, "Machines Of Loving Grace." It's about the past, present and future of robots and the ethical and practical issues surrounding their development. Let's talk a little bit about what Amazon is doing. You say they've bought Kiva Systems, which is a fleet of mobile robots. What do these robots do, and how does Amazon want to use them?
MARKOFF: Well, so, you know, Amazon has this sort of goal of getting you the product you want actually before, you know, you've decided you want it. I mean, their idea is that the product should actually, you know, appear at your doorstep and you would sort of say, yes, that's obviously what I wanted because it's here. But, you know, what they've done in practice is they've really compressed the time it takes you to get something. They have to compete, of course, with your ability to walk into a store and pick something up. And they're getting very close. And so that involves a really remarkable supply chain from the manufacturer to the warehouse. And they have warehouses all over the country. And they've - they purchased this company called Kiva Systems, which is a really good sort of stake in the ground about where AI is right now because what Kiva does - so in the manual, you know, warehouses, where they do this thing called piece pick, the human workers literally have to run from shelf to shelf and gather things and put them in boxes. Kiva Systems says, well, there are two things that we can't do in AI right now. We can't dexterously pick things up. And we can't sort of look at them and understand what they are. So let's automate everything else. So in the Kiva model of the piece pick warehouse, things come to you - the warehouse worker - and you sort of - they come right at the exact right time. And you drop them into the package and bundle it up. And off it goes. And of course, Amazon is pushing as hard as it can on those manipulation problems and the vision problems. And at some point, they'll get rid of the human entirely.
GROSS: That's interesting because there have been a lot of complaints from warehouse workers at Amazon about the hours, the conditions, the temperature inside.
MARKOFF: You know, the joke is that it's great that they finally got robots because the robots need air-conditioning to work. And so they finally air-conditioned the warehouses (laughter).
GROSS: (Laughter) Ouch.
MARKOFF: I don't know if that's true, but that's the joke. But, you know, I'm of two minds about this because I did a fair amount of reporting on sort of the future of warehouses. And warehouses are changing dramatically for lots and lots of different reasons. I mean, one of the things that's happening is warehouses are getting closer and closer to the distribution point. I mean, warehouses are not just scaling up into these 5 million square foot buildings that are outside of cities, urban areas. They're actually putting the warehouses very close to the consumer. And in doing that, what you want to be able to do is move the goods in very intelligent ways. And so it's this long, computerized supply chain that goes all the way back to China. And it's remarkably involved at this point. Besides what Google is doing, there's all kinds of technology flowing into warehouses. There's case pick technologies. I was in a warehouse in upstate New York where one half of the warehouse was sort of existing technology, which is pallet jacks and forklifts and workers with headphones on that speak to them in five different languages, flying around the floor picking up cases and taking them to the waiting trucks. It was just a frenzy of activity. And that's sort of today's technology. The workers were controlled by a centralized computer. And then on the other side of the warehouse, there was this amazing machine that looked like a giant Pachinko Machine, where the cases were actually arrayed like data inside of a computer, I guess, that the packages that were most frequently used were at the front. And it was this 20 sort of levels of packages high, by 20 or 30 wide. And these little go karts would fly down these aisles, grab the package or the case and then bring them back to the centralized Rube Goldberg-like loader at the front. And the packages would come down. And they would be put on a pallet. 
And they would be automatically wrapped and then automatically put in the back of the truck. And so on every level, people are being taken out of the equation. And I actually think that maybe is not a bad thing, as long as we as a society can find something else meaningful for those people to do. These are not...
GROSS: That pays, yes.
MARKOFF: Yes. So there - that's where the disconnect is.
GROSS: If you're just joining us, my guest is John Markoff. We're talking about robots and artificial intelligence and intelligence augmentation. They're the subjects of his new book, "Machines Of Loving Grace: The Quest For Common Ground Between Humans And Robots." He's a science and technology reporter for The New York Times. Let's take a short break, then we'll talk some more. This is FRESH AIR.
(SOUNDBITE OF MUSIC)
GROSS: This is FRESH AIR, and if you're just joining us, my guest is John Markoff. We're talking about robots, and they are the subject of his new book, "Machines Of Loving Grace: The Quest For Common Ground Between Humans And Robots." He's a science and technology reporter for The New York Times. I want to talk about another form of technology. There's a kind of technology that immerses you in a 3-D world, as if you were in the middle of like a 3-D movie or something, so much so that it looks like you're in this reality. And it's called...
MARKOFF: Well, there's virtual reality. That's one of the sort of hot new technologies in Silicon Valley and mostly used as an entertainment medium right now. But there's also augmented reality. And I'm very intrigued by augmented reality. There's a host of startups. It's incredibly challenging. But think about it. I don't know about your city. But if you walk around the streets of San Francisco right now, I would say probably 75 percent of the people are looking down at their smartphones and, you know, sort of wandering aimlessly around the streets. That can't be the final state of technology evolution. It just - it just can't be. It has to be an intermediate step to somewhere. And I've read lots of science fiction about augmented reality. And I was super skeptical until about two years ago. I visited a small startup in Florida, of all places, called Magic Leap. And Magic Leap was one of those companies that's trying to design basically what look very much like normal glasses that you will wear. And it will - their goal is basically to replace personal computing. They want you to be able to do everything you can do with a personal computer. But you won't do it at a desk, and you won't do it in the palm of your hand. And what struck me - I mean, I got a little bit of a view into their technology, which is very much incomplete at this point. But it's exciting enough that Google and Qualcomm and Kleiner Perkins have invested a half billion dollars in it. What struck me about their version of augmented reality is, you know, at the point that I saw it, it was just on a bench top. And I looked through something that looked like something you would see at the eye doctor. And out in space in front of me was this four-armed creature that was walking back and forth. And what startled me - because I've seen virtual reality technology before - was that the resolution that they'd achieved was as good or better than any HDTV I'd ever seen.
So this creature looked real, and it looked three-dimensional. And that was impressive. And all I could do was I could sort of slide it back and forth while it walked in circles, which was kind of neat. And it seemed like it was meant to be in the environment. And then something weird happened. My host ran my thumbnail through the creature. And rather than the creature being transparent, my thumbnail was transparent. Basically, my mind thought that the creature was more real than my thumb. So something was going on that they were able to create an image inside my mind that was very real. So their idea is that they want to get rid of all of the display technology that is made by Asian companies today. And if you want a high-resolution display...
GROSS: Oh, all the screens?
MARKOFF: All the screens - they want to get rid of screens entirely. So if you want a screen using this technology, you'll basically just take your thumbs and draw a square. And a screen will hang in space. And if you want another screen, you'll just sort of do something someplace else. And they can be text or video. There will be no computers. I mean, this is what is meant by the term ubiquitous computing. The computing disappears into everyday things, and they become magic. And, I mean, I saw that start to happen with, you know, the iPod and the iPhone. It was the music player and the telephone. Basically, computing vanished into them, and they became these powerful devices. And so my sense is that, you know, if I had to bet on what the next big platform is, that's a good candidate.
GROSS: So I guess I really don't understand how you would either generate an image or how you would communicate through text or speech to whoever you wanted to communicate.
MARKOFF: Right. So there are different ways of doing it right now. And it's not clear which technology will win. Traditionally, we've been able to sort of generate stereoscopic images that overlay. That's the direction that Microsoft is heading in now. Microsoft has an augmented reality system, too, that's called HoloLens. It's actually not holographic from what I understand. And the challenge with stereoscopic displays is that sometimes they become, for sort of neurological reasons, very disconcerting to the human user. I'm one of those people. You know, 10, 15 years ago, they used to have these things called virtual reality caves. And I would go into a virtual reality cave, and I would invariably get seasick. And seasick, in a way - and it's called virtual reality sickness - you go in there. And you come out, and you're feeling seasick. And it doesn't go away. You've had a very intense experience. And it leaves your mind reeling. As a matter of fact, if you look at these products on the market today - and companies like Sony have them commercially on the market - and you read the fine print, they suggest they're not appropriate for young children to use. And you sort of wonder, well, what's happening that could be a risk for young children? So the hope is that the approach that companies like Magic Leap are taking, which is to create something called a digital light field, which is a way of simulating what you and I see with light as it falls naturally on our eyes - and they think they're able to get close - and they think that that will be a more comfortable and realistic way to sort of add a layer of reality on top of the one you normally see. And from what I saw, it's incredibly intriguing. Miniaturizing it and making it at a price point that you and I can afford is a huge challenge.
GROSS: Would you be carrying some kind of hard drive in your pocket - I mean, something that you could basically say to, I want to read a book, or I want to send a message to my friend? And it would know what to generate for you.
MARKOFF: So yes. So there are different ways of approaching this. The HoloLens device that Microsoft has makes you look a little bit like a cone-head right now. It's kind of a big device. But it has a computer in it.
GROSS: I see.
MARKOFF: And I believe the Magic Leap guys have sort of a different approach that you - basically, it looks like glasses. And there's a cable that comes down the back of your neck. And you've got the computer in your pocket. But yes, there's a computer in the scene. And so far, they haven't been able to miniaturize those completely.
GROSS: So we've been talking about robots. Yet, none of the robots that we're talking about exist in the kind of form we used to imagine robots would exist in. And I'm thinking of the kind of, like, metal robot with, like, a square metal head and a bunch of, like, you know, rectangle metal things stacked on each other for the body and metal legs that would kind of, like, swivel along, kind of like a robotic tin man. You said that there's always been a debate within the artificial intelligence community about whether robots should assume some kind of human form or not. Is that debate still going on?
MARKOFF: It's very much alive. And, you know, we were able to see these robots really act in the world for the first time at DARPA's latest contest. They just completed another round of these contests, called the DARPA Robotics Challenge. And in that case, most of those robots really did look like the kinds of robots you're talking about. Most of them walked. Most of them looked a little bit like the Terminator. Some of them weighed up to 440 pounds. And they had a propensity to fall down, which was really a lot of fun in the finals, which was won by a Korean robot. You know, they had to do these eight tasks. And it was held in Los Angeles. And so the point is these robots are starting to sort of appear in the world. You know, they're not really roaming around by themselves now. And this was actually sort of ground truth for me for where the industry is right now with building these kinds of machines that might wander around and help us or be Terminators perhaps. And that is that after spending millions of dollars and working on these projects for two or three years, many of the robots had difficulty opening a door. And so that led Rod Brooks, who was there to see the finals, to comment by saying, you know, if you're worried about the Terminator, just keep your door closed.
GROSS: What does that mean?
MARKOFF: Well, so the robots weren't able to open the door for the most part (laughter).
GROSS: Oh (laughter). That's funny. John Markoff, thank you so much for talking with us.
MARKOFF: Thank you for having me.
GROSS: My pleasure. John Markoff is a science and technology reporter for The New York Times. And he's the author of the new book, "Machines Of Loving Grace." Here's a great robot song performed in our studio by Flight of the Conchords back in 2007, when they had their HBO comedy series.
(SOUNDBITE OF ARCHIVED BROADCAST)
FLIGHT OF THE CONCHORDS: (Singing) The distant future, the year 2000. The distant future, the year 2000. The distant future, the distant future. It is a distant future, the year 2000. We are robots. The planet Earth has been taken over by the robots. We have made some significant changes. There are no more elephants now. And there are no more stairways, just ramp access. We no longer say the word, yes. Instead, we say affirmative to sound more futuristic. Affirmative. See? There is only one sort of dance now, the robot. Oh, and the robo boogie we do. Oh, affirmative - two dances. Yes, affirmative. Oh, and there are no more humans. Finally, robotic beings rule the world. The humans are dead. The humans are dead. We used poisonous gases, and we poisoned their asses. The humans are dead. He is right. They are dead. The humans are dead. Look at those ones. They're dead. It had to be done. I just confirmed that they're dead. So that we could have fun. Affirmative - I poked one. It was dead. Their system of oppression. What did it lead to? Global robot depression. Robots ruled by people. They had so much aggression that we just had to kill them, had to shut their systems down. I said the humans are dead. He's right. They are dead. The humans are dead. You already said. We used poisonous gases. With traces of lead. And we poisoned their asses. Actually, their lungs. Binary solo. 0000000001, 000000011, 0000000111. 000000001111. Oh, oh, oh, oh, oh, one. Come on, sucker, lick my batteries. Boogie, boogie, boogie. Humans are dead. Once again without emotion, the humans are dead, dead, dead, dead, dead, dead.
GROSS: That's Flight of the Conchords recorded in our studio in 2007. Bret McKenzie and Jemaine Clement have started writing a Conchords movie. Coming up, David Bianculli reviews the new IFC series of satirical documentaries featuring Bill Hader and Fred Armisen and the AMC series, "Fear The Walking Dead." This is FRESH AIR.
NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.