Could You Kill A Robot?

Will we one day create machines that are essentially just like us? People have been wrestling with that question since the advent of robotics. But maybe we're missing another, even more intriguing question: what can robots teach us about ourselves? We ponder that question with Kate Darling of the MIT Media Lab in a special taping at the Aspen Ideas Festival.


SHANKAR VEDANTAM, HOST: This is HIDDEN BRAIN. I'm Shankar Vedantam.


VEDANTAM: Have you ever talked to your computer, cursed it for making a mistake?


DAVID HERMAN: (As Michael Bolton) PC LOAD LETTER, what the [expletive] does that mean?

VEDANTAM: Have you ever argued with the traffic directions you get from Google Maps or Waze?


AUTOMATED VOICE: Starting route to Grover's Mill Road. Head north on...

VEDANTAM: Have you ever looked at a Roomba cleaning the floor on the other side of the room and told it, please come over to this side, turn left, left?


UNIDENTIFIED MAN: It just ran itself right off of the edge.

VEDANTAM: Robots and artificial intelligence are playing an ever larger role in all of our lives. Of course, this is not the role that science fiction once imagined.


MICHAEL BIEHN: (As Kyle Reese) It doesn't feel pity or remorse or fear.

VEDANTAM: Robots bent on our destruction remain the stuff of movies like "Terminator." And robot sentience is still an idea that's far off in the future. But there's a lot we're learning about smart machines, and there's a lot that smart machines are teaching us about how we connect with the world around us and with each other. This week on HIDDEN BRAIN, can robots teach us what it means to be human?


VEDANTAM: My guest today has spent a lot of time thinking about how we interact with smart machines and how those interactions might change the way we relate to one another. Kate Darling is a research specialist at the MIT Media Lab. She joined us recently in front of a live audience at the Hotel Jerome in Aspen, Colo., as part of the Aspen Ideas Festival. Also on stage was a robot, a green robot dinosaur about the size of a small dog known as a PLEO. It's going to be part of this conversation. But before we get to that, here's Kate.


VEDANTAM: Kate, welcome to HIDDEN BRAIN.

KATE DARLING: Thank you for having me.

VEDANTAM: You find that there is an interesting point in the relationship between humans and machines. And that point comes when we give a machine a name. I understand that you have three of these PLEO dinosaurs at your home. Can you tell me some of the names that you have given to your robots?

DARLING: Yes. So the very first one I bought I named Yochai after Yochai Benkler, who's a Harvard professor who's done some work in intellectual property and other areas that I've always admired. And the second one I adopted after I filmed a Canadian documentary where the show host had to name the robot. And he gave the robot the same name he had, which was Peter (ph). So the second one has a boring name. And then...


DARLING: ...The third one is named Mr. Spaghetti. I don't know if people outside of Boston are familiar with this, but the Boston public transportation system, they wanted to crowdsource a name for their mascot dog. And the Internet decided that the dog should be named Mr. Spaghetti. And, of course, they refused to do that and named the dog Hunter. So Mr. Spaghetti became a big thing in Boston for a while. It was - people were very outraged about this. And so I named my PLEO - my third one - Mr. Spaghetti.

VEDANTAM: I understand that companies have actually found that if you sell a robot with the name of the robot on the box, people will interact with that robot differently than if you just said, this is a dinosaur.

DARLING: So - yeah - I don't have any data on this, but, yes, I have talked to companies who feel that it helps with adoption and trust of the technology. Even very, very simple robots like boxes on wheels that deliver medicine in hospitals - if you give them a little nameplate that says Betsy (ph), their understanding is that people are a little bit more forgiving of the robot. So instead of, this stupid machine doesn't work, they'll say, oh, Betsy made a mistake.


VEDANTAM: And I'm wondering if you've spent time thinking about why this happens. At some level, if I came up to you at home and I said, Kate, is Mr. Spaghetti alive? You would almost certainly tell me, no, Mr. Spaghetti's not alive. I assume you don't think Mr. Spaghetti is alive, right?




VEDANTAM: ...Given that you know that Mr. Spaghetti's not alive, why do you think giving him a name changes your relationship to him?

DARLING: With robots in particular, it's combined with just our general tendency to anthropomorphize these things. And we're also primed by science fiction and pop culture to give robots names and view them as entities with personalities. And it's more than just the name, right? I mean, robots move around in a way that seems autonomous to us. We respond to that type of physical movement. Our brains will project intent onto it. So I think robots are the perfect mixture of something that we will very willingly treat with human qualities or lifelike qualities.

VEDANTAM: I understand from one of your papers that there have been examples, especially in military settings, where robots have been assigned to do very dangerous tasks because you don't want human beings to go out and do those tasks. But eventually, the military folks who are running the robots start to relate to them as if they were actually fellow soldiers.

DARLING: There's been some research on this. So Julie Carpenter has done some research. And there's just countless stories online, on Reddit, everywhere about soldiers becoming emotionally attached to the bomb disposal robots that they work with. They'll give them names. They'll give them medals of honor. They'll have funerals for them.

And, you know, it's also kind of interesting because the robots aren't special robots. They're just kind of sticks on wheels. But I think just also the situation that the soldiers find themselves in, where the robot is basically risking its life to save human lives, might also lead them to become very attached to these devices.

VEDANTAM: You had a wonderful example some time ago - I don't know if this was your work or you were citing someone else's work - of a colonel, I believe, who was supervising a robot that was doing mine disposal. Tell us the story about the mine disposal robot.

DARLING: Oh, yeah, this is an incredible story - it was an article in The Washington Post, I believe, back in 2007. And the United States military was testing this new robot that could walk over a field with landmines and defuse them by stepping on them and blowing them up. And the robot itself was shaped like a stick insect.

So it would walk around on six legs. And every time it stepped on a landmine, one of the legs would blow up. And then it would just continue on the remaining legs. And so they were testing this. And the colonel who was in charge of this exercise ended up calling it off because he said it was too inhumane to watch this thing drag itself across the field on its remaining legs.


VEDANTAM: And so this raises sort of an interesting question because, on the one hand, we can understand how this desire to anthropomorphize robots is very human in some ways. But in some ways - in this case, that defeats the whole point of having a robot that's trying to find the mines.

DARLING: It does. I mean, anthropomorphizing robots is anything from inefficient to dangerous in these contexts where we want them to be strictly used as tools. And it's a very difficult design challenge as well because how do you create a device that people want to use but don't like too much?

VEDANTAM: All right. So we have this wonderful little prop in front of us. It's a PLEO dinosaur. I want you to tell me a little bit about the PLEO dinosaur - how it works and how you've come to own three of them, Kate.

DARLING: (Laughter).

VEDANTAM: What is the dinosaur? What does it do?

DARLING: It's basically an expensive toy. I bought the first one, I think, in 2007. There we go. It's awake. They have a lot of motors and touch sensors. And they have an infrared camera and microphones. So they're pretty cool pieces of technology for a toy. And that's initially why I bought one because I was fascinated by everything that it can do. Like, if it starts walking around, it can walk to the edge of the table. It can look down, measure the distance to the floor. It knows that there's a drop, and it'll get scared and walk backwards.


DARLING: And then they go through different life phases - adolescent and fully grown. And, you know, it'll have moods and...

VEDANTAM: So I think what we should do - we bought the robot at HIDDEN BRAIN a couple of weeks ago. We haven't had a chance to give it a name yet. And I thought we should actually reserve the honors for this evening where we're talking to Kate and see if Kate wants to try and name this dinosaur, you know, since she cares about dinosaurs so much. I was looking up Kate's Twitter feed this morning. I understand that you're going to have a baby soon. Congratulations.

DARLING: Yes, I don't have a name for that, either.



VEDANTAM: Just FYI, she sometimes refers to the baby as baby bot, so just - for whatever that's worth. And one retweet that you have on your Twitter feed cracked me up. It said, you don't really know how many people you don't like until you start trying to pick baby names.


DARLING: Yeah, that was - that's a quote from my husband.


VEDANTAM: So I don't want you to tell me - you apparently haven't yet picked your baby's name. But do you have any top choices? Or is there a spare name that you might care to give the dinosaur?

DARLING: Well, the problem is we've had a girl's name picked out for years and now we're having a boy. And we just can't - we don't even have any contenders.

VEDANTAM: No contenders. What would have been your favorite girl's name if you had had a girl?

DARLING: Well, so when I first started dating my now-husband, he at some point said, if I ever had a daughter, I already know what I would name her. And I was like, oh, really? We're going to fight about this one. And he said, yeah, I would name her Samantha and Sam for short because Sam is kind of gender neutral. And I was like, oh, I really love that. So that one was picked out very easily.

VEDANTAM: All right, since you're not having a girl, you're going to have a boy, would you mind if you considered naming the dinosaur Samantha? How would you feel about that?

DARLING: Oh, that would be awesome. We should name the dinosaur Samantha.

VEDANTAM: All right, so henceforth, this dinosaur will be called Samantha...


VEDANTAM: ...Or Sam for short.


VEDANTAM: Now, some time ago, Kate conducted a very interesting experiment with the PLEO dinosaurs. And to sort of show how this works, I have a second prop here which is under the table.


VEDANTAM: It's a hammer, a large hammer which we borrowed from the hotel. Now, as you all know, the dinosaur is obviously not alive. It's just cloth and plastic and a battery and wires. It has a name, of course, Samantha, but...


VEDANTAM: ...It isn't alive in any sense of the term. And so Kate, I'm going to actually give you the hammer.

DARLING: Oh, no.


VEDANTAM: Kate, would you consider destroying Samantha?



VEDANTAM: It's just a machine.

DARLING: I only make other people do that. I don't do it myself.


VEDANTAM: You wouldn't even consider harming the dinosaur?

DARLING: Well, so my problem is that I already know the results of our research. And that would say something about me as a person, so I'm going to say no, I'm not willing to do it.


VEDANTAM: Tell me about the experiment. So you had volunteers come up, and you basically introduced them to these lovable dinosaurs. And then you gave them a hammer like this. And you told them to do what?

DARLING: Well, so - OK, so this was the workshop part that we used the dinosaurs for. They're a little too expensive to do an experiment with a hundred participants. So for the workshop that we did, in a nonscientific setting, we had five of these robot dinosaurs. We gave them to groups of people and had them name them, interact with them, play with them. We had them personify them a little bit by doing a little fashion show and a fashion contest.

And then after about an hour, we asked them to torture and kill them. And we had a variety of instruments - we had a hammer, a hatchet and I forget what else. But even though we tried to make it dramatic, it turned out to be a little bit more dramatic than we expected it to be.

And they really refused to even hit the things. And so we had to kind of start playing mind games with them. We said, OK, you can save your group's dinosaur if you hit another group's dinosaur with the hammer.

VEDANTAM: Oh, my gosh.


DARLING: And they tried. And they couldn't do that, either. This one woman was standing over the thing trying and she just couldn't. She ended up petting it instead.


DARLING: And then, finally, we said, OK, well, we're going to destroy all of the robots unless someone takes a hatchet to one of them. And finally, someone did.

VEDANTAM: Wait, so you said unless one of you kills one of them, we are going to kill all of them?

DARLING: Yeah. I think this might have been my partner's idea. So I did this with a friend named Hannes Gassert. We did this at a conference called Lift in Geneva. And we had to improvise because people really didn't want to do it. So we threatened them.


DARLING: And finally, someone did.

VEDANTAM: Samantha clearly doesn't want you to harm her.

DARLING: Yeah, clearly, clearly.

VEDANTAM: So what do you think is going on? I mean, at a rational level, the dinosaur obviously is not alive. Why do you think we have such reluctance to harm the dinosaur? In fact, I might have the battery removed so the dinosaur stops making noise.

DARLING: Well, I mean, it behaves in a really lifelike way. I mean, we have over a century of animation expertise in creating compelling characters that are very lifelike that people will automatically project life onto. I mean, look at, you know, Pixar movies, for example. It's incredible. And I know that a lot of social roboticists actually work with animators to create these compelling characters.

And so, you know, it's very hard to not see this as some sort of living entity, even though you know perfectly well that it's just a machine because it's moving in this way that we automatically subconsciously associate with states of mind. And so I just think it's really uncomfortable to people, particularly for robots like this that can display, you know, a simulation of pain or discomfort, to have to watch that. I mean, it's just not comfortable.

VEDANTAM: What did you find in terms of who was willing to do it and who wasn't? I mean, when you looked at the people who were willing to destroy a dinosaur, a dinosaur like the PLEO, you found that there were certain characteristics that were attached to people who were more or less likely to do the deed.

DARLING: So the follow-up study that we did, not with the dinosaurs, we did with HEXBUGs, which are very simple toys that move around like insects. And there, we were looking at people's hesitation to hit the HEXBUG and whether they would hesitate more if we gave it a name and whether they would hesitate more if they had natural tendencies for empathy, for empathic concern. And, you know, we found that people with low empathic concern for other people didn't much care about the HEXBUG and would hit it much more quickly. And people with high empathic concern would hesitate more. And some even refused to hit the HEXBUGs.

VEDANTAM: So in many ways, what you're saying is that potentially the way we relate to these inanimate objects might actually say something about us at a deeper level than just our relationship to the machine.

DARLING: Yes, possibly. I mean, we know now, or we have some indication that we can measure people's empathy using robots, which is pretty interesting.

VEDANTAM: You know, my colleagues and I were discussing ahead of this interview whether you would actually destroy the dinosaur. And we were torn because we said, on the one hand, you of all people should know that these are just machines, and that it's an irrational belief to project lifelike values on them. But on the other hand, I said, you know, it's really unlikely she's going to do it because she's going to look like a really bad person if she smashes the dinosaur in front of 200 people.

DARLING: I mean, I don't know if you've been watching "Westworld" at all, but the people who don't hesitate to shoot the robots, they seem pretty callous to us. And I think maybe there is something to it. Of course, we can rationalize it. Of course, if I had to, I could take the hammer and smash the robot, and I wouldn't have nightmares about it. But I think that perhaps overriding that basic instinct to hesitate might be more harmful than just going with it.

VEDANTAM: I want to talk about the most important line we draw between machines and humans, and it's not intelligence, but it's consciousness. I want to play you a little clip from "Star Trek."


PATRICK STEWART: (As Captain Jean-Luc Picard) Now tell me, Commander. What is Data?

BRIAN BROPHY: (As Commander Bruce Maddox) I don't understand.

STEWART: (As Captain Jean-Luc Picard) What is he?

BROPHY: (As Commander Bruce Maddox) A machine.

STEWART: (As Captain Jean-Luc Picard) Is he? Are you sure?

BROPHY: (As Commander Bruce Maddox) Yes.

STEWART: (As Captain Jean-Luc Picard) You see, he's met two of your three criteria for sentience, so what if he meets the third - consciousness - in even the smallest degree? What is he then? I don't know. Do you?

VEDANTAM: So this has been a perennial concern in science fiction - the idea that at some point machines will become conscious and sentient. And very often, it's in the context of, you know, the machines will rise up and harm the humans and destroy us. But as I read your research, I actually found myself thinking, is our desire to believe that machines can become conscious actually just an extension of what we've been talking about for the last 20 minutes? We project sentience onto machines all the time. And so when we imagine what they're going to be like in the future, the first thing that pops into our heads is that they're going to become conscious.

DARLING: Yeah. I think there's a lot of projection happening there. I also think that before we get to the question of robot rights and consciousness, you know, we have to ask ourselves, how do robots fit into our lives when we perceive them as conscious? Because I think that's when it starts to get morally messy and not when they actually inherently have some sort of consciousness.


VEDANTAM: There's a lot that's morally messy about how humans interact with robots. When we come back, we're going to delve into some of those moral and ethical issues, including the deeply troubling case of a Japanese company that builds sex robots designed to look like children. Stay with us.


VEDANTAM: This is HIDDEN BRAIN. I'm Shankar Vedantam. If humans have a tendency to anthropomorphize machines, to see them as human, it isn't surprising that we're so willing to bring all the biases we have toward our fellow human beings into the machine world. Many of the intelligent assistants being built by major companies - Siri or Alexa - are given women's names. Many of the genius machines are given men's names - HAL or Watson.

Now, you can say Siri and Alexa aren't people, why should we care? Why should we care if people sexually harass their virtual assistants, as has been shown to sometimes happen? MIT's Kate Darling says we should care because the way we treat robots may have implications for the way we treat other human beings.

DARLING: It might. We don't know, but it might. And one example with the virtual assistants you just mentioned is children. So parents have started observing - and this is anecdotal - that their kids adopt behavioral patterns based on how they're interacting with these devices and how they're conversing with them. And there are some cool stories.

Like there was a story in The New York Times a few years ago where a mother was talking about how her autistic son had developed a relationship with Siri, the voice assistant. And she said this was awesome because Siri is very patient. She will answer questions repeatedly and consistently. And apparently, this is really important for autistic kids. But also, because her voice recognition is so bad, he learned to articulate his words really clearly. And it improved his communication with others.

Now, that's great, but these things aren't designed with autistic kids in mind, right? That's kind of more of a coincidence than anything. And so there are also perhaps some unintended effects that are more negative. One guy wrote a blog post a while back where he said Amazon's Echo is magical but it's turning my child into an [expletive] because Alexa doesn't require please or thank you or any of the standard politeness that you want your kids to learn when they're conversing and when they're, you know, demanding things of you. So, you know, it starts there. But I think that as this technology improves and gets better at mimicking real conversations or lifelike behavior, you have to wonder to what extent that gets muddled in our subconscious - and not just in children's subconscious but maybe even in our own.

VEDANTAM: Do you think it's a coincidence that most of the virtual assistants are given female names and female identities?

DARLING: I think it's a combination of market research but also just people not thinking. I mean, I visited IBM Watson in Austin. And there is a room that you can go into, and you can talk to Watson. And he has this deep booming male voice. And you can ask questions. And at the time I went there, there was this second AI in the room that turned on the lights and greeted the visitors. And that one had a female voice. And I pointed that out. And it seemed like they hadn't really considered that.

So it's, you know, it's a mixture of people thinking, oh, this is going to sell better and people just not thinking at all because the teams that are building this technology are predominantly young, white and male. And they have these blind spots where they don't even consider what biases they might perpetuate through the design of these systems.

VEDANTAM: Which brings us to the question of sex robots. I'm curious what you make of this. And there are really complicated arguments on both sides of this question. Should we use machines as, you know, sexual companions? Should we use them in ways that could potentially satisfy the needs of groups of people who in some ways trouble us?

DARLING: Yeah. So, you know, referring to pedophilia specifically, this is a very difficult area because we know almost nothing about pedophilia generally. And we have absolutely no idea what the effects of technologies like this could be if they provide kind of an immersive sexual experience. I mean, it could be that this is a very useful outlet to use therapeutically - yeah,

VEDANTAM: A safety valve.

DARLING: ...Basically - that ends up preventing real child abuse. And on the other hand, it could be that this is something that normalizes and perpetuates certain behaviors. And we literally have no idea which direction this goes in. And I think this is a question that we're going to be facing pretty soon. I mean, like you said, there are companies that make the dolls. There are legal cases already about this. And there's a lot of moral panic about sex technology but also, I mean, in this case, very understandable emotional responses to, you know, regulation of child abuse. So it's very difficult. And you can't research it - in the U.S., at least.

VEDANTAM: So you're sometimes called a robot ethicist. And you've sometimes said we might need to establish a limited legal status for robots. What do you mean by that?

DARLING: Yeah. It's a little bit of a provocation. But my sense is that if we have evidence that behaving violently toward very lifelike objects not only tells us something about you as a person but can also change people and desensitize them to that behavior in other contexts - so, you know, if you're used to kicking a robot dog, are you more likely to kick a real dog? - then that might actually be an argument to give robots certain legal protections, the same way that we give animals protections but for a different reason.

We like to tell ourselves that we give animals protection from abuse because they actually experience pain and suffering. I actually don't think that's the only reason we do it. But for robots, the idea would be not that they experience anything, but rather that it's desensitizing to us and it has a negative effect on our behavior to be abusive towards the robots.

VEDANTAM: So here's a thing that's worth pondering for a moment. If you hear, for example, that someone owns a bunch of chickens on their farm, right? So it's their farm, their chickens. They own the chickens. And they're really mistreating the chickens, torturing them, harming them. You could make a property rights argument and say they can do whatever they want with their property, but I think many of us would say even though the chicken belongs to you, there are certain things you can and cannot do with the chicken.

And I'm not sure it's just about our concern that if you mistreat the chicken, you will turn into the kind of person who might mistreat other people. There's a certain moral level at which I think the idea of abusing animals is offensive to us. But I'm wondering if the same thing is true with machines as well - it's not just that people who harm machines might also be willing to harm humans, but that the very act of harming things that look and feel and sound sentient is morally offensive in some way.

DARLING: Yeah. So I think that's absolutely how we've approached most animal protections because it's also - it's very clear that we care more about certain animals than others and not based on any biological criteria. So I think that we just find it morally offensive, for example, to torture cats. Or, you know, in the United States, we don't like the idea of eating horses. But in Europe, they're like, what's the difference between a horse and a cow? They're both delicious.

So there's - that's definitely how we tend to operate and how we tend to pass these laws. And I don't see why that couldn't also apply to machines once they get to a more advanced level where we really do perceive them as lifelike and it is really offensive to us to see them be abused.

VEDANTAM: The devil's advocate side of that argument, of course, is this - would people then say that pressing a switch and turning off a machine is unethical because you're essentially killing the robot?

DARLING: But we don't protect animals from being killed. We just protect them from being treated unnecessarily cruelly. So I actually think animal abuse laws are a pretty good parallel here.

VEDANTAM: You argue that robots might one day expand the boundaries of how humans relate to one another. I want to play you a short clip from the movie "Her," where a man falls in love with his operating system but then discovers something about her.


JOAQUIN PHOENIX: (As Theodore) Are you in love with anyone else?

SCARLETT JOHANSSON: (As Samantha) What makes you ask that?

PHOENIX: (As Theodore) I don't know. Are you?

JOHANSSON: (As Samantha) I've been trying to figure out how to talk to you about this.

PHOENIX: (As Theodore) How many others?

JOHANSSON: (As Samantha) Six hundred forty-one.

PHOENIX: (As Theodore) What? What are you talking about? That's insane. That's [expletive] insane.

VEDANTAM: So I'm wondering, Kate, is it possible that as we start to relate to machines, is it possible that they will change the range of ways we relate to one another? In other words, is it possible they'll expand how we think about relationships themselves?

DARLING: Maybe. But I think it's important to remember - the thing that I couldn't get out of my mind when I was watching "Her" specifically is that there is a company that makes this operating system. And if I were the company, I would be like - I would program it to say, oh, yes, I'm seeing 641 other people, but for $20,000, you can get exclusively me.


DARLING: I - like, that's the direction it's going to go in, right? It's not that the machines are going to become conscious and, you know, develop their own forms of relationship.

VEDANTAM: You mentioned "Westworld" some moments ago, and I want to play a clip from it. For those of you who haven't seen "Westworld," humans interact with robots that are extremely lifelike - so lifelike that it's sometimes difficult to tell whether you're talking to a robot or to a human. In the scene that I'm about to play you, a man named William interacts with a woman who may or may not be a robot.


UNIDENTIFIED ACTRESS: (As character) You want to ask, so ask.

JIMMI SIMPSON: (As William) Are you real?

UNIDENTIFIED ACTRESS: (As character) Well, if you can't tell, does it matter?

VEDANTAM: So as I watched this scene and as I read your work, I actually had a thought. And I wanted to sort of run this thought experiment by you. Which is that, you know, on one end of the spectrum, we have these machines that are increasingly becoming lifelike, human-like. You know, they respond in very intelligent ways. They seem as if they're alive. And on the other hand, we're learning all kinds of things about human beings that show us that even the most complex aspects of our minds are governed by a set of rules and laws.

And in some ways, our minds function a little bit like machines. And I'm wondering, is there really a huge distinction? Is the real question not so much whether machines can become more human-like but whether humans are actually just highly evolved machines?

DARLING: I have no doubt that we are highly evolved machines. I don't think we understand how we work yet. And I don't think we're going to get to that understanding anytime soon. But yeah, I do think that we follow a set of rules and that we're essentially programmed. And I don't distinguish between entities with souls and entities without souls. So it's much easier for me to say, yeah, it's probably all the same. But I can see that other people would find that distinction difficult.

VEDANTAM: Do you ever talk about this? Do you ever run this by other people and sort of say - do you tell your husband, for example, I like you very much, but I think you're a really intelligent machine that I love dearly?

DARLING: (Laughter) I haven't explicitly said that to him, but...


VEDANTAM: When you go home from this trip.

DARLING: Yeah. We'll see how that goes.

VEDANTAM: Kate Darling is a research specialist at the MIT Media Lab. Our conversation today was taped before a live audience at the Hotel Jerome in Aspen, Colo., as part of the Aspen Ideas Festival. Kate, thank you for joining me today on HIDDEN BRAIN.

DARLING: Thank you so much.


VEDANTAM: This week's show was produced by Tara Boyle and Renee Klahr. Our team includes Jenny Schmidt, Maggie Penman, Rhaina Cohen and Parth Shah.

Our unsung heroes this week are the staff of the Annabelle Inn in Aspen, Colo. They kindly let us take over that conference center for several interviews with researchers who were in town for the Aspen Ideas Festival. They even turned off the fountains in their outdoor seating area so that we'd have studio-quality sound while recording. Marie Casanova (ph), Doug Parks (ph), Mike Clemons (ph), thanks so much for your hospitality and your willingness to help us make a little bit of audio magic.

You can find photos and a video of Samantha, our PLEO dinosaur, on our Instagram page. We're also on Facebook and Twitter. And if you enjoyed this week's show, we'd love if you shared this episode with friends on social media. I'm Shankar Vedantam, and this is NPR.


DARLING: There we go. It just mooed like a cow.

Copyright © 2017 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.