Recommendation systems are increasingly determining what we like and dislike : Planet Money
Recommendation systems have changed how we choose what we want. But are they choosing what we want?

Runaway Recommendation Engine




KEVIN ROOSE: There's a person who I'm pretty sure helped make the modern internet so addictive. But it was kind of an accident, and I'm not even sure he knows it yet. His name is Doug Terry.

DOUG TERRY: I'm a distinguished scientist at Amazon.


Oh, wow, a distinguished scientist.

TERRY: (Laughter).

CHILDS: Do we call you Distinguished Doug?

TERRY: It's actually - Distinguished Doctor Doug is what people call me.

ROOSE: Distinguished Doctor Doug is an engineer, an inventor.

CHILDS: What's the thing that you're proudest of that you've invented ever?

TERRY: The thing that I'm...

CHILDS: Unless it's a state secret.

TERRY: (Laughter) Well, there's this thing called the domain name system, all these little names that you type in these days with dots on it, you know...

CHILDS: You did that?

TERRY: That was my PhD dissertation, was designing that, yeah.

CHILDS: In 1990, Doug was a researcher at this utopia called Xerox PARC, this Disneyland of computer inventions that would give the world such hits as the personal computer, Ethernet, laser printing. Doug says their inventions were often just solutions to things that annoyed them.

ROOSE: Right, which for Doug was his email. Back then, email didn't really have rules yet or etiquette. Doug's inbox every day was just full of chain letters and meeting-scheduling threads and message board posts and reply-alls and people just spamming each other with random links. It was chaos.

TERRY: And so people were getting, you know, hundreds of email messages per day, which by today's standards maybe isn't that much, but at the time it seemed like a lot to be spending an hour of my day reading email.

CHILDS: A whole hour.

TERRY: (Laughter).

ROOSE: Right. I know, right? Let's trade. I want to go back to an hour of email a day.

CHILDS: Seriously.

ROOSE: Doug's inbox, like everybody's inbox, just displayed emails in the order they were received. And you could do some simple kinds of filtering, like send all emails from that super annoying co-worker directly to the trash. But there wasn't a really good way to do other more subtle kinds of filtering - filtering that required a little more judgment.

TERRY: And that's where I came up with this idea of saying, you know, let's figure out how to make humans part of the system.

CHILDS: Taking human likes and dislikes and turning them into a recommendation system - that was Doug and his team's invention. It was just for them and some of their colleagues to clean up their inboxes, but their solution would eventually come to define the modern internet.

ROOSE: Doug's invention - this thing that he created to manage his inbox - was the first real recommendation engine. And today, the entire world runs on recommendation engines. Billions of dollars a year are spent and lost based on what is and isn't recommended to us. They determine everything that shows up on our Instagram feeds, our TikTok For You pages, our Twitter timelines. They determine what we watch or listen to, maybe which politicians we vote for. They tell us what we like and what we hate.


ROOSE: Hello, and welcome to PLANET MONEY. I'm Kevin Roose, New York Times tech columnist.

CHILDS: And I'm Mary Childs. And, Kevin, you wrote this fantastic book, "Futureproof: 9 Rules For Humans In The Age Of Automation," which is how I learned about Doug and the world of recommendation algorithms. Your book is so good, Kevin. I love it. I read it every day.

ROOSE: Thank you for recommending it. Today on the show, how Doug's mission to build a better email inbox kind of created the world as we know it and not only changed what we watch on TV and what music we listen to, but really changed us - all of us - at a really fundamental level.


CHILDS: Distinguished Doctor Doug and his team are about to do this world-changing thing. They're going to start with fixing their email inboxes, and in doing so, they will teach machines how to fake making value judgments, how to curate, how to determine importance.

ROOSE: Because the problem as he saw it wasn't actually that people were getting too many emails. Like, that was bad, but the bigger problem was that all the emails looked the same.

TERRY: An email message from my boss saying, I need a response in five minutes, is much more important than, you know, a message forwarded from a friend of mine saying, did you see the baseball game last night?

CHILDS: Doug's inbox was organized by chronology, but he doesn't want chronology. He wants priority. So he starts to teach his inbox what was important. Emails from one person to just one other person - important. Send it to the top of the inbox. Emails to a bunch of people - eh, bottom of the list, maybe straight to the trash.
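The sorting rules Childs describes can be sketched in a few lines. This is a minimal illustration, not Doug's actual Xerox PARC system; the `Message` class, the sender names, and the specific scores are all invented for the example.

```python
# A toy priority filter in the spirit of Doug's: rank mail by who sent
# it and how many people received it, instead of by arrival order.
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    recipients: list = field(default_factory=list)
    subject: str = ""

def priority(msg, me="doug", boss="boss"):
    """Higher score = closer to the top of the inbox."""
    if msg.sender == boss:
        return 3          # the boss always surfaces first
    if msg.recipients == [me]:
        return 2          # one-to-one mail: probably important
    if len(msg.recipients) <= 5:
        return 1          # small group thread
    return 0              # mass mail: bottom of the pile

def sort_inbox(inbox, me="doug"):
    # Sort by priority rather than chronology.
    return sorted(inbox, key=lambda m: priority(m, me), reverse=True)
```

The point of the sketch is the swap of sort keys: same inbox, same messages, but ordered by a hand-written notion of importance instead of by timestamp.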

ROOSE: You need sort of, like, the bouncer at the velvet rope...

TERRY: (Laughter).

ROOSE: ...Over your inbox saying, like, you can come in...

CHILDS: (Laughter).

ROOSE: ...But you look like trouble.

TERRY: Well, yeah, but not only the bouncer, but a personal bouncer - right? - someone who knows me personally and knows what - you know, the people I want to let in and the people I don't want to let in.

ROOSE: So he and a group of co-workers build this filtering program, and his inbox actually gets a little cleaner. His boss's emails are surfacing properly.

CHILDS: Their next step is to add another layer from a totally different approach, one that draws on the infinitely complicated and ever-changing human user.

How do you talk to your bouncer and say, hey, man, not so many today, or, like, listen. You let somebody in last week who was a real scoundrel. I don't want that to happen again.

TERRY: Well, it had - I introduced "like-it" and "hate-it" buttons. So now on Facebook you always see "like-it" buttons or "like" buttons and so on. The system I built was the first one that had a "like-it" button, and it also had a "hate-it" button.

ROOSE: Wait, you invented...

CHILDS: You invented the "like" button?

ROOSE: You invented the "like" button and the domain name?

TERRY: (Laughter).

CHILDS: We are in the presence of greatness.

TERRY: (Laughter).

CHILDS: So Doug is happily hitting "like" and "hate" on his emails, and it's working. His inbox is even better. So he invites some of his co-workers to try out their little inbox filter, too. And they start hitting "like" and "hate" on their emails, too, which leads Doug and his colleagues to their next real big insight. Doug didn't need to rate every single email because his colleagues were doing it, too, with essentially the same emails.

ROOSE: This right here is the insight that will change the internet forever, that Doug's colleagues' "likes" and "hates" could act as a filter on his inbox. He called this collaborative filtering.

TERRY: The whole point of collaborative filtering is, I would say I liked or hated something, and then that would help other people because that's the community aspect. Then other people could say, oh, show me articles sent to this newsgroup on baseball that Doug "liked," or don't send me anything that Doug didn't "like."

ROOSE: Doug and his colleagues built an email system that sorted people's inboxes. It recommended the emails they should look at first by using this collaborative filtering idea, and it kind of worked. He says his email time got cut down from an hour a day down to 30 minutes a day. It improved his life and the lives and inboxes of his Xerox PARC co-workers.
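The mechanism Terry describes can be sketched very simply: each colleague's "like"/"hate" votes become a shared filter. The message IDs, names, and votes below are invented for illustration; the real system's design is not documented here.

```python
# A toy version of collaborative filtering on an inbox: trusted
# colleagues' votes decide what I see.
votes = {
    # message_id -> {colleague: +1 ("like") or -1 ("hate")}
    "baseball-thread": {"doug": +1, "alice": +1},
    "chain-letter":    {"doug": -1, "bob": -1, "alice": -1},
    "dns-draft":       {"alice": +1},
}

def liked_by(person):
    """Messages a given colleague marked 'like'."""
    return {m for m, v in votes.items() if v.get(person) == +1}

def filter_inbox(inbox, trusted=("doug",)):
    """Show anything a trusted colleague liked; hide net-hated mail."""
    def score(m):
        return sum(votes.get(m, {}).values())
    liked = set().union(*(liked_by(p) for p in trusted))
    return [m for m in inbox if m in liked or score(m) >= 0]
```

The "collaborative" part is that nobody has to rate everything: the chain letter gets buried for everyone once a few people hate it.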

CHILDS: And because it was at Xerox PARC, which is where the architecture of the computer age was drafted, their collaborative filtering invention becomes part of the canon, a tool for every other engineer who comes after.

ROOSE: And that's what the next part of this show is about - taking Doug's invention and scaling it to billions of people, attaching it to superpowered artificial intelligence and using it to bring order to this chaotic world of online information.

CHILDS: The modern era of collaborative filtering started 16 years after Doug and his team's invention in 2006. It started because this big company, Netflix, had a problem-tunity (ph). If people couldn't find something they wanted to rent, they were more likely to cancel their subscription. If Netflix hooked them with a really good recommendation right away, they were happier. They would consume more. So anything that made those recommendations more precise and appealing to users - that was money.

ROOSE: Netflix, at the time, was using a descendant of Doug's collaborative filtering idea as the basis for its recommendation system, which drove 60% of rentals. But the system had plateaued. It wasn't improving.

CHILDS: Here's Robert Bell - Bob - he is also a distinguished computer scientist - talking about Netflix's problem.


BOB BELL: And so they had some ideas for how to improve those recommendations, but they weren't quite sure how to do it. And so they came up with the idea of a contest.

CHILDS: This is from a talk Bob gave about entering this Netflix contest.


BELL: And to show that they were really serious about this, the idea was to release data to people. And they offered a million-dollar prize.

CHILDS: A million dollars - they called it, very creatively, the Netflix Prize.

ROOSE: (Laughter) And to win, you had to improve Netflix's recommendations by at least 10% - to make better predictions than Netflix could about what people on its service would watch and like. And to help, Netflix published anonymized customer ratings of movies and TV shows - more than 100 million ratings in all from 480,000 customers.

CHILDS: 100 million ratings - that was a ton of data, way more than anyone had had access to before, big enough that engineers could actually play with it, which made it the most exciting thing to happen in data science since ever. Almost 50,000 teams downloaded the data, including Bob.

ROOSE: He downloads the data, and he starts digging in.

BELL: And the surprise for everybody, I think, is that the most-rated movie during this period was "Miss Congeniality." That was rated by almost half of the users.

CHILDS: Bob and his team set out to make a better recommendation system. He starts with the obvious part of collaborative filtering - the part Netflix was already doing - basically matching up people with similar tastes. Like, oh, you liked "Miss Congeniality." Well, we know that people who like "Miss Congeniality" often tend to like "Legally Blonde." So maybe you should watch that. But this system couldn't take into account how weird people are and how weird movies are.

BELL: Movies are very complex things and humans are even more complex. And so what we see in a movie that we like or dislike is very hard to characterize in just a handful of factors. And so in some sense, in order to model all of that, you almost need an unlimited number of factors that you might consider.

CHILDS: The original collaborative filtering required users to tell the algorithm, hey, I like what this guy likes, thumbs up, or thumbs down. Bob and his team took this idea further. They realized that machines could figure out people's tastes on their own by grouping movies together and teasing out what they had in common based on factors that you couldn't just see or guess.

ROOSE: Right. Like, I no longer had to say, hey, I liked "The Pelican Brief" and "Erin Brockovich." Please find me other movies to watch. Now the machine can actually sniff out, on its own, these hidden threads between the movies I watched. Like, maybe it's suspenseful legal thrillers, or maybe it's depth of character development, or maybe it's something that only the machine sees.
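The "hidden threads" Roose describes are what researchers call latent factors: each user and each movie gets a short vector of learned numbers, and a predicted rating is just their dot product. The sketch below uses two hand-picked factors and made-up values purely for illustration; real Netflix Prize systems learned hundreds of factors from millions of ratings.

```python
# A toy latent-factor model. The two hidden dimensions might loosely
# mean "legal thriller-ness" and "character depth" - but the machine
# never names them; those labels are only for the reader.
user_factors = {
    "kevin": [0.9, 0.4],
}
movie_factors = {
    "The Pelican Brief": [0.8, 0.3],
    "Erin Brockovich":   [0.7, 0.9],
    "Random Comedy":     [0.1, 0.2],
}

def predict(user, movie):
    """Predicted affinity = dot product of the two factor vectors."""
    u, m = user_factors[user], movie_factors[movie]
    return sum(a * b for a, b in zip(u, m))

def recommend(user, n=2):
    # Rank every movie by predicted affinity for this user.
    ranked = sorted(movie_factors, key=lambda m: predict(user, m),
                    reverse=True)
    return ranked[:n]
```

In a real system the factor values are fit by minimizing prediction error on known ratings; here they are fixed so the mechanics are visible.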

CHILDS: Bob and his team looked at data that was explicit, like ratings, and implicit. Like, the data point that helped them the most was whether you had rated something at all.

BELL: If we're trying to figure out whether you'd like a science fiction movie, if you've rated other science fiction movies, even if you didn't rate them real high, that sort of says that you at least have some interest in science fiction.

CHILDS: Using that factor got them to a 5% improvement over what Netflix was already doing, which meant Bob and his team were halfway to the prize already. From there, they added layer after layer of different factors until they did it. They crossed the 10% improvement threshold. Bob and his team won the Netflix Prize and the million dollars.
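Bell's implicit signal - whether you rated a movie at all, regardless of the score - can be sketched as a simple feature. The users, movies, and genre labels below are invented for illustration.

```python
# A toy implicit-feedback feature: the act of rating signals interest,
# even when the stars are low.
ratings = {  # user -> {movie: stars}
    "mary": {"Alien": 2, "Blade Runner": 3, "Miss Congeniality": 5},
}
genres = {
    "Alien": "sci-fi", "Blade Runner": "sci-fi",
    "Miss Congeniality": "comedy",
}

def genre_interest(user, genre):
    """Fraction of a user's *rated* movies in this genre. Low stars
    still count: rating two sci-fi films at all signals interest."""
    rated = ratings[user]
    in_genre = [m for m in rated if genres[m] == genre]
    return len(in_genre) / len(rated)
```

Here Mary rated two sci-fi movies poorly, yet the feature still reads two-thirds of her rating activity as sci-fi - exactly the kind of signal an explicit star average would miss.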

ROOSE: People who work on recommendation algorithms told me that this Netflix prize ended up being this huge moment in their field, that this thrillingly large data set that got all these brilliant engineers excited about building recommendation engines basically started a recommendation revolution. And this is when lots of other tech companies started realizing that recommendations could be used for so much more than just movies. They could help us figure out which music to listen to, which clothes to wear, which restaurants to patronize, maybe even which news was important.

CHILDS: And for a long time, people were mostly focused on building these cool new exciting algorithms. Scientists were stoked to see what they could do. And most users were like, oh, great, I really am interested in this. Thank you. But there's obviously another side to this, which is when do these suggestions stop being helpful nudges towards something I would get to eventually and start becoming programming? That's after the break.


CHILDS: In the world of recommendation algorithms, there's this team of researchers who've been studying the code and the systems and their effects on us to find out if, in addition to guiding us to our preferences, these programs can actually change our preferences.

JINGJING ZHANG: My name is Jingjing Zhang. And my work - I look at how all our recommendations affect our decision-making.

CHILDS: So you would really think that Jingjing would be better than the average person at not getting sucked into a vortex of addictive stuff.

ZHANG: In my personal life, if I go to YouTube and then search for certain videos, they would have related videos. I thought, oh, these must be highly relevant to what I'm looking for now. So I will consume more and more and then spend more time than I expected (laughter) on YouTube or some other websites.

ROOSE: I can definitely relate. And to figure out if recommendation systems are changing us, Jingjing and her team created a series of experiments using college students, basically fiddling with recommendations and seeing how those recommendations affected the students' behavior.

CHILDS: And in one of those experiments, they were trying to gauge just how much of an effect these algorithms have on us, how powerful the power of robot suggestion is. So they chose something consumable but perfectly subjective - music.

ZHANG: What we did is that we took the top 100 songs from this annual Billboard list, and then we provide different recommendation for these songs.

CHILDS: The recommendations, which were on a five-star system, were manipulated, tweaked up or down. But the researchers told the students that the ratings were perfectly tailored to them. The students had to listen to the whole song and then say if they wanted to buy it, and if so, how much they would pay. And this is a great way to design an experiment, by the way, because it relies on something that economists call revealed preference theory, which basically holds that if you want to figure out what someone actually likes, look at what they actually pay for.

ROOSE: And, like, Jingjing and her team are pretty sure that forcing students to actually listen to these songs would negate the impact of the recommendations. The students would form their own opinions and ignore the star ratings. You're not going to let an algorithm boss you around, right?

ZHANG: What we found is that on average, for each one-star manipulation, that's going to increase the willingness to pay by 7% to 17%.

CHILDS: The students offered significantly more money for higher-rated songs, even when those ratings were totally manipulated. Jingjing tested this and retested this. And the results were clear. When a machine tells us that we're going to like something, we trust the machine more than ourselves.

ROOSE: And, like, look, recommendations aren't all bad. Sometimes they're great. They save us time. They help us avoid decision fatigue. Sometimes I just don't want to, like, manually curate my own playlists of vibey electronic music. But here's what I worry about. These recommendation systems are getting so good that if we aren't vigilant, we're just going to end up drifting toward whatever the machine tells us we like.

CHILDS: This isn't just a problem of human psychology. It's also a computer science problem. Jingjing says it becomes a feedback loop. Those little drifts add up.

ZHANG: Over time, this will make the system less effective, less accurate and provide less diverse recommendations. Eventually, I know this longitudinal impact on the system will make the system provide similar items to everybody, like, regardless of personal taste.

CHILDS: Which, knowing the history, knowing how all of this got started, this is so far from the original goal. We built - or, OK, Doug built these machines to save us time and help us find what we want.

ROOSE: Now, instead of Doug's friends and colleagues all helpfully filtering each other's emails, it's just these giant opaque algorithms owned by companies that want to make a profit off our behavior recommending things that will keep us scrolling and clicking and watching forever and ever.

Do you ever feel like you contributed to something that is not entirely good for society?

TERRY: I hope not. So I - up until this point - thanks, Kevin - I hadn't really felt that way.

CHILDS: And to be fair to Doug and his team, this is not actually what they built. They built an algorithm designed to read their minds. There is a huge difference between an algorithm built to read your mind and an algorithm designed to lead your mind, to take you to some new destination that you maybe wouldn't have arrived at yourself.

ROOSE: Right. Like, there's always been advertising - billboards and magazine spreads and TV ads - and you could see those things and decide, like, no, I don't like those shoes, not for me. But now, we've got these machines telling us, like, no, no, no. You do like those shoes. They are perfectly selected for you. Trust us on this one because we know you better than you know yourself. And sometimes, that might be true. But it's not always true. And so now, the question for all of us is which of our preferences are actually ours, and which were put there by a machine?


CHILDS: If you know anyone who inadvertently changed the course of history and there's a great story behind it, let us know. Send us an email - We're also on all the social media things influencing your choices. This week on TikTok, we explore the idea that you can only be friends with about 150 people.

Today's show was produced by Dan Girma with help from Emma Peaslee, mastered by Gilly Moon and edited by Nick Fountain. PLANET MONEY's supervising producer is Alex Goldmark. Thank you also to Jesse Bockstedt (ph) and Cami Rothe (ph).

Here's a decision I am not influencing you to make. Kevin has that new book out. This is basically a chapter from it - "Futureproof: Nine Rules For" - Kevin, what is it again?

ROOSE: "Nine Rules For Humans In The Age Of Automation."

CHILDS: ..."Humans In The Age Of Automation" - stunning, beautiful, resist that machine drift.

ROOSE: Thank you for that very human, non-algorithmic recommendation.

CHILDS: (Laughter) I have not been manipulated. OK, Kevin, say your name.

ROOSE: I'm Kevin Roose.

CHILDS: And I'm Mary Childs. This is NPR. Thanks for listening.


Copyright © 2021 NPR. All rights reserved. Visit our website terms of use and permissions pages at for further information.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.