Two behavioral scientists who study honesty accused of using falsified data: Planet Money

Dan Ariely and Francesca Gino are two of the biggest stars in behavioral science. Both have conducted blockbuster research into how to make people more honest, research we've highlighted on Planet Money. The two worked together on a paper about how to "nudge" people to be more honest on things like forms or tax returns. Their trick: move the location where people attest that they have filled in a form honestly from the bottom of the form to the top.

But recently, questions have arisen about whether the data Ariely and Gino relied on in their famous paper about honesty were fabricated — whether their research into honesty was itself built on lies. The blog Data Colada went looking for clues in the cells of the studies' Excel spreadsheets, the shapes of their data distributions, and even the fonts that were used.

The Hartford, an insurance company that collaborated with Ariely on one implicated study, told NPR this week in a statement that it could confirm that the data it had provided for that study had been altered after it gave the data to Ariely, but prior to the research's publication: "It is clear the data was manipulated inappropriately and supplemented by synthesized or fabricated data."

Ariely denies that he was responsible for the falsified data. "Getting the data file was the extent of my involvement with the data," he told NPR.

Help support Planet Money and get bonus episodes by subscribing to Planet Money+ in Apple Podcasts or at plus.npr.org/planetmoney.

Fabricated data in research about honesty. You can't make this stuff up. Or, can you?


SYLVIE DOUGLIS, BYLINE: This is PLANET MONEY from NPR.

(SOUNDBITE OF DROP ELECTRIC SONG, "WAKING UP TO THE FIRE")

NICK FOUNTAIN, HOST:

About a decade and a half ago, this one idea started showing up everywhere - the nudge.

JEFF GUO, HOST:

The idea of the nudge came from behavioral economics, basically said, if you really understood how people think, their psychology, you could make a huge difference in the world just by doing these little tweaks, just by nudging people in the right direction.

FOUNTAIN: Like, if you want people to use less electricity, use social pressure. You could send them a letter saying, hey, you're using way more electricity than your neighbors.

GUO: Or organ donation. How do we get people to sign up to be organ donors?

(SOUNDBITE OF ARCHIVED RECORDING)

DAN ARIELY: Turns out the secret has to do with the form at the DMV.

GUO: This is one of the most famous behavioral economists around, Dan Ariely. He's giving a TED talk that's gotten more than 10 million views. And, yeah, he's right. Some European countries require people to opt out of being organ donors instead of opting in when they're getting their driver's licenses. And making people opt out - that gets a lot more people on the organ donor list.

FOUNTAIN: Ariely, here in a different TED talk - he's given about a dozen...

(SOUNDBITE OF ARCHIVED RECORDING)

ARIELY: So what have we learned from this about cheating?

FOUNTAIN: ...He's talking about some of his own research focused on how you can get people to be more honest with simple reminders.

(SOUNDBITE OF ARCHIVED RECORDING)

ARIELY: A lot of people can cheat. They cheat just by a little bit. When we remind people about their morality, they cheat less.

GUO: Ariely has had some blockbuster research into the subject - why people cheat and how to make them more honest. In one study, he asked people to recall the Ten Commandments - you know, like, thou shalt not this, thou shall not that - and then he had them play this game where they would be tempted to cheat.

(SOUNDBITE OF ARCHIVED RECORDING)

ARIELY: The moment people thought about trying to recall the Ten Commandments, they stopped cheating. In fact...

FOUNTAIN: Another one of Ariely's big-deal papers was about this one simple trick that would make people way more likely to tell the truth when they're filling out forms or paperwork. For this one, he collaborated with a bunch of researchers, including another huge star of behavioral science, Francesca Gino. Here's Gino talking about that trick in 2015 on PLANET MONEY.

(SOUNDBITE OF ARCHIVED NPR BROADCAST)

FRANCESCA GINO: We should sign at the top of the form.

JACOB GOLDSTEIN: That's it. Instead of signing at the bottom, we should sign at the top.

GINO: Exactly. It's that simple.

FOUNTAIN: Makes sense. When you fill out a form and then, at the very end, you're asked to swear that everything you wrote was true, was it? Close enough. But if you swear to tell the truth ahead of time, your brain is primed for honesty.

GUO: Gino and Ariely - they became these academic superstars for all this nudge research, especially for their papers on how to get people to be more honest. And nudges - well, nudges became all the rage. In the U.K., the government created an entire organization. It's called the Behavioural Insights Team, or, as they were nicknamed...

MICHAEL SANDERS: The Nudge Unit.

GUO: That's Michael Sanders. Back in 2014, he was the Nudge Unit's chief scientist. It was his job to test out nudges all around the world.

FOUNTAIN: Yeah. And Michael still remembers the moment this famous sign-on-the-top idea was brought before his team. He was at his office in London, and one of Gino and Ariely's co-authors came in to give a presentation on the research.

GUO: What was the mood like in the room?

SANDERS: I think, you know, it's - we're British, so we don't have electrified rooms. People don't sort of - people don't buzz like they do in America.

GUO: (Laughter).

SANDERS: But, you know, I exchanged glances with a couple of senior colleagues, and I said, we could do this in this area. We could do this in this area. There are so many opportunities for this.

GUO: This sign-on-the-top idea - it was the quintessential nudge, so clever and so easy to implement and so surprisingly powerful.

FOUNTAIN: It's just so simple.

SANDERS: It is. Many liars are simple, allegedly.

FOUNTAIN: That quintessential nudge idea was clever and was powerful. And also, it was a lie. Hello, and welcome to PLANET MONEY. I'm Nick Fountain.

GUO: And I'm Jeff Guo. Dan Ariely and Francesca Gino have played this central role in taking nudge theory mainstream, and they've done really well for themselves. Gino's this big-shot professor at Harvard Business School with all these speaking gigs and book deals. Ariely's got this fancy job at Duke with multiple best-selling books. One of them's even being turned into a TV show by NBC.

FOUNTAIN: But now their famous sign-on-the-top paper we were talking about before - that paper is at the center of a massive scandal. And these two superstars, famous for showing how to nudge people to be more honest - they are being accused of dishonesty themselves. Today on the show, the truth about dishonesty, dishonesty about truth and the spreadsheet detective who cracked the case wide open.

(SOUNDBITE OF MUSIC)

GUO: Back in 2014, before two titans of behavioral science were engulfed in scandal and back when nudges were still becoming the hotness, Michael Sanders was the chief scientist at the British government's Nudge Unit. It was his job to take some of these nudges and test them out in the field.

FOUNTAIN: You guys were sort of like special economics agents bringing behavioral econ to the far-flung corners of the world? That was sort of your mission?

SANDERS: I mean, that sounds very grand, doesn't it? Yes, let's say that was our mission.

FOUNTAIN: Which is why, when the government of Guatemala had a problem getting its citizens to pay their taxes, Michael was like, this is perfect for us.

SANDERS: Guatemala has among the - if not the - lowest level of tax compliance of any country across Latin America. And so the opportunity to improve things was just enormous in that context.

FOUNTAIN: You felt like you could make a real difference there?

SANDERS: Yeah.

GUO: The Guatemalan government was losing hundreds of millions of dollars every year to tax evasion, money that could go to building roads and schools and whatever else. Maybe Michael, armed with all his little nudges, could help fix that.

FOUNTAIN: So Michael and a small team flew to Guatemala City to meet with tax authorities there and asked them how they collect taxes. And they said, well, one of the ways is called the value-added tax, which is kind of like a sales tax. Every month, businesses are supposed to fill out this form where they declare how much they owe in VAT.

SANDERS: And most people fill in the form - most people - well, most people who do fill in the form say, oh, I don't owe any. I don't owe any sales tax. And that's just not really credible, frankly.

FOUNTAIN: I feel like in places like the U.K. and in Europe, the VAT is just part of the sales price, right?

SANDERS: I mean, I'm not going to take lectures from Americans about how you should run a tax system.

(LAUGHTER)

GUO: Michael, as you might expect, immediately thought back to that famous paper by Dan Ariely and Francesca Gino, the one where, if you make people promise to be honest before they fill something out, they are much more likely to tell the truth than if they promise after they fill it out. Michael and his colleagues realized they could use this idea on the Guatemalan tax website. Before people entered their taxes, they would first have to pledge that they were going to tell the truth.

FOUNTAIN: And Michael tells the tax authorities, there is real research behind this done by professors from Harvard and Duke. And also, to prove to you that this works, we're going to run an experiment to test it. Some people will get the new honesty pledge at the start, and others won't.

GUO: And what'd they say?

SANDERS: They said, let's do it. Let's go. So we were in Guatemala for three days, and we went from, hi, my name's Michael, to, we're running this study, and we're going to launch it, and we're going to run it for three months with 3 million tax returns. And we'll call you back in four months with the results.

GUO: Four months go by. Michael gets the data set, and he stays up late crunching the numbers.

FOUNTAIN: Did the intervention work?

SANDERS: It worked not at all. So we were...

FOUNTAIN: Not at all.

SANDERS: Not at all. There are few things in my life that I am, in a statistical sense, more confident of than that this didn't work.

FOUNTAIN: But it should have worked. The original research behind the whole sign-on-the-top-of-the-form thing had been done by a team that included two of the biggest stars in behavioral science. And Michael's thinking, I can't even repeat their research.

SANDERS: I can't even copy them right, right? It's not like I'm trying to come up with something new and innovative, and I'm struggling at that part. I just can't even copy them in a way that creates the right results.

FOUNTAIN: But here's the thing. Michael wasn't the only one having trouble getting the right results. Around the time he published his paper describing his failure in Guatemala, other researchers started noticing their experiments were failing, too.

GUO: Eventually, one of Gino and Ariely's co-authors on that original sign-on-the-top paper convinced them that they needed to revisit their findings, too. So they tried to redo one of their experiments from the paper, and this time, it straight up didn't work.

FOUNTAIN: This was a big deal. One of the most famous nudge studies couldn't be replicated. But that didn't necessarily mean anyone had done anything wrong. This is how science works. Sometimes, you just get fluke results.

GUO: In 2020, a bunch of researchers, including Ariely, Gino and the other original co-authors, published something that says, yeah, maybe this sign-on-the-top intervention doesn't actually work. But, more importantly, they also published, for the very first time, all the original data, all those Excel spreadsheets that they had used.

FOUNTAIN: Which, if you had some suspicions about what was going on here, might be very interesting.

URI SIMONSOHN: I'm Uri. I'm a professor of behavioral science at ESADE business school in Spain.

FOUNTAIN: In addition to teaching behavioral science, Uri Simonsohn has a little side project called Data Colada, where he and some colleagues evangelize to other behavioral scientists about how they should conduct and analyze their experiments.

GUO: But what the Data Colada folks are really famous for are their investigations into research that they think smells a little fishy. Maybe a researcher juiced his regressions a little bit to get the results he wanted, or maybe he played fast and loose with his Bayes factors.

FOUNTAIN: Those tricky Bayesians. Anyways, Uri, in particular, loves digging through people's spreadsheets for clues on how exactly they were put together and why.

SIMONSOHN: It's like a puzzle that nobody knows a solution to, and then you solve it. It's - I mean, it's like true crime. Like, it's why true crime is so interesting to people, and they're trying to figure it out, right?

FOUNTAIN: And it makes for a good podcast, too.

SIMONSOHN: (Laughter) Hopefully.

FOUNTAIN: Because of Data Colada's reputation, some tipsters reached out to Uri and said, hey, you know how the original sign-on-the-top paper had three different experiments in it? You might want to take a closer look at experiment No. 3, the one about car insurance.

GUO: Yeah. The car insurance experiment. You know how your car insurance company occasionally asks you what the mileage is on your odometer? That is a sneaky way of asking how much you've been driving because car insurers want to charge people who drive more, more. So drivers have an incentive to lie, which is where this sign-on-the-top intervention comes in.

FOUNTAIN: One of the superstar researchers, Dan Ariely, worked with a car insurer. Ariely said he asked the company to send out different versions of the same form to around 13,000 customers, asking them to update the odometer readings the company had on file for them.

GUO: Half the people got forms with honesty pledges right at the top, and half of them got them on the bottom. And according to the paper, the ones who signed at the top, they were way more honest.

FOUNTAIN: But the tipsters who reached out to Uri said, there's lots of weird stuff going on in the data. Like, even though when people originally reported their odometer readings, they often rounded to, like, the nearest thousand or 500, they did not do that when they updated their results.

GUO: And, the tipsters said, look at the fonts. Why are half the cells in this column in Calibri, and the other half are in Cambria?

SIMONSOHN: We were puzzled. Like, why would this happen?

FOUNTAIN: Uri is like, fine, fine, fine. But immediately he homes in on this one thing.

SIMONSOHN: This was, like, piece of evidence seven. And I was like, I don't care about pieces of evidence one through six. Like, this is it.

GUO: Piece of evidence No. 7 - the distribution of miles driven. OK. So usually if you took a bunch of people, what you'd expect is that most of them are driving, like, a normal number of miles in a year - right? - like, say, 12,000 miles. And a few people might be driving way less, only, like, a thousand miles a year. And maybe a few people might be driving way more, maybe, like, 50,000 miles. But overall, the data would look like a bell curve with a big hump of people in the middle.

FOUNTAIN: But when Uri looked at the distribution of miles driven in Dan Ariely's experiment, he did not see a bell curve at all. He saw basically a flat line.

SIMONSOHN: It just had an impossible shape. Like, there were just as many people who drove 1,000 miles as there were who drove 2,000 miles as there were who drove 10,000 miles as there were 40,000 miles all the way to 50,000 miles. And that's just crazy.

FOUNTAIN: Uri says anyone who works with data would immediately be like, there's no way.

SIMONSOHN: Like, I shared that with people. People started laughing hysterically, looking at the screen. Like, it was just like...

FOUNTAIN: Oh, my God.

SIMONSOHN: ...Like, just a literal reaction. So it was just self-evidently faked.

GUO: This data set was faked. And as if it could not get any more obvious, even though there were lots of people who drove nearly 50,000 miles, it all cuts off right there. Not a single customer, apparently, drove more than 50,000 miles.

FOUNTAIN: When he saw that, Uri immediately knew the exact spreadsheet command whoever faked the data had used.

SIMONSOHN: RANDBETWEEN - so give me a random number in between 0 and 50,000. And if you do that, you get exactly this spreadsheet. So it's kind of remarkable.
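The shape Simonsohn describes is easy to simulate. Here's a minimal Python sketch (a hypothetical illustration using random draws, not the study's actual data) of why 13,000 RANDBETWEEN-style numbers can't pass for real mileage: every bucket gets roughly the same count, there's no hump in the middle, and there's a hard cutoff at the upper bound.

```python
import random

random.seed(0)

# Simulate what Excel's RANDBETWEEN(0, 50000) would produce:
# 13,000 integers drawn uniformly at random.
fake_miles = [random.randint(0, 50000) for _ in range(13000)]

# Bin into 10 buckets of 5,000 miles each and count.
bins = [0] * 10
for m in fake_miles:
    bins[min(m // 5000, 9)] += 1

# Uniform data: every bucket holds roughly the same share (~1,300),
# with no bell-curve hump and nothing above 50,000 --
# the "flat line" shape Simonsohn describes.
print(bins)
print(max(fake_miles))
```

Real odometer data would pile up around typical annual mileage and thin out in a long right tail; uniform draws give each bucket an equal share, which is exactly the giveaway.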

FOUNTAIN: So you know how they did it?

SIMONSOHN: Yeah. Yeah. We figured out every step of the way, like, how the spreadsheet was generated.

GUO: Uri had done it. He'd cracked the case.

SIMONSOHN: It's just so - I've never seen anything so blatant in my life. It's just incredible. Like...

FOUNTAIN: So as far as smoking-gun evidence goes, this, for you, is like - it's - this is it.

SIMONSOHN: Yeah. Nothing will ever match this.

GUO: But there was another unsolved mystery here. Who was responsible for this fake data?

SIMONSOHN: Yeah. What I wanted to look for was things that would point to who did it.

FOUNTAIN: And there was not a great way for him to figure that out. Before Uri and his colleagues published what they had found, they reached out to the authors of that famous paper to give them a chance to respond. Usually when they do this, he says, it can get kind of contentious, but this time, everyone was pleasant.

SIMONSOHN: That was very unusual 'cause we were saying, hey, you have some fake data there. And they were like, oh, we sure do. Sorry. And so they - usually, we get - I mean, we were not expecting this. So almost immediately, Dan Ariely replied and said, just to be clear, I was the only person responsible for this in this team.

GUO: This, meaning Ariely was responsible for getting the data from the insurance company. But Dan Ariely was not taking responsibility for forging the data. He basically blamed the insurance company. He said, quote, "the data were collected, entered, merged and anonymized by the company and then sent to me. This was the data file that was used for the analysis and then shared publicly."

FOUNTAIN: So Uri knew the data had been forged, but without access to the original data from the insurance company, there was no way to prove who did the forging. The case was at a dead end.

GUO: But not for long. In another experiment in that same paper, Uri would find another smoking gun. And this time, there were fingerprints.

FOUNTAIN: Also, after the break, we uncover a little evidence ourselves.

(SOUNDBITE OF ADRIAN QUESADA AND SKINNY WILLIAMS’ "OUTLAW MYSTIQUE")

FOUNTAIN: So here's where we're at. After Uri and Data Colada pointed out that the car insurance experiment in that original sign-on-the-top paper had fake data in it, the authors of the paper retracted it. If you look up the paper online, right there on the top, there's now a warning in red that basically says this paper is no good. But it's not like anyone lost their jobs over it.

GUO: Now, Uri and his team had actually gotten another tip that there might be something wrong with the work of one of the paper's co-authors, Francesca Gino. And they start looking at a lot of her research, including one of the experiments she worked on for the sign-on-the-top paper. So they start digging in. Maybe this time they can do more than just show that the data was obviously wrong. Maybe they can find definitive proof of how the data had been manipulated.

FOUNTAIN: So the experiment in question asked people to solve some math problems. They were told they would get paid a certain amount of money for each one they got right. They did their math, and then they got to say on this official-looking form how many problems they'd gotten right. The ones who signed on the top of the form, the paper said, were way more honest.

GUO: But unlike with the odometer data, there was nothing obviously wrong with the math problem data. Uri looked at it. He looked at it again.

FOUNTAIN: And then he thought back to this one secret little quirk about the spreadsheet program Excel.

SIMONSOHN: Excel remembers.

GUO: Excel remembers. And when I first heard about this, I could not believe this was true. But if you use formulas in Excel - and everybody uses formulas in Excel - Excel essentially keeps track of where you move stuff, and it records all your moves in a secret file called calc chain.

SIMONSOHN: Calc chain's something that I think 99.9% of people have never heard of before.

GUO: But Uri, of course, is part of the 0.1% who has heard of it. He loves it.

SIMONSOHN: Yeah. It's very - once you figure it out, it's very addictive. Like, you get a data set, and you start just checking the calc chain file, you know, like, immediately.

GUO: So Uri pulls up the calc chain file for experiment No. 1 and realizes that a bunch of those spreadsheet cells, they have been moved by somebody, and all of those cells that have been moved, every single one, happens to strengthen the finding that signing on the top increases honesty. Like, how convenient, right?
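For the curious: an .xlsx file is just a zip archive, and calcChain.xml is an XML part inside it that lists formula cells in the order they were last calculated — which is how that order can survive even after cells are moved. A minimal sketch of reading it (the tiny archive here is hand-built for illustration; real workbooks contain many more parts):

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# A hand-made stand-in for the xl/calcChain.xml part of a workbook.
# Each <c r="..."/> entry is a formula cell in calculation order.
sample_calc_chain = (
    '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
    '<calcChain xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main">'
    '<c r="B2" i="1"/><c r="B3" i="1"/><c r="B7" i="1"/>'
    '</calcChain>'
)

# Build a tiny zip archive the way an .xlsx stores its parts.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("xl/calcChain.xml", sample_calc_chain)

# Read it back: list the cell references in calculation order.
with zipfile.ZipFile(buf) as zf:
    root = ET.fromstring(zf.read("xl/calcChain.xml"))
ns = "{http://schemas.openxmlformats.org/spreadsheetml/2006/main}"
order = [c.attrib["r"] for c in root.iter(f"{ns}c")]
print(order)
```

If the calculation order doesn't match where the cells now sit in the sheet, that's a sign rows were moved after the formulas ran — the kind of trail Simonsohn followed.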

FOUNTAIN: Yes. And this time, Uri and his colleagues at Data Colada could do more than prove that the data was fishy. They could prove that somebody involved in the research had gone in and actually futzed with the data.

SIMONSOHN: And that really sort of wraps it up in a very compelling way 'cause you move - you have, like, confirmatory evidence from a completely different source of information.

FOUNTAIN: And by all accounts, this was Francesca Gino's experiment, run under her supervision, the results analyzed by her.

GUO: So just to reiterate, two experiments by two different superstars of the field, two researchers who are famous for showing how to make people more honest, their own studies about honesty are based on lies, on fraudulent data. Like, this is bananas.

FOUNTAIN: And it's even bigger than that. Uri's team has identified three more suspicious experiments done by Gino, pointing out what they call, quote, "evidence of fraud." She's on administrative leave from Harvard, and two more of her papers have already been retracted.

GUO: Dan Ariely's story - it's also getting more complicated. Remember, back in 2021, Ariely more or less blamed the insurance company for the fabricated data that had appeared in his odometer study. He said, quote, "the data were collected, entered, merged and anonymized by the company and then sent to me. This was the data file that was used for the analysis and then shared publicly."

FOUNTAIN: Now, for years, the insurance company, which is called The Hartford, hasn't said much of anything about all this. But a couple of days ago, we convinced the company to finally go on the record.

GUO: The company told us in a statement that they'd pulled the original data set they sent to Dan Ariely, and it looked dramatically different from the published data from Ariely's experiment. The company said they'd only given Ariely data for about 3,700 insurance policies. But in Ariely's paper, he claimed he had data for more than 13,000 policies. That's a difference of almost 10,000.

FOUNTAIN: In their statement, the company told us, quote, "though some of the data in the published study data is originally sourced from our data, it is clear that the data was manipulated inappropriately and supplemented by synthesized or fabricated data."

GUO: The company basically confirmed everything that Uri and the Data Colada people had said about Ariely's numbers. And the company made it clear that whatever had gone wrong with the data set had gone wrong after they had already given it to Ariely.

FOUNTAIN: We shared parts of the insurance company statement with Ariely, and he responded in an emailed statement, quote, "as I said two years ago, I was responsible for the relationship with the insurance company that provided the data for the paper. I got the data file from the insurance company in about 2007, and I can't tell now who had access to it. Getting the data file was the extent of my involvement with the data."

GUO: We also reached out to Francesca Gino, but she didn't make herself available to be interviewed for this episode. Nor did representatives from Harvard or Duke. The co-authors of the original paper declined to be interviewed. But the rest of behavioral science, they are commenting aplenty.

SANDERS: So this is - it's - this is a disgrace.

GUO: This is Michael Sanders again. He's the researcher who tried to bring the sign-on-the-top thing to Guatemala who left thinking that he had been the one who somehow messed up. Now he feels like the whole field was duped, not just him.

SANDERS: Sometimes it does make me feel like we are stupid when you see Dan Ariely, who, you know, famously says everybody lies a little bit and Francesca Gino says, here's my book on how you can succeed at work if you don't follow the rules. And it's like you walk in with an eye patch and a tricorn hat and a cutlass, then a hundred pages in the book later, we're like, oh, my God, they were a pirate. I never saw it coming.

FOUNTAIN: But Michael says the problem with all these fake studies is bigger than the feelings of a bunch of academics like him. In Guatemala, there was this big investment. Hundreds of thousands of people were put through this experiment that was never going to work.

SANDERS: All of that time and money could have been spent on something different, a different intervention with a better chance of working if we'd known, right? This is not a petty, academic squabble about he said, she said. This is - like, this has real impact in the real world. So I'm pissed off about that. I'm also pissed off by how stupid it is. So when you see, like, eminent Harvard professor has committed research fraud, what you want is really sophisticated, clever ways of cheating, not, oh, I thought I'd copy and paste these observations 'cause they were higher, and that'll make my result come together. This is dumb. And that annoys me. But it's also a source of terror because that means there'll be people out there who have cheated in a really sophisticated way who we haven't caught and who we may never catch.

GUO: Where Michael is, that's where his whole field is right now. Because of the questions around Dan Ariely and Francesca Gino's work, this cloud of uncertainty now kind of hangs over all of behavioral science. Like, what else might not be true?

FOUNTAIN: Ariely and Gino's research offered this really tempting story that if you're just clever enough, you could come up with all these powerful nudges - one simple trick after another that could solve all the world's problems. But now that story is starting to look like just that, a story.

(SOUNDBITE OF ADRIAN QUESADA AND SKINNY WILLIAMS’ "LONE STAR DESERT SURFER")

FOUNTAIN: If you know of anything sketchy going on in your neck of the woods, let us know. We're at planetmoney@npr.org. I'm at nfountain@npr.org. We're also on all the socials.

GUO: This episode was produced by Emma Peaslee, with help from Willa Rubin. It was edited by Keith Romer and fact-checked by Sierra Juarez. It was engineered by Robert Rodriguez. Alex Goldmark is our executive producer. I'm Jeff Guo.

FOUNTAIN: And I'm Nick Fountain. This is NPR. Thank you for listening.

Copyright © 2023 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.