The Future of Lie Detecting

Guests discuss current technology used for detecting lies, and a new device based on brain scans. It's potentially more effective than other means, but the device raises issues of accuracy, ethics, consent, and privacy.
This is TALK OF THE NATION. I'm Neal Conan in Washington. Deceit appears to have been part of human consciousness since before Cain professed ignorance of Abel's whereabouts, and society struggles to figure out who's telling the truth and who isn't.

The 20th century introduced the polygraph test, which doesn't test truth but the sweat and stress associated with telling a lie. Even its staunchest defenders concede it's not always accurate. The test alone can make some people nervous enough to skew the results, while some others can lie without stress.

Now there's a promising new test that measures not anxiety, but the lie itself. It's based on brain scans produced by MRI - Magnetic Resonance Imaging - and this technology isn't down the road someday, it makes its commercial debut next month. It raises issues about accuracy and fairness, about the nature of deception, and our goals in uncovering it. Put simply, can we handle the truth?

Later in the program, we admit it. We'd probably not be able to take a polygraph and swear that we never taped Dynasty or 90210 at least once in our lives. We'll talk about the death of Aaron Spelling and the rise of jiggle TV.

But first, building a better lie detector. If you have questions about how the technology works, what it measures, and how it might be used, give us a call. Our number here in Washington is 800-989-8255. That's 800-989-TALK. E-mail is

Daniel Langleben - Glang - excuse me, Langleben is a pioneer in the lie detection field. He's an assistant professor of psychiatry at the University of Pennsylvania, and he joins us from the studios of the Wharton School on that campus. And I apologize for mangling your name, sir.

Professor DANIEL LANGLEBEN (Professor of Psychiatry, University of Pennsylvania): Good afternoon.

CONAN: Good afternoon. Does this technology work?

Prof. LANGLEBEN: Well, it depends what you mean by work. This technology could differentiate - distinguish a lie and truth under controlled conditions in the laboratory. Between that and commercial applications or any other clinical applications that you mentioned, there is some experimentation to do. But the existing body of work demonstrates that, under some conditions, it could work.

CONAN: It could work. Now, as I understand it, you started out studying ADHD, you know, attention deficit disorder. How did you get into the search for a better lie detector?

Prof. LANGLEBEN: Well, I actually encountered some anecdotal and published reports on the specific characteristics of children with ADHD vis-à-vis deception. That is, there was a suggestion that children with ADHD are worse, or less successful, liars than other children.

CONAN: Mm-hmm.

Prof. LANGLEBEN: And we also know that one of the key deficiencies in ADHD is a poor response inhibition, a poor ability to control their behavior. And so if you put those two things together, you could make up a hypothesis that deception requires response inhibition. Since functional MRI is one of the premier tools for studying - for correlating brain and behavior...

CONAN: Mm-hmm.

Prof. LANGLEBEN: ...this would be a tool of choice to test this hypothesis, that is that deception requires response inhibition, and that was our first work.

CONAN: What does it look like when you actually see the picture?

Prof. LANGLEBEN: Well, it looks like - first of all, what you're looking at are two things. Images that you're getting while the MRI scan is running...

CONAN: Mm-hmm.

Prof. LANGLEBEN: ...are not the images that you actually use to distinguish between lie and truth. What you look at is the statistical maps of the differences between two types of responses. For example, a response to question A and question B. And that is what we're using to discriminate between lie and truth.

CONAN: I see, so I...

Prof. LANGLEBEN: All right, when you...

CONAN: You might ask, for example, is your name Daniel, and you would give a response.


CONAN: And then the next question is, is your name Fred, and we'd take a look at the difference between the two.

Prof. LANGLEBEN: Correct. And what you will see is a relative increase in activity in some parts of the brain during particular types of questions. And then what you need to do is establish the pattern: knowing what is true and what is not true, try to correlate between a pattern that is typical of truth and a pattern typical of lie. And then you will want to extrapolate it to the next group of people, or subjects, or patients that you will study, to see whether their responses fit the pattern that you just established.
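The procedure Langleben describes - establish typical activity patterns from trials where the ground truth is known, then check whether a new response fits the "truth" or the "lie" pattern better - can be sketched in toy form. Everything below is simulated and hypothetical (the region count, effect size, and noise level are invented); it illustrates only the template-matching idea, not any real fMRI pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
N_REGIONS = 50  # hypothetical number of brain regions per activity map

def simulate_scans(pattern, n_trials, noise=0.5):
    """Noisy activity maps scattered around a known underlying pattern."""
    return pattern + noise * rng.standard_normal((n_trials, N_REGIONS))

# Step 1: establish templates from trials where we KNOW the answer was
# true or false (e.g., "Is your name Daniel?" vs. "Is your name Fred?").
truth_pattern = rng.standard_normal(N_REGIONS)
lie_pattern = truth_pattern.copy()
lie_pattern[:10] += 1.5  # assume lying adds activity in some regions

truth_template = simulate_scans(truth_pattern, 30).mean(axis=0)
lie_template = simulate_scans(lie_pattern, 30).mean(axis=0)

# Step 2: classify a new response by which template it correlates with more.
def classify(scan):
    r_truth = np.corrcoef(scan, truth_template)[0, 1]
    r_lie = np.corrcoef(scan, lie_template)[0, 1]
    return "lie" if r_lie > r_truth else "truth"

# Step 3: extrapolate to held-out trials and see how often the pattern holds.
new_lies = simulate_scans(lie_pattern, 20)
new_truths = simulate_scans(truth_pattern, 20)
hits = sum(classify(s) == "lie" for s in new_lies) + \
       sum(classify(s) == "truth" for s in new_truths)
accuracy = hits / 40
print(f"accuracy on held-out trials: {accuracy:.2f}")
```

In practice, the published fMRI studies use far more elaborate statistical models than a single correlation, but the logic is the same: a pattern fixed on known cases, then tested on new ones.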

CONAN: And that sounds, to me, at least at this point, is it capable of distinguishing, you know, shades of gray? Or is it...

Prof. LANGLEBEN: Well, no. This is what the paradigm or the format in which you ask the questions is for.


Prof. LANGLEBEN: To remove the shades of gray. And this technique is used with polygraph as well.

CONAN: They're all yes or no questions. Or black-and-white questions, yeah.

Prof. LANGLEBEN: Yes and no or relative - or questions with a limited number of possible answers. And I wouldn't be surprised if, with some sophistication, it can increase the number of possible answers to three, four, five, six, as long as the number is limited and the pattern is pre-established.

CONAN: Mm-hmm. Let's see if we can get some listeners questions in on this, too. 800-989-8255, if you'd like to join us, 800-989-TALK. E-mail is Steve, Steve's calling from Tulsa, Oklahoma.

STEVE (Caller): Yeah, I have a couple of questions, and I'll get my - listen to the response off the air, or off the phone. One thing that I have a question about is how do they - let's say someone is - to clear themselves of crime are required to take a - this new form of lie detector, we'll call it, how do they get around the HIPAA violation or HIPAA rules as far as medical information being released? And secondly, isn't the MRI interpretation as good as the interpreter, and wouldn't that hold a lot of challenges when court actions occur?

CONAN: Well, Daniel, I think you're certainly competent to answer the second part of that question.

Prof. LANGLEBEN: Well, both of those are excellent questions. Question number one about HIPAA: here, you need to decide whether we're talking about research or we're talking about a diagnostic procedure that falls in this realm of medicine or you're talking about non-medical use of a technology, and depending on that, HIPAA will or will not apply. And also it, of course, raises the issue of whether you want or can do this sort of testing in individuals who are not interested in cooperating with you.

CONAN: Mm-hmm.

Prof. LANGLEBEN: And I'll proceed to the second question.

CONAN: Go ahead.

Prof. LANGLEBEN: Can you repeat, by the way, that question to me? I think I lost that one.

CONAN: It was basically, I think, doesn't the quality of the test depend on the quality of the person who's interpreting the MRI?

Prof. LANGLEBEN: Right. Well, here is one area where MRI is - I expect MRI to have a significant advantage over at least the current state of interpretation of polygraph data, because MRI data is essentially analyzed and interpreted almost automatically. The only thing that is happening by hand is presetting of the thresholds of what's significant and what's not. So the analysis and interpretation of MRI data is very much a procedure that follows a set algorithm that has very little human intervention.
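Langleben's point - that the only hand-set step is the significance threshold, after which the analysis follows a fixed algorithm - can be illustrated with a minimal sketch. The numbers here (voxel count, effect size, threshold value) are invented for the illustration; real analysis packages fit a statistical model per voxel and correct for multiple comparisons.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated "statistical map": one t-like value per voxel, from
# contrasting responses to lie questions vs. truth questions.
t_map = rng.standard_normal(1000)  # background voxels, no real effect
t_map[:25] += 5.0                  # a small cluster with a genuine effect

# The one manually preset parameter: what counts as significant.
T_THRESHOLD = 3.0

# From here on, the procedure is a fixed algorithm with no human judgment:
active = np.flatnonzero(t_map > T_THRESHOLD)
print(f"{active.size} voxels exceed the threshold")
```

Once the threshold is chosen, two analysts running the same data get the same map, which is the contrast Langleben draws with hand-scored polygraph charts.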

CONAN: While the polygraph has a considerable human factor in it.

Prof. LANGLEBEN: Correct.


Prof. LANGLEBEN: But it doesn't mean that polygraph cannot be made into much more algorithm-like process if somebody was interested in doing that.

CONAN: Would there be any benefit - just to follow up for a moment on Steve's thoughts - would there be any benefit to using both of these simultaneously, since the kinds of questions are not too dissimilar?

Prof. LANGLEBEN: Absolutely. And this actually - if I can guess and imagine the future, this is what is most likely to happen, because we're talking about two measurements of different parts of the system, which is a human being. The polygraph is measuring essentially one channel of data - or several channels of data - coming from the activity of the peripheral nervous system: heart rate, skin conductance, et cetera.

The fMRI, meanwhile, is looking at brain activity, which has many more dimensions in a way, but it is a different spot from which to observe the activity of the human organism. So a combination of the two is likely to yield superior accuracy. And more than that, the polygraph could serve essentially as a screening test for the more specific test of a functional MRI.

CONAN: Okay. Steve, thanks very much for the call.

STEVE: You're welcome.

CONAN: All right. The University of Pennsylvania is a hotbed for the development of lie detection technology, and therefore a hotbed of debate over its policy, its ethics, and its social implications. Paul Root Wolpe is a senior fellow at Penn's Center for Bioethics. He tracks the development of lie detection technology. He's also on the board of the Neuroethics Society. And he's with us also from the Wharton School studios on the Penn campus, sitting right next to his colleague there. Nice to have you on the program today.

Prof. PAUL WOLPE (Senior Fellow, University of Pennsylvania's Center for Bioethics): Pleasure to be here.

CONAN: First, Paul Root Wolpe, there have to be concerns about this.

Prof. WOLPE: There are many concerns about this. And one of the fortunate things is that many of the scientists who are actually developing this are thoughtful about it and share many of these concerns. And there are two kinds of concerns.

There are concerns about the efficacy of the test itself, whether it works the way it's advertised, and in whose hands it will work appropriately. And the second set of issues are: what kind of a truthful society do we want, and under what circumstances do we want to use an accurate lie detector? Who gets to use it and who shouldn't be using it? And do we really want a society where even the kinds of social truth-telling that many societies are based on is exposed? Many Asian societies, for example, are actually constructed on a set of social lie-telling rituals.

So lies have an important place in society. There are some kinds of lies that all of us tell, and some kinds of lies that we wouldn't - you know, is that a beautiful baby, isn't my baby beautiful - those are the kinds of lies we all think are appropriate.

CONAN: Do I look fat in this? Yes.

(Soundbite of laughter)

Prof. WOLPE: Right. So there is a real debate that has to happen over who should have access to the technology and under what circumstances.

CONAN: Because it becomes clear, reading about this technology - and Dr. Langleben you can help us out here - it's coming, if it's not here yet, and there is a commercial version of this that makes its debut next month, whether that's good or not. This is happening. It's down the road.

Prof. LANGLEBEN: Right. And it's just beginning now. And as we move through the next years and decades, this technology is going to get more robust and more accurate. And that's why now is the time to begin this conversation.

CONAN: All right. If you'd like to join the conversation give us a phone call. Our number is 800-989-8255, that's 800-989-TALK. The e-mail address is We're talking about lies and new technology to detect them. I'm Neal Conan. We'll be back after a short break. This is TALK OF THE NATION, from NPR News.

(Soundbite of music)

CONAN: This is TALK OF THE NATION. I'm Neal Conan in Washington. We're talking today about building a better lie detector. New technology uses brain scans and promises greater accuracy. It also raises questions of ethics, privacy, and consent. Our guests are Daniel Langleben, who's an assistant professor of psychiatry at the University of Pennsylvania. And Paul Root Wolpe, a senior fellow at Penn's Center for Bioethics.

Of course, you're invited to join us. 800-989-8255, 800-989-TALK. E-mail is And let's talk with Greg. Greg's calling from Provo in Utah.

GREG (CALLER): Hi, I do criminal defense work. I'm an attorney who represents a lot of criminals, and I've met some pretty good liars, I think, in my time. Some of them are pathological. They can tell the same story over and over again, and, I think, actually believe it when everyone else in the world is saying something different. And would their brain scans necessarily show that they were lying, or are their brains wired differently such that they can tell the story over and over again and convince themselves that it's the truth?

CONAN: Daniel Langleben?

Prof. LANGLEBEN: Well, again, this is an excellent question. And the answer is twofold here. First of all, theoretically, fMRI is much more likely to uncover a deception in an individual who does not have an emotional response to his or her behavior, which is a psychopath, because our tests were specifically designed to avoid provoking any excessive emotion or any emotion at all.

CONAN: Mm-hmm.

Prof. LANGLEBEN: While polygraph is essentially based on an anxiety-related response, and if such a response is blunted, as you would expect in a person with an antisocial personality disorder - what you call a psychopath - you are very likely to have a false negative finding. In fact, I think there are programs of training for polygraph evasion that teach people to do that.

I would expect that it would be much more difficult to do with your own brain activity, though, even that may not be impossible. I can imagine some system of biofeedback that could allow you to manipulate your own brain output, but that would be many steps away from what's available now.

CONAN: Hmm. So, in other words, there's promise that - you're saying it would be much more accurate than polygraph.

Prof. LANGLEBEN: That it would be much more valuable in this particular type of populations which are the prime sort of target of lie detection.

CONAN: Good liars? Good liars is who we're after here.

Prof. LANGLEBEN: So this would be an example of what I was suggesting as a potential combination of the use of polygraph and fMRI. That is an individual who cannot be tested with a polygraph successfully may be tested with a functional MRI. However, there is a sort of - the point is that this needs to be experimentally tested.

CONAN: Mm-hmm.

Prof. LANGLEBEN: This is, like I said, a theoretical prediction. And until it has been tested under controlled experimental conditions, in peer reviewed, academic-based work, you cannot really say that that's true. You can say that this is what you expect to be true.

Prof. WOLPE: And, Neal, I think this is one of the big issues that we're dealing with right now. We have two companies out there who are planning, in the fairly near future, on bringing brain imaging lie-detection technology to market. Many of these questions, pathological liars, people with personality disorders, have - these questions have not yet been tested empirically.

We don't know how people with different kinds of personalities, in different kinds of situations, are going to look when the fMRI technology is used on them. It hasn't been tested robustly with very different populations, with different ethnic populations. And so my big concern about this technology is the prematurity of bringing it to market at this time; I don't think it's ready for primetime yet, and it's of great concern that there's a race now to bring this technology to market.


GREG: Somebody with mental disease, though, is going to - their brain's going to be much different than a normal brain regardless of what test is being done though, correct?

Prof. LANGLEBEN: I would expect so, and, again, it would be a speculation. I just submitted a paper for publication exactly on this topic. And, unfortunately, there isn't experimental data on the subject and it needs to be done.

CONAN: Mm-hmm. Greg, thanks very much for the call.

GREG: Thank you.

CONAN: And let's go to Ray(ph). And Ray's calling from San Francisco.

RAY (CALLER): You know, most of us, at one time or another have harbored an idea or concealed a notion of some kind or other consciously and maybe the interviewers who were trained to use the new system of lie-detection could tease out or try to tease out our concealed information or deceits or conspiracies.

But, you know, all these things are on a continuum and when you think of it, most of us carry around notions of dubious - of factuality, factoids, PR hype, and various kinds of news spin and so on are in our consciousness at various times, so that must also be evaluated by an interviewer. So how do you distinguish between the (unintelligible).

CONAN: Presumably, Ray, you ask a specific question. Like, you know, where were you on Thursday at 9:00? And your answer's not going to be conditioned by what you saw earlier that day on CNN or Fox, so...

Prof. WOLPE: You have to understand that lie detectors do not determine truth. The best lie detectors can do is determine what you think is true. And so a skilled person using this understands that what you are saying is simply what is true for you. And once you have that, then you have to begin to evaluate the information for whatever purpose you want to use it for.

RAY: Right. So what you're saying then is that an interviewer is going to look to infer some dissonance and that may not even be that the person being interviewed is aware of it.

CONAN: Hmm. Would that be accurate, Mr. Langleben?

Prof. LANGLEBEN: I'm sorry, can you repeat that?

CONAN: Ray's point is that there could be truths teased out that even the subject is not aware are being exposed.

Prof. LANGLEBEN: Well, that would fall under a very different category and we could not even call it lie detection. If you want to describe it in some way, you would call it mind reading; essentially, trying to test what is stored in your memory without you even trying to shape it or deliver it to you, which is very different from lie-detection and it's a different line of research. Is it possible? It may be possible and it's even more fraught with potential dangers and problems and misinterpretation than lie-detection.

RAY: Well, do you find that verges on the area that the MRI will read?

Prof. LANGLEBEN: Again, it's completely outside of the scope of lie detection, but there are actually - there are a large number of researchers doing fMRI research related to this topic, essentially trying to correlate content of your memory with some kind of MRI pattern. And that includes false memory.

RAY: Maybe you're familiar with the story entitled The Demolished Man by Alfred Bester, an advertising writer and science fiction author, who proposed a system for interfering with mind readers.

CONAN: Well, it was a science fiction novel with which I don't think they're familiar, because it's pretty obscure, but I'm familiar with it. And it was a (unintelligible) device, a tune that he hummed to avoid the mind readers. But Ray, thanks very much. We'll move right along with that.

RAY: Thank you.

CONAN: Anyway, let's see if we can get another question on the line. Jim. Jim's calling from Green Bay in Wisconsin.

JIM (CALLER): Good afternoon.

CONAN: It's a great book, by the way. If you find it, go read it. But it doesn't relate much to this. Go ahead.

JIM: Not allowing the technology to go ahead is infringing on the right of the innocent. If I were going to spend 20 years in prison for something I didn't do, I sure in heck wish I had the right to go take a polygraph, sodium pentothal, or anything that's available at the time to prove my innocence and I would wish it would be put in the court of law.

CONAN: Yeah, Paul Root Wolpe, that's the question that's been raised in this context, too.

Prof. WOLPE: Very much so. And the thing is that in order for that to be valuable, you have to have confidence that your technology works and works well. There are certainly very valuable uses of a reliable lie detector. Proving innocence is one of them, and that's one of the reasons why both of these companies that are pursuing this technology now say that their main goal will be to use this with people who are innocent and trying to prove their innocence.

The question here is not should we use this for those purposes. The question here is, what other purposes should we use this for? If my daught - if I find a dent in my car and my daughter says it's not her, do I have a right to grab her wrists and pull her over to the local lie-detection, you know, company and say I want to know if she was really out past curfew and dented the car? If I want to hire someone to be my bank teller, does bank management have the right to use this technology to screen its employees? Is the military going to start using this technology? Are they going to fly it down to Guantanamo Bay?

These are the questions that I think we need to begin to discuss now. Where is it appropriate to use this and under what circumstances? Assuming that it works properly and well.

CONAN: Daniel Langleben, as I understand it, the military helped fund your research, and I would assume that they are interested in it. Not necessarily the products that are coming out next month, though.

Prof. LANGLEBEN: They have funded the work that has been done so far, though to a very limited extent, I would say. And I think their interest was mainly in trying to test whether there is a feasibility of using this technology. And that we've done. And at this point we do not have further plans for military funding.

CONAN: Let's get Jack on the line. Jack is calling from Petaluma in California.

JACK (CALLER): Yes. I was in the Navy in 1948. A Naval Officer going to school at Treasure Island. Something was stolen from a roommate of mine who was just a temporary roommate and I was called over to take a lie detector test. The military was using it back then. I refused it and walked out of the place. Nothing was ever done about it, but it was a pretty chilling thing.

I knew that I hadn't stolen the man's wristwatch, and I would never take it again unless they hogtied me to the place. And I don't - the other gentleman said he might take it to prove his innocence. I think he would be crazy to do that, since it's admittedly not 100 percent accurate.

CONAN: Mm-hmm.

JACK: That's all I have to say on that.

CONAN: All right. And thanks for the call. Paul Root Wolpe, I guess that's where you get onto questions of coercion. Well, you don't have to take it. Of course, if you don't...

Prof. WOLPE: Right, and that's the issue. With this technology, by the way, with the fMRI lie-detection technology you can't force someone to take this. If they really don't want to cooperate, it's very difficult to get the kinds of readings that you want and to get their - you must have their cooperation. So, in that sense, it's difficult to use this coercively.

But, as the caller just mentioned, there are other kinds of coercion. There's expectation. There's assumption of falsehood when someone refuses to use these technologies. And those kinds of coercive forces can be just as destructive and just as problematic.

CONAN: And we should point out - and it's the only law that we know about; there's no law yet for this new technology - but for polygraph, it's not admissible in court, it can't be used by companies, but all levels of government can use it to screen employees, including the military.

Prof. WOLPE: That's right.

CONAN: We're talking with Paul Root Wolpe, a senior fellow at Penn's Center for Bioethics, and with Daniel Langleben, who's an assistant professor of psychiatry at the University of Pennsylvania, about a better lie detector. And you're listening to TALK OF THE NATION, from NPR News.

And, Daniel Langleben, as you describe this technology, it does look like it has tremendous promise for your field, for psychiatry.

Prof. LANGLEBEN: That's correct. And my interest in this technology is, I would say, largely guided by that. One of the callers, for example, described the concern about interfering with one's memories, which is not exactly lie detection, but related. Well, the same technology can be applied to the treatment of post-traumatic stress disorder. In the same way, lie detection opens the door to a better understanding of unconscious defense mechanisms, one of which is denial. So, in a way, it could open the door to the, how would I say, restoration of the biological basis of psychotherapy, which is a method that has been pretty much on the back burner of modern psychiatry.

CONAN: Mm-hmm. Let's see if we can get another caller on the line. This is Matt. Matt's with us from St. Louis.

MATT (CALLER): Yes. Good afternoon. Actually, I have two questions for both your guests. Specifically, which region of the brain is the fMRI focused on when determining if it's a truth or a lie? Is it a conscious part of the brain or is it unconscious? Can you convince yourself that a lie is actually truth through some type of conditioning?

CONAN: And is it one section or several? Daniel Langleben.

Prof. LANGLEBEN: Well, I'll be grossly oversimplifying, so please forgive me. But what we see is a pattern that looks like what you need to activate if you want to control your behavior. As we originally hypothesized, the activity looks very similar to what you would need to activate in your brain to suppress a response of some sort.

CONAN: Mm-hmm.

Prof. LANGLEBEN: Now, this pattern will change depending on the particular nature and the order of the questions asked, and therefore each change in the pattern of the test would need to be validated and tested and verified. But if I have to answer it in a simple way: it is conscious and it's not just one part of the brain, it's a number of parts of the brain that work together.

CONAN: All right Matt, your other point?

MATT: Yeah, my other question was, and some other disorders have been mentioned, but has multiple personality been taken into account to see if that actually has different truth percentiles, I guess, depending on the personality that the person assumes?

Prof. LANGLEBEN: Well, we don't even know whether this technique will work in criminals, to say nothing about multiple personality disorder, which is an extremely rare condition.

CONAN: Mm-hmm. Ah...

Prof. LANGLEBEN: However...

CONAN: Go ahead, I'm sorry.

Prof. LANGLEBEN: I would expect that people who - as Dr. Wolpe said before, we're talking about your subjective truth. And in multiple personality disorder, it's going to be pretty hard to find the subjective truth.

CONAN: Matt, thank you.

MATT: Well, thank you.

CONAN: Thanks very much for the call. And again getting back to what Professor Wolpe was talking about earlier, there are obviously degrees of untruths, everything from a bluff in a poker game to the social white lie to, you know, trying to get away with murder. Does the technology, at least as far as you've tested it, does it pick up all those differences?

Prof. WOLPE: This is Paul Wolpe.

CONAN: Go ahead.

Prof. WOLPE: The technology of fMRI looks at the response to a question that is asked. The technique or technology of asking these questions is really no different or little different in the fMRI as it is in polygraph. And for over 80 years, polygraph professionals have tried to perfect a technique whereby these kinds of - any kind of issue or question can be broken down into binary or forced choice answers.

So even if we're talking about something that has shades of truth, you try to ask yes, no or at least limited answer questions that would lead you, one after another, into those shades of truth. So it's part of the artistry of formulating your questions for these types of instruments. The fMRI doesn't change the basic need to have a skilled questioner.

CONAN: We'll take a couple of more questions on this when we get back from a break. I promise that's true. 800-989-8255, if you'd like to join us. 800-989-TALK. E-mail us, Plus, we'll remember TV pioneer Aaron Spelling. I'm Neal Conan. You're listening to TALK OF THE NATION, from NPR News.

(Soundbite of music)

CONAN: This is TALK OF THE NATION. I'm Neal Conan in Washington. And here are the headlines from some of the other stories we're following here today at NPR News.

Warren Buffett has announced he will give most of his $44 billion fortune to charity. The Bill and Melinda Gates Foundation will be the recipient of most of Buffett's generosity, receiving about $1.5 billion per year.

And Israeli Prime Minister Ehud Olmert says he has ruled out bargaining with the captors of an Israeli soldier and has promised a broad and ongoing military offensive. The soldier was seized by Palestinian militants during a guerrilla raid in southern Israel on Sunday.

Details on those stories and, of course, much more later today on ALL THINGS CONSIDERED. Tomorrow on TALK OF THE NATION, with tens of thousands of people waiting for an organ donation in this country, the need continues to far outstrip the supply. Some wonder why they can't buy an organ, or sell one. Others are horrified at the prospect. The ethics of organ donation and sales, next TALK OF THE NATION, from NPR News.

In a few minutes, he gave us the Love Boat, Dynasty, 90210, and redefined television drama. We'll remember Aaron Spelling with TV Guide's Matt Roush. But first, we're talking about technology to detect lies. Our guests are Daniel Langleben, an assistant professor of psychiatry at the University of Pennsylvania, and Paul Root Wolpe, who's a senior fellow at Penn's Center for Bioethics.

If you'd like to join us: 800-989-8255, 800-989-TALK. E-mail is And let's get a question in from Howard. Howard's with us from Cleveland.

HOWARD (CALLER): Hi, Neal. Thanks for taking my call. An MRI is a pretty large and fairly expensive piece of equipment that you can't just roll into an office when you need to use it. Are any of these results replicable through an EEG, which is much smaller and more portable?

CONAN: Daniel Langleben?

Prof. LANGLEBEN: Well, again, it's an excellent question. And, in fact, our work is based on prior work by Peter Rosenfeld from Northwestern University in Illinois with EEG. The difference between EEG and fMRI is that fMRI has many more dimensions of measuring brain activity compared to EEG, where the source of brain activity is hard to detect. So the answer is yes, it is very possible that some of the findings we have with fMRI could be repeated with EEG. However, we expect that the fMRI will remain superior.

HOWARD: Just as a follow-up, would it be as easy to read and as automated as the fMRI, or is it more user subjective?

Prof. LANGLEBEN: I would expect that interpretation of EEG data could be automated to the same extent as fMRI.

HOWARD: Great, thank you very much.

CONAN: Thanks, Howard.

Prof. LANGLEBEN: You're welcome.

CONAN: And let's see if we can talk with Jim. Jim in Oklahoma City.

JIM (CALLER): Hello, guys. I was just curious, I didn't know if anyone had addressed the cost issue versus the polygraph versus an MRI. I know, like the previous caller said, you know, MRIs are cumbersome, they're not portable really and...

CONAN: They cost millions of dollars each, yeah.

JIM: Exactly. And you would have to have, obviously, a radiologist to read and interpret the results; you couldn't just have anyone interpret them. So what would the cost factor be compared to a regular polygraph?

CONAN: Any idea?

Prof. LANGLEBEN: Daniel Langleben. I'll try to answer this. First of all, the cost of the machine itself, the scanner, is indeed very high. However, what is important here is the hourly cost of using the MRI, because once the machine is installed, it's essentially pure overhead until it's being used. So what you need to think about is an hour of fMRI or MRI machine time versus an hour of polygraph.

And I think at that point the costs start coming pretty close together, because it's mostly about the labor involved, more than the actual use of the scanner. There are hundreds of MRI scanners around the country, and many of them have excess capacity. That's part one.

The other issue is radiologists. In fact, interpretation of this data could probably be almost entirely automated, and a radiologist is probably not a required part of interpreting it, though radiologists could participate and probably could contribute.

CONAN: Okay.

JIM: Okay, thanks, guys.

CONAN: Thank you, Jim. And as we look ahead to this, Paul Root Wolpe, I mean, clearly the costs are going to come down, the technology is going to improve, and that's really where your concerns begin.

Dr. WOLPE: It is. Then we have to decide: are we going to regulate this technology and say only certain people have access to it? Is anyone going to be able to use it when we get more and more robust lie detectors? And, especially as we go on, there may be other kinds of technologies beyond fMRI that follow on its heels and are more portable.

They're already trying things like functional near-infrared, where they shine infrared light into the frontal cortex and then detect the reflection, and they can determine blood flow through that, and there's some evidence that that may be a passable lie detector. That would be extraordinarily portable and easy to use, less even, perhaps --

CONAN: And consent would not be involved. People could do it without your consent.

Dr. WOLPE: Well, the way it is right now, you actually have to put a band across the forehead, in contact with the skin, but the goal is eventually to develop one that is remote and, perhaps, even covert.

And then you have the additional question, though - I emphasize, we can't do this now - but then you have the additional question of, when you walk through that airport security checkpoint, is it perfectly okay to take someone into a side room and covertly use a lie detection technology on them? These are questions that we're actually going to have to begin grappling with.

CONAN: Thanks, gentlemen, both very much for your time; we appreciate it.

Dr. WOLPE: My pleasure.

Prof. LANGLEBEN: Thank you very much.

CONAN: We heard from Paul Root Wolpe, who's a senior fellow at the University of Pennsylvania's Center for Bioethics, and from Daniel Langleben, who's an assistant professor of psychiatry at the University of Pennsylvania, all about the prospect of a better lie detector.

When we come back, Aaron Spelling.

Copyright © 2006 NPR. All rights reserved. Visit our website terms of use and permissions pages at for further information.

NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.