Finding Fault with Forensics

A new study says faulty forensic science is a leading cause of wrongful criminal convictions. There's evidence for and against the accuracy of fingerprints, bitemarks, and other forensic techniques.




You're listening to TALK OF THE NATION/SCIENCE FRIDAY. I'm Ira Flatow.

And for the rest of the hour, we're going to take a look at forensic science and scientists. I know all you "C.S.I." fans, you watch those, lots of different forensic science shows on TV. Admit it, you're fantasizing about the work in that crime lab. It is an exciting world, isn't it? I like especially, you know, the parts where they bathe everything in that flattering blue light. Where does that light come from? I also like how they're able to solve a really interesting crime in an hour--of course, minus the commercials, so we're really talking 40 minutes here.

So how closely do these Hollywood sets and their sculptured actors really match true forensic scientists in the real world? And how well do real forensic techniques meet the demanding validation required in scientific research? Because according to some critics, it's not well enough. First, a report in the journal Science says forensic scientists may not be getting enough scientific training and that faulty forensic science is a leading cause of wrongful criminal convictions. And adding to that, there isn't enough research to show that many of the common forensic techniques--even fingerprinting or bite mark evidence--are reliable ways of linking someone to a crime. If they were invented today, they might not stand up to scientific investigation.

So this hour, we're going to take a look at the evidence for and against some of the most famous and most widely used forensic techniques that we take for granted. As I say, fingerprinting, bite marks, others. We'll talk about them. Do these crime-solving methods hold up under scientific scrutiny? And what can be done to improve the quality of science coming out of our crime labs? We're also going to talk about whether those high-tech TV tools they use--as I say, I love that little blue light--can identify anything. They shine it onto something and, `Whoa, there it is, shining. We've got it. We figured out what it is.' Do those gadgets really exist? What kind of gadgets would the forensic scientists who do the real work love to have from those TV shows?--not to mention the designer wardrobes that the actors are wearing.

So that's what we're talking about this hour. On the--believe it or not, this is the 10th anniversary of the O.J. Simpson trial, where I reckon that most people got their first real education in the high points and the low points of forensic science. And if you'd like to join us, our number is 1-800-989-8255; 1-800-989-TALK.

Jonathan Koehler is a professor of behavioral decision making at the McCombs School of Business at the University of Texas in Austin. He joins us by phone.

Thanks for being with us today, Dr. Koehler.

Dr. JONATHAN KOEHLER (Professor, McCombs School of Business, University of Texas): Thanks for having me, Ira.

FLATOW: You're welcome. Barry Fisher is the crime laboratory director for the Los Angeles County Sheriff's Department. He's also on the line with us.

Thank you for talking with us today.

Mr. BARRY FISHER (Crime Laboratory Director, Los Angeles County Sheriff's Department): Pleasure, Ira.

FLATOW: Let me begin by talking with you, Dr. Koehler, about the graph on the first page of your article. This is very interesting. You look at wrongful convictions using data supplied by the Innocence Project. You looked at 86 cases where people were exonerated and found that in more than 60 percent of them, faulty forensic work was to blame. Were these legitimate mistakes that just happened, you know, innocently in the laboratory, or was there evidence of deliberate dishonesty here?

Dr. KOEHLER: Well, most of those mistakes probably were not deliberate. Occasionally, they're deliberate mistakes. Occasionally, there's fraud. But much more commonly, when there are mistakes they are, you know, committed by well-meaning people. And, you know, the kinds of mistakes that were observed were things like matches that were declared in serological tests and blood tests that were later shown to be inconclusive, inconclusive calls that were actually exclusions, meaning that, you know, they were judged to be inconclusive by the analyst, but later it was shown that the suspect really couldn't have contributed whatever material was detected. You know, occasionally there's a bite match that was reported that matched the suspect and that later turned out it wasn't even a bite mark at all. So it's a variety of errors that went into those--that graph.

FLATOW: Were those errors from poor training or what?

Dr. KOEHLER: Well, it's hard to know, you know, and that was the point of the paper, is that it would be nice if there were more--you know, more extensive training, better protocols, more testing to determine how good these analysts are. That's really one of the major points of the paper. And we think that there are some forces conspiring to bring the science back into forensic identification science.

FLATOW: Mm-hmm. Mr. Fisher, what's your reaction to this study?

Mr. FISHER: Well, I agree with a number of things that Jonathan is saying in his paper. Obviously, we have a few differences of opinion. Forensic science has been growing by leaps and bounds. I think, as you pointed out, the O.J. Simpson case 10 years ago was a watershed event, and people opened their eyes to the excitement, if you will, of what science, brought to bear to help the police investigate crimes, can potentially do. And we've been, for many years, playing catch-up. In many areas of forensic science there is a lack of funding--for research, for one thing. If you look at the amount of federal funds available to conduct some of the research that Dr. Koehler is talking about, it's just not available. And hopefully--this session in Congress we've been working with a couple of senators who are very supportive, Senator Shelby from Alabama and also Senator Sessions from Alabama, who recognize that there are a lot of needs in forensic science, and we're trying to move those issues forward and...

FLATOW: This would be to train forensic scientists...

Mr. FISHER: Well...

FLATOW: ...better...

Mr. FISHER: ...One of the areas that we keep on talking about is capacity building. There's been a huge number of people hired in the field, not only in DNA but in other areas. Most laboratories really took off in the early '70s under President Nixon's administration, when there was a lot of LEAA, Law Enforcement Assistance Administration, money. You fast-forward now almost 30 years later, and many of those people are retiring. Plus, the greater demand for more scientists in the area has meant a lot more people being hired by laboratories. And we need to come up with better ways to train people to come into laboratories and do the quality work that we're talking about.

FLATOW: I would think that the war on terror would loosen up a lot of money for local...

Mr. FISHER: Well, that--that's...

FLATOW: Is it not filtering down to the local level?

Mr. FISHER: Not in the local forensic science labs per se. That's one area where it hasn't happened. And, of course, one area that touches on this but was not specifically mentioned is that there's a tremendous amount of research being done on ways to identify individuals through photographs, through facial geometry and whatnot, and many of the same issues that Dr. Koehler is talking about in terms of reliability are faced in these areas, too. What is the research out there that says facial geometry, for example, or hand geometry--the sort of devices that allow you into a secure facility--how valid are these things? So there is a wide range of research that needs to be done, but, again, it comes down to money. If you want to do the kind of study that, say, the National Research Council did a number of years ago on DNA, those studies are not self-funding. Somebody has to call for that research.

FLATOW: Right.

Mr. FISHER: Typically, Congress has to say, `OK. This is--these are areas we want to take a look at. Now go ahead and do it.' It's not necessary--not necessarily the case that individuals are saying, `No, no, no. This isn't necessary.' It's more often the case, `Well, how do we go ahead and do that?' And hopefully, some of the things that I've been working on in the area of public policy are to try to educate people in the administration and also the Congress on the importance of the things that we do and the critical role that science has in the administration of justice.

FLATOW: 1-800-989-8255 is our number. We're talking about forensic science. Let me bring up a point with you, Dr. Koehler, that Mr. Fisher raised. He was talking about the idea that we're bringing in facial recognition and other kinds of forensic techniques that have yet to be proved valid in a pure scientific way, joining other things that have never really been scientifically validated, like bite marks and even fingerprinting.

Dr. KOEHLER: Well, that's correct. All of these new technologies are very exciting and should be pursued with vigor. And by that, I mean there should be research funding available for testing these techniques, for training analysts to use the techniques. And the courts should implement the legal rules at their disposal for determining whether or not those techniques are good enough to bring into court. So, for example, there's a standard in the 1990s under a Supreme Court case called Daubert which requires that all scientific evidence that comes into court should be derived from the scientific method. And that includes the sort of thing that Barry Fisher knows well, falsifiability, peer review, publication...

FLATOW: Give me an example of one thing that would--you know, how you would work that with the--one of the evidence-collecting techniques you're talking about.

Dr. KOEHLER: Well, for example, let's say, you know, there's a new facial recognition technique that somebody says, `I can--you know, I can recon--I can determine this face based on this new technology that we have.' What I would want to see before the courts admitted that--I'd want to see some sort of peer-reviewed research done on that technique. It wouldn't be good enough for me--and it shouldn't be good enough for court, to have an expert come in and say, `Trust me. I'm trained in this technique. I'm the world's leading expert and I can make this determination based on my experience and training.' What we need is research that validates the technique and that validates the application to the instant case. I'd also want to know what that analyst's error rate was--that is, how often when he said it's a particular face that it turns out he's wrong; it's some other face. And if I can't have the individual analyst's error rate, which might be a little difficult to get at times--it's hard for any individual analyst to take a tremendous number of tests, which is what would be required to identify an individual error rate--I would at least like to know some--an error rate for a little broader group, maybe the laboratory that he works in or the industrywide error rate. I'd want to have some sense of how often when they say `match' that it is, in fact, not a match.

FLATOW: Do we even have that for fingerprinting?

Dr. KOEHLER: We really don't have a very good idea for fingerprinting. And part of the reason is that the forensic sciences, until recently, have resisted these sorts of tests. Even with DNA technology, we've seen some resistance. There are some proficiency tests in the area of fingerprinting. They're voluntary and they are ongoing. And the news is good or bad, depending on how you look at the error rates that have been found. The error rates seem to be on the order of a couple percent. So, for example, since 1995, I think the data show that about 4 to 5 percent of analysts in these fingerprint proficiency tests make at least one error. And they usually get 10 tries. They're given 10 prints to match, and so most people do a pretty good job. They're 10 for 10. But somewhere between 4 and 6 percent make a mistake. And so, on the one hand, you know, 95 percent is pretty good; on the other hand, most of us wouldn't fly on an airplane that has a 95 percent chance of landing safely.
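[The arithmetic behind those figures can be made concrete. A minimal sketch, assuming illustratively that each analyst judges 10 prints independently and that 5 percent of analysts make at least one error--the independence assumption and the exact 5 percent figure are ours, not from the broadcast:]

```python
# Back out an approximate per-print error rate from the per-analyst figure.
# Illustrative assumptions: 10 independent prints per proficiency test, and
# about 5% of analysts make at least one error across those 10.
# P(at least one error) = 1 - (1 - p)**10, so p = 1 - (1 - 0.05)**(1/10).
analyst_error_rate = 0.05   # fraction of analysts with >= 1 error (assumed)
trials = 10                 # prints per proficiency test

per_print_error = 1 - (1 - analyst_error_rate) ** (1 / trials)
print(f"implied per-print error rate: {per_print_error:.4f}")  # roughly 0.005
```

[So a "4 to 6 percent of analysts" figure corresponds, under these assumptions, to an error on roughly one comparison in 200.]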

FLATOW: So then you'd want to identify who these people are, individually?

Dr. KOEHLER: Well, I'd want to identify who the people are and what the circumstances are that lead to the errors. You know, sometimes there's a really nice clean print, they might be a little easier to match than, you know, a partial print, for example.

FLATOW: And the same thing would be true I guess for--What?--bite marks, and maybe hair samples, things like that?

Dr. KOEHLER: Yeah, now there have been proficiency tests in those areas and, you know, the results are fairly disastrous, I would say. Fingerprinting is doing a little better than most of the traditional forensic sciences but, for example, in our paper we report that voice identification error rates run as high as 63 percent.


Dr. KOEHLER: Sixty-three percent. Handwriting error rates at 40 percent. Bite marks, almost two-thirds, as well.

FLATOW: And yet we hear all the time this kind of evidence being introduced in court cases.

Dr. KOEHLER: Well, we do, because a lot of times this error rate information doesn't make it to the jury. You know, the error rate information that we have is not perfect. It's based on proficiency tests that themselves have some errors, that themselves are not, you know, especially methodologically sound. But that doesn't mean we shouldn't pay some attention to them. I think they're signalling that there's a big problem in some of the traditional forensic sciences.

FLATOW: Well, we're talking about forensic science this hour on TALK OF THE NATION/SCIENCE FRIDAY from NPR News. I'm Ira Flatow with Jonathan Koehler of the University of Texas and Barry Fisher of the LA County Sheriff's Department. Our number: 1 (800) 989-8255. So I guess what you're saying is that a defense counsel could challenge some of the evidence by pointing out some of the errors or the error rate in some of the testing techniques.

Dr. KOEHLER: Well, that's a good point. Traditionally, defense counsel were reluctant to challenge fingerprinting because it was so well established in the courts that it seemed ridiculous to assume the judge would take a challenge seriously. In recent years, though, we've seen not only lawyers challenge fingerprinting, we've seen judges challenge fingerprinting without any motion from the defense attorney. And for the most part, what we're seeing is that those challenges are failing; that is, fingerprinting is still sneaking in, even with the Daubert standard intact. Under the Daubert standard, judges tend to review the available scientific evidence related to fingerprinting; they usually find that it's lacking, but then they find some end around for introducing the fingerprints in court. Like, they'll say, `Well, there really hasn't been any serious scientific testing, but 100 years of courtroom experience is a kind of a test.' After all...

FLATOW: Right.

Dr. KOEHLER: ...we know that we're putting a lot of guilty men away, and so that's a test, and so we'll count that as testing.

FLATOW: What about witness--you know, eyewitness identification? We know that that's so faulty. Has that become challengeable these days?

Dr. KOEHLER: Well, sure. Eyewitness identification's been challenged all the time. But there, you see, people understand that eyewitnesses make errors. Everybody understands the conditions are often such that you can't discriminate one face from another, even though, by the way, all faces are unique, just as it may well turn out that all fingerprints are unique, or all DNA profiles are unique. Whether fingerprints are unique or not is debatable, but it's not the uniqueness, per se, of fingerprints or DNA profiles or faces that's really at issue. What's at issue is our ability, in a specific case, to arrive at such a conclusion. Are the conditions such that we really can reach the conclusion that this is the print of this particular individual?

FLATOW: I got about a minute before the break. Barry Fisher, is this making your job a little tougher now, making sure you have very good, you know, testimony?

Mr. FISHER: Well, you have a couple of things in play here. The Daubert decision deals with the admissibility of evidence, meaning the court makes a decision whether or not the jury is even going to hear the evidence. Once it gets before the jury, this other information can also be brought forth to diminish the value or the weight of the evidence once it's there. It's very difficult--let's assume--let's just accept that the people who work in crime labs make errors. The information has some degree of probative value that juries need to consider and I think that that's one of the things that we have to grapple with.

FLATOW: All right. Well, stay with that thought. We're going to take a short break and come back and talk lots more about this topic and take your questions so don't go away. We'll be right back.

I'm Ira Flatow. This is TALK OF THE NATION/SCIENCE FRIDAY from NPR News.


FLATOW: You're listening to TALK OF THE NATION/SCIENCE FRIDAY. I'm Ira Flatow.

We're talking this hour about forensic sciences with my guests Barry A.J. Fisher, head of the crime lab at the Los Angeles County Sheriff's Department, and Jonathan Koehler, professor of behavioral decision making at the University of Texas in Austin. Our number: 1 (800) 989-8255.

Dr. KOEHLER: Ira, I wonder if I could follow up...


Dr. KOEHLER: ...on something that Barry just mentioned?

FLATOW: Well, let me ask Barry if he was done with his thought there...


FLATOW: ...if he wanted to continue.

Mr. FISHER: Well, it's--see, one of the big conundrums that we face is that a lot of these procedures that we're talking about, these scientific methodologies that don't follow traditional science, were developed to solve problems that police investigators faced, that they turned to scientists who were working in crime labs. There's no doctoral degree in handwriting or fingerprints. These procedures developed over a number of years. In 19--around 1993, '94, with the Daubert decision, the way in which the courts looked at scientific and later expert evidence was changed. It was--prior to that, it was the so-called Frye Standard, which was a general acceptance procedure and now a requirement is being made for reliability, among other things.

Well, how do we get to the point where we can demonstrate reliability? That typically means research is conducted and published in peer-reviewed journals. In order for that to happen, entities need to fund this kind of research, whether it's the National Science Foundation, the National Institute of Justice or whomever, and there hasn't been the investment in this particular area to address these problems. So in an academic sense, it's fine to talk about the failings of what's happening, but, by the same token, people in the government have to make that investment to deal with some of these problems.

FLATOW: Interesting. Dr. Koehler?

Dr. KOEHLER: Well, we agree completely. Barry Fisher should get all the money that he wants for training his analysts and other people should get the money they want for conducting the proficiency test studies to make sure that his analysts really can do the things they say they do. I also agree with Barry when he said that the Daubert decision is designed to curb the admissibility of so-called junk science. Now I wouldn't put traditional forensic sciences in that category because my sense is there is some probative value to DNA typing, fingerprinting and so on.

The question, though, is: How much probative value is there? And we're not going to know the answer to that until we do this kind of research that Barry's been talking about. But the--assuming that the traditional forensic sciences are admitted into court, one of my concerns is the way it's described by analysts in the courtroom, the way the match evidence is described. And right now there seems to be a kind of culture of exaggeration associated with the traditional forensic sciences. So you very often hear analysts say, you know, `This is a match. It's 100 percent positive match. There's no possibility this fingerprint could have come from anyone else, no possibility this hair could have come from anyone else. We've never made a mistake. There's no such thing as a mistake with this technology. It's theoretically impossible to make a false-positive error.'

These sorts of things are complete nonsense. And unless a defense attorney is up on the literature, and there isn't a lot of it, he or she may not know that that's just a bunch of baloney. Just to give one very brief example, one famous example, the Brandon Mayfield case. This was the Portland, Oregon, attorney who was matched to a fingerprint that was found--that the Spanish police lifted off a bag of detonators near the explosion site after the Madrid, Spain, train bombing back in March 2004. And that bombing killed almost 200 people. And Brandon Mayfield's fingerprint, allegedly, was found on that bag. And one FBI examiner took a look at it and he said, `That's a match, 100 percent positive identification.' And then another FBI examiner took a look at it and he said, `That's a match.' And a third FBI examiner took a look at it, and this was a retired examiner, and he said, `That's a match,' and then another well-known forensic scientist took a look at it and he said, `That's a match.'

And so we had four people saying, `This is a 100 percent positive match to Brandon Mayfield,' only the Spanish police didn't agree, and they ultimately matched it to someone else. And so now we know that it was not Brandon Mayfield's fingerprint and he was arrested and held for a couple of weeks. But had it not been for the Spanish police, we would have had some of our top analysts in the country saying, `One hundred percent positive match.' I don't know what would have happened to Brandon Mayfield. So what...


Dr. KOEHLER: ...The point here is that it's not only theoretically possible to make errors; errors have occurred in proficiency tests, errors have occurred in actual cases, and it makes no sense at all to describe matches in terms of 100 percent certainty. That isn't the scientific way.

FLATOW: Would you agree, Barry?

Mr. FISHER: In general, I do agree. I think there are difficulties in maintaining that point of view. An examiner can say, `In my opinion, I believe it's a match,' and still be wrong, or not be able to substantiate that it's a 100 percent certainty. But that's pretty much the state of the practice in many of these areas. And, of course, on the other hand, these arguments are played out in the courtroom, and it's a very strange place to have scientific arguments, where you have lawyers who are non-scientists asking scientists or technicians about technical areas, and the arbiter of what's going on is a judge who's also not a scientist. Ideally, these issues are best played out in an academic arena, in the research literature, to figure out what's going on.

There's a certain amount of partisanship that's bound to crop up in a courtroom debate of this sort and, unfortunately, that's where a lot of this winds up happening. I think that if the right kinds of research designs can be put together, the right research institutions brought to bear on these types of problems, they are solvable; or if they're not solvable, at least we can put some boundaries around a particular area and say, `Well, this is what we mean.' And if you go back early on to fingerprints--rather, to DNA testing--there was some discussion early on about, `Should we make a statement that this biological sample was left by this person to the exclusion of all others, or do we have to wrap it around some sort of statistic?' And...

FLATOW: Yeah, they didn't even want to call it DNA fingerprinting for that reason.

Mr. FISHER: Right.


Mr. FISHER: Because there was this notion that it was infallible. But there are limits to any measurements that you make in science, first of all, and then, on top of that, you have human error that creeps into any endeavor. And I don't think it's improper or unreasonable to take a look at this stuff and offer that kind of testimony in court. I still think, at the end of the day, fingerprints are reliable--even though, occasionally, mistakes are made--that they do have probative value, that they do solve crimes and exonerate innocent people. There are occasional instances when evidence doesn't do what we had hoped it would do. And the decision-makers, namely the courts and the juries, probably need to at least know this, have these kinds of questions answered, take that information with them into the jury deliberation room and enter it into the calculus by which they are making their decisions.

FLATOW: All right. We're running out of time. Let me see if I can get a caller or two in here. Chip in Tulsa. Hi, Chip.

CHIP (Caller): Hi. How you doing?


CHIP: Hey, listen, I really enjoy your show. Thank you very much for taking my call.

FLATOW: Thank you.

CHIP: I'm watching all these things on TV and it's all these really cool tests and all this stuff that they're doing and I know--like, I live in Tulsa--Tulsa, Oklahoma--and I go to church with the chief of police and when I asked him about this stuff, he says, `Well, there's just--we don't have any money to do that kind of stuff. It's just cost-prohibitive.' And it's not that they can't do it, it's just that they're not--you know, they just--they can't. It's...

FLATOW: Right.

CHIP: I know, as this stuff gets more out into the limelight, is it going to get more cost-effective or, you know, is it going to continue to be cost-prohibitive for the smaller communities to have access to this type...

FLATOW: Well, let me just ask Barry Fisher first how much of a--what we see on TV actually he has in his lab or actually exists, and how close is your lab to the "C.S.I." lab?

Mr. FISHER: Well, we actually have one of those blue lights, I'm happy to report.

FLATOW: What does it show, the blue light?

Mr. FISHER: The blue light is just a high-intensity light source at that frequency, and certain fluids will fluoresce or luminesce under it, and some fibers do. It makes them easier to see. The labs are a lot better lighted than the set of "C.S.I.," which has this dramatic quality to it. The real difference between the "C.S.I." shows and reality is we can't solve crimes in 40 minutes. And they do take some poetic license. After all, it is entertainment. It's not meant to be a reality show. It's meant to be entertaining to the public that watches it. And there is some basis of fact in there, but they do exercise a little of that literary license.

FLATOW: There have also been reports recently of juries in real trials acquitting defendants because they saw something on a "C.S.I." show that the prosecutor never introduced--some test they thought could have been done. You know what I'm talking about?

Mr. FISHER: Yeah, I've read--if you go online and look at the news stories that are out there, there's--the current buzzword is `the "C.S.I." effect.' And that's probably something that needs some studying to see how real that actually is. I'm sure that it has an impact on some people's decision-making. What the extent of that is, I don't know.

FLATOW: Right. Well, that's interesting.

Dr. KOEHLER: I would like to say to the caller...

FLATOW: Go ahead.

Dr. KOEHLER: ...who was from Oklahoma that his state had better come up with some funds for testing, because in his state Joyce Gilchrist, a well-known forensic scientist, provided false testimony in a case that led to the conviction of a death-row inmate who eventually was freed. And she's been involved in literally thousands of cases, including two dozen in which defendants were sentenced to death, and about half of those have already been executed. And there is an ongoing investigation of Ms. Gilchrist. So these things are going to be expensive. Daubert hearings are expensive; that is, the admissibility hearings that precede a trial to determine whether the proffered scientific evidence is in fact scientific enough to be heard by a jury. All of these things are expensive, and I don't know where the funding is ultimately going to come from, whether it's state, county, federal, but it's very, very important.

FLATOW: We're talking about forensic science this hour on TALK OF THE NATION/SCIENCE FRIDAY from NPR News.

Let me ask you, Dr. Koehler, even when science has been tested, how do juries react to it? Do they sometimes not believe what the scientists and the science is telling them?

Dr. KOEHLER: Well, there is an important issue there as well, because once statistics are introduced, as they are in most DNA cases, very often jurors get confused or misled by those statistics. If the information is presented simply, jurors are likely to use it. As it becomes more and more complex, jurors are likely to turn toward other evidence that's a little more comprehensible. Unfortunately, studies on juror comprehension right now yield a very pessimistic view of how jurors understand DNA match statistics.

So, for example, when they hear that, you know, a body is found in the woods and there's a match on the suspect and one in a million people share that blood type, I've actually seen examples where individual jurors, mock jurors in our studies, have said, `Well, one in a million. That means it couldn't be the suspect, because that's like winning the lottery.' And in fact the point of the very low probability is that the evidence is more probative as the probability gets lower and lower: as the chance that somebody else would also share that DNA profile gets more and more remote, the evidence becomes more probative. But even that--and this is what makes it difficult for juries--even that is subject to the restriction that the probative value of the random match probability statistic is limited by the extent to which analysts make errors.
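[That last restriction can be sketched numerically. Assuming illustrative figures--a one-in-a-million random match probability and a 1-in-100 false-positive error rate, neither of which is from the broadcast:]

```python
# If the suspect is innocent, a reported "match" can arise two ways: a
# coincidental true match (the random match probability, RMP) or a
# laboratory/analyst false positive. The combined chance is dominated
# by whichever term is larger -- usually the error rate.
rmp = 1e-6          # random match probability (illustrative)
error_rate = 0.01   # false-positive error rate (illustrative)

# P(reported match | innocent) = rmp + error_rate - rmp * error_rate
p_reported_match_if_innocent = rmp + error_rate - rmp * error_rate
print(f"P(reported match | innocent) = {p_reported_match_if_innocent:.6f}")
```

[Under these assumed numbers, the chance of a reported match for an innocent suspect is about 1 in 100, not 1 in a million: the headline statistic overstates the evidence by roughly four orders of magnitude once the error rate is folded in.]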

FLATOW: Let's go to the phones. Royal in California. Hi. Welcome to SCIENCE FRIDAY.

ROYAL (Caller): Thank you for taking my call. I believe in science, but I want to separate it from the human beings who go before the bench. And I believe in law enforcement. Why, when these individuals know that they're wrong and, you know, mess up someone's life, are they not held accountable? Like, in federal courts, I believe under Title 18, Section 1001, you're not supposed to make false statements.

FLATOW: Are you talking about the forensics testimony, the scientists who testify about the fingerprint matching and things like that?

ROYAL: Oh, I'm just talking about--you know, science is a large field. Science is just an accumulation of knowledge.


ROYAL: But we're talking about forensic evidence, right?

FLATOW: Right.

ROYAL: I'm sure there are so many different types so I'm not stereotyping. You know, it involves everybody.

FLATOW: Right.

ROYAL: When you go before the bench, and you just downright lie, how come you can't be held accountable?

FLATOW: Let me ask...

ROYAL: I believe in law enforcement.

FLATOW: All right, let me--Barry Fisher, if someone gives false testimony about DNA, or fingerprint matching, are they held accountable?

Mr. FISHER: That's perjury. That's falsifying evidence. Those are serious felonies that people can be--if convicted, they can be sent to prison on them.

FLATOW: Jonathan, we're running out of time. Any last words on when we might get testing, this proficiency testing?

Dr. KOEHLER: Well, I think--as I--as we wrote in the paper, we think the forces are conspiring to give us this. We think that as the public becomes more aware of the errors associated with proficiency testing, as the public becomes more aware of the fact that many of the errors in these DNA exoneration cases were due to faulty proficiency test--faulty forensic science testing, as judges get the courage to implement the Daubert standard, we think the funding is going to come. We think the science is going to get put back into forensic science and we actually are taking a much more optimistic view than many of our fellow academics who have looked at the area and have said, `It's a disaster.' We don't think it's a disaster at all. We're very optimistic.

FLATOW: All right, Jonathan Koehler, professor of behavioral decision making, University of Texas at Austin; Barry Fisher, crime laboratory director for the LA County Sheriff's Department, thank you both for taking time to talk with us today.

Mr. FISHER: Thank you.

Dr. KOEHLER: Thank you.


FLATOW: If you'd like to write us, please send your letters to SCIENCE FRIDAY, 55 West 45th Street, Fourth Floor, New York, New York 10036. We also now podcast SCIENCE FRIDAY. So just go to our Web page. It's And you can click on the podcast there. Also you can leave us e-mail and get back editions of SCIENCE FRIDAY. You can also look for educational files and make free teaching curricula out of SCIENCE FRIDAY for our Kids' Connection.

I'm Ira Flatow in New York.

Copyright © 2005 NPR. All rights reserved. Visit our website terms of use and permissions pages at for further information.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.