
After the DNA Wars: A Mopping up Operation*

Published online by Cambridge University Press:  04 July 2014


Extract

The “DNA Wars”, we are told, are over. Two of the key, and at times most effective, participants in the battle, the FBI's Bruce Budowle and the leading early scientific skeptic, Eric Lander, have declared their own private truce and suggested that all are included. The controversial report of the first National Research Council Committee on DNA Evidence has in crucial respects been replaced by a second National Research Council Committee Report on DNA Evidence, which has debuted to far better reviews than its predecessor and is likely to erase the earlier report's influence as courts grapple with DNA evidence. Yet all is not quite as peaceful as it appears; DNA2, despite its virtues, does not adequately resolve all questions about the significance of DNA evidence, and some veterans of the DNA wars are not yet content to lay down their arms. We can better understand why the Lander-Budowle truce and DNA2 may not resolve all conflict if we first understand why disputes over the population genetics and statistical issues raised by the forensic use of DNA identification evidence became so heated that people came to speak of the “DNA Wars”.

Type
Research Article
Copyright
Copyright © Cambridge University Press and The Faculty of Law, The Hebrew University of Jerusalem 1997


Footnotes

*

Francis A. Allen Professor of Law and Professor of Sociology, The University of Michigan.

References

1 Lander, E.S. & Budowle, B., “DNA Fingerprinting Dispute Laid To Rest”, (1994) 371 Nature 735-738.

2 National Research Council Committee on DNA Technology in Forensic Science, DNA Technology in Forensic Science (Washington, DC: National Academy Press, 1992) (hereinafter, DNA1). I should disclose at the outset that I served on the panel that produced this report, but I have also been critical of certain of its aspects. Lempert, R., “DNA, Science and the Law: Two Cheers for the Ceiling Principle”, (1993) 34 Jurimetrics J. 41-57 (hereinafter, Two Cheers).

3 National Research Council Committee on DNA Forensic Science: An Update, The Evaluation of Forensic DNA Evidence (Washington, DC: National Academy Press, 1996) (hereinafter, DNA2).

4 See B.S. Weir's invited editorial, “The Second National Research Council Report on Forensic DNA Evidence”, (1996) 59 Am. J. Hum. Genet. 497-500. Although Weir quarrels with some aspects of the report, he concludes: “[T]he 1996 report will be a valuable resource to the forensic and legal communities. The authors of the 1996 report are to be congratulated on their efforts to make recommendations on the basis of scientific arguments. They have done much to help the proper calculation and presentation of DNA-profile statistics, and the day on which DNA profiles are employed with the same trust as are fingerprints has surely been brought forward by their report” (at 500). An early news report in Science notes that DNA2 is distinguished from DNA1 because its release seems to have set off no fierce debates. It notes that “DNA forensics experts like prosecutor Rockne Harmon of Alameda County, California, have embraced these guidelines as ‘reasonable’” and concludes: “But for the most part forensics experts say, the new NRC rules offer a rationale for practices that the courts are already adopting”. See Marshall, E., “Academy's About-Face on Forensic DNA” (National Research Council Report on DNA Fingerprinting), (1996) 272 Science 803 (No. 5263).

5 The distinguished geneticist Richard Lewontin reportedly criticized the composition of the second NRC Committee and quarreled with its treatment of error; see E. Marshall, supra n. 4. See also Koehler, J., “Why DNA Likelihood Ratios Should Account for Error (Even When A National Research Council Report Says They Should Not)”, unpublished manuscript, December 6, 1996.

6 However, the very fact that scientists came regularly to oppose each other in court or to write articles that might be used against each other in court may explain some of the conflict's intensity.

7 Arguments of this type have come to be known as “the prosecutor's fallacy”. See Thompson, W.C. & Schumann, E.L., “Interpretation of Statistical Evidence in Criminal Trials: The Prosecutor's Fallacy and the Defense Attorney's Fallacy”, (1987) 11 Law Hum. Behav. 167-187. To appreciate why the argument is fallacious, consider a man who is in jail at the time a rape occurs in the community. His DNA may match the DNA extracted from the rapist's semen and only one in a million people may have similar DNA, but if we are sure that the man was in jail at the time of the rape, it is certain that someone other than he committed the crime.
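
To see the fallacy numerically, here is a minimal sketch (all figures are hypothetical and are not drawn from the text): even a one-in-a-million random match probability does not, by itself, make the matching person's responsibility a near certainty; that depends on how many alternative sources there are.

```python
# Hypothetical illustration of the prosecutor's fallacy (transposing the conditional).
# The numbers are invented for exposition; they do not come from the article.

random_match_probability = 1e-6             # P(match | person is NOT the source)
population_of_possible_sources = 1_000_000  # plausible alternative sources, incl. the defendant

# Expected number of innocent people in that population who would match by chance:
expected_innocent_matches = (population_of_possible_sources - 1) * random_match_probability

# If, before the DNA evidence, every member of the population were an equally likely source,
# the probability that a particular matching person is in fact the source is:
p_source_given_match = 1 / (1 + expected_innocent_matches)

print(f"P(match | not source) = {random_match_probability:.0e}")
print(f"P(source | match)     = {p_source_given_match:.2f}")   # about 0.5, not 0.999999
```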

8 For collected examples see Koehler, J., “Error and Exaggeration in the Presentation of DNA Evidence”, (1993) 34 Jurimetrics J. 21; Koehler, J., et al., “The Random Match Probability (RMP) in DNA Evidence: Irrelevant and Prejudicial?”, (1995) 35 Jurimetrics J. 201.

9 Even this ratio, which reflects the question the jury confronts, is not quite right in RFLP testing, for it incorporates binning procedures that treat closely matching DNA samples as having the same evidentiary weight as more distantly matching samples so long as in both cases the evidence and suspect samples fall within a prespecified range — Evett, I.W., et al., “An Efficient Statistical Procedure for Interpreting DNA Single Locus Profiling Data in Crime Cases”, (1992) 32 J. Forensic Sci. Soc. 307-326; Evett, I.W., et al., “An Illustration of the Advantages of Efficient Statistical Methods for RFLP Analysis in Forensic Science”, (1993) 52 Am. J. Hum. Genet. 498-505. On binning see DNA2 at pp. 142-148.

10 See Koehler, J., “On Conveying the Probative Value of DNA Evidence: Frequencies, Likelihood Ratios, and Error Rates”, (1996) 67 Colo. L.R. 859-886, at 861.

11 The recent American scandal about biased or misleading testimony from FBI scientists makes this possibility appear more plausible than I imagined when I first noted it.

12 Officer Fuhrman's denial at trial that he ever used the word “nigger” was apparently a lie as was Officer Vannatter's statement in a preliminary hearing that O.J. was not a suspect when the police went to his house the night of the killing.

13 Arguments 2, 3, and 4 are taken from Thompson, W.C., “DNA Evidence in the O.J. Simpson Trial”, (1996) 67 Colo. L.R. 827. Professor Thompson is a Ph.D. psychologist who has acquired considerable expertise in, and written a number of fine articles on, DNA evidence. He was a member of the O.J. Simpson defense team.

14 I am framing the matter this way rather than, as in likelihood ratio (3), in terms of O.J.'s innocence because when police malfeasance rather than laboratory error is the possible cause of a spurious match, the likelihood of a defendant's innocence may affect the probability of malfeasance. However, a jury may not consider as evidence of guilt the fact that a police officer thought a person was guilty.

15 Thompson, supra n. 13.

16 Reading only the defense theories and supporting evidence, as presented by Thompson, supra n. 13, the set of defense explanations for the blood evidence seems to me sufficiently likely that, if O.J. could somehow prove his innocence (e.g., if proof emerged that the killings occurred while O.J. was on the plane), few would be puzzled about how the mass of blood evidence came to incriminate him.

17 I overstate a bit here to avoid awkward writing. Some random match probabilities presented to the jury were quite high because the analyzed DNA was relatively uninformative. J. Koehler, supra n. 10, at 861, Table 1. In these cases the match probabilities given jurors were, in my view, as large as or larger than the police malfeasance probabilities. The textual discussion fits best the multi-locus matches, especially those based on RFLP technology.

18 The likelihood that sloppy evidence handling by the police caused the DNA match is also probably higher than the probability of intentional police malfeasance, but we know even less about this probability than we do about the likelihood of laboratory error.

19 Peterson, J., Fabricant, E. & Field, K., Crime Laboratory Proficiency Testing Research Programs: Final Report (1978) 251 (Table 89).

20 Lempert, R., “The Honest Scientist's Guide to DNA Evidence”, (1995) 96 Genetica 119-124.

21 Two Cheers, supra n. 2.

22 J. Koehler et al., supra n. 8. The Koehler study used a highly simplified stimulus to elicit probability estimates. It is a typical first step in investigating how jurors behave, but the study should be replicated with a videotaped trial and mock jurors who deliberate in order to confirm Koehler's finding.

23 The extent to which knowledge exists will depend on laboratory practice. A laboratory does not have to tell its DNA analysts that particular samples are test samples, and if a laboratory routinely keeps crime information from its analysts until the analysis is concluded and a report filed, casework-quality samples known by the laboratory to be test samples may adequately substitute for truly blind test samples. Since, as discussed below, there are other good reasons for keeping crime-related information from DNA analysts, this should become standard laboratory practice. A laboratory that did this would still have to resist the temptation to assign its most capable analysts to the test samples, as apparently happened in some early California proficiency tests, and the temptation to leak information that the samples are tests. Since a laboratory's business may be harmed by a single proficiency test error, the temptation to alert analysts to tests may be hard to resist.

24 The testing was conducted by the CACLD, the California Association of Crime Laboratory Directors, and involved three laboratories tested over two years. Another laboratory also made a false positive error and several other reports were questionable. See California Association of Crime Laboratory Directors, Report to the Directors (1988).

25 Roeder, K., “DNA Fingerprinting: A Review of the Controversy”, (1994) 9 Statistical Sci. 222-278 (comments by Balding, Berry, Lempert, Lewontin, Sudbury, Thompson, and Weir, and rejoinder by Roeder).

26 If DNA evidence were presented in a Bayesian framework, following the suggestions of Professor Kaye in his article “DNA Evidence: Probability, Population Genetics and the Courts” ((1993) 7 Harv. J. L. & Tech. 101-172), the problem would be theoretically, and to some degree in practice, alleviated, because juror estimates of prior probabilities of guilt are estimates of base rate guilt probabilities for individuals who, apart from a reported DNA match, are implicated by a certain amount of other evidence. Realistically, these estimates may be quite inaccurate.
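
As a rough sketch of the kind of odds-form Bayesian presentation Kaye's article contemplates (the prior probabilities and the likelihood ratio below are invented for illustration, not taken from his article), the juror's prior odds based on the non-DNA evidence are multiplied by the likelihood ratio associated with the reported match:

```python
# Sketch of an odds-form Bayesian update of the sort Kaye's article contemplates.
# The prior probabilities and the likelihood ratio are hypothetical.

def posterior_probability(prior_probability: float, likelihood_ratio: float) -> float:
    """Convert a prior probability of guilt to a posterior via prior odds x likelihood ratio."""
    prior_odds = prior_probability / (1.0 - prior_probability)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# A juror who, on the other evidence alone, puts the chance the defendant is the source at 10%,
# and who hears a likelihood ratio of 100,000 for the reported match:
print(posterior_probability(0.10, 100_000))     # ~0.99991

# The same likelihood ratio applied to a much weaker prior (1 in 100,000) is far less decisive:
print(posterior_probability(0.00001, 100_000))  # ~0.50
```

As the second call shows, the posterior is very sensitive to the juror's prior estimate, which is the footnote's point about the possible inaccuracy of those estimates.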

27 DNA2, supra n. 3.

28 This is not, in a given case, the same as an “error rate”. This difference, between the rate at which a laboratory has historically reported false positives and the probability it is reporting a false positive in a specific current case, is offered by DNA2 as a justification for avoiding basic issues. DNA2, supra n. 3, at 85-86.

29 DNA2, at 4, 85-87.

30 DNA1, supra n. 2.

31 DNA2, at 185.

32 Even so, they could often be expected to yield random match probabilities of less than one in 100,000 or even one in 1,000,000, figures that are almost certainly smaller than any laboratory's false positive error rate.

33 See Two Cheers, supra n. 2.

34 See, e.g., Devlin, B., et al., “Statistical Evaluation of DNA Fingerprinting: A Critique of the NRC's Report”, (1993) 259 Science 748-749, at 837; Devlin, B., et al., “Comments on the Statistical Aspects of the NRC's Report on DNA Typing”, (1994) 39 J. Forensic Sci. 28-40; Weir, B.S., “Forensic Population Genetics and the National Research Council (NRC)”, (1993) 52 Am. J. Hum. Genet. 437-440; Cohen, J., “The Ceiling Principle Is Not Always Conservative in Assigning Genotype Frequencies for Forensic DNA Testing”, (1992) 51 Am. J. Hum. Genet. 1165-1168.

35 Because many courts applied the Frye test, interpreted so as to require a scientific consensus before novel scientific evidence could be introduced, DNA1's recommendation for the application of an interim ceiling principle carried great weight. Even though scientists involved in the forensic use of DNA evidence did not believe that applying the ceiling principle gave a valid random match probability, all could agree that the random match probability was no greater than the result that ceiling principle calculations yielded. Thus there was a scientific consensus that ceiling principle figures were a conservative upper bound but, because of the report, an apparent lack of scientific consensus about the validity of the probabilities yielded by product rule calculations.

36 DNA2, at 49.

37 Thompson, supra n. 13, at n. 35.

38 Jackson, R.L. & Savage, D.G., “FBI Warns of Possible Flaws in Lab Evidence; Courts, Prosecutors, Defense Counsel Nationwide are Told of Potential Problems Due to Alleged Misconduct”, Los Angeles Times, Jan. 31, 1997, Part A, p. 1.

39 DNA2, at 86.

40 In DNA2's example, the true error rate of each of two laboratories is .10%. A 95% confidence interval yields an estimated error probability of .30% for the laboratory that makes no errors in 1,000 proficiency test trials and .47% for the laboratory that makes one error in 1,000 proficiency tests. Either of these figures is likely to be far closer to the true probative value of a reported match than the extremely low random match probabilities that the jurors would otherwise hear.
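
These figures can be reproduced with a one-sided upper 95% exact binomial (Clopper-Pearson) bound; that this is the precise procedure DNA2 used is my assumption, but the numbers agree:

```python
# Upper 95% confidence bounds on a false positive rate inferred from proficiency test results.
# Reproduces the .30% and .47% figures in the text; treating them as one-sided Clopper-Pearson
# bounds is my assumption about how such figures would be computed.
from scipy.stats import beta

def upper_95_bound(errors: int, trials: int) -> float:
    """One-sided upper 95% Clopper-Pearson bound on the error probability."""
    return beta.ppf(0.95, errors + 1, trials - errors)

print(f"{upper_95_bound(0, 1000):.2%}")  # ~0.30% for a lab with 0 errors in 1,000 tests
print(f"{upper_95_bound(1, 1000):.2%}")  # ~0.47% for a lab with 1 error in 1,000 tests
```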

41 DNA2, at 87.

42 This is because the larger the match window, the greater the proportion of the reference population that will have alleles matching those in the tested DNA.

43 “Race” as used by DNA analysts is more of a sociological than a biological concept. Most races DNA analysts recognize are socially constructed gross categorizations like black and white.

44 Lempert, R., “The Suspect Population and DNA Identification”, (1993) 34 Jurimetrics J. 17.

45 See Equations 4.8 and 4.9 in DNA2, at 113.
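
For readers without the report at hand, the following sketch sets out the standard full-sibling single-locus genotype-match formulas usually given in this setting; that these correspond exactly to DNA2's Equations 4.8 and 4.9 is my assumption.

```python
# Probability that a full sibling of the profiled person has the same single-locus genotype,
# under the standard identity-by-descent argument (siblings share 0, 1, or 2 alleles IBD with
# probabilities 1/4, 1/2, 1/4). Whether these are exactly DNA2's Equations 4.8 and 4.9 is an
# assumption on my part.

def sibling_match_heterozygote(p: float, q: float) -> float:
    """Locus genotype A_iA_j (i != j) with allele frequencies p and q."""
    return (1 + p + q + 2 * p * q) / 4

def sibling_match_homozygote(p: float) -> float:
    """Locus genotype A_iA_i with allele frequency p."""
    return (1 + p) ** 2 / 4

# With a 5% and a 10% allele, a sibling matches at the locus about 29% of the time,
# vastly more often than the 2pq = 1% expected for an unrelated person:
print(sibling_match_heterozygote(0.05, 0.10))  # 0.29
print(2 * 0.05 * 0.10)                         # 0.01
```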

46 The probability is less than 50% because of the chance of false positive error.

47 Thompson & Schumann, supra n. 7.

48 DNA2, at 6, 113.

49 Since the evidence the state is required to produce relates to a preliminary question of admissibility — whether the state is required to limit its statistical evidence to evidence showing the likelihood that at least one named relative had matching DNA — it is the judge rather than the jury who would evaluate the state's evidence.

50 DNA1, at 124.

51 DNA2, at 161.

52 In certain rare circumstances, the quantity of DNA may counsel against DNA1's recommendation, but with the development of PCR technology this is unlikely to be a substantial problem. Moreover, the original identification can and should be made using the minimum number of alleles required to select someone uniquely from the data base. Finally, not all information from the selection match is lost, since a jury should be able to appreciate the additional probative value that accrues when a person is selected on the basis of a match on some alleles and the selection is confirmed by a match on other alleles.

53 Two interesting issues I shall ignore are the implications for this analysis of the fact that many DNA data banks consist only of convicted criminals or convicted sex criminals and how specifically incriminating evidence discovered after the DNA identification should qualify my argument. In both situations the specific details of the crime and the additional information are likely to affect the analysis.

54 The frequency count may be based on floating bins, which involve searching a data base and counting all alleles whose lengths vary by no more than a certain amount from the size of the evidence allele examined, or, more commonly, on fixed bins, which group data base alleles in advance into bins based on size and determine the proportion of the data base population with alleles in each bin. DNA2 endorses floating bin based counts where their use is feasible. DNA2, at 161.
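
A minimal sketch of the two counting approaches just described; the data base, the ±2.5% window, and the bin boundaries are hypothetical values chosen only for illustration.

```python
# Sketch of floating-bin versus fixed-bin frequency counts for a single RFLP band.
# The data base, the window, and the bin boundaries are all hypothetical.

database_bp = [842, 910, 955, 1003, 1010, 1047, 1120, 1388, 1402, 1710]  # fragment sizes (bp)

def floating_bin_frequency(evidence_bp: float, window: float = 0.025) -> float:
    """Fraction of data base alleles within +/- window of the evidence allele's size."""
    hits = [x for x in database_bp if abs(x - evidence_bp) <= window * evidence_bp]
    return len(hits) / len(database_bp)

def fixed_bin_frequency(evidence_bp: float, bin_edges=(0, 900, 1000, 1100, 1500, 10_000)) -> float:
    """Fraction of data base alleles in the pre-set bin containing the evidence allele."""
    for lo, hi in zip(bin_edges, bin_edges[1:]):
        if lo <= evidence_bp < hi:
            return sum(lo <= x < hi for x in database_bp) / len(database_bp)
    return 0.0

print(floating_bin_frequency(1005))  # counts 1003 and 1010 -> 0.2
print(fixed_bin_frequency(1005))     # counts 1003, 1010, 1047 (1000-1100 bin) -> 0.3
```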

55 The match window is the range within which a DNA analyst will call evidence and suspect DNA samples the same. Thus, if the match window is ±2.5%, any time the length of the suspect DNA, measured in base pairs, is within 2.5% of the length of the evidence DNA, a match will be called.
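
A sketch of match calling under such a window, applied band by band across a profile; the ±2.5% window and the fragment lengths are hypothetical.

```python
# Sketch of match calling under a +/-2.5% match window, applied band by band.
# The window size and the fragment lengths below are hypothetical.

def bands_match(evidence_bp: float, suspect_bp: float, window: float = 0.025) -> bool:
    """Call two measured fragment lengths the same if they differ by <= window of the evidence size."""
    return abs(evidence_bp - suspect_bp) <= window * evidence_bp

def profiles_match(evidence_profile, suspect_profile, window: float = 0.025) -> bool:
    """Declare a match only if every corresponding band pair falls within the window."""
    return all(bands_match(e, s, window) for e, s in zip(evidence_profile, suspect_profile))

print(bands_match(1000, 1020))                     # True: 2.0% apart
print(profiles_match([1000, 2200], [1020, 2350]))  # False: second pair is ~6.8% apart
```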

56 See, e.g., Evett et al. (1992), supra n. 9.

57 For a detailed treatment of this issue, which emphasizes the subjectivity that can exist, see Thompson, W.C. & Ford, S., “The Meaning of a Match: Sources of Ambiguity in the Interpretation of DNA Prints”, in Farley, M. & Harrington, J. (eds.), Forensic DNA Technology (Chelsea, MI: Lewis, 1991).

58 One might argue that even this figure is not accurate. The jury would have learned of a match had the bands in one lane been shifted by any of various amounts, provided all such displacement might plausibly be attributed to band shifting. So the measure of the likelihood that the jury would be told that the evidence and suspect DNA would match if someone other than the defendant had left the evidence DNA should in theory take account of the likelihood that a person might by chance possess any of the possible combinations of alleles that would lead to a match report. Moreover, this is the figure that should be given by a match-binning analyst even if the suspect and evidence DNA are perfectly aligned, for a match would be reported to a jury if either the perfect match or any of a number of patterns of constant displacement were found. However, the chance of other perfect alignments that would have been called matches may be so small that as a practical matter they can be ignored. What is crucial is the match window the analyst (implicitly) applies in spotting band shifting. If, allowing for allele size effects, the analyst requires the displacements of each allele pair to be almost identical, the problem I have identified may in practice be de minimis. If, however, displacement judgments are made by eye and considerable variation is allowed, then DNA statistics should take account of the different ways matches might have been called.
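
To make the point concrete, the following sketch (my own illustration, not a procedure drawn from the sources cited here) bounds the chance that a random person would have been called a match by summing over every constant displacement the analyst would have accepted.

```python
# Rough upper bound on the chance a random profile would have been *called* a match once
# band shifting is allowed. The candidate displacements, window, and per-band frequencies
# are hypothetical; this is my own illustration of the footnote's point, not a published method.

def random_match_probability_for_shift(evidence_bp, shift_fraction, bin_frequency, window=0.025):
    """Chance a random profile matches the evidence profile displaced by a constant fraction.

    bin_frequency(size_bp, window) should return the population frequency of alleles falling
    within +/- window of size_bp (e.g., the floating-bin count sketched earlier).
    """
    product = 1.0
    for band in evidence_bp:
        product *= bin_frequency(band * (1 + shift_fraction), window)
    return product

def called_match_probability(evidence_bp, candidate_shifts, bin_frequency, window=0.025):
    """Upper bound: sum over every displacement the analyst would have accepted."""
    return sum(random_match_probability_for_shift(evidence_bp, s, bin_frequency, window)
               for s in candidate_shifts)

# With a flat 5% bin frequency for every (possibly shifted) band, four bands, and an analyst
# willing to accept no shift or a 1% shift in either direction, the bound triples:
flat = lambda size_bp, window: 0.05
print(called_match_probability([900, 1200, 1600, 2100], [0.0, 0.01, -0.01], flat))  # ~1.9e-05
```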

59 This is a conservative suggestion, since it might not be all seven-band matches that the analyst would call matches but only those in which the eighth band in the evidence sample was faint or otherwise seemed attributable to something other than the evidence DNA. However, in a setting where, as the recent FBI lab scandals indicate, forensic scientists tend to make discretionary calls so as to favor the state, statistical conservatism in interpreting those calls appears justified.

60 For ease of exposition I am assuming here, as I have throughout this paper, that the evidence DNA is DNA-containing evidence left by a criminal at a crime. It is, however, also common to finger criminals by showing that blood or other DNA-containing evidence from a crime victim is present on the criminal's clothes or on other possessions or in places the criminal has special access to. The points I make apply regardless of whether the evidence DNA is attributable to the criminal or the victim.

61 Usually, appropriate populations are considered to be populations of people who share the same racial heritage, in its broadest sense (e.g., black, white, southwest Hispanic, etc.), as the person suspected of leaving the evidence sample. The control made for race is often not scientifically required, but it is usually benign, as the defendant is more likely to be helped than hurt by the control. R. Lempert, supra n. 44.

62 While I have identified the core questions in each category, there are other questions that are somewhat differently treated depending on one's perspective. One can, for example, treat questions pertaining to the likelihood that a relative of the defendant will have DNA matching the evidence DNA as a science question and give a scientific answer to it. (See Evett, I.W., “Evaluating DNA Profiles in the Case Where the Defense is ‘It Was My Brother’”, (1992) 30 J. Forensic Sci. Soc. 5-14; DNA2, at 113.) But there remains the forensic science question, which has no determinate answer, for it depends on how plausible it is to consider a defendant's relatives as suspects. Often the indeterminacy of the forensic science question means the science question is never asked and the possibility that a relative left the evidence DNA never affects what the jury learns of the evidence's probative value. This is so even if the chance a relative left the evidence DNA is substantial relative to the a priori chance the defendant left the evidence DNA.

63 The NRC Committee that wrote DNA1 considered ethical issues relating to DNA evidence, the construction of DNA data banks, the future monitoring of the technology and other issues that the Committee that wrote DNA2 saw as beyond its charge. The latter committee saw its role as limited to determining how the uncertainty of laboratory findings can be reduced, how the risk of error can be minimized, how to take account of population substructure (including relatives) and what statistical theory and empirical observations allow us to say about the probability of DNA matches. The second committee also discussed some legal issues, focusing particularly on the reception by the courts of the earlier NRC report.

64 The Committee made a misjudgment here, for even by the Committee's recommended calculation procedures, random match probabilities are often several orders of magnitude less than the likely chance of laboratory error.

65 The Committee writes, “The number of loci and the degree of heterozygosity per locus that are needed to meet the criteria illustrated above [for uniqueness] do not seem beyond the reach of forensic science, so unique typing (except for identical twins) may not be far off”. DNA2, at 138. This remark is scientifically justified in the sense that people, except for identical twins, have unique DNA profiles, and science is now at a point where enough information about an individual's profile can be extracted that there is good ground for thinking no one else would be the same on even the small fraction of the whole profile we can observe. But chances of error mean one cannot make the claim that seems to follow naturally from the uniqueness claim; namely, that DNA found at a crime scene must be the defendant's because tests show it to match the defendant's DNA and we can be confident that no one else has DNA that would match. It is, however, possible that uniqueness claims will not be as misleading as low random match probabilities. Juries may be less confused about how to use error probabilities to deflate the likelihood of a true identification when the DNA evidence is said to be unique than when it is claimed that DNA like the defendant's has only a one-in-some-very-large-number chance of characterizing a random person. Possibilities like this are one reason why it is important to implement DNA2's wise recommendation that behavioral research be carried out to identify conditions that will cause fact finders to misinterpret DNA evidence. (DNA2, at 42) Uniqueness claims might also be the spur courts need to make DNA experts cease proclaiming, as some have, that false positive errors are impossible. (J. Koehler, supra n. 8)

66 DNA2, at 123.

67 Even in the error-filled Castro case (People v. Castro, 545 N.Y.S.2d 985 (Sup. Ct. 1989)), which revealed serious problems in the way DNA tests were then conducted, it appears that the DNA profiling was in fact accurate and the defendant was guilty.