
Forensic Pseudoscience

The Unheralded Crisis of Criminal Justice

November 16, 2015



This past April, the FBI made an admission that was nothing short of catastrophic for the field of forensic science. In an unprecedented display of repentance, the Bureau announced that, for years, the hair analysis testimony it had used to investigate criminal suspects was severely and hopelessly flawed.

The Innocence Project’s M. Chris Fabricant and legal scholar Tucker Carrington classify the kind of hair analysis the FBI performs as “magic,” and it is not hard to see why. By the Bureau’s own account, its hair analysis investigations were unscientific, and the evidence presented at trial unreliable. In more than 95 percent of cases, analysts overstated their conclusions in a way that favored prosecutors. The false testimony occurred in hundreds of trials, including thirty-two death penalty cases. Not only that, but the FBI also acknowledged it had “trained hundreds of state hair examiners in annual two-week training courses,” implying that countless state convictions had also been procured using consistently defective techniques.

But questions of forensic science’s reliability go well beyond hair analysis, and the FBI’s blunders aren’t the only reason to wonder how often fantasy passes for science in courtrooms. Recent years have seen a wave of scandal, particularly in drug testing laboratories. In 2013 a Massachusetts drug lab technician pled guilty to falsifying tests affecting up to 40,000 convictions. Before that, at least nine other states had produced lab scandals. The crime lab in Detroit was so riddled with malpractice that in 2008 the city shut it down. During a 2014 trial in Delaware, a state trooper on the witness stand opened an evidence envelope from the drug lab supposedly containing sixty-four blue OxyContin pills, only to find thirteen pink blood-pressure pills. That embarrassing mishap led to a full investigation of the lab, which found evidence completely unsecured and subject to frequent tampering.

There have also been scores of individual cases in which forensic science failures have led to wrongful convictions, the deficiencies usually unearthed by the Innocence Project and similar organizations. In North Carolina, Greg Taylor was incarcerated for nearly seventeen years thanks to an analyst who testified that the blood of a murder victim was in the bed of his truck. But later investigation failed to confirm that the substance was blood, or even of human origin. Forensics experts have used “jean pattern” analysis to testify that only a certain brand of blue jeans could leave their distinctive mark on a truck, as occurred in the trial of New Yorker Steven Barnes, who spent twenty years in prison for a rape and murder he didn’t commit.

Some wrongful convictions can never be righted—for example, that of Cameron Todd Willingham, who was convicted by a Texas court of intentionally setting the fire that killed his three young daughters. After the state executed Willingham, an investigative team at the Texas Forensic Science Commission concluded that the arson science used to convict him was worthless, and independent fire experts condemned the investigation as a travesty. But those findings came too late to do Willingham any good.

The mounting horror stories, and the extent of corruption and dysfunction, have created a moment of crisis in forensic science. But the real question is not just how serious the problems are, but whether it is even possible to fix them. There are reasons to suspect that the trouble with forensics is built into its foundation—that, indeed, forensics can never attain reliable scientific status.

• • •

Some of the basic problems of forensic science are hinted at in the term itself. The word forensics refers to the Roman forum; forensics is the “science of the forum,” oriented toward gathering evidence for legal proceedings. This makes forensics unusual among the sciences, since it serves a particular institutional objective: the prosecution of criminals. Forensic science works when prosecutions are successful and fails when they are not.

That purpose naturally gives rise to a tension between science’s aspiration to neutral, open-ended inquiry on the one side and the exigencies of prosecution on the other. Likewise, while true understanding is predicated on doubt and revision, the forum must reach a definitive result. The scientist’s tentativeness is at odds with a judicial process built on up-or-down verdicts, a point the Supreme Court has emphasized in order to justify allowing judges wide deference as the gatekeepers of evidence.

It shouldn’t be controversial to point out that forensic science is not really a science to begin with, not in the sense of disciplines such as biology and physics. Forensic science covers whatever techniques produce physical evidence for use in law. These may be derived from various actual scientific disciplines, including medicine, chemistry, psychology, and others, but they are linked less by their inherent similarity than by their usefulness during investigation and prosecution. Law enforcement agencies themselves have invented a number of the techniques, including blood-spatter and bite-mark analysis.

Law is a poor vehicle for the interpretation of scientific results.

Much forensic knowledge has thus developed by means unlike that of ordinary scientific research. Comparatively few major universities offer programs in forensic science; joint training in forensic sciences and policing is common. Forensic laboratories themselves are a disparate patchwork of public and private entities, with varying degrees of affiliation with police and prosecutors. The accountability of some subfields such as “forensic podiatry” (the study of footprints, gait, and other foot-related evidence) can be dubious, with judges taking the place of accreditation boards. In such a decentralized system, it can be difficult to keep track not only of whether forensic investigation is working well but also of how it even works in the first place.

The close association between forensics and law enforcement is particularly controversial. According to Frederic Whitehurst, a chemist and former FBI investigator, forensic scientists can “run into a sledgehammer” when they contradict prosecutors’ theories. “What we seem to know in the world of science is that there are some real problems in the world of forensic science,” Whitehurst told a reporter from the journal Nature. “We’d rather work on something cleaner.” It is easy to see why a chemist might consider forensics “unclean”; criminal investigations regularly flout scientific safeguards against bias. Analysts often know the identity of the suspect, potentially biasing results in favor of police’s suspicions. Even more concerning, some crime labs are paid not by the case but by the conviction, creating a strong incentive to produce incriminating evidence.

Whitehurst’s comments echoed a major report in 2009 by the National Academy of Sciences (NAS), which painted a damning portrait of forensic practices. “Many forensic tests—such as those used to infer the source of tool marks or bite marks—have never been exposed to stringent scientific scrutiny,” the report concluded.

One serious problem with those tests is that they allow for high levels of subjectivity. The NAS authors wrote that fingerprint analysis, for example, is “deliberately” left to human interpretation, so that “the outcome of a friction ridge analysis is not necessarily repeatable from examiner to examiner.” I saw this up close while working at the public defender’s office in New Orleans. Explaining his procedure for determining a match, a fingerprint examiner said in court that he would look at one, look at the other, and see if they match. When asked how he knew the two prints definitely matched, the examiner merely repeated himself. That very logic leads the FBI to claim fingerprint matches are “100 percent accurate.” Of course they are, if the question of a match is settled entirely by the examiner’s opinion. Without any external standard against which to check the results, the examiner can never be wrong.

The NAS faulted a number of methods for this kind of shortcoming. Tool-mark and firearm analysis, for example, suffer the same weaknesses as fingerprint evidence, in that they depend strongly on unverified individual judgment. The report ultimately reached the forceful determination:

With the exception of nuclear DNA analysis . . . no forensic method has been rigorously shown to have the capacity to consistently, and with a high degree of certainty, demonstrate a connection between evidence and a specific individual or source.

That sentence should give any honest forensic examiner some sleepless nights.

But what about DNA? The report affirms that DNA maintains its place of integrity, the pinnacle of sound forensic science. It is not hard to see why DNA has long been the gold standard, deployed to convict and to exonerate the unfortunate defendants victimized by faultier methods of identification. DNA also has the advantage of producing falsifiable results; one can actually prove an interpretation incorrect, in contrast to the somewhat postmodern, eye-of-the-beholder sciences such as tool-mark and fingerprint analysis.

Yet forensic science involves both knowledge and practice, and while the science behind DNA is far from the prosecutorial voodoo of jeans and bite marks, its analysis must be conducted within a similar institutional framework. Analysts themselves can be fallible and inept; the risk of corruption and incompetence is no less pronounced simply because the biology has been peer-reviewed.

Such risk isn’t merely theoretical. While Florida exoneree Chad Heins had DNA to thank for the overturning of his conviction, DNA was also responsible for the conviction itself, with an analyst giving faulty testimony about DNA found at the site where Heins’s sister-in-law was murdered. Josiah Sutton was wrongfully convicted after a Houston analyst identified DNA found on a rape victim as an “exact match” for Sutton, even though one in sixteen black men shared the DNA profile in question. Earlier this year in San Francisco, thousands of convictions were thrown into doubt after a DNA technician and her supervisor were found to have failed a proficiency exam. In preparing evidence for a trial, the two had also covered up missing data and lied about the completeness of a genetic profile, despite having been disciplined internally for previous faulty DNA analyses.
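The Sutton case turns on simple arithmetic that the jury never heard: a profile shared by one in sixteen men is anything but an “exact match” once you scale that frequency to a city. A minimal sketch (the 1-in-16 frequency comes from the case; the population sizes are hypothetical illustrations):

```python
# Expected number of unrelated people sharing a DNA profile.
# The 1-in-16 frequency is the figure reported in the Sutton case;
# the population sizes below are hypothetical.

def expected_matches(profile_frequency: float, population: int) -> float:
    """Expected count of individuals in `population` sharing the profile."""
    return profile_frequency * population

freq = 1 / 16

for pool in (10_000, 100_000, 1_000_000):
    print(f"population {pool:>9,}: ~{expected_matches(freq, pool):,.0f} expected matches")
```

Even in a modest suspect pool, hundreds of men would be expected to share the profile, which is why presenting it as an “exact match” misstates the weight of the evidence.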

DNA failures can border on the absurd, such as an incident in which German police tracked down a suspect whose DNA was mysteriously showing up every time they swabbed a crime scene, from murders to petty thefts. But instead of nabbing a criminal mastermind, investigators had stumbled on a woman who worked at a cotton swab factory that supplied the police. That case may seem comical, but a 2012 error in New York surely doesn’t. In July of that year, police announced that DNA taken off a chain used by Occupy Wall Street protesters to open a subway gate matched that found at the scene of an unsolved 2004 murder. The announcement was instantly followed by blaring news headlines about killer Occupiers. But officials later recanted, explaining that the match was a result of contamination by a lab technician who had touched both the chain and a piece of evidence from the 2004 crime. Yet the newspapers had already linked the words “Occupy” and “murder.” The episode demonstrates how the consensus surrounding DNA’s infallibility could plausibly enable government curtailment of dissent. Given the NYPD’s none-too-friendly disposition toward the Occupiers, one might wonder what motivated it to run DNA tests on evidence from protest sites in the first place.

The high degree of confidence placed in DNA is especially worrying because successful DNA analysis requires human institutional processes to function smoothly and without mistakes. The four authors of Truth Machine: The Contentious History of DNA Fingerprinting (2008) describe how DNA actually comes to be used in criminal proceedings: as “an extended, indefinitely complicated series of fallible practices through which evidence is collected, transported, analyzed, and quantified.” There are endless ways in which analysts can bungle their task. Furthermore, in the courtroom itself, DNA evidence must be contextualized and given significance. Even with well-conducted testing, poor explanation to a jury can enable a situation in which, as the geneticist Charalambos Kyriacou says, “Human error and misinterpretation could render the results meaningless.” A cautious approach is therefore valuable, even where DNA is concerned.

• • •

It would be unreasonable to expect any human endeavor to be completely without error, and one might wonder just how systemic the problems of forensic science truly are. The claim of crisis is far from universally shared. Forensic scientist John Collins calls this “a fabricated narrative constructed by frustrated defense attorneys, grant-seeking academics, and justice reform activists who’ve gone largely unchallenged.” Those who defend current practices say that the scandals are exceptions, that the vast majority of forensic scientists are diligent practitioners whose findings stand up under scrutiny. For every person exonerated, hundreds of convictions remain untouched.

But this defense actually points to one of the key problems with evaluating forensic science. The measures of its success are institutional: we see the failures of forensics when judges overturn verdicts or when labs contradict themselves. There is a circularity in the innocence cases, where the courts’ ability to evaluate forensic science is necessary to correct problems caused by the courts’ inability to evaluate forensic science. At no point, even with rigorous judicial review, does the scientific method come into play. The problem is therefore not that forensic science is wrong, but that it is hard to know when it is right.

Breaking the cycle of uncertainty has therefore been a key part of reform proposals. The NAS report recommended numerous steps to introduce objectivity and accountability, including the adoption of consistent standards in every subfield and the creation of a unified federal oversight entity. One can hear in the lengthy recommendations of the NAS committee members pleas for the introduction of basic quality control.

But so far changes have been sluggish. In fact, in some labs quality may be declining as state budget cuts have reduced resources available for forensics. In Congress, the Forensic Science and Standards Act, which would massively overhaul the field and introduce unprecedented scrutiny and coordination, has repeatedly stalled. Last year, in keeping with the NAS’s recommendations, the Department of Justice and the National Institute of Standards and Technology finally put together a forensic science commission to oversee the field and set protocols. But the commission is still in its infancy, and its effects remain to be seen.

The Supreme Court attempted to elucidate some standards in Daubert v. Merrell Dow Pharmaceuticals (1993) and two subsequent cases, which govern the admissibility of scientific evidence. The Court held that expert evidence should rest on methods that can be empirically tested, have known error rates, and are generally accepted in the relevant field. But even as the Court ostensibly limited testimony to that which is sound and reliable, it undercut the ruling’s effectiveness by offering lower courts a high level of flexibility in their decision-making. Ironically, that hands-off approach may have helped to create the very nightmare that the Daubert court feared, in which “befuddled juries are confounded by absurd and irrational pseudoscientific assertions.”

Nobody can state with certainty the degree of pseudoscience that clogs the American courts. But even if forensic science largely faces a “bad apples” problem, it may still be in bad shape. As legal scholar and forensic science specialist Daniel Medwed notes, “An absence of careful oversight can allow rogue scientists to flourish.” Even if there is no reason to doubt forensic podiatry itself, there might still be good reason to doubt forensic podiatrists. The localized, disparate, and unmonitored nature of so much forensic practice makes for massive nationwide inconsistency.

In fact, so long as forensic science remains forensic—i.e., conducted to meet the demands of the forum rather than those of the scientific method—it is hard to see how it can warrant confidence. For countless reasons, law is a poor vehicle for the interpretation of scientific results. That people’s lives must depend on the interpretive decisions of judges and juries is in some respects unsettling to begin with. The chaotic state of forensic science—in theory and practice—and the possibility that unsupported flimflam is passing itself off as fact make the everyday criminal justice process even more alarming.

Thus even as we try various fixes, rooting out bad apples and introducing oversight, a systemic and elementary problem remains: a science of the forum can never be science at all.

Comments

The problem is even more pervasive than the article documents: science and statistics involve different mindsets from "justice" and "law". The incentives in law are all screwed up - lawyers and cops largely write their own rules, which of course favor them, not truth or fairness or societal benefit. The whole system is adversarial, and focused on "winning", not on finding out what happened, and why. Lawyers etc. get no training in statistics, which is the science of drawing reliable conclusions from evidence, and should be central to our "justice" system. But law evolved earlier and independently of science, and never caught up.

It appears that Mr Robinson should spend his time finishing his Sociology PhD before he tries to write broad brushed articles about Sociology. And clearly he should never be writing articles about science.  The 2009 report he quotes is nothing new, nor are his far left leaning opinions of it. But what is scary is that he believes he, again, a sociology student, has the qualifications to say Science in the Courtroom is Bad because He says it is Not Science.  What are his scientific qualifications? Where is his physics degree that shows he understands the fluid mechanics behind blood splatter to be qualified to harpoon it? Where is his geology or chemistry degree to understand how trace elements can link, or unlink, recovered evidence with a crime scene? Just because he doesn't understand it does not make it pseudoscience.  Very disappointing article. It links good facts with An Agenda that is far too limited by his meager background in science but bolstered only by his desire to make a name before he's even finished his research.  

The author is not arguing that "Science in the Courtroom is Bad because He says it is Not Science," as you claim. He is arguing that science in the courtroom is bad because it has repeatedly been PROVEN to lead to wrongful convictions, and because there is a fundamental misalignment between the goals of science and law. Further, criticizing the author because he is a sociology PhD student (who, if you do your research, also holds a JD from Yale and has worked in a public defender's office), and does not hold doctorates in physics, geology, and chemistry, is ridiculous. Just as is the case with all science - natural or social - the author relies upon the work of others, and focuses his commentary on the law and society, which are his areas of expertise. 

Your response, especially given your handle, is concerning if you in fact are a PhD. I would have thought that by now you had received some exposure to the dangers of science being used as a tool whose workings the general public does not understand. Whether it was physics gurus building derivatives for bankers who never took Calc 2, or scientists building weapons for men who did not "read the fine print," we have ample incidents across numerous fields to support the writer's ultimate point.
Too often science is presented as an absolute, and even worse, in law it is applied to making very serious decisions.
The pseudoscience label here is assigned because there is no absolute, hard rule across the entire community for how these items are carried out, studied, interpreted, or decided.
Statistics, while it has many rules and systems for collecting and inputting data, sees these constraints lessen greatly at the interpretation stage, and its practitioners honestly should do their best not to speak in absolutes about future outcomes.
I am just beginning my terminal degree in econometrics, so maybe my response is also biased.

This is one of the clearest examples of an ad hominem logical fallacy I've ever seen. You have no rebuttals for the facts listed by the author - all based on work done by qualified experts in their field - so you attack him personally, accusing him of having an agenda. Your comment is garbage. By the way, I really doubt an actual PhD would randomly capitalize words throughout their writing the way you did.

In response to the NAS report, the latent print discipline asked a respected team of statisticians to conduct a study on the accuracy of latent print decisions. Published in the Proceedings of the National Academy of Sciences, the study shows a false positive rate of 0.1%. With verification (which is standard for the field), no erroneous identifications would have been made in over 16,000 test comparisons. (All tests had known ground truth.)
http://www.pnas.org/content/108/19/7733.full
The NAS report was written by a panel of non-experts that had a very limited understanding of the issues. The hard data currently available confirms the vast majority of forensic analysis as valid.
The author has clearly shown bias by ignoring scientific data and quoting only opinions - sadly, the very thing that he criticizes about forensics.

As a criminalist in one of California's forensic laboratories, I'm curious which labs across the country pay their technicians or scientists based on convictions. I've seen this claim in various articles before and they all fail to provide the specific laboratories.
I'm wondering if the claim was originally based on unfounded and unsupported evidence. Now every journalist attacking forensic science just pulls it into their article without even knowing its basis.

I have often wondered the same thing. I read the paper linked by another responder, and it sounds like some jurisdictions assess lab fees in addition to court fees upon conviction. The leap being made is that these lab fees pay salaries and thus forensic scientists are being paid based on convictions. The money seems very removed from the scientists, and they aren't getting any extra compensation (or having compensation withheld) strictly based on convictions. So it might not be a completely false statement, but it is definitely misleading.

Do read the report and see the bios of panel members and their expertise. The committee included forensic scientists, forensic pathologists, chemistry and statistics professors, a federal judge, a defense attorney, and others. They were particularly qualified to assess presentations by the numerous and various forensic disciplines and draw their own conclusions. The committee's major criticism was the absence of scientific underpinnings for many of the disciplines, largely due to the disinterest of academic circles in undertaking the research and testing needed to document the disciplines' validity. Some disciplines lend themselves more easily to statistical validation than others, but all need to try to support the conclusions they draw from training and experience with research and experiment.

There are some isolated cases of mistakes or fraud. But these are only a fraction of what you find if you compare with other services.
I am not justifying forensic science, which needs to be improved in reliability, quality, and effectiveness, but just citing the overall scenario in science the world over.
We should continue to deliver the best scientific aid to society as a societal mission.

Has there ever been any scientific study of the accepted premise that no two fingerprints are alike? In considering this, due weight would be given to any human interpretation in determining likeness. Fingerprinting, like DNA analysis, seems to be beyond criticism, i.e., treated as an infallible test, conclusive as to guilt.
