Using Imaging to Identify Deceit: Scientific and Ethical Questions

Chapter 5: Neural Lie Detection in Courts

Authors
Emilio Bizzi, Steven E. Hyman, Marcus E. Raichle, Nancy Kanwisher, Elizabeth Anya Phelps, Stephen J. Morse, Walter Sinnott-Armstrong, Jed S. Rakoff, and Henry T. Greely

Walter Sinnott-Armstrong

Getting scientists and lawyers to communicate with each other is not easy. Getting them to talk is easy, but communication requires mutual understanding—and that is a challenge.

LEGAL LINES ON SCIENTIFIC CONTINUA

Scientists and lawyers live in different cultures with different goals. Courts and lawyers aim at decisions, so they thrive on dichotomies. They need to determine whether defendants are guilty or not, liable or not, competent or not, adult or not, insane or not, and so on. Many legal standards implicitly recognize continua, such as when prediction standards for various forms of civil and criminal commitment speak of what is “highly likely” or “substantially likely,” but in the end courts still need to decide whether the probability is or is not high enough for a certain kind of commitment. The legal system, thus, depends on on-off switches. This generalization holds for courts, and much of the legal world revolves around court decisions.

Nature does not work that way. Scientists discover continuous probabilities on multiple dimensions.1 An oculist, for example, could find that a patient is able to discriminate some colors but not others to varying levels of accuracy in various circumstances. The same patient might be somewhat better than average at seeing objects far away in bright light but somewhat worse than average at detecting details nearby or in dim light. Given so many variations in vision, if a precise scientist were asked, “Is this particular person’s vision good?” he or she could respond only with “It’s good to this extent in these ways.”

The legal system then needs to determine whether this patient’s vision is good enough for a license to drive. That is a policy question. To answer it, lawmakers need to determine whether society can live with the number of accidents that are likely to occur if people with that level of vision get driver’s licenses. The answer can be different for licenses to drive a car or a school bus or to pilot a plane, but in all such cases the law needs to draw lines on the continua that scientists discover.

The story remains the same for mental illness. Modern psychiatrists find large clusters of symptoms that vary continuously along four main dimensions.2 Individual patients are more or less likely to engage in various kinds of behaviors within varying times in varying circumstances. For therapeutic purposes, psychiatrists need to locate each client on the distinct dimensions, but they do not need to label any client simply as insane or not.

Psychiatrists also need not use the terms “sane” and “insane” when they testify in trials involving an insanity defense. One example among many is the Model Penal Code test, which holds that a defendant can be found not guilty by reason of insanity if he lacks substantial capacity to appreciate the wrongfulness of his conduct or to conform his conduct to the requirements of the law. This test cannot be applied with scientific techniques alone. If a defendant gives correct answers on a questionnaire about what is morally right and wrong but shows no skin conductance response or activity in the limbic system while giving these answers, does that individual really “appreciate” wrongfulness? And does this defendant have a “capacity” to appreciate wrongfulness if he does appreciate it in some circumstances but not others? And when is that capacity “substantial”? Questions like these drive scientists crazy.

These questions mark the spot where science ends and policy decisions begin. Lawyers and judges can recognize the scientific dimensions and continua, but they still need to draw lines in order to serve their own purposes in reaching decisions. How do they draw a line? They pick a vague area and a terminology that can be located well enough in practice and that captures enough of the right cases for society to tolerate the consequences. Where lawmakers draw the line depends both on their predictions and on their values.

Courts have long recognized that the resulting legal questions can be confusing to psychiatrists and other scientists because their training lies elsewhere. Scientists have no special expertise on legal or policy issues. That is why courts in the past usually did not allow psychiatrists to testify on ultimate legal issues in trials following pleas of not guilty by reason of insanity. This restriction recently was removed in federal courts, but there is wisdom in the old ways, when scientists gave their diagnoses in their own scientific terms and left legal decisions to legal experts. In that system, scientists determine which dimensions are predictive and where a particular defendant lies on those continua. Lawyers then argue about whether that point is above or below the legal cutoff that was determined by judges or legislators using policy considerations. That system works fine as long as the players stick to their assigned roles.

This general picture applies not just to optometry and psychiatry but to other interactions between science and law, including neural lie detection. Brain scientists can develop neural methods of lie detection and then test their error rates. Scientists can also determine how much these error rates vary with circumstances, because some methods are bound to work much better in the lab than during a real trial. However, these scientists have no special expertise on the question of whether those error rates are too high to serve as legal evidence. That is a policy question that depends on values; it is not a neutral scientific issue. This is one reason why neuroscientists should not be allowed to testify on the ultimate question of whether a witness is or is not lying.

Lying might appear different from insanity because insanity is a normative notion, whereas lying is not normative at all. A lie is an intentional deception without consent in order to induce reliance. Does the person who lies really believe that what he or she said is false? Well, he or she ascribes a probability that varies on a continuum. Does the speaker intend to induce belief and reliance? Well, that will not be clear if the plans are incomplete, indeterminate, or multiple. Does mutual consent exist, as in a game or some businesses? Well, varying degrees of awareness exist. Some cases are clear—maybe most cases. Nonetheless, what counts as a lie is partly a normative question that lies outside the expertise of scientists qua scientists. That is one reason why scientists should not be allowed to testify on that ultimate issue of lying. Their testimony should be restricted to their expertise.

FALSE NEGATIVES VERSUS FALSE POSITIVES

Although scientists can determine error rates for methods of lie detection, the issue is not so simple. For a given method in given circumstances, scientists distinguish two kinds of errors. The first kind of error is a false positive (or false alarm), which occurs when the test says that a person is lying but he or she really is not lying. The second kind of error is a false negative (or a miss), which occurs when the test says that a person is not lying but he or she really is lying. The rate of false positives determines the test’s specificity (a test’s specificity is one minus its false-positive rate), whereas the rate of false negatives determines the test’s sensitivity (one minus its false-negative rate).
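To make these definitions concrete, the short Python sketch below computes both error rates from a hypothetical confusion matrix. The counts are invented for illustration (chosen to echo the rates cited in the next paragraph) and do not come from any actual study.

```python
# Hypothetical confusion matrix for a lie-detection test (counts are invented).
true_positives = 84   # test says "lying" and the person is lying
false_negatives = 16  # test says "not lying" but the person is lying (a miss)
true_negatives = 69   # test says "not lying" and the person is not lying
false_positives = 31  # test says "lying" but the person is not lying (a false alarm)

# Each error rate is computed against the relevant ground truth.
false_positive_rate = false_positives / (false_positives + true_negatives)
false_negative_rate = false_negatives / (false_negatives + true_positives)

# Specificity and sensitivity are the complements of those error rates.
specificity = 1 - false_positive_rate  # proportion of truth-tellers correctly cleared
sensitivity = 1 - false_negative_rate  # proportion of liars correctly caught

print(f"false positive rate: {false_positive_rate:.2f}")  # 0.31
print(f"false negative rate: {false_negative_rate:.2f}")  # 0.16
print(f"specificity: {specificity:.2f}, sensitivity: {sensitivity:.2f}")
```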

These two error rates can differ widely. For example, elsewhere in this volume Nancy Kanwisher cites a study of one method of neural lie detection where one of the error rates was 31 percent and the other was only 16 percent. The error rate was almost twice as high in one direction as in the other. When error rates differ by so much, lawmakers need to consider each rate separately. Different kinds of errors create different problems in different circumstances. Lawmakers need to decide which error rate is the one that matters for each particular use of neural lie detection.

Compare three legal contexts: In the first a prosecutor asks the judge to let him use neural lie-detection techniques on a defense witness who has provided a crucial alibi for the defendant. The prosecutor thinks that this defense witness is lying. Here the rate of false positives matters much more than the rate of false negatives, because a false positive might send an innocent person to prison, and courts are and should be more worried about convicting the innocent than about failing to convict the guilty.

In contrast, suppose a defendant knows that he is innocent, but the trial is going against him, largely because one witness claims to have seen the defendant running away from the scene of the crime. The defendant knows that this witness is lying, so his lawyer asks the judge to let him use neural lie detection techniques on the accusing witness. Here the rate of false negatives matters more than the rate of false positives because a false negative is what might send an innocent defendant to prison.

Third, imagine that the defense asks the judge to allow as evidence the results of neural lie detection on the accused when he says that he did not commit the crime. Here the rate of false positives is irrelevant because the defendant would not submit this evidence if the results were positive for lying. What matters instead is the rate of false negatives, because a guilty defendant who passes the test could use the result to mislead the jury.

Overall, then, should courts allow neural lie detection? If the rates of false positives and false negatives turn out to differ widely (as I suspect they will), then the values of the system might best be served by allowing some uses in some contexts but forbidding other uses in other contexts. The legal system might not allow prosecutors to force any witness to undergo lie detection, but it still might allow prosecutors to use lie detection on some willing witnesses. Or the law might not allow prosecutors to use lie detection at all, but it still might allow defense attorneys to use lie detection on any witness or only on willing or friendly witnesses. If not even those uses are allowed, then the rules of evidence deprive the defense of a tool that, while flawed, could create a reasonable doubt, which is all the defense needs. If the intent is to ensure that innocent people are not convicted, and if the defense volunteers to take the chance, then it is unclear why the law should categorically prohibit this imperfect tool.

It is doubtful that judges would endorse such a bifurcated system of evidence, though it is not clear why. Some such system might turn out to be optimal if great differences exist between the rates of false negatives and false positives and also between the disvalues of convicting the innocent and failing to convict the guilty. Doctors often distinguish false positives from false negatives and use tests in some cases but not others, so why should courts not do the same? At least this question is worth thinking about.

BASE RATES

A more general problem, however, suggests that courts should not allow any neural lie detection. When scientists know the rates of false positives and false negatives for a test, they usually apply Bayes’s theorem to calculate the test’s positive predictive value, which is the probability that a person is lying, given a positive test result. This calculation cannot be performed without using a base rate (or prior probability). The base rate has a tremendous effect on the result. If the base rate is low, then the predictive value is going to be low as well, even if the rates of false negatives and of false positives seem reasonable.
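As a hedged illustration of how strongly the base rate dominates this calculation, the sketch below applies Bayes’s theorem with invented error rates and two invented priors. The specific numbers are assumptions, not findings, and the two priors prefigure the two scenarios discussed in the following paragraphs.

```python
def positive_predictive_value(prior, sensitivity, specificity):
    """Probability that a person is lying, given a positive test result.

    Bayes's theorem:
      P(lying | positive) = P(positive | lying) * P(lying) /
        [P(positive | lying) * P(lying) + P(positive | not lying) * P(not lying)]
    """
    true_alarms = sensitivity * prior                # liars correctly flagged
    false_alarms = (1 - specificity) * (1 - prior)   # truth-tellers wrongly flagged
    return true_alarms / (true_alarms + false_alarms)

# Invented test characteristics (echoing the 16 and 31 percent error rates above).
sens, spec = 0.84, 0.69

# Very low base rate: as when nearly everyone asked "Did you do it?" answers truthfully.
print(positive_predictive_value(prior=0.000001, sensitivity=sens, specificity=spec))
# ~0.0000027: almost every positive result is a false alarm.

# High base rate: the (legally impermissible) assumption that most defendants
# who deny their guilt are lying.
print(positive_predictive_value(prior=0.8, sensitivity=sens, specificity=spec))
# ~0.92: most positive results are now correct.
```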

This need for a base rate makes such Bayesian calculations especially problematic in legal uses of lie detection (neural or not). In lab studies the nature of the task or the instructions to subjects usually determines the base rate.3 However, determining the base rate of lying in legal contexts is much more difficult.

Imagine that for a certain trial everyone in society were asked, “Did you commit this crime?” Those who answered “Yes” would be confessing, so almost everyone, including the defendant, would answer “No.” Only the person who was guilty would be lying. Thus, the base rate of lying in the general population for this particular question is extremely low. Hence, given Bayes’s theorem, the test of lying might seem to have a low predictive value.

However, this is not the right way to calculate the probability. What really needs to be known is the probability that someone is lying, given that this person is a defendant in a trial. How can that base rate be determined? One way is to gather conviction rates and conclude that most defendants are guilty, so most of them are lying when they deny their guilt. With this assumption, the base rate of lying is high, so Bayes’s theorem yields a high predictive value for a method of lie detection with low enough rates of false negatives and false positives. However, this assumption that most defendants are guilty violates important legal norms. Our laws require us to presume that each defendant is innocent until proven guilty. Thus, if a defendant is asked whether he did it and he answers, “No,” then our judicial system is legally required to presume that he is not lying. The system should not, then, depend on any calculation that assumes guilt or even a high probability of guilt. But without some such assumption, one cannot justify a high enough base rate to calculate a high predictive value for any method of neural lie detection of defendants who deny their guilt.

CONCLUSION

A crystal ball would be needed to conclude that neural lie detection has no chance of ever working or of being fair in trials. But many details need to be carefully worked out before such techniques should be allowed in courts. Whether the crucial issues can be resolved remains to be seen, but the way to resolve them is for scientists and lawyers to learn to work together and communicate with each other.

ENDNOTES

1. This point is generalized from Fingarette (1972, 38–39).

2. These dimensions are standardized in American Psychiatric Association (2000).

3. For more on this, see Nancy Kanwisher’s paper elsewhere in this volume.

REFERENCES

American Psychiatric Association. 2000. Diagnostic and statistical manual of mental disorders, fourth edition, text revision. Washington, DC: American Psychiatric Association.

Fingarette, H. 1972. The meaning of criminal insanity. Berkeley: University of California Press.