Introduction: Imaging Deception
Emilio Bizzi and Steven E. Hyman
Can the relatively new technique of functional magnetic resonance imaging (fMRI) detect deceit? A symposium sponsored by the American Academy of Arts and Sciences, the McGovern Institute at the Massachusetts Institute of Technology (MIT), and Harvard University took on this question by examining the scientific support for using fMRI as well as the legal and ethical questions raised when machine-based means are employed to identify deceit.
Marcus Raichle, a professor at Washington University in St. Louis, opens the discussion with a clear description of fMRI, its physiological basis, the methodology underlying the extraction of images, and, most important, the use of image averaging to establish correlations between the “images” and aspects of behavior. While averaging techniques are highly effective in the characterization of functional properties of different brain areas, images obtained from a single individual are “noisy,” a fact that clearly touches upon the reliability of the extracted data and a fortiori makes detecting deceit a questionable affair.
Nancy Kanwisher, a professor at MIT, discusses papers that present supposedly direct evidence of the efficacy of detecting deceit with fMRI, but dismisses their conclusions. Kanwisher notes that there is an insurmountable problem with the experimental design of the studies she analyzes. She points out that by necessity the tested populations in the studies consisted of volunteers, usually cooperative students who were asked to lie. For Kanwisher this experimental paradigm bears no relationship to the real-world situation of somebody brought to court and accused of a serious crime.
Kanwisher’s conclusions are shared by Elizabeth Phelps, a professor at New York University. Phelps points out that two cortical regions—the parahippocampal cortex and the fusiform gyrus—display different activity in relation to familiarity. The parahippocampal cortex shows more activity for less familiar faces, whereas the fusiform gyrus is more active for familiar faces. However, these neat distinctions can unravel when imagined memories are generated by subjects involved in emotionally charged situations. Phelps points out that the brain regions important to memory do not differentiate between imagined memories and those based on events in the real world. In addition, the perceptual details of memories are affected by emotional states.
Phelps’s compelling description of how imagination, emotions, and misperceptions all play a role in shaping memories can be briefly expressed as “brains do not lie: people do.” This point is echoed by Stephen Morse, who begins his presentation by stating “Brains do not commit crimes. Acting people do.” Morse, a professor at the University of Pennsylvania, takes a skeptical view of the potential contributions of neuroscience in the courtroom. He believes that behavioral evidence is usually more useful and informative than information based on brain science, and that when neuroimaging data and behavioral evidence conflict, the behavioral evidence trumps imaging. Morse worries that admitting imaging in the courtroom might sway “naive” judges and jurors to think that the brain plays a “causal” role in a crime. He repeatedly warns that if causation excuses behavior then no one can ever be considered responsible.
Walter Sinnott-Armstrong, a professor at Dartmouth College, is also unenthusiastic about the use of fMRI to detect deceit. His concern is that the error rates in fMRI are significant and that determining error rates is not a simple task. For this reason he believes that evidence from neural lie detection efforts should not be allowed in court.
Jed Rakoff, a U.S. district judge, shares Sinnott-Armstrong’s concern about error rates and finds that fMRI-based evidence may be excluded from trials under the Federal Rules of Evidence. Rakoff argues that the surest path to discovering truth is the traditional one of exposing witnesses to cross-examination. He doubts that meaningful correlations between lying and brain images can be reliably established. In addition, he notes that the law recognizes many kinds of lies—for example, lies of omission, “white lies,” and half-truths—and asks whether brain imaging can come close to distinguishing among these complex behavioral responses. Clearly not, he concludes, but traditional cross-examination might do the job.
Henry Greely, a professor at Stanford Law School, discusses the constitutional and ethical issues raised by fMRI lie detection. He cites as concerns the problems related to the scientific weakness of some fMRI studies, the disagreement among the investigators about which brain regions are associated with deception, the limitations of pooled studies, and the artificiality of experimental design.
The authors of these seven essays express a dim view of lie detection with fMRI. They also consider the widely used polygraph and conclude that both it and fMRI are unreliable.
Often in science when a new technique such as fMRI appears, the scientists who promote its use argue that, yes, problems exist but more research will, in the end, give us the magic bullet. Perhaps. In the case of lie detection through fMRI, however, two sets of problems seem insurmountable: 1) problems of research design, which Kanwisher argues no improvement in imaging technology is likely to address; and 2) problems of disentangling emotions, memory, and perception, which, Phelps notes, are processed in the same region of the brain and thus are commingled.