Using Imaging to Identify Deceit: Scientific and Ethical Questions

Chapter 1: An Introduction to Functional Brain Imaging in the Context of Lie Detection

Authors
Emilio Bizzi, Steven E. Hyman, Marcus E. Raichle, Nancy Kanwisher, Elizabeth Anya Phelps, Stephen J. Morse, Walter Sinnott-Armstrong, Jed S. Rakoff, and Henry T. Greely

Marcus E. Raichle

Human brain imaging, as the term is understood today, began with the introduction of X-ray computed tomography (CT) in 1972. By passing narrowly focused X-ray beams through the body at many different angles and detecting the degree to which their energy had been attenuated, Godfrey Hounsfield was able to reconstruct a map of the density of the tissue in three dimensions. For their day, the resultant images of the brain were truly remarkable. Hounsfield’s work was a landmark event that radically changed the practice of medicine; it spawned the idea that three-dimensional images of the body’s organs could be obtained by using the power of computers and various detection strategies to measure the state of the underlying tissues.

In the laboratory in which I was working at Washington University in St. Louis, the notion of positron emission tomography (PET) emerged shortly after the introduction of X-ray computed tomography. Instead of passing an X-ray beam through the tissue and measuring its attenuation, as CT does, PET was based on the idea that biologically important compounds like glucose and oxygen, labeled with cyclotron-produced isotopes (e.g., 15O, 11C, and 18F) that emit positrons (hence the name positron emission tomography), could be detected in three dimensions by ringing the body with special radiation detectors. The maps arising from this strategy provided us with the first quantitative maps of brain blood flow and metabolism, as well as many other interesting measurements of function. With PET, the modern era of imaging human brain function began.

In 1979, magnetic resonance imaging (MRI) was introduced. While embracing the concept of three-dimensional imaging, this technique was based on the magnetic properties of atoms (in the case of human imaging, the primary atom of interest has been the hydrogen atom or proton). Studies of these properties had been pursued for several decades in chemistry laboratories using a technique called nuclear magnetic resonance. When this technique was applied to the human body and images began to emerge, the name was changed to “magnetic resonance imaging” to assuage concerns about radioactivity that might mistakenly arise because of the use of the term nuclear. Functional MRI (fMRI) has become the dominant mode of imaging function in the human brain.

At the heart of functional brain imaging is a relationship between blood flow to the brain and the brain’s ongoing demand for energy. The brain’s voracious appetite for energy derives almost exclusively from glucose, which in the brain is broken down to carbon dioxide and water. The brain is dependent on a continuing supply of both oxygen and glucose delivered in flowing blood regardless of moment-to-moment changes in an individual’s activities.

For over one hundred years, scientists have known that when brain activity changes as an individual engages in various tasks, blood flow increases to the areas of the brain involved in those tasks. What came as a great surprise was that this increase in blood flow is accompanied by an increase in glucose use but not in oxygen consumption. As a result, areas of the brain transiently increasing their activity during a task contain blood with increased oxygen content (i.e., the supply of oxygen becomes greater than the demand for oxygen). This observation, which has received much scrutiny from researchers, paved the way for the introduction of MRI as a functional brain tool.

By going back to the early research of Michael Faraday in England and, later, Linus Pauling in the United States, researchers realized that hemoglobin, the molecules in human red blood cells that carry oxygen from the lungs to the tissue, had interesting magnetic properties. When hemoglobin is carrying a full load of oxygen, it can pass through a magnetic field without causing any disturbance. However, when hemoglobin loses oxygen to the tissue, it disrupts any magnetic field through which it passes. MRI is based on the use of powerful magnetic fields, thousands of times greater than the earth’s magnetic fields. Under normal circumstances when blood passes through an organ like the brain and loses oxygen to the tissue, the areas of veins that are draining oxygen-poor blood show up as little dark lines in MRI images, reflecting the loss of the MRI signal in those areas. Now suppose that a sudden increase in blood flow locally in the brain is not accompanied by an increase in oxygen consumption. The oxygen content of these very small draining veins increases. The magnetic field in the area is restored, resulting in a local increase in the imaging signal. This phenomenon was first demonstrated with MRI by Seiji Ogawa at Bell Laboratories in New Jersey. He called the phenomenon the “blood oxygen level dependent” (BOLD) contrast of MRI and advocated its use in monitoring brain function. As a result researchers now have fMRI using BOLD contrast, a technique that is employed thousands of times daily in laboratories throughout the world.

A standard maneuver in functional brain imaging over the last twenty-five years has been to isolate changes in the brain associated with particular tasks by subtracting images taken in a control state from the images taken during the performance of the task in which the researcher is interested. The control state is often carefully chosen so as to contain most of the elements of the task of interest save that which is of particular interest to the researcher. For example, to “isolate” areas of the brain concerned with reading words aloud, one might select as the control task passively viewing words. Having eliminated areas of the brain concerned with visual word perception, the resulting “difference image” would contain only those areas concerned with reading aloud.
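The subtraction strategy described above can be sketched with a toy example. Everything here is invented for illustration: the voxel grid, the single activated voxel, and the clean separation between task and control bear no resemblance to real fMRI data, which are far noisier and require statistical thresholding.

```python
import numpy as np

# Hypothetical illustration of the subtraction method: each "image" is a
# small 3-D array of voxel signal values (all numbers invented).
rng = np.random.default_rng(0)

shape = (4, 4, 4)  # a toy voxel grid
baseline = rng.normal(loc=100.0, scale=1.0, size=shape)  # control: passive viewing

task = baseline.copy()
task[1, 2, 3] += 5.0  # suppose one region activates during reading aloud

# The "difference image": task minus control isolates task-specific activity
difference = task - baseline

# Only the activated voxel stands out above the (zero) background
peak = tuple(int(i) for i in np.unravel_index(np.argmax(difference), shape))
print(peak)                         # → (1, 2, 3)
print(round(difference[peak], 1))   # → 5.0
```

Because the baseline signal is identical in both states, subtraction cancels everything shared between the two conditions, leaving only the task-specific change; this is the logic behind pairing "reading aloud" with "passively viewing words."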

Another critical element in the strategy of functional brain imaging is the use of image averaging. A single difference image obtained from one individual appears “noisy,” nothing like the images usually seen in scientific articles or the popular press. Image averaging is routinely applied to imaging data and usually involves averaging data from a group of individuals. While this technique is enormously powerful in detecting common features of brain function across people, in the process it completely obscures important individual differences. Where individual differences are not a concern, this is not a problem. However, in the context of lie detection researchers and others are specifically interested in the individual. Thus, where functional brain imaging is proposed for the detection of deception, it must be clear that the imaging strategy to be employed will provide satisfactory imaging data for valid interpretation (i.e., images of high statistical quality).1
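A toy sketch can make the averaging trade-off concrete. The numbers below are entirely made up: a small "group effect" shared by all subjects, plus per-subject idiosyncrasies and measurement noise.

```python
import numpy as np

# Toy sketch of image averaging across subjects (all values invented).
# Each subject's difference image = shared group effect + individual
# variation + measurement noise.
rng = np.random.default_rng(1)
n_subjects, n_voxels = 20, 1000

group_effect = np.zeros(n_voxels)
group_effect[:10] = 2.0  # an effect common to every subject

individual = rng.normal(0, 1.0, (n_subjects, n_voxels))  # per-subject differences
noise = rng.normal(0, 3.0, (n_subjects, n_voxels))

images = group_effect + individual + noise

# Averaging across subjects shrinks random variation by roughly 1/sqrt(n),
# revealing the common effect -- but any one subject's idiosyncrasies are
# averaged away along with the noise.
mean_image = images.mean(axis=0)

single_subject_sd = images[0].std()       # a lone image is "noisy"
group_mean_sd = mean_image[10:].std()     # background voxels of the average
print(bool(group_mean_sd < single_subject_sd))  # → True
```

The same averaging that makes the group effect visible erases the `individual` term, which is exactly the information a lie-detection application would need about a single person.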

LESSONS FROM THE POLYGRAPH

In 2003, the National Academy of Sciences (NAS) made a series of recommendations in its report The Polygraph and Lie Detection. Although these recommendations were primarily spawned by a consideration of the polygraph, they are relevant to the issues raised by the use of functional brain imaging as a tool for the detection of deception.2

Most people think of specific incidents or crimes when they think of lie detection. For example, an act of espionage has been committed and a suspect has been arrested. Under these circumstances the polygraph seems to perform above chance. The reason for this, the NAS committee believed, was something that psychologists have called the “bogus pipeline”: If a person sincerely believed a given object (say, a chair attached to electrical equipment) was a lie detector, and that person was wired to the object and had committed a crime, a high probability exists (much greater than chance) that under interrogation the person would confess to the crime. The confession would have nothing to do with the basic scientific validity of the technique (i.e., the chair attached to electrical equipment) and everything to do with the individual’s belief in the capability of the device to detect a lie. However, contrary to the belief that lie detection techniques such as the polygraph are most commonly used to detect the lies of the accused, by far the most important use of these techniques in the United States is in employee screening, both pre-employment and for retention in high-security environments. The U.S. government performs tens of thousands of such examinations each year in its various security agencies and secret national laboratories. This is a sobering fact given the concerns raised by the NAS report about the use of the polygraph in screening. As a screening technique the polygraph performs poorly and would likely falsely incriminate many innocent employees while missing the small number of spies in their midst. The NAS committee could find no available and properly tested substitute, including functional brain imaging, that could replace the polygraph.

The NAS committee found many problems with the scientific data it reviewed. The scientific evidence on means of lie detection was of poor quality with a lack of realism, and studies were poorly controlled, with few tests of validity. For example, the changes monitored (e.g., changes in skin conductance, respiration, and heart rate) were not specific to deception. To compound the problem, studies often lacked a theory relating the monitored responses to the detection of truthfulness. Changes in cardiac output, peripheral vascular resistance, and other measures of autonomic function were conspicuous by their absence. Claims with regard to functional brain imaging hinged for the most part on dubious extrapolations from group averages.

Countermeasures (i.e., strategies employed by a subject to “beat the polygraph”) remain a subject shrouded in secrecy within the intelligence community. Yet information on such measures is freely available on the Internet! Regardless, countermeasures remain a challenge for many techniques, although one might hold some hope that imaging could have a unique role here. For example, any covert voluntary motor or cognitive activity employed by a subject would undoubtedly be associated with predictable changes in functional brain imaging signals.

At present we have no good ways of detecting deception despite our very great need for them. We should proceed in acquiring such techniques and tools in a manner that will avoid the problems that have plagued the detection of deception since the beginning of recorded history. Expanded research should be administered by organizations with no operational responsibility for detecting deception. This research should operate under normal rules of scientific research with freedom and openness of communication to the extent possible while protecting national security. Finally, the research should vigorously explore alternatives to the polygraph, including functional brain imaging.

ENDNOTES

1. For a more in-depth explanation of functional brain imaging, see Raichle (2000); and Raichle and Mintun (2006).

2. I was a member of the NAS committee that authored the report.

REFERENCES

National Research Council. 2003. The polygraph and lie detection. Washington, DC: The National Academies Press.

Raichle, M. E. 2008. A brief history of human brain mapping. Trends in Neurosciences (doi:10.1016/j.tins.2008.11.001).

Raichle, M. 2000. A brief history of human functional brain mapping. In Brain mapping: The systems, ed. A. Toga and J. Mazziotta. San Diego: Academic Press.

Raichle, M. E., and M. A. Mintun. 2006. Brain work and brain imaging. Annual Review of Neuroscience 29:449–476.