On May 15, 2014, at the Academy’s 2008th Stated Meeting, five experts discussed how institutions protect against the threat of nuclear terrorism. Scott D. Sagan (Cochair of the Academy’s Global Nuclear Future Initiative; Caroline S.G. Munro Professor of Political Science, the Mimi and Peter Haas University Fellow in Undergraduate Education, and Senior Fellow at the Center for International Security and Cooperation and the Freeman Spogli Institute at Stanford University) moderated a panel discussion with Thomas Hegghammer (Director of Terrorism Research at the Norwegian Defence Research Establishment), Paul N. Stockton (Managing Director of Sonecon, LLC, and former Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs at the U.S. Department of Defense), Jessica Stern (Fellow at the FXB Center for Health and Human Rights and Lecturer in Government at Harvard University), and Matthew Bunn (Professor of Practice at Harvard Kennedy School). The following is an edited transcript of the discussion.
Scott D. Sagan
Scott D. Sagan is the Caroline S.G. Munro Professor of Political Science, the Mimi and Peter Haas University Fellow in Undergraduate Education, and Senior Fellow at the Center for International Security and Cooperation and the Freeman Spogli Institute for International Studies at Stanford University. He was elected a Fellow of the American Academy of Arts and Sciences in 2008 and serves as Cochair of the Academy’s Global Nuclear Future Initiative.
The 9/11 attacks were a wake-up call, not only to the general danger of global terrorism, but also to the specific danger of nuclear terrorism. Osama bin Laden had earlier declared it the duty of all Muslims to try to acquire a nuclear weapon. Khalid Sheikh Mohammed admitted that the 9/11 attackers had considered, in their planning before September 11, 2001, the option of attacking a U.S. nuclear reactor to try to create a Chernobyl-type event. Both threats were taken far more seriously after 9/11. In the years since the 2001 attacks, the United States and its allies and friends have significantly degraded the capability of Al-Qaeda by killing and capturing its leadership and damaging its organizational structure. The United States has also significantly tightened security around nuclear materials, although more can be done in this arena.
In the Academy’s Global Nuclear Future project, we recognized that one of the most insidious dangers is the insider threat. In these scenarios, an individual with malicious intent, or somebody who has been coerced, could act inside a nuclear facility to sabotage it, create a vulnerability, or make a terrorist attack possible. Unless we address the insider threat, efforts to reduce the outsider terrorist danger will be inadequate.
To address these problems, we have gathered a remarkably diverse and experienced group of people, including senior military commanders, psychiatrists, biologists, nuclear specialists from the Department of Energy, political scientists, and historians. Our guiding principle is to practice what I call “vicarious learning”: learning not only from our own past mistakes but also from those of others, in order to minimize future insider threats.
Thomas Hegghammer

Thomas Hegghammer is Director of Terrorism Research at the Norwegian Defence Research Establishment and author of “Jihad in Saudi Arabia.”
How interested have terrorists been, historically, in attacking nuclear installations through insiders? The motivation for this question is, of course, that we know how damaging the insider tactic can be. But there has been little research into how serious terrorists are about carrying out such operations. My colleague Andreas Daehli and I set out to examine what terrorists have written about this in their own literature, as well as any nuclear insider recruitment that has actually been attempted or carried out. We found that, based on the available evidence, they have not been very interested in infiltrating nuclear facilities so far. We discovered no elaborate texts about the nuclear insider tactic in the vast jihadi literature or the far-right literature. There are references to insider tactics, but no detailed manuals of the kind that exist for many other tactics.
As far as plots and attempts are concerned, we found only one confirmed serious nuclear insider attack involving terrorists, occurring in South Africa in 1982 at a plant under construction and with lax security. Of course, we cannot be sure that there haven’t been other attempts, and we are uncertain about some of the data. But we think that at worst, there have only been a handful of other such incidents – and this is out of over 100,000 instances of terrorist activity over several decades.
There have been several attempts to attack nuclear facilities, but not with insider tactics. Terrorists have used other methods, typically armed assault or attempted bombings. We see these same methods discussed in the texts as well, but again, comparatively little about using insiders. This is very interesting, because much of the literature so far seems to take for granted that terrorists will want to use insiders to get into nuclear installations, because it seems the most logical thing to do.
This finding calls for an explanation. My coauthor and I do not know the answer, but we propose that terrorists may see insider recruitment as so difficult and so unlikely to succeed that they do not even try; they opt for other methods instead. Why is insider recruitment so difficult? One of the key factors is surveillance. Active terrorist groups expect to be under close surveillance, which makes it very difficult for them to insert an existing operative into a nuclear facility or to recruit someone already on the inside. It is easier, of course, if someone on the inside reaches out to the group and makes him- or herself available. But presumably, this happens so rarely that the terrorist planners cannot count on it as a reliably available option.
Despite the apparently low use of insider tactics, the insider threat is real and growing. One of the primary concerns we have is the enormous increase in the amount of online radical propaganda over the last decade. In our view, this increases the likelihood that employees at nuclear facilities may radicalize after they are hired, and either act alone or reach out to a group.
In light of this, we offer three policy recommendations. The first is to develop good monitoring systems to identify radicalizing insiders early. The second is for governments to develop a strategy to undermine trust between terrorists on the outside and radicalizing employees on the inside, so that when an insider contacts a terrorist organization, that terrorist organization is reluctant to cooperate with him or her. Sting operations and information-gathering operations are just two possible methods of undermining that trust.
A third recommendation is for governments to release more data on insider crimes. The data on this phenomenon that we and other academics have had to work with are very patchy, and we believe that releasing more information would produce more robust research and better advice against this very important threat.
Paul N. Stockton
Paul N. Stockton is Managing Director of Sonecon, LLC. Before joining Sonecon, he served as Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs. In September 2013, Secretary of Defense Chuck Hagel appointed him to co-chair the Independent Review of the Washington Navy Yard Shootings.
I recently had the honor of being appointed to co-chair the Independent Review of the Washington Navy Yard Shootings, which were the result of an insider threat of a different sort than we have been focusing on tonight. But in the course of the committee’s work, we identified some larger gaps in security that apply to all types of insider threats.
One particular vulnerability lies in our flawed security clearance system. I believe there are three problems with the system: first, too many people have security clearances, including personnel who don’t need them; second, too many of those who have security clearances have dangerously broad access to classified material; and third, too much information is inappropriately classified. Together, these issues create significant gaps in security that leave us more open to insider threats than we ought to be.
In the aftermath of 9/11, the number of people in the Department of Defense and the federal government who hold secret and top-secret security clearances ballooned. By some measures, over three million DOD personnel now have these clearances. Many have no need for them. Those joining the U.S. military are automatically adjudicated for secret clearances shortly after they sign up, whether their jobs require it or not. Under law, the Department of Defense and other departments are supposed to ensure that those who receive security clearances have a demonstrated need to know the classified information to which they will have access. However, the Department has effectively dropped that guideline since 9/11. The new norm is that everybody gets a security clearance, and people who never should have had clearances become trusted insiders – including Aaron Alexis, the Washington Navy Yard shooter.
The U.S. government also needs to ensure that once appropriate people are granted security clearances, they are vetted more carefully while they serve in the Department and are evaluated on a continuous basis. At a time when Defense Department budgets are shrinking, the only way to afford such a continuous evaluation system is to reduce the size of the cleared population. We can do this by making sure that only those who genuinely need security clearances receive them.
Second, too many of those granted security clearances have overly broad access to classified information. The broad access that personnel have today stems in part from lessons learned from 9/11. The primary lesson of 9/11 was that we failed to connect the dots. People in the intelligence community, in the FBI, and elsewhere were not able to share information well enough to understand the nature of the threat. However, I believe we have over-learned this lesson. Today, personnel can share classified information across a broad array of domains. Relatively low-level employees, such as system administrators, can use this broad access to information to damage U.S. security. Individuals such as Edward Snowden and Chelsea Manning exploited this situation to do immense harm. We need to reevaluate who needs access to what.
Finally, too much information is inappropriately classified. If everything is classified, but security clearances are commonplace, how are we to protect the genuine secrets that really are our crown jewels? If everything is secret, nothing is. Let’s focus on protecting information that is truly vital to national security. If information is not genuinely in need of being classified, I would suggest it should be out in the open to help inform public debate.
Jessica Stern

Jessica Stern is a Fellow at the FXB Center for Health and Human Rights, a Lecturer in Government at Harvard University, and a member of the Hoover Institution Task Force on National Security and Law.
I am going to talk about another insider actor, this one at a government bioweapons research lab. A week after September 11, 2001, letters containing anthrax spores were delivered to the offices of NBC News, the New York Post, and the National Enquirer. Over the next month, contaminated letters were sent to then-Senate Majority Leader Tom Daschle and Senator Patrick Leahy, among others. By the end of the year, anthrax-contaminated letters had infected at least 22 people, five of whom died. To allow for a complete sweep of its offices, the House stopped operations for five days. The Hart Building was closed for several months while it was fumigated and cleaned. Operations at the Supreme Court were disrupted. Considered outside the shadow of 9/11, these attacks had an impressive impact.
The letters, which were dated September 11, 2001, contained the words, “Death to America. Death to Israel. And Allah is Great.” So, given the text of the letters and the timing, officials initially assumed that the letters were part of a second-wave assault by Al-Qaeda or Iraq, or possibly the two working together. This theory was bolstered by the fact that Iraq had admitted to an enormous biological weapons program, informing the United Nations that it had 8,500 liters of anthrax. The CIA had warned that Saddam could deliver biological or chemical agents clandestinely using special forces, civilian government agents, or foreign tourists in an attempt to take out as many of his enemies as he could. Iraq had also repeatedly threatened to smuggle anthrax and other weapons of mass destruction into Britain, in one case threatening to put anthrax in duty-free bottles of alcohol, cosmetics, cigarette lighters, and perfume sprays.
Immediately after 9/11, when our government was considering invading Iraq, many of my colleagues here in Cambridge were skeptical. They thought the Bush administration was simply fabricating evidence of Iraq’s possession of weapons of mass destruction. But the truth is that Iraq had admitted in the 1990s to an enormous biological weapons program. Moreover, there was confusion about the presence of silica and the erroneous identification of bentonite (an additive known to be used by Iraq in its anthrax). So it was reasonable to assume that Iraq was involved in the anthrax attack.
Nonetheless, that assumption was wrong. The strain used in the attacks was the Ames strain, isolated from a sick cow in Texas, and reportedly distributed to only five labs in the United States, Canada, and the United Kingdom. One of those locations was USAMRIID, which is the Department of Defense’s own biological laboratory in Frederick, Maryland. The identification of the strain led authorities to focus on government scientists – in other words, insiders – as the likeliest perpetrators.
The government initially focused on a physician named Steven Hatfill, who had worked at USAMRIID. Hatfill was eventually exonerated. Years after the attack, another potential insider was identified, and that was Bruce Ivins. I will briefly tell you what we know about him.
Dr. Ivins, who worked at USAMRIID, was a complex man who claimed to have two sides to his personality. Publicly, he was a pillar of the community. He was involved in developing anthrax vaccines and was responsible for preparing anthrax to test them. In fact, he was a member of the team that investigated the attacks. He received an Outstanding Civilian Employee award from the Department of Defense in 2003. He was active in the Catholic Church, playing keyboard at masses and other functions. He wrote poems and sent them to his colleagues. He was seen as a very kind man, if a bit eccentric.
But there was another side of Dr. Ivins that was largely unknown, even though this hidden side should have been investigated, and should have prevented him from getting a security clearance or working with biological agents. What did government authorities not know that they should have known? First, Dr. Ivins was mentally ill. There were discrepancies in his security-clearance forms that were not followed up on. Had anyone spoken with his clinicians, authorities would have known that Dr. Ivins had been involved in criminal activities and had a history of serious mental illness. Indeed, one of his clinicians described him as the scariest patient she had ever treated. He drank to excess. He was on a cocktail of medications (including antipsychotics), some of which were prescribed to him fraudulently. When he asked a girl out on a date in college and she declined, he became obsessed with her sorority – a serious obsession that lasted the rest of his life. The obsession led him to stalk members of the sorority and to break into sorority buildings. His first therapist said that when she heard about the anthrax attack, she immediately thought of him.
I will read you one of the poems that he wrote to one of his colleagues.
I’m a little dream self
Short and stout
I’m the other half of Bruce
When he lets me out.
When I get all steamed up
I don’t pout.
I push Bruce aside
Then I’m free to run about.
Bruce and this other guy
Sitting by some trees
It’s like having two-in-one
Actually, it’s rather fun.
The colleague to whom Ivins sent this poem did not inform the authorities. All the information about his psychotherapy, the psychotropic medications he was taking, and his own accounts of his criminal activities was available, had anyone bothered to investigate the discrepancies in his security clearance applications.
Another interesting feature of this case is that Ivins deliberately deflected the investigation away from himself. He hinted that Iraq might have acquired the Ames strain. He hinted that Al-Qaeda might have been involved. He mislabeled samples so that it looked like the anthrax used in the attack was actually from another facility. He refused to turn over the samples until somebody actually came right into the lab and took them from him.
Ivins’s case shows that red flags can be ignored to an astonishing degree. People do not want to rat out their colleagues. It is not enough to have good procedures on the books: regulations must be followed. For example, Ivins’s working at night in the “hot suites” just before the anthrax mailings should have been noticed, but wasn’t. Another lesson is that we need to take the clearance process much more seriously than we do now. We shouldn’t hire private firms that have a financial incentive to rush clearances through, as is current practice.
But what was his motivation? Briefly, he had a financial incentive: he was working on anthrax vaccines, and if people were truly frightened of anthrax, his vaccine might attract more interest from buyers. It also seems he wanted to draw attention to inadequate preparation for a biological attack.
I agree with Paul Stockton that the security-clearance process is broken. I know from personal experience that the people who do the field work for security clearances are inadequately trained for the job. They don’t do their homework. The clearance process is still too focused on Cold-War-era threats. In the cases of Snowden and Ivins, it seems to me that narcissism might be a risk factor. I think that narcissism and other psychological factors should be more closely examined.
Matthew Bunn

Matthew Bunn is Professor of Practice at the Harvard Kennedy School and Co-Principal Investigator of Harvard’s Managing the Atom Project. He is the author of more than twenty books and major technical reports, including the “Securing the Bomb” series.
About two decades ago at a nuclear fuel fabrication facility about an hour outside of Moscow, a man named Leonid Smirnov began stealing weapons-grade highly enriched uranium. He was having severe financial problems; the Russian ruble was collapsing and his salary was not keeping up. He was part of the accounting system for the facility, so he knew that as long as output was within 3 percent or so of input, the missing uranium would be written off as normal losses to waste. So he stole little bits at a time over several months. He ultimately stole over a kilogram and a half of weapons-grade highly enriched uranium. And, lest you think that there is not much threat of insiders, this is only one of about twenty confirmed cases of seizure of stolen highly enriched uranium or plutonium that are in the public record. However, none of them were clearly connected to terrorists, as Thomas Hegghammer has pointed out. The majority of cases we have found so far are of opportunistic insiders who steal the material and then go looking for someone to sell it to.
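The material-balance loophole Smirnov exploited can be sketched in a few lines. The 3 percent write-off threshold comes from the account above; the batch size and per-theft amounts below are invented purely for illustration, as the facility’s actual figures are not in the public record.

```python
# Hypothetical sketch of the material-balance loophole described above.
# The 3 percent write-off threshold is from the text; the 10 kg batch
# size and 50 g thefts are invented numbers for illustration only.

WRITE_OFF_THRESHOLD = 0.03  # shortfalls below this fraction of input are booked as normal losses


def shortfall_written_off(input_kg: float, missing_kg: float) -> bool:
    """A missing amount goes unflagged if it stays within the write-off threshold."""
    return missing_kg / input_kg <= WRITE_OFF_THRESHOLD


# Stealing 50 g from each of a series of 10 kg batches never trips the check...
thefts = [0.05] * 30
assert all(shortfall_written_off(10.0, t) for t in thefts)

# ...yet the cumulative haul matches the kilogram and a half Smirnov took.
print(round(sum(thefts), 2))  # 1.5
```

The point of the sketch is that a threshold applied per batch says nothing about cumulative losses, which is why small, repeated diversions by an insider who knows the threshold can pass every individual balance check.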
But we do not know the circumstances of many of these thefts. It is a disturbing fact that only a few were noticed before the material was seized, which strongly suggests that other thefts were never noticed at all. But all the thefts whose circumstances we do know were carried out by insiders or with the help of insiders, and the others look likely to have been as well. That is why we are focusing so intently on this issue.
As Scott Sagan mentioned, he and I offered “worst practices” – lessons from disasters – because disasters often offer more vivid and memorable lessons than somebody just telling you a useful thing to do. We quote Bismarck on the subject of learning from other people: only a fool learns from his mistakes; a wise man learns from the mistakes of others. But I have another favorite quote, this one from the American humorist Will Rogers. To paraphrase: “Some men are able to learn from other people’s mistakes, but most people have to pee on the electric fence for themselves.” So we are hoping to help people learn from other people’s mistakes without suffering the consequences themselves. We have ten lessons; we might call them the Bunn and Sagan Ten Commandments, or as a Unitarian might say, “The Ten Suggestions.” I will discuss three of these.
First, we found that many cognitive and organizational biases undermine organizations’ focus on the insider threat, leading them to fail to address it adequately. The first of these biases, I think, is overconfidence. Just as people say, “not in my backyard,” so there is also “not in my organization.” People really believe: “All the people in my organization are trustworthy. I know them. They would never be an insider problem.” The example we use is the tragic assassination of Indian Prime Minister Indira Gandhi. After the Golden Temple violence, her security manager recommended that she remove the Sikhs from her security detail, at least temporarily. She said, “No, for political reasons, I need to be seen as the Prime Minister of all the people, including the Sikhs, and I trust these particular people.” And the man who first pulled the trigger, the man who gunned her down, was a Sikh who was her most trusted guard. You have to avoid believing that it will never happen in your organization.
Second, do not assume that background checks or ongoing monitoring are going to catch every threat. Often, you don’t see the red flags when they are there. Sometimes the red flags simply are not there. Sometimes the insiders genuinely are trustworthy, but are coerced. We use the example of a bank in Northern Ireland, which had a perfectly sensible security system that required two senior officers of the bank to turn their keys in order to open the vault. A gang kidnapped the families of two of the senior officers of the bank, the officers turned their keys, and the gang went off with millions of pounds.
Finally, don’t assume that there will only be one insider, that there will not be a conspiracy. If you look at thefts from heavily guarded facilities (not nuclear facilities, but just as an analogy), what you find is that overwhelmingly, they are the result of conspiracies. It is not just one individual who overcomes the security system, it is a conspiracy of people. They happen all the time, but they are very difficult to guard against. It is very tricky to design your security system to cope with the possibility that two or three of the people you are relying on for the security might be the very people attempting to bypass it. And so administrators tend to discount the possibility and brush it away.
The fundamental lesson underlying all of our advice is, “Don’t assume.” We urge people: “Don’t assume. Evaluate. Assess. Test. Collect real data to the extent that you can.” So one of our recommendations parallels one of Thomas’s, which is that governments ought to put together analyses of the real incidents that have taken place, and then they ought to share them and make them more broadly available. Stories are very effective teaching tools.
© 2014 by Scott D. Sagan, Thomas Hegghammer, Paul N. Stockton, Jessica Stern, and Matthew Bunn, respectively