Government nontransparency in mass security screening programs like polygraphs hampers independent research, and unfavorable research into some of these programs appears to have been suppressed. Together, these facts make it difficult to count the cost of false negatives: spies, terrorists, and other serious but low-prevalence threats who are not caught by lower-accuracy mass screenings like polygraphs.
As National Academy scientists warned Congress in 2003, Bayes’ Rule shows that these mass surveillance programs hurt security while claiming to advance it. Such programs include polygraph, insider threat, dragnet telecommunications surveillance, and “stop and frisk” policing programs. Of these, polygraph programs are the smallest in terms of likely budget and personnel — although the best independent researchers can do on either count is estimate.
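The base-rate arithmetic behind that warning can be sketched with a few lines of code. All of the numbers below are illustrative assumptions chosen for demonstration, not figures from the National Academy report or any agency: a workforce of 10,000 containing 10 actual spies, screened by a test that correctly flags spies 85% of the time and correctly clears innocents 85% of the time.

```python
# Base-rate sketch of the Bayes' Rule problem with mass screening.
# All numbers are illustrative assumptions, not figures from any report.

population = 10_000
spies = 10                       # low-prevalence problem: 0.1% of workforce
sensitivity = 0.85               # P(flagged | spy) -- assumed accuracy
specificity = 0.85               # P(cleared | innocent) -- assumed accuracy

true_positives = spies * sensitivity
false_negatives = spies - true_positives                 # spies who pass
false_positives = (population - spies) * (1 - specificity)

# Bayes' Rule: P(spy | flagged) = TP / (TP + FP)
ppv = true_positives / (true_positives + false_positives)

print(f"innocents flagged: {false_positives:.0f}")
print(f"spies who pass:    {false_negatives:.1f}")
print(f"P(spy | flagged):  {ppv:.2%}")
```

Under these assumed numbers, roughly 1,500 innocent employees are flagged, a handful of spies still pass, and fewer than 1% of the people the test flags are actually spies — which is why investigators chasing screening hits have so little left over for the false negatives.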
Three prominent, recent intelligence failures are possible examples of false negatives in polygraph programs. Their potential costs include the largest compromise of cleared personnel data in history, the most lethal attack on the CIA in over 25 years, billions of taxpayer dollars, millions displaced, and hundreds of thousands dead. If I’m wrong, all the government has to do to prove it is release damage reports they should’ve already produced — if only to the Senate Select Committee on Intelligence that should be holding hearings on documented lying to Congress about these programs anyway. If I’m right, Congress should shut down all federal polygraph programs because they threaten national security.
The Broken Glass Curtain
If the Soviet Union was an iron curtain behind which a strong centralized state lawlessly surveilled its citizens, the American surveillance state is a broken glass curtain. A federal database including personal information on millions of federal affiliates — including decades of info about cleared personnel — was recently compromised by what government sources insist was a Chinese hack even though they’re still investigating.
Three sorts of explanations for the breach are possible. It could be the result of an IT error in contracting. Think back to the 2008 mortgage crisis. Some analysts traced it to spin-offs of The Producers. Others traced the crisis largely to a high-level telephone game problem called mortgage-backed derivatives. Financial institutions took bits of bad assets — mortgage-backed securities — and glommed them together with other, better bits to make new financial instruments. They did this so many times and with so much technical power that finally, no one could keep track of the paper trail to know what was really worth what anymore. Similarly, in the case of the Broken Glass Curtain, it is possible that the government simply lost track of who it was contracting with for what. Third-party IT providers are often overseas, even in cases where federal security standards prohibit the use of overseas servers for sensitive information storage.
So in this possible world, one third-party contractor might have subcontracted with another, overseas IT service provider for administration. One Chinese IT contractor of a contractor might have wound up with a literal or figurative gun to his head, and given up the compromised information. You could call it a hack. Or you could call it outsourcing.
It also could have been a hack. There’s not much we can do about that class of vulnerability. Other than maybe stop destroying our tech thought leaders. Manufacture more of our own tech gear. And keep sensitive information — about people with access to sensitive compartmented information — itself compartmented, in spite of the analytical and operational advantages of creating big databases. (TS/SCI is the most common security clearance, and that’s what the SCI stands for.)
Or it could have been a mole. One of the security problems mass surveillance generates is that it makes a mole in communications systems and other large databases much more potentially damaging and difficult to identify.
We don’t know yet what happened here. The investigation is ongoing, and if it was an IT outsourcing screw-up or a mole, you would expect government sources to privately insist to media that it was a hack. That’s what they’re doing, and nobody is reporting the other two possibilities.
If it was a mole, security agencies that continue using low-accuracy, mass screenings for serious, low-prevalence problems — like basically all federal security agencies use polygraphs — are partly to blame. National Academy scientists warned that polygraph screening programs create more security risk than they mitigate according to Bayes’ Rule in 2003. The agencies didn’t listen. They probably have false negatives — spies who pass polygraphs — that they can’t spend appropriate resources identifying with good, old-fashioned investigative work like forensic accounting and talking to people.
Polygraph proponents often argue that those mass screening programs are less accurate than specific-incident polygraphs. But at least two other possible false negative cases illustrate that spending limited resources on more accurate, truly independent source verification methods is probably a much better bet, even — or perhaps especially — in specific, high-stakes investigations.
On December 30, 2009, Humam Khalil Abu-Mulal al-Balawi — an al-Qaeda triple agent — conducted the most lethal attack against the CIA in 25 years. He blew himself up with a 30-pound bomb in a CIA facility inside Forward Operating Base Chapman, killing ten and wounding six.
The CIA was warned al-Balawi was a mole weeks before his suicide attack. Then they bought him a birthday cake — literally — and had him over. Without conducting a security check to catch a 30-pound bomb.
It sounds like the Agency might have conducted a test they thought independently verified his credibility following a credible warning. Like the sort of polygraph test they routinely subject normal employees to.
The Executive and Congress could probably find that out. I probably can’t. But I asked former CIA Director Jim Woolsey soon after the attack if al-Balawi had been polygraphed.
“If there was one used,” he said, “I don’t know whether it was given by the Americans or the Jordanians. So there are a lot of imponderables here. But one of the things one needs to be careful of is cultural differences between the polygrapher and the subject. Polygraphers often base some of their judgments anyway on observing people, and picking up cues. There are a number of cues that for a lot of people, not everybody, go along with lying — certain body postures, and ways of using your hands and expressions, and so forth. There are not clear indicators, but are more data for an experienced individual to assess and sometimes, those are very different from one culture to another.”
It is sounding less and less to me like polygraphs are useful in specific-incident investigations versus large screenings, and more like police and security agents need discretion to talk to people and listen to their intuition when they’re pretty sure something is up. That doesn’t sound very scientific. That doesn’t make it inherently bad — although there’s much to be said for evidence-based checklists saving lives. But as a matter of security policy, we shouldn’t pretend polygraphy is a science and let people die as a result of dependence on it.
How many people are we talking about? That depends on hard-to-know information, like what the other false negative cases are that we don’t hear about. And whether one of them helped get us into Iraq in 2003.
The U.S. Senate Select Committee on Intelligence released the Senate Report on Iraqi WMD Intelligence in July 2004, finding a bunch of intelligence-gathering and analysis failures, linked by confirmation bias, that together created inaccurate materials that in turn misled government policy-makers and the American public. But the report doesn’t record how polygraphs might have figured into how that happened. It doesn’t say whether or not we polygraphed Curveball.
Curveball was the single source on whom the CIA based its conclusions about Iraqi biological weapons. Chief CIA weapons inspector David Kay assessed Curveball’s credibility in 2003. Tim Weiner writes in Legacy of Ashes: The History of the CIA, that Iraqi defectors who wanted to destroy the Hussein regime knew the U.S. was worried about WMD. “ ‘So they told us about weapons in order to get us to move against Saddam. It was basic Newtonian physics: give me a big enough lever and a fulcrum, and I can move the world,’ ” Kay told Weiner.
Meanwhile, CIA operatives on the ground weren’t getting much information, and analysts were generating reports confirming what the President wanted to hear. “Absence of evidence was not evidence of absence for the agency,” Weiner later wrote. This is classic confirmation bias. Precisely the sort of bias independent evidence is supposed to protect against. And perhaps the primary mission of intelligence agencies according to some experts. As Kay explained to Weiner, “ ‘Wars are not won by intelligence. They’re won by the blood, treasure, courage of the young men and women that we put in the field…What intelligence really does when it is working well is to help avoid wars.’”
Good investigators and intelligence operatives — like good citizens, like good people — wage peace, not war. We all assume goodness and check our biases, check our blind spots. That’s hard. We all fail often. That’s unavoidable.
But doing better independent credibility verification in the future is important for avoiding unnecessary data breaches, deaths, and even wars. Efforts to assess Curveball’s credibility found plenty of potentially disconfirming evidence. It didn’t sound on the face of it like we could trust what he was saying. Curveball did time in Iraq for embezzlement; finished last in his university class but claimed to have headed Iraq’s bioweapons program; his German handlers thought he was crazy; and his friends called him a liar. It sounds like perhaps somebody credible was insisting he had independent scientific evidence verifying this guy’s credibility. Like a polygraph.
If only we had a machine that could tell us whether that person was lying.