Why I Watch Airplane Disaster Documentaries

One of the few television shows I watch religiously is Air Disasters. In the US it currently airs on the Smithsonian Channel, and many episodes are available on YouTube. Many, if not most, of the episodes were produced in Canada, where the show goes by the name Mayday.

I don’t watch it out of morbid curiosity. Indeed, my favorite episodes are those where everyone, or at least some individuals, survive the accident. I watch Air Disasters because its episodes are fantastic lessons in causality: almost every airplane crash is a concatenation of unusual circumstances. And because they offer keen insights into human performance under stress and human error.

Almost every accident contains an element of human error—not necessarily pilot error, but some type of human miscue caused by stress, exhaustion, unfortunate personal characteristics, or a poorly engineered process or system. The aviation industry, to its great credit—and spurred on by the usually stout oversight and regulation provided by government aviation agencies—has continuously revised its practices in light of the many lessons learned. As a result, airlines around the world have implemented strict rules governing the work and rest patterns of pilots and the interpersonal dynamics of cockpit crews.

The Intelligence Community could learn a lot about the performance of its workforce from the aviation industry. Indeed, watching the documentaries has led me to “appreciate” the considerable flaws of the IC’s work assumptions and practices. As the aviation industry has learned over the last 100 years, humans perform much better when they are positioned for success. So here are some lessons and concepts from the aviation industry that the IC should pay attention to. They are in fact relevant for anyone whose difficult and/or risky work must be reliably performed at a consistently high level.

  • The Startle Factor. Almost all flights are routine. But when something goes wrong, it often starts with a surprise—something happens to the plane or during the flight that the pilots had never previously experienced. The Startle Factor refers to how the human body and brain respond to such a surprise. Your heart races, your palms sweat, and your rational brain slows down. Instincts—for good or bad—may take over. Boeing assumed that an average crew would need only a few seconds to recognize when the 737 MAX’s new MCAS flight-control system was malfunctioning and turn it off. But in the two crashes that grounded the 737 MAX, the crews were startled by the unexpected behavior of the plane, their responses were delayed or incorrect, and hundreds lost their lives.

    Intelligence officers can often find themselves in surprising predicaments. Does the IC take the startle factor into account when estimating the risk of certain operations? Even in the admittedly less dangerous work of the intelligence analyst, officers can be startled by new, unexpected information, leading them to misinterpret or ignore it.
  • The Importance of Sleep and Good Rest. Commercial airlines have strict rules about how many hours flight crews can work before they must rest. I imagine most of us have experienced a flight being cancelled because the crew has “timed out.” These rules reflect hard lessons learned about how poor rest and lack of sleep can degrade the cognitive performance and judgment of pilots. Every time I watch an episode where crew exhaustion was a factor, I think about how my old agency, the CIA, ran task forces during crises. 12-hour shifts were common. I remember during the first Iraq war having to work six 12-hour shifts per week. The aviation industry learned long ago that “people just have to tough it out” is not a useful strategy. IC agencies need to rethink the protocols associated with working during periods of crisis.
  • Hierarchy Can Kill You. Traditionally, the captain and the first officer in commercial aviation were in a command-and-obey relationship. But captains are not infallible, and several fatal accidents could have been avoided if the first officer had been listened to. Often the captain never had a chance to “hear” the other view because the first officer never verbalized his concern. The respect for hierarchy was so paralyzing that first officers deferred to wrongheaded captains even when it led to certain death. These accidents became so concerning for the aviation industry that airlines instituted mandatory crew resource management procedures, which emphasize the importance of collaboration and teamwork in the cockpit.

    When I started at CIA, it seemed to me that many of the most legendary leaders celebrated in agency lore were known for their authoritarian styles. Ugh! Strong leaders did not second guess themselves, always knew exactly what to do, and never tolerated backtalk. Somehow, we managed to do good things despite a flawed leadership tradition, and I’m happy to report that the agency’s leadership approach evolved while I was there. But there is still much more that could be done to improve our “crew resource management.”
  • Assholes Can Kill You. One of the most compelling and tragic airplane disasters is the story of Northwest Airlink Flight 5719, which crashed in Minnesota in 1993, killing 18 people. The captain was known to have a temper, often lashing out at airline employees and belittling and intimidating his first officers. Investigators surmised that the first officer, who had been mocked throughout the flight, did not speak up to correct the captain about his too-steep descent. Toxic leaders are so harmful and intimidating that a person can choose death rather than confrontation.
  • Even the Smartest Person in the Room Can Screw Up. Korean Air Lines Flight 007 was shot down in 1983 after it strayed into Soviet airspace over the north Pacific Ocean. I was at CIA at the time, and I remember how incredulous we were and how scary the incident was during a period of heightened Cold War tensions. The actual cause of the disaster was a mystery for more than ten years because the black boxes were not made available to investigators until 1992; the Soviets had recovered them and kept them locked away. When the flight data and voice recordings were analyzed, investigators concluded the veteran crew had failed to correctly set the plane’s navigation system, leading the 747 to drift north of its flight plan and into Soviet territory. Navigational and communication anomalies during the flight should have alerted the crew to their error, but they apparently didn’t pay attention. The captain was a respected and experienced veteran. And he made a fatal mistake.

    Expertise-driven organizations have to appreciate that expertise carries its own blinders and is not foolproof. Long and tedious routine—such as occurs during a long flight—can also numb the intellect of even the smartest individual.
  • Checklists Are Useful. One way to guard against the blind spots of expertise and the inevitability of mental errors is to incorporate mandatory checklists into flight procedures. Too many airplane accidents have been caused by a cockpit crew overlooking or forgetting an essential step for flight, such as setting the flaps. When something goes wrong with a plane, crews consult extensive checklists, although until recently these were printed on paper, resulting in an increasingly frantic crew member paging through a binder trying to find the right section. (Luckily, checklists are automated on newer planes.)

    When I was still at CIA, I would imagine what an analyst’s checklist would look like. Perhaps even a “TurboTax” application that would make sure analysts considered all the wrinkles when producing an analytic product. I thought we could come up with a workable model, although it did worry me that, as an unintended consequence, analysts might react by behaving more like automatons than thinking human beings. With the arrival of ChatGPT and other artificial intelligence engines, my idea has perhaps been overtaken by events.
  • Distraction. Even the most competent cockpit crews can make egregious mistakes when they are distracted. Humans just aren’t that good at dealing with multiple tasks. A classic example is Eastern Air Lines Flight 401, which crashed in the Florida Everglades in 1972 when the pilots, trying to determine whether their landing gear was properly down, failed to notice they had disengaged the autopilot and were rapidly losing altitude.

    Many organizations, not just the Intelligence Community, have the habit of piling additional responsibilities onto teams without taking any away. This piece of advice was popular when I was at CIA: if you want something done, ask a busy person to do it.

  • Human/Technology Interaction. Technological advances have made commercial aviation the safest way to travel. And yet, as the Boeing 737 MAX crashes show, technologies that make ill-informed assumptions about how humans will react in unusual circumstances can create new and deadlier accidents. As planes become more advanced, the possibility of misjudging human interaction with technology grows. Another dynamic is that growing cockpit automation can lead pilots to lose touch with their “analog” flying skills. Some airlines have lowered their flying-experience requirements to address the pilot shortage, reasoning in part that advanced cockpit automation now handles most piloting duties.

    These are dangerous trends. There’s no doubt in my mind that advanced technologies will continue to replace human labor in many scenarios, including some of the more difficult tasks that humans perform. But as this process unfolds, we have to be clear about how reliance on technology can dull human talent and senses to the point that we become incapable of dealing with the unexpected concatenation of circumstances on which the software was never trained.

  • Who’s Accountable? The final lesson I’ve learned is how to think about “accountability” in complex systems. As airline crash investigators know, many airplane accidents involve a chain of unlikely events, any one of which would rarely occur on its own. A supervisor decides to pitch in and help his overworked maintenance team by removing a set of screws. The maintenance team isn’t able to finish the job and doesn’t know to replace the screws. Nevertheless, the plane makes many safe takeoffs and landings until a pilot decides to make an unusually fast descent. The pilot and all the passengers die.

    Who exactly is accountable here? Is it the supervisor who tried to be helpful? The airline management that under-resourced its maintenance operations? The pilot? In many organizations, holding someone “accountable” is the signature move of “strong leaders.” But what often happens is that some unfortunate individual is blamed for what was a systemic failure of the organization—one often driven by complacency, expediency, and/or greed.

The aviation industry’s motivation to eliminate airplane crashes has created a strong safety and lessons-learned culture, but as the experience with the 737 MAX shows, positive outcomes depend upon persistent vigilance. The Intelligence Community has long claimed that what it does is unique and that lessons learned from other industries are not always applicable. But the human being remains the same: we don’t employ unicorns but rather just normal folk, who can make mistakes, who need sleep, and who perform best when they’re positioned for success.

In Search of Insight

When I was a manager of analysis at CIA, I would hear our customers, often senior policymakers, demand more INSIGHT in our analysis. And I would go back and tell the analysts they needed to produce more insight. Until one day an analyst asked me “Well, what is INSIGHT?” And I thought to myself, that’s a good question; a “good question” is ipso facto a question for which you do not have a ready answer.

I put on my tattered thinking cap and worked to come up with some type of answer—a “formula for INSIGHT” that was reproducible and generated a set of actions analysts could actually perform. I asked many colleagues to describe how they thought. That turned out to be another good question: almost nobody could describe their own thinking process.

“I read and then I write.”

What happens in between?

“I shake my head until some ideas fall out,” one analyst offered.

Eventually I came up with a formula—the steps of analysis—that I thought enough of to share with others. Like everything I do, it’s imperfect but hopefully it offers a starting point.

  1. COMPREHENSION. When we’re thinking about a problem, first we try to comprehend it. We assemble relevant information and consume it however we prefer.

  2. CATEGORIZATION. Once we’ve achieved some comfort in our level of understanding, the next step is categorization. We sort what we know into various categories and patterns. (Actually, this starts to happen organically during the Comprehension stage. This is unavoidable and can be the place where cognitive biases take root. Some information you consume early on colors how you think about every subsequent report, and you fall victim to the anchoring bias. I’ve always wanted to run an experiment where the same 100 pieces of information were presented to analytic teams, but in different orders. Would their analysis differ? My bet is yes!)

    The categories can be as simple as Old Information and New Information, but they eventually evolve into a complex taxonomy that forms the backbone of your Analytic Lines. These Analytic Lines are powerful beings and resist change. This is usually very bad.

  3. PRIMARY INSIGHT. INSIGHT occurs when you see things you’ve never seen before or in ways that are new to you. When an analyst takes an item of information and argues that it belongs in a different category, they have produced a moment of INSIGHT. Recategorization of information is a way of generating INSIGHT. Is President Xi’s third term in China an indicator of his strength or of China’s weakness? The conventional wisdom probably is to categorize the event as the former, but making a credible argument that it is the latter generates INSIGHT. The INSIGHT argument doesn’t have to be convincing; just provoking others to take a second look is useful.

  4. PROFOUND INSIGHT. A harder but more powerful way to generate INSIGHT is to renovate and/or rebuild your categorization schema. For example, analysts may realize that a significant amount of information remains uncategorized—it doesn’t easily fit the current taxonomy. Do you ignore it, or do you begin to experiment with new categories that might better explain the information? At some point you can experiment with rethinking your categorization scheme—your Analytic Line—from scratch. To return to the China example, how best should we think of the country: as an emerging superpower, as a declining power, or as a nation destined for the middle-income trap? Each of these options generates a significantly different categorization scheme. (When your Analytic Line is long in the tooth, lots of information will no longer easily fit your existing categories. This is a “sell signal” for how you currently think about your problem, but not enough analysts recognize it as such.)
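The “sell signal” in the parenthetical above can be sketched in a few lines of code. This is purely a hypothetical illustration; the schema, keywords, and threshold are all invented, and real Analytic Lines are of course far richer than keyword buckets.

```python
def categorize(item, schema):
    """Return the first category whose keywords appear in the item, else None."""
    for category, keywords in schema.items():
        if any(word in item.lower() for word in keywords):
            return category
    return None  # the item doesn't fit the current taxonomy

def sell_signal(items, schema, threshold=0.3):
    """True when the share of uncategorizable items exceeds the threshold."""
    uncategorized = sum(1 for item in items if categorize(item, schema) is None)
    return uncategorized / len(items) > threshold

# An invented schema for the China example in the text.
schema = {
    "emerging superpower": ["military buildup", "belt and road"],
    "declining power": ["demographic", "capital flight"],
}
reports = [
    "New military buildup reported along the coast",
    "Demographic decline accelerates in rural provinces",
    "Provincial debt crisis deepens",
    "Youth unemployment hits record high",
]
# Half the reports fit no category: time to rethink the Analytic Line.
print(sell_signal(reports, schema))  # True (2 of 4 items uncategorized)
```

The point of the sketch is only that the signal is measurable: a team could track the uncategorized share over time rather than waiting for the taxonomy to fail catastrophically.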

Analytic teams need to be hawk-like in policing their categorization schemes, because these schemes sneakily embed themselves in the back-office processes of the organization. Take, for example, the reading profiles of an analytic team—the algorithms that determine which information flows into their inboxes. Ask your analysts how often these reading profiles are updated. You will not be happy with their answers.
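A reading profile of the kind described above can be imagined as a simple keyword filter carrying its own review date. This is a hypothetical sketch, not any agency’s actual system; the class, keywords, and six-month staleness threshold are all invented for illustration.

```python
from datetime import date, timedelta

class ReadingProfile:
    """A toy reading profile: routes reports by keyword, and knows its own age."""

    def __init__(self, keywords, last_updated):
        self.keywords = [k.lower() for k in keywords]
        self.last_updated = last_updated

    def matches(self, report):
        """Would this report reach the analyst's inbox?"""
        text = report.lower()
        return any(k in text for k in self.keywords)

    def is_stale(self, today, max_age_days=180):
        """Flag profiles nobody has reviewed in roughly six months."""
        return (today - self.last_updated) > timedelta(days=max_age_days)

profile = ReadingProfile(
    keywords=["naval exercise", "missile test"],
    last_updated=date(2020, 1, 15),
)
print(profile.matches("Large naval exercise announced for next month"))  # True
print(profile.is_stale(today=date(2021, 1, 15)))  # True: overdue for review
```

Even a filter this crude makes the policing problem concrete: anything outside the keyword list silently never arrives, which is exactly why an unreviewed profile quietly ossifies an Analytic Line.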

What inspired me to natter on about analysis and insight on this beautiful fall day? Reading Adrian Wolfberg’s excellent monograph In Pursuit of Insight: The Everyday Work of Intelligence Analysts Who Solve Real World Novel Problems. It’s not a quick read but luckily there’s a shorter version available here. Based on extensive interviews with analysts of varying experience, Wolfberg seeks to unpack how insight actually happens from a cognitive, neurological perspective. It tackles the all-important step that my all-too-neat process completely ignores: how does the idea for new categories enter your brain? What leads to its emergence?

Wolfberg writes that the insight process begins with a trigger phase, “an initiating event that, seemingly by chance, brings the analyst’s attention to a particular problem to address; alternatively, after an analyst has been working on a given problem, a random event contributes to focusing their attention more intently on the problem. Entering into and navigating through the trigger phase takes cognitive and emotional courage on the analyst’s part.”

After the trigger phase, Wolfberg identifies emergence as the next step. Two activities promote the emergence of insight: internalized tensions and priming. Quoting from the shorter paper:

Internalized Tensions: As analysts start working on a novel problem, they become aware of inconsistencies that can be cognition-based (i.e., inconsistencies between pairs of thoughts or ideas) or emotion-based (i.e., inconsistencies between an analyst’s action and interpretation of others’ reactions). Tensions induced by these inconsistencies can originate within the individual (i.e., self-initiated) or in the individual’s social environment (i.e., related to organizational structure and the behavior of others). An analyst who concludes that exploring a diversity of ways to represent a problem would lead to the most accurate assessment, while others judge that a standardized process would be best, is an example of cognition-based tension at the individual level. An analyst who presents a unique methodology in a detailed, transparent way to address skeptical concerns, while worried this could lead to being discredited in a production system that values standard product lines, is an example of emotion-based tension at the social level.

Priming: Analysts draw upon memories of past experiences unrelated to their present novel problem in order to make progress toward overcoming tensions and reaching insight. Priming sources also occur across the intersection of the emotion-cognition and individual-social dimensions. In an example of cognition-based priming at the individual level, an analyst who studied in graduate school how physical factors in the environment could trigger social or political outcomes applies that cause-and-effect knowledge to a national security novel problem. In an example of emotion-based priming at the social level, an analyst who had lived in a foreign country appreciates that even in countries where the same language is spoken, cultures can be very different.

What’s clear is that insight emerges from a rich casserole of experiences, emotions, and feelings in an analyst’s mind. Our intuition, what Daniel Kahneman calls our System 1, is the primary custodian of these insight-generating prompts. Wolfberg notes that “although these past experiences were unrelated to the problem at hand, an aspect of these past experiences brought forth a combination of emotional and cognitive meaning that informed how the analysts thought…” Every analyst interviewed by Wolfberg reflected on past experiences unrelated to the problem.

Clarity and INSIGHT are the most sought-after products from intelligence analysts. Clarity is getting the facts straight, sorting out complicated situations. Intelligence organizations usually do clarity well, but not always (think Iraq WMD). INSIGHT requires going beyond the facts to consider hidden causes and motivations, anticipate unexpected changes in trajectory, and appreciate the nonlinearity of events. The work processes of most intelligence teams are suited more to producing clarity than generating INSIGHT. Analysts often describe having to buck the established way of doing things to explore their emerging INSIGHT.

As Wolfberg notes, leaders of intelligence organizations need to appreciate the conditions necessary for the generation of INSIGHT and work to allow the time and space necessary for its emergence. Many of the work processes of the Intelligence Community emphasize order and consistency over thoughtfulness and contemplation. Working 8+ hours a day in a cubicle is also not ideal. As the science writer Annie Murphy Paul notes in her excellent book The Extended Mind, human brains evolved to think best when we’re physically active. My favorite “structured analytic technique” has always been to take a walk, preferably with a favorite thinking partner.


Wolfberg’s study has many other insights about INSIGHT. It’s a rewarding read for anyone wanting to make intelligence analysis better.

And Another Thing…

Last week Zachery Tyson and I published in Foreign Affairs our indictment of the current Intelligence Community. There was a lot that Zach and I wanted to say and some of it ended up in the clipboard buffer. So let me make a couple of those points here.

One of the reasons the IC should modernize and become more open is that it has a part to play in addressing the information and truth crisis eroding American democracy. I’m not exactly sure what that part looks like, other than that it is a supporting role. But I’m persuaded that a more open, collaborative approach to making sense of the world would help citizens have just a bit more confidence in the information practices and decisions of their government.

The internet has all but destroyed the ability of so-called experts and elites to claim they know better. Actually let me fix that sentence. The failure of organizations, governments, scientists, businesses, and academics to adjust their processes to the reality of the internet has undermined their credibility. So many organizations, not just the Intelligence Community, have persisted with closed, antiquated processes that just sow suspicion, distrust, and conspiracy theories. When so much other information is available, and not all of it is junk, closed information networks and decision processes no longer inspire confidence.

Obviously many parts of any intelligence process could not be made open to the public, but some parts could be. In fact, the Intelligence Community already does some of this with its Global Trends Project, but instead of a study published every few years, imagine if the Intelligence Community maintained a dynamic, real-time information service for the American public, and the world. One issue that could have been extensively covered on such a platform is the worldwide COVID-19 crisis. Would its existence have prevented or ameliorated some of the information controversies we’re still living with? I think so if, for example, the platform were interactive, allowed for a moderated but vibrant debate, permitted users to up- and down-vote information, and adapted to changing user preferences. Perhaps such a platform could be a collaboration among government, business, and nonprofits. Imagine if the Gates Foundation and the Koch Foundation both supported such a platform and a diverse group of citizens served as its board of directors.

Fanciful, right? But for democracies to prosper in the future, more of these types of approaches will be necessary.

Some folks commenting on our Foreign Affairs piece have noted that we didn’t say enough about the role Artificial Intelligence will play in the future. Fair enough. But I don’t think that reengineering the current intelligence process with Artificial Intelligence will make it that much better, only presumably a little less human. Using AI to summarize thousands of documents will only manufacture the same uninspiring type of work that many human analysts produce today. Policymakers will find both equally useful…or not. Using enhanced processing networks to pursue new ideas in sensemaking holds much more promise; imagine if AI, by scouring millions of images, could identify subconscious “tells” that would give us some insight as to the mood or veracity of Vladimir Putin or Xi Jinping. Now that might catch a decisionmaker’s attention.

Many of the comments on the piece focus on the Open Source angle but tend to overlook another important recommendation: to supplement formal intelligence products with a more dynamic, less formal, and less hierarchical approach toward assessing breaking events and new information. Some of my favorite information and analysis sites on the internet operate that way: often the most informative and provocative content lives in the comments sections attached to articles. And of course that’s the strength of Twitter: threaded discussions where hundreds of individuals comment and provide perspective on issues of common concern. Ensuring quality on such a platform would be key, but there are many useful approaches here, from moderation to some type of certification model before intelligence officers could participate. If the medical profession has been able to make a similar model work, I would hope that committed national security professionals could succeed as well.

But at a minimum, efforts to reform the Intelligence Community have to avoid the Athena complex: the tendency of reformers to overengineer their change proposals as if they had perfect vision of which new ideas will work best in the future. (Like Athena, the goddess of wisdom, who emerged fully formed from the forehead of Zeus.) As Zach and I suggest, we’ll be better off starting small and letting the user community determine where the platform goes.

Who knows where fate will take us!

In Normal Times…

I’ve been thinking about how White Houses in the past would have prepared for the events of last week. I know that’s a stretch given that it’s hard to imagine any other administration but Trump’s contesting an election past all legal and reasonable recourse and/or encouraging a demonstration against Congress (and a Vice-President) performing their constitutional duties. Nevertheless, if you compare what might have happened in normal times with what actually appears to have happened last week, you get a sense of a dangerously dysfunctional administration.

During my time in government, the FBI Director had at least a weekly time slot with POTUS during the morning security briefings during which he would brief on internal security issues. As I remember it, the Director of National Intelligence and the POTUS briefer would also attend, although I can imagine a topic so sensitive that the room would be cleared.

The FBI Director arguably should have been aware of the reports of criminal plotting by some of those planning to demonstrate at the Capitol on January 6. We know that at least one FBI officer had warned of the possibility of violence and that the warning was shared with other law enforcement agencies. We also know that law enforcement officials had advised known troublemakers not to go to the DC event and that they had enough information in advance to arrest a Proud Boys leader as he arrived in the District. I haven’t seen any reporting, however, on whether the FBI Director was directly told of this assessment. (This is a common failing of warning intelligence: it isn’t always shared with everyone who needs to know. And even when it is shared appropriately, many people don’t take it seriously.)

Assuming the FBI Director was aware of the reporting, it would have been his duty to inform the White House, if not the President, about the possibility of criminal activity at the Stop the Steal rally. The weekly briefing would have been the appropriate setting for the FBI Director to bring up the issue, although I doubt the briefing is still a regular event. It would have been a sharp “speak truth” moment but a necessary one, if only for the President’s own safety. Informed by the FBI briefing, POTUS and/or his advisers could have chosen to cancel his speech or, more likely, explicitly warn the crowd not to act unlawfully.

So that’s how the process would have worked in a more normal administration. My guess would be that this process has decayed or been completely abandoned. I’ve always been opposed to process for its own sake, but I have to admit that this scenario highlights the importance of having a reliable, rigorous approach to crucial issues, such as national security.

Considering how the scenario would have unfolded in a more normal administration also reveals several other questions that need asking.

First, how aware was the FBI Director of the threats his officers were picking up on social networks prior to January 6? If he wasn’t aware, then he needs to reexamine how information flows within the Bureau. If he was aware, did he forward the warning to other parts of the government? Did he, for example, inform the Secret Service, which is responsible for the security of the President and Vice President? (One would hope so.) Might that be the reason the President did not accompany the marchers to the Capitol, after saying he would? Would a desire to avoid having to answer such questions explain the FBI Director’s lack of public comment to date?

But if the President and/or White House were in fact warned about the potential for violence and did not alter their plans, then their complicity appears clear, even if they were not involved in the planning beforehand. If they weren’t informed about the threats, then they are probably to blame for creating an environment where government officials don’t want to deliver bad news or see no purpose in speaking truth to power. A dangerously dysfunctional administration.

Political instability in US likely to continue even after Trump’s departure

(What follows is a mock analysis piece written from the perspective of an intelligence officer in a more or less neutral country, such as Switzerland or Norway, who has been asked by a policymaker: Is it over? I’ve written it in the style of intelligence analysis I was trained in and propagated for several decades: make your main point in short paragraphs and then provide supporting data or amplification in bullets. The idea is that readers should be able to get your main points even if they only have time to glance at the piece.)

Just a few days after the violent occupation of the US Capitol, American politicians have returned to the partisan squabbling that fails to address the country’s widening social, political, racial and economic fault lines.

  • Twitter’s permanent ban of Donald Trump was necessary given the possibility he could again move to incite supporters, but Republicans have used it to pivot to a more popular topic: defense of “free speech.”
  • Democratic Speaker Pelosi’s move to impeach the President again, intended to demonstrate that Trump’s reckless, if not premeditated, behavior demands consequences, nevertheless serves to divert attention from the declining legitimacy of the American democratic system.

Public opinion polls indicate the overwhelming majority of Americans disapproved of the attack, but just under 10% nevertheless expressed support for a violent effort to overturn democratic elections. Analysis of posts on social media platforms reveals the assault on the Capitol had been planned for weeks; recent monitoring suggests that more protests are likely in the run-up to and during Inauguration Day on January 20.

  • In addition to Inauguration Day, protesters are declaring January 17 a day of “armed marches” on all 50 US state capitols and again in Washington, D.C.
  • The recent purge by Twitter and other social media companies of hundreds of thousands of extremists and QAnon supporters from their platforms is intended to disrupt extremists’ planning efforts. However, extremists likely will migrate to fringe sites and closed messaging applications, which are harder for authorities to access and monitor.

President-elect Biden believes he can calm the political turmoil and restitch the union, but he faces significant obstacles.

  • Polling from December indicated that 75% of Republicans rejected the election results. This is a historically high number; in 2016 most Democrats (65%) accepted the legitimacy of Trump’s victory. The skepticism of the Republican base will embolden GOP legislators to obstruct Biden’s agenda.
  • Ending the COVID-19 pandemic is Biden’s highest priority, but efforts to do so, such as encouraging mask mandates and restricting social gatherings, will only further antagonize extremist groups, many of which have staked their “freedom” agendas on opposing COVID-19-related restrictions.

A Puerto Rican Story

President-elect Biden’s nomination today of Miguel Cardona to be Secretary of Education led me to recall a story from my college days at Catholic University in Washington DC. It involves Connecticut, the state where Cardona has served as Commissioner of Education. His family is originally from Puerto Rico.

I had come to Catholic as a junior transfer straight from El Paso, Texas, where I had lived for more than ten years. El Paso is one big Anglo/Chicano mix, spiced up by the multicultural ingredients of a massive army base–Fort Bliss. Of course everyone in El Paso naturally assumed you were Mexican-American and so I often had to correct that impression, proudly pointing out that I am Puerto Rican. But when I got to the East Coast (on a full tuition college debate scholarship–more on that at the end) I wasn’t sure which of my two “identities” to emphasize. I felt then (and still do) close to Texas and soon learned that many of my classmates at Catholic–which draws heavily from the mid-Atlantic states and New England–had never traveled west of the Mississippi. So I wasn’t sure of the best way to answer the question “Where are you from?”

I started working at one of the university dining halls and the staff–many of them students–would gather to eat before the shift started. I was making conversation and talking about my dilemma of how to introduce myself–as a Texan or as a Puerto Rican. One young woman glared at me and advised: “Well, if I were you, I’d say I was a Texan. Because where I come from in Connecticut, Puerto Ricans are all lazy and dirty.”

This is me probably around fifth grade.

Thus was revealed the reason I have never been able to root for anything from Connecticut, even those fine UConn women’s basketball teams.

And about that full tuition debate scholarship. I would never have made it to the East Coast, I’m sure, if not for debate. And the only reason I ever got into debate was because of the advice of an eighth grade teacher in El Paso, Mrs. Bunsen. I wasn’t particularly fond of Mrs. Bunsen; she wasn’t touchy-feely or charming. (Although I wonder now as an adult if I would have enjoyed her more. She had a fine singing voice.) But on one of the last days of class before we went off to high school, Mrs. Bunsen pulled me aside and said: “Carmen, when you get to high school take speech and drama as your elective. You have the same problem of all Latins: you speak too loud and too fast. Speech and Drama will help you.”

I took her advice and speech and debate came to dominate my high school and early college years. I wonder whether today a teacher would offer such advice to a student. I certainly think they would look for a more diplomatic way of broaching it. I remember thinking at the time that I had never known that about myself–but I took her advice to heart.

Going Forward

I was asked recently whether the Intelligence Community, and CIA specifically, would be able to return to normal in a Biden presidency.

My answer was NO!

You might think I was referring to the damage done to the CIA’s credibility and claim to authority in the past four years.

And there is that. But my real point was that the IC and the CIA should not WANT to go back to the way things were. The “way things were” wasn’t optimal then and has become less so in the last four years. 

What would an optimal Intelligence Community look like?

First, it would not default to secret information, usually expensive to gather and narrow in its scope, to answer the most important questions of our policymakers and about our world. The legislation that established the Director of National Intelligence asked the Intelligence Community to explore more seriously the potential that Open-Source information had for meeting our sense-making needs. Fifteen years later, the space still begs to be charted. The analytic product that is prepared for policymakers still relies on secrets collected by the intelligence-industrial complex. The policymakers usually have to be in secure facilities to access this intelligence and the professionals who prepare it aren’t able to work from home. These restrictions have proven problematic during the pandemic.

The reliance on secrets was the founding vector of the Intelligence Community. And it made sense then. We were the victors in a World War where we had gained essential advantage by uncovering other countries’ secrets. And then our fickle ally, the Soviet Union, became a dangerous opponent who controlled all essential information. The priority for national security was to discover what Moscow and later Beijing wanted to keep hidden. And no amount of reading of Pravda or the People’s Daily would tease out everything we needed to know. The Intelligence Community’s first directive had to be the collection and analysis of secrets.

But whether you think that should remain the first directive depends upon what you understand to be the “engine” that runs the world. Is it the actions of humans and national governments conspiring to gain advantage over others, plotting secret maneuvers and surprise attacks? Or is it social forces and planetary dynamics that evolve over time but can erupt when you least expect them? Like populism, technology shifts, thawing permafrost and yes…pandemics. (And there is likely to be a relationship between climate change and new diseases.) In the first scenario we desperately need to know what the leaders and elites are thinking—and they become our primary targets for clandestine collection. In the second, such leaders and elites either don’t exist or emerge with little warning. And the phenomena themselves defy most of our collection methods.

The answer is obvious. Both engines power human society. Some governments remain enigmatic, unpredictable, and dangerous. Our secret collection efforts must remain focused on them. But social forces and planetary dynamics are becoming more important as human complexity grows—certainly modern society produces more unintended consequences. Unfortunately, the historic methods of the Intelligence Community have not provided us with enough insight into these less elite-driven forces. Thinking back on the last ten years, events such as the Great Recession, the Arab Spring, Syrian refugee flows, Brexit, color revolutions, resurgent populism, and the coronavirus have all caught intelligence agencies and national governments less prepared than they would have wanted. And no amount of secret intelligence collection would have improved their prospects.

What would have improved their chances? Perhaps smarter and more committed use of Open-Source information. Taiwan’s ability to prepare early for the coronavirus is illustrative. On December 31, 2019 a doctor posted a warning on Taiwan’s version of Reddit that a nasty disease was exploding in China. Taiwan’s health officials saw the warning. On New Year’s Day, Taiwan began inspecting flights coming from Wuhan and a year later Taiwan leads the world in controlling the disease.

The Taiwan story tells us that we can use Open-Source information to help defend the nation, but its details also point to potential problems. Presumably few people mind if health officials monitor social media to help detect disease outbreaks (although there are some who do), but lots of people get kinda sore when they think of government intelligence agencies routinely monitoring Twitter and Reddit for useful information, even when that information is posted publicly for all to see.

Which connects to the second reason why the Intelligence Community can’t just go back to the way things were. Our information climate has changed, irrevocably, in ways that challenge the work of intelligence agencies and even the legitimacy of national governments. Individuals are able to sluice and direct information streams–however they want–to construct whatever narrative suits their biases and preferences. What results are hundreds of “Truth Networks” that self-perpetuate and resist authoritative rebuttals. Conclusions drawn by intelligence agencies are no longer the final or convincing word. Consider the recent finding of the Cybersecurity and Infrastructure Security Agency that the 2020 Presidential Election was the most secure in history. This finding proved irrelevant to the tens of millions of Americans who believe the opposite and can find hundreds of “facts” to prove their case. And transparency, rather than helping, actually ends up abetting the work of conspiracy manufacturers, who scan thousands of hours of videotaped vote processing to find moments of apparent skullduggery.

Let’s play out the national security implications of this information climate. Imagine that the Biden administration discerns the need to deploy US forces to some new crisis zone—or perhaps just to return to Afghanistan to ward off a resurgent terrorist threat. However legitimate the reason, a counter-narrative will immediately emerge, supported by slick videos featuring pseudo-experts. QAnon will drop some cryptic couplets. Critics will demand the release of intelligence justifying the military action. When the government proves unable to do so for security reasons, it loses credibility and flexibility, and eventually the ability to wage successful military operations.

The new administration somehow has to reconceptualize the way government, the public, and information interact. Yikes, that’s one tall order! The way out of our current predicament will be messy, featuring false starts and no doubt bonehead ideas. But there’s no going back. Normal has disappeared and something new must be created. And the Intelligence Community will need to be part of it.

I’m not at all certain how it happens or what it would entail. I think a first step is for intelligence agencies to file for divorce from over-classification. The DNI should audit key national security issues to determine which really require intensive secret collection. The Intelligence Community’s work on social forces and planetary dynamics should be easily accessible to policymakers and when appropriate to the general public—not once a year but on a continuous basis. As acknowledged earlier, transparency often can be manipulated by conspiracy-prone individuals, but there doesn’t appear to be any other way. The goal should be to create a new culture of sense-making collaboration among intelligence officers, policymakers, and yes the public. The public’s ability to contribute to the sensemaking process would be one way of rebuilding trust.

Given that it may be just too hard for existing agencies to embrace such a radical model, a new enterprise may have to be created for Open-Source sensemaking and collaboration. (It could build on the National Intelligence Council’s Global Trends project, for example, but with a much more dynamic and inclusive approach.)  Such an agency might begin with a narrow mandate—perhaps exploring just a few less controversial issues, if such exist. It could then grow as it gained experience and confidence with its sensemaking processes.

One of the traps that befall changemakers is the Athena complex. The birth myth of Athena, the Goddess of Wisdom, is that she emerged fully formed from the forehead of Zeus. And so new ideas are expected to emerge fully formed from the foreheads of change agents. But that’s not how difficult new things get started. They begin unevenly, nervously, saddled with objections and reservations. But the key thing is to take the first step, to move on with the new, because there is no going back.

For the Intelligence Community there can be only one direction: Forward.

American Exceptionalism

I look forward to a return to civil political discussions. I do not demonize those with whom I disagree politically. In fact my views encompass many parts of the political spectrum, and I suspect this is true for many of us. Also, political and social views are constantly emerging and evolving.

Today’s snapshot:

1. More often than not, government regulations (any government’s) do not entirely achieve their intended effects. Their unintended effects can be positive or negative. This is due to the world’s and society’s infinite complexity. Thus, I am skeptical of most grand efforts to “fix a problem”.

2. Immigration is a net plus for societies and nations. Most closed and static systems wither and die. Illegal immigration is unfortunate, but the individuals involved are human beings. If you can’t stop illegal immigration, at some point the only good option is to normalize the immigrants’ status. The world’s population is stabilizing and will begin to decline in the second half of this century. This will undermine societies and economies whose systems assume population growth. All countries need vibrant young populations to pay taxes and support programs such as welfare safety nets. By 2050 all countries will be actively competing for immigrants.

3. Abortion is horrible. Forcing women to be prisoners of their bodies is horrible. Given that I cannot achieve clarity on this issue I think it’s best left to personal choice.

4. Climate change is real and it is currently driven by humans. Given that regulatory approaches are often flawed, solutions should be emergent and market and locally-based. (See point 1) Thirty years ago I was debating pollution and energy with a friend in an English pub. He was advocating a large government program. I asserted that the first successful electric car would be created by a private company.

5. Humans form associations to solve problems or take advantage of opportunities that can only be done at scale. Governments and large corporations are examples of such associations. Epidemic diseases need to be tackled at scale.

6. Human ingenuity and innovation are everywhere. So is human fecklessness, selfishness, and evil. Large organizations and societies try to maximize the first category and suppress the second. It’s hard. But the key point is that both large organizations and governments are staffed by the same species–adorably imperfect humans.

7. Decent health care should be available and/or affordable for everyone. Unfortunately designing an equitable health care system is particularly complex. Some nationalized health systems–such as the NHS in the UK–suffer from chronic underfunding because once you nationalize health care it has to be funded through taxes and you know how people feel about taxes.

8. Altruism is advantageous from an evolutionary perspective. Communities with altruistic individuals do better than communities without. The Golden Rule is just about the only philosophy we need.

9. Ideologies are unhelpful. They narrow your options and simplify complex reality.

10. Skin color is irrelevant to human performance.

11. America is the world’s most multicultural nation. That is its only true exceptionalism. We will prove to be either a successful example or a tragic one.

Thinking in the Time of Coronavirus–Part 2

The previous post discussed three important thinking dynamics relevant to our analysis of coronavirus:

  • Actions reveal intentions and motivations;
  • Ideology often colors how we think; and
  • Worst-case scenarios are always considered unlikely. (I’d amend that now to say almost always.)

…but there are many more.

Since the crisis began in January, I’ve come across many commentators—scientists, non-scientists, experts in other fields, jacks-of-all-trades—speculating about coronavirus and attempting to predict its course. Many statements were similar to this one by Dr. Fauci on January 26: “It’s a very, very low risk to the US.” I could not comprehend at the time the evidentiary or logical basis for such statements. Did the individuals making these statements believe the Chinese Government was engaged in some weird overreaction or that the virus would only uniquely prosper in China? Did they assume that the hundreds of thousands of people who would come to the US in 2020 after visiting China (or in a few weeks’ time Italy) would all be free of the disease or that we would somehow detect them as they entered the country? Were they just making a linear projection from the minuscule number of cases then in the US?

One cognitive pathology at work here is that INDIVIDUALS, EVEN TRAINED SCIENTISTS, ARE REALLY BAD AT DRAWING APPROPRIATE CONCLUSIONS FROM AVAILABLE EVIDENCE. Because I worked as an analyst at CIA for 32 years, I am familiar with this phenomenon. Policymakers are always demanding judgments from analysts, and we often feel obliged to provide them even when the evidentiary basis is insufficient. At any moment regarding any situation, how accurately does the evidence available to us reflect reality? Today as I write this, how much do we really know about coronavirus: 50% of reality, 30%, 10%? The answer at this point is unknowable. Therefore, predictions concerning its future course are tenuous.

Two other realities about thinking are worth mentioning here. First, OUR ABILITY TO KNOW IS A FUNCTION OF OUR TOOLS FOR KNOWING. We can only know what our tools reveal to us. Breakthroughs, revolutions in thinking in so many fields have been the result of inventions/discoveries of new knowledge tools. In cosmology, for example, our understanding of the universe expanded when we learned to build great observatories and combined cameras with telescopes. The deployment of orbital platforms such as the Hubble have further revolutionized our knowledge.

Our understanding of coronavirus has been diminished not just by its novelty but also because China may not have revealed all it has learned about the disease. Another tool problem is the lack of comprehensive testing of populations. Some of my Texas friends have claimed that Texas must be doing a great job containing coronavirus (or that there really isn’t a threat) because of the relatively low rates of infections and deaths. But Texas, as of April 15, has one of the three lowest rates of testing in the country. We don’t really know what’s going on there. And we won’t comprehend critical attributes of the virus, such as fatality and contagion rates, until we have tested a large and random sample of our population. This inherently incomplete nature of our knowledge should make us more humble about our predictions and expectations concerning the course of the disease. For many questions, we still do not have sufficient information to make a firm determination and thus need to err on the side of caution and resilience.

But instead we have a tendency when confronted with limited information to succumb to THE STREETLIGHT EFFECT. The joke is that a policeman runs across an individual, usually described as inebriated, looking for car keys under a street lamp. When the policeman asks if this is where the keys were lost, the seeker answers “No, but this is the only place I can see.”

When we make confident predictions based on insufficient or flawed evidence, we are succumbing to the streetlight effect. One vivid example is how people jumped on the hydroxychloroquine bandwagon after just a couple of positive reports. At the start of the pandemic, many argued (and some still do) that covid-19 would be no worse than a bad seasonal flu. Those arguments were based on deaths up to that point (a few hundred or thousand) and I’m not exactly sure what else. There are so many flaws in that argument it’s hard to know where to begin. First, the number of flu deaths is a total for an entire year while the number of covid-19 deaths covered just a few weeks; we are assuming a lot about how the disease (and people…) will perform during the course of an entire year. Second, the argument assumed linear growth, which of course is not what happens during uncontrolled epidemics. Third, it implied that the Chinese stupidly and inexplicably closed down their economy because of the seasonal flu. (Actions reveal intentions and motivations.)

Another flaw in the argument that covid-19 is just another flu is captured by the aphorism: QUANTITY HAS A QUALITY ALL ITS OWN. Mistakenly attributed to Joseph Stalin, the observation appears to have become popularized instead by the US military-industrial complex. It attacks the logic behind linear projections—it’s just more of the same thing and therefore we can handle it. At some point, more of the same thing evolves into a different plant; we can pull out a few weeds by hand but not an entire yard-full. And quantity is not the only factor in play; pacing and tempo have significant impacts as well. One million cases of covid-19 during the course of a year may be manageable but half a million cases in 8 weeks not so much.
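The difference that pacing and tempo make can be sketched in a few lines of code. All the numbers here are hypothetical, chosen only to illustrate the gap between a linear extrapolation and uncontrolled doubling:

```python
# Hypothetical numbers for illustration only -- not actual covid-19 data.
start_deaths = 1_000        # an early-epidemic snapshot
weekly_increase = 1_000     # linear assumption: same increment every week
doubling_time_weeks = 1     # exponential assumption: uncontrolled spread

def linear_projection(weeks):
    """More of the same thing, week after week."""
    return start_deaths + weekly_increase * weeks

def exponential_projection(weeks):
    """The count doubles every doubling_time_weeks."""
    return start_deaths * 2 ** (weeks / doubling_time_weeks)

for weeks in (4, 8):
    print(weeks, linear_projection(weeks), int(exponential_projection(weeks)))
# After 8 weeks the linear projection reaches 9,000;
# the exponential path reaches 256,000.
```

Same starting point, wildly different destinations: the "it's just more of the same" argument quietly assumes the linear curve.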

When I’m asked to recommend a book for aspiring intelligence analysts, I always mention Daniel Kahneman’s Thinking, Fast and Slow. One of his famous findings is that humans are bad at comprehending exponential numbers. (If you start with a penny and double it every day, at the end of the month you will have more than $5 million; actually if the month has 31 days you end up with more than $10 million.)
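The penny arithmetic is easy to verify:

```python
# One penny on day 1, doubled every day thereafter.
# Integer cents avoid floating-point rounding along the way.
cents = 1
for day in range(2, 31):   # days 2 through 30: 29 doublings
    cents *= 2
day30 = cents / 100        # dollars at the end of a 30-day month
day31 = cents * 2 / 100    # one more doubling for a 31-day month
print(day30, day31)        # 5368709.12 10737418.24
```

Thirty days of doubling is 2^29 cents, which is why our linear intuitions (a penny, two pennies, four…) fail so badly by month’s end.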

I like to extend that idea by observing that HUMANS FIND IT HARD TO DEAL WITH EXPONENTIAL CAUSALITY. Exponential causality is one of the characteristics of complex systems. Any one event can have a cascade of consequences in unpredictable directions and time frames. Feedback can even travel backwards in time in the sense that a development today can reveal the unappreciated causal importance of some past event. Because exponential causality confounds humans, we like to pretend it doesn’t exist; a popular way to do that these days is by subscribing to conspiracy theories. So many factors contribute to today’s reality that there’s always a stray thread or two that can be pulled to create a conspiracy-based explanation. If you yearn for a simpler, linear world, then you’re happy to accept that Bill Gates and 5G technology have combined to cause the coronavirus. It’s a particularly dangerous cognitive trap.

One of my first bosses at CIA, John, impressed me with a story from his early days as an analyst. He was following a particular insurgent group in southeast Asia in the 1960s, and had calculated that because of supply disruptions the group would literally use up its ammunition by a date certain. John’s boss advised him to rethink his analysis because YOU NEVER RUN OUT OF BULLETS. In other words, linear predictions are always flawed because 1) our knowledge of any situation is incomplete, 2) we never know the exact dimensions of our ignorance, and 3) shit happens.

Which brings us to the topic of coronavirus models. I’m sure statisticians will beat me up for this but I often think of models as compilations of hundreds of linear projections. The modeler tries to include every possible variable in her model and stipulates the tens of thousands of relationships among the variables—which is like really hard. As the model runs, every possible combination of variables is instantiated. This can be helpful to policymakers by representing in a more digestible fashion a complex set of possibilities. But models always simplify the complex—they make more linear that which is random. In my experience, models are particularly bad at accounting for the variations and peculiarities of human psychology—one of the most important factors determining the course of covid-19. Indeed, the failings of models will luckily keep human intelligence analysts employed for years to come.
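To show what "stipulating relationships among variables" means in practice, here is a toy discrete-time SIR epidemic model. It is my own minimal sketch with made-up parameters, not any agency's or modeler's actual model; real models add hundreds more variables, which is exactly where the simplifications creep in:

```python
def sir(population, infected0, beta, gamma, days):
    """Toy discrete-time SIR model.
    beta: infections caused per infectious person per day (at full susceptibility);
    gamma: fraction of the infected who recover each day.
    Returns the number currently infected on each day."""
    s, i, r = population - infected0, float(infected0), 0.0
    infected_by_day = []
    for _ in range(days):
        new_infections = beta * s * i / population   # one stipulated relationship
        new_recoveries = gamma * i                   # another
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        infected_by_day.append(i)
    return infected_by_day

# Lowering beta (say, through distancing) flattens and delays the peak --
# that is the kind of lever a model hands to policymakers.
baseline = sir(1_000_000, 10, beta=0.4, gamma=0.1, days=200)
distanced = sir(1_000_000, 10, beta=0.2, gamma=0.1, days=200)
```

Even this toy version makes the point about simplification: every person is identical, mixing is uniform, and human psychology never enters the equations.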

Another useful aspect of models is that they bring into focus the most dangerous, possible outcomes and identify the levers policymakers and individuals can pull to avoid them. Which brings us to the PARADOX OF WARNING. The world has moved smartly to limit the worst consequences although the ones we’re left with are still pretty dire; it turns out the Chinese were not crazy to lock down entire cities to prevent further spread of the disease. But as we succeed in lowering the final number of deaths and infections, we start hearing from critics who claim the crisis was exaggerated from the start. Aaaargh! The only point of warning is to avoid the bad outcomes. No one should be rooting for maximum coronavirus. Effective warners always want to be wrong.

The coronavirus pandemic illustrates that good thinking is more than an academic exercise. It can be a matter of life and death. I’ve seen too many friends on social media using poor arguments to justify bad decisions. Please everyone, just put on your thinking caps.


Thinking in the Time of Coronavirus–Part 1

I’ve wanted to comment on all the examples of bad thinking and cognitive traps I’ve seen regarding coronavirus for a while now, since early February for sure, but I’ve hesitated to put them down in writing because there is already too much content drawing spurious links to this horrible pandemic. But as we see signs that the infection curves are beginning to flatten in some countries (although certainly not all), it strikes me that good thinking will be just as critical as we work to recover our economies and manage the continuing threat of disease. So what follows is a compilation of some of the best and worst thinking practices revealed so far this year. (There are many, so expect at least two posts.)

I was convinced the reports of a new, SARS-like disease in China were significant by mid-January. On 16 January I spoke at a conference that had a sizable contingent of attendees from Seattle and I remember fretting that Seattle would likely be one of the first American cities to get hit by coronavirus given the Chinese population on the West Coast and the travel patterns associated with Lunar New Year. I started tweeting and posting on Facebook about the disease in the second half of January and by late February it dominated my posts. Friends have asked me why I was so sure the disease would pose such a threat and I answered with one of my favorite heuristics from my CIA years: ACTIONS REVEAL INTENTIONS AND MOTIVATIONS.

When you’re trying to figure out a government or actor’s intentions, it’s always best to start with their actions. Pay attention to what they are doing. Given China’s obsession with economic growth and how the Communist Party’s legitimacy rested on delivering prosperity, I could not imagine why China would have closed down one of its most important cities out of an “abundance of caution”—a good name for a new rock band. The coronavirus had scared the shit out of the Chinese Government and the most reasonable explanation was that it was contagious and dangerous.

When we began to see reports of massive disinfection campaigns and attacks on Chinese doctors who issued first warnings, I began to wonder what Beijing was trying to hide, if anything. Of course there was immediate speculation that coronavirus was some type of bioweapon; I’m no expert on this issue so I have to accept the judgment that the virus is not man-made. But the possibility that coronavirus leaked because of an industrial mishap or accidental discharge remains credible to me. Recent reports that the Chinese Government is controlling research into the origins of coronavirus just further pique my suspicions. Actions reveal intentions and motivations.

When I actually shared this view on social media a few weeks ago, several friends criticized me for going there. Why, I wondered. It wasn’t like the Chinese Government was known for its transparency and complete honesty. Why couldn’t these ideas be entertained? My answer in part is that IDEOLOGY OFTEN COLORS HOW WE THINK. There are so many examples of this dynamic spanning the ideological spectrum.

  • Advocates of globalization loath to admit that China might have deceived other countries.
  • Supporters of the international system reluctant to criticize the World Health Organization.
  • Proponents of American exceptionalism insisting, against a lot of evidence, that the US has had the best response to the coronavirus.
  • Backers of the President condemning any suggestion that the US could have acted more quickly to contain the disease.
  • Critics of the President attacking his decision to limit travel from China in late January, although it was clearly the right thing to do. The more valid criticism is that it didn’t go far enough and there were too many loopholes.

And countless other examples we could mention. Because this is such a terrifying disease, it’s natural for people to fall back upon their values and ideological beliefs to interpret events. It’s natural but not helpful. In fact, it’s dangerous. Our beliefs lead us to ignore facts that don’t fit our ideology and overamplify developments that do. Unfortunately this thinking weakness will haunt our recovery efforts, particularly in the US where our politics have become exceptionally poisonous.

One important caveat: our ideology and values will play an unavoidable role going forward as we think about levels of acceptable risk. To my knowledge there is no objective way to measure the value of a human life. In the months to come we will be trading hundreds if not thousands of lives for decimals of economic growth. Your values are what will determine how you solve that equation. Less-polarized societies will find it easier to agree on the solution. The math will be difficult for the US. (And let me add that the very idea that this can be thought of as a math problem is anathema to many.)

I spoke at a conference in D.C. on 6 February about cognitive traps and used the emerging disease for my examples. The cognitive bias most evident then was that WORST-CASE SCENARIOS ARE ALWAYS CONSIDERED UNLIKELY. In early February few people expected the disease to ravage Western Europe and the US; any such thinking was painted as a worst-case scenario. Indeed, the first deaths did not occur in Italy until the last week of February. And yet it was reasonable to assume, I thought, that the disease could easily flare up in any country with connections to China, which was basically any place on the planet.

If you’re an analyst responsible for warning, remember that when you paint the most dangerous scenarios as worst-case, you make it easier for the decision-maker to dismiss them. And that’s what appears to have happened in the US government. Impact and probability need to be thought of as independent variables. Some category of “worst-case” scenario happens every year; the only “unlikely” aspect of “worst-case” scenarios is the ability to predict their timing. We are unable to know with precision when a dangerous development will occur, but we are sure to experience several in our lifetimes.

Humans have been flourishing on this planet for tens of thousands of years, solving many problems (and, of course, creating others). We can assume that almost all the easy problems have been solved and many of the hard ones as well. Going forward, most of our problems will be difficult to handle and few, if any, will have clear-cut solutions. Only good thinking will help.