Category Archives: Intelligence

In Search of Insight

When I was a manager of analysis at CIA, I would hear our customers, often senior policymakers, demand more INSIGHT in our analysis. And I would go back and tell the analysts they needed to produce more insight. Until one day an analyst asked me “Well, what is INSIGHT?” And I thought to myself, that’s a good question; a “good question” is ipso facto a question for which you do not have a ready answer.

I put on my tattered thinking cap and worked to come up with some type of answer—a “formula for INSIGHT” that was reproducible and generated a set of actions that analysts could actually perform. I asked many colleagues to describe how they thought. This is another good question. Almost nobody could describe their own thinking process.

“I read and then I write.”

What happens in between?

“I shake my head until some ideas fall out,” one analyst offered.

Eventually I came up with a formula—the steps of analysis—that I thought enough of to share with others. Like everything I do, it’s imperfect but hopefully it offers a starting point.

  1. COMPREHENSION. When we’re thinking about a problem, first we try to comprehend it. We assemble relevant information and consume it however we prefer.

  2. CATEGORIZATION. Once we’ve achieved some comfort in our level of understanding, the next step is categorization. We sort what we know into various categories and patterns. (Actually, this starts to happen organically during the Comprehension stage. This is unavoidable and can be the place where cognitive biases take root. Some information you consume early on colors how you think about every subsequent report, and you fall victim to the anchoring bias. I’ve always wanted to run an experiment where the same 100 pieces of information were presented to analytic teams, but in different orders. Would their analysis differ? My bet is yes!)

    The categories can be as simple as Old Information, New Information, but they eventually evolve into a complex taxonomy that forms the backbone of your Analytic Lines. These Analytic Lines are powerful beings and resist change. This is usually very bad.

  3. PRIMARY INSIGHT. INSIGHT occurs when you see things you’ve never seen before or in ways that are new to you. When an individual takes an item of information and argues that it belongs in a different category, you have produced a moment of INSIGHT. Recategorization of information is a way of generating INSIGHT. Is President Xi’s third term in China an indicator of his strength or of China’s weakness? The conventional wisdom probably is to categorize the event as the former but making a credible argument that it is the latter generates INSIGHT. The INSIGHT argument doesn’t have to be convincing; just provoking others to take a second look is useful.

  4. PROFOUND INSIGHT. A harder but more powerful way to generate INSIGHT is to renovate and/or rebuild your categorization schema. Suppose analysts realize that a significant amount of information remains uncategorized—it doesn’t easily fit the current taxonomy. Do you ignore it, or do you begin to experiment with new categories that might better explain the information? At some point you can experiment with rethinking your categorization scheme—your Analytic Line—from scratch. To return to the China example, how best should we think of it—as an emerging Superpower, as a declining power, or as a country destined for the middle-income trap? Each of these options can generate significantly different categorization schemes. (When your Analytic Line is long in the tooth, lots of information will no longer easily fit your existing categories. This is a “sell signal” for how you currently think about your problem, but not enough analysts recognize it as such.)

Analytic teams need to be hawk-like in policing their categorization schemes because they often sneakily embed themselves in the back-office processes of the organization. Take, for example, the reading profiles of an analytic team—the algorithms that determine which information flows into their inboxes. Ask your analysts how often these reading profiles are updated. You will not be happy with their answers.

What inspired me to natter on about analysis and insight on this beautiful fall day? Reading Adrian Wolfberg’s excellent monograph In Pursuit of Insight: The Everyday Work of Intelligence Analysts Who Solve Real World Novel Problems. It’s not a quick read but luckily there’s a shorter version available here. Based on extensive interviews with analysts of varying experience, Wolfberg seeks to unpack how insight actually happens from a cognitive, neurological perspective. It tackles the all-important step that my all-too-neat process completely ignores: how does the idea for new categories enter your brain? What leads to its emergence?

Wolfberg writes that the insight process begins with a trigger phase, “an initiating event that, seemingly by chance, brings the analyst’s attention to a particular problem to address; alternatively, after an analyst has been working on a given problem, a random event contributes to focusing their attention more intently on the problem. Entering into and navigating through the trigger phase takes cognitive and emotional courage on the analyst’s part.”

After the trigger phase, Wolfberg identifies emergence as the next step. Two activities promote the emergence of insight: internalized tensions and priming. Quoting from the shorter paper:

Internalized Tensions: As analysts start working on a novel problem, they become aware of inconsistencies that can be cognition-based (i.e., inconsistencies between pairs of thoughts or ideas) or emotion-based (i.e., inconsistencies between an analyst’s action and interpretation of others’ reactions). Tensions induced by these inconsistencies can originate within the individual (i.e., self-initiated) or in the individual’s social environment (i.e., related to organizational structure and the behavior of others). An analyst who concludes that exploring a diversity of ways to represent a problem would lead to the most accurate assessment, while others judge that a standardized process would be best, is an example of cognition-based tension at the individual level. An analyst who presents a unique methodology in a detailed, transparent way to address skeptical concerns, while worried this could lead to being discredited in a production system that values standard product lines, is an example of emotion-based tension at the social level.

Priming: Analysts draw upon memories of past experiences unrelated to their present novel problem in order to make progress toward overcoming tensions and reaching insight. Priming sources also occur across the intersection of the emotion-cognition and individual-social dimensions. In an example of cognition-based priming at the individual level, an analyst who studied in graduate school how physical factors in the environment could trigger social or political outcomes applies that cause-and-effect knowledge to a national security novel problem. In an example of emotion-based priming at the social level, an analyst who had lived in a foreign country appreciates that even in countries where the same language is spoken, cultures can be very different.

What’s clear is that insight emerges from a rich casserole of experiences, emotions, and feelings in an analyst’s mind. Our intuition, what Daniel Kahneman calls our System 1, is the primary custodian of these insight-generating prompts. Wolfberg notes that “although these past experiences were unrelated to the problem at hand, an aspect of these past experiences brought forth a combination of emotional and cognitive meaning that informed how the analysts thought…” Every analyst interviewed by Wolfberg reflected on past experiences unrelated to the problem.

Clarity and INSIGHT are the most sought-after products from intelligence analysts. Clarity is getting the facts straight, sorting out complicated situations. Intelligence organizations usually do clarity well, but not always (think Iraq WMD). INSIGHT requires going beyond the facts to consider hidden causes and motivations, anticipate unexpected changes in trajectory, and appreciate the nonlinearity of events. The work processes of most intelligence teams are suited more to producing clarity than generating INSIGHT. Analysts often describe having to buck the established way of doing things to explore their emerging INSIGHT.

As Wolfberg notes, leaders of intelligence organizations need to appreciate the conditions necessary for the generation of INSIGHT and work to allow the time and space necessary for its emergence. Many of the work processes of the Intelligence Community emphasize order and consistency over thoughtfulness and contemplation. Working 8+ hours a day in a cubicle is also not ideal. As the science writer Annie Murphy Paul notes in her excellent book The Extended Mind, human brains evolved to think best when we’re physically active. My favorite “structured analytic technique” has always been to take a walk, preferably with a favorite thinking partner.


Wolfberg’s study has many other insights about INSIGHT. It’s a rewarding read for anyone wanting to make intelligence analysis better.

In Normal Times…

I’ve been thinking about how White Houses in the past would have prepared for the events of last week. I know that’s a stretch given that it’s hard to imagine any other administration but Trump’s contesting an election past all legal and reasonable recourse and/or encouraging a demonstration against Congress (and a Vice-President) performing their constitutional duties. Nevertheless, if you compare what might have happened in normal times with what actually appears to have happened last week, you get a sense of a dangerously dysfunctional administration.

During my time in government, the FBI Director had at least a weekly slot in the POTUS morning security briefings, during which he would cover internal security issues. As I remember it, the Director of National Intelligence and the POTUS briefer would also attend, although I can imagine a topic so sensitive that the room would be cleared.

The FBI Director arguably should have been aware of the reports of criminal plotting by some planning to demonstrate at the Capitol on January 6. We know that at least one FBI officer had warned of the possibility of violence and that the warning was shared with other law enforcement agencies. We also know that law enforcement officials had advised known troublemakers not to go to the DC event and that they had enough information in advance to arrest a Proud Boys leader as he arrived in the District. I haven’t seen any reporting, however, on whether or not the FBI Director was also directly told of this assessment. (This is a common problem/failing of warning intelligence; it isn’t always shared with everyone who needs to know. And even when it is shared appropriately, many people don’t take it seriously.)

Assuming the FBI Director was aware of the reporting, it would have been his duty to inform the White House, if not the President, about the possibility of criminal activity at the Stop the Steal rally. The weekly briefing, if it still occurs, would have been the appropriate setting for the FBI Director to bring up the issue, although I doubt it remains a regular event. It would have been a sharp “speak truth” moment but a necessary one for the President’s own safety. Informed by the FBI briefing, POTUS and/or his advisers could have chosen to cancel his speech or, more likely, to explicitly warn the crowd not to act unlawfully.

So that’s how the process would have worked in a more normal administration. My guess would be that this process has decayed or been completely abandoned. I’ve always been opposed to process for its own sake, but I have to admit that this scenario highlights the importance of having a reliable, rigorous approach to crucial issues, such as national security.

Beyond highlighting the importance of a consistent approach to national security, considering how the scenario would have unfolded in a more normal administration reveals several other questions that need asking.

The first is how aware the FBI Director was of the threats his officers were picking up on social networks prior to January 6. If he wasn’t aware, then he needs to reexamine how information flows within the Bureau. If he was aware, did he forward the warning to other parts of the government? Did he, for example, inform the Secret Service, which is responsible for the security of the President and Vice-President? (One would hope so.) Might that be the reason the President did not accompany the marchers to the Capitol, after saying he would? Would a desire to avoid having to answer such questions explain the FBI Director’s lack of public comment to date?

But if the President and/or White House were in fact warned about the potential for violence and did not alter their plans, then their complicity appears clear, even if they were not involved in the planning beforehand. If they weren’t informed about the threats, then they are probably to blame for creating an environment where government officials don’t want to deliver bad news or see no purpose in speaking truth to power. A dangerously dysfunctional administration.

Political instability in US likely to continue even after Trump’s departure

(What follows is a mock analysis piece written from the perspective of an intelligence officer in a more or less neutral country, such as Switzerland or Norway. They’ve been asked the question by the policymaker: Is it over? I’ve written it in the style of intelligence analysis I was trained in and propagated for several decades: Make your main point in short paragraphs and then provide supporting data or amplification in bullets. The idea being that a reader should be able to get your main points even if they only had time to glance at the piece.)

Just a few days after the violent occupation of the US Capitol, American politicians have returned to the partisan squabbling that fails to address the country’s widening social, political, racial and economic fault lines.

  • Twitter’s permanent ban of Donald Trump was necessary given the possibility he could again move to incite supporters, but Republicans have used it to pivot to a more popular topic: defense of “free speech.”
  • Democratic Speaker Pelosi’s move to impeach the President again, intended to demonstrate that Trump’s reckless, if not premeditated, behavior demands consequences, nevertheless serves to divert attention from the declining legitimacy of the American democratic system.

Public opinion polls indicate the overwhelming majority of Americans disapproved of the attack, but just under 10% expressed support for a violent effort to overturn democratic elections. Analysis of posts on social media platforms reveals the assault on the Capitol had been planned for weeks; recent monitoring suggests that more protests are likely in the run-up to and during Inauguration Day on January 20.

  • In addition to Inauguration Day, protesters are declaring January 17 as a day of “armed marches” on all 50 US State Capitols and again in Washington, D.C.
  • The recent purge by Twitter and other social media companies of hundreds of thousands of extremists and QANON supporters from their platforms is intended to disrupt extremists’ planning efforts. However, extremists likely will migrate to fringe sites and closed messaging applications to communicate, platforms that are harder for authorities to access and monitor.

President-elect Biden believes he can calm the political turmoil and restitch the union, but he faces significant obstacles.

  • Polling from December indicated that 75% of Republicans rejected the election results. This is a historically high number; in 2016 most Democrats (65%) accepted the legitimacy of Trump’s victory. The skepticism of the Republican base will embolden GOP legislators to obstruct Biden’s agenda.
  • Ending the COVID19 pandemic is Biden’s highest priority, but efforts to do so, such as encouraging mask mandates and restricting social gatherings, will only further antagonize extremist groups, many of whom have staked their “freedom” agendas on opposing COVID-19-related restrictions.

Going Forward

I was asked recently whether the Intelligence Community, and CIA specifically, would be able to return to normal in a Biden presidency.

My answer was NO!

You might think that I was referring to the damage done to the CIA’s credibility and claim to authority over the past four years.

And there is that. But my real point was that the IC and the CIA should not WANT to go back to the way things were. The “way things were” wasn’t optimal then and has become less so in the last four years. 

What would an optimal Intelligence Community look like?

First, it would not default to secret information, usually expensive to gather and narrow in its scope, to answer the most important questions of our policymakers and about our world. The legislation that established the Director of National Intelligence asked the Intelligence Community to explore more seriously the potential that Open-Source information had for meeting our sense-making needs. Fifteen years later, the space still begs to be charted. The analytic product that is prepared for policymakers still relies on secrets collected by the intelligence-industrial complex. The policymakers usually have to be in secure facilities to access this intelligence and the professionals who prepare it aren’t able to work from home. These restrictions have proven problematic during the pandemic.

The reliance on secrets was the founding vector of the Intelligence Community. And it made sense then. We were the victors in a World War where we had gained essential advantage by uncovering other countries’ secrets. And then our fickle ally, the Soviet Union, became a dangerous opponent who controlled all essential information. The priority for national security was to discover what Moscow and later Beijing wanted to keep hidden. And no amount of reading of Pravda or the People’s Daily would tease out everything we needed to know. The Intelligence Community’s first directive had to be the collection and analysis of secrets.

But whether you think that should remain the first directive depends upon what you understand to be the “engine” that runs the world. Is it the actions of humans and national governments conspiring to gain advantage over others, plotting secret maneuvers and surprise attacks? Or is it social forces and planetary dynamics that evolve over time but can erupt when you least expect them? Like populism, technology shifts, thawing permafrost and yes…pandemics. (And there is likely to be a relationship between climate change and new diseases.) In the first scenario we desperately need to know what the leaders and elites are thinking—and they become our primary targets for clandestine collection. In the second, such leaders and elites either don’t exist or emerge with little warning. And the phenomena themselves defy most of our collection methods.

The answer is obvious. Both engines power human society. Some governments remain enigmatic, unpredictable, and dangerous. Our secret collection efforts must remain focused on them.  But social forces and planetary dynamics are becoming more important as human complexity grows—certainly modern society produces more unintended consequences. Unfortunately, the historic methods of the Intelligence Community have not provided us with enough insight on these less elite-driven forces. Thinking back on the last ten years, events such as the Great Recession, the Arab Spring, Syrian refugee flows, Brexit, colored revolutions, resurgent populism, and the coronavirus have all caught intelligence agencies and national governments less prepared than they would have wanted. And no amount of secret intelligence collection would have improved their prospects.

What would have improved their chances? Perhaps smarter and more committed use of Open-Source information. Taiwan’s ability to prepare early for the coronavirus is illustrative. On December 31, 2019, a doctor posted a warning on Taiwan’s version of Reddit that a nasty disease was exploding in China. Taiwan’s health officials saw the warning. On New Year’s Day, Taiwan began inspecting flights coming from Wuhan, and a year later Taiwan leads the world in controlling the disease.

The Taiwan story tells us that we can use Open-Source information to help defend the nation, but its details also point to potential problems. Presumably few people mind if health officials monitor social media to help detect disease outbreaks (although there are some who do), but lots of people get kinda sore when they think of government intelligence agencies routinely monitoring Twitter and Reddit for useful information, even when that information is posted publicly for all to see.

Which connects to the second reason why the Intelligence Community can’t just go back to the way things were. Our information climate has changed, irrevocably, in ways that challenge the work of intelligence agencies and even the legitimacy of national governments. Individuals are able to sluice and direct information streams–however they want–to construct whatever narrative suits their biases and preferences. What results are hundreds of “Truth Networks” that self-perpetuate and resist authoritative rebuttals. Conclusions drawn by intelligence agencies are no longer the final or convincing word. Consider the recent finding of the Cybersecurity and Infrastructure Security Agency that the 2020 Presidential Election was the most secure in history. This finding proved irrelevant to the tens of millions of Americans who believe the opposite and can find hundreds of “facts” to prove their case. And transparency, rather than helping, actually ends up abetting the work of conspiracy manufacturers, who scan thousands of hours of videotaped vote processing to find moments of apparent skullduggery.

Let’s play out the national security implications of this information climate. Imagine that the Biden administration discerns the need to deploy US forces to some new crisis zone—or perhaps just to return to Afghanistan to ward off a resurgent terrorist threat. However legitimate the reason, a counter-narrative will immediately emerge, supported by slick videos featuring pseudo-experts. QAnon will drop some cryptic couplets. Critics will demand the release of intelligence justifying the military action. When the government proves unable to do so for security reasons, it loses credibility and flexibility, and eventually the ability to wage successful military operations.

The new administration somehow has to reconceptualize the way government, the public, and information interact. Yikes, that’s one tall order! The way out of our current predicament will be messy, featuring false starts and no doubt bonehead ideas. But there’s no going back. Normal has disappeared and something new must be created. And the Intelligence Community will need to be part of it.

I’m not at all certain how it happens or what it would entail. I think a first step is for intelligence agencies to file for divorce from over-classification. The DNI should audit key national security issues to determine which really require intensive secret collection. The Intelligence Community’s work on social forces and planetary dynamics should be easily accessible to policymakers and when appropriate to the general public—not once a year but on a continuous basis. As acknowledged earlier, transparency often can be manipulated by conspiracy-prone individuals, but there doesn’t appear to be any other way. The goal should be to create a new culture of sense-making collaboration among intelligence officers, policymakers, and yes the public. The public’s ability to contribute to the sensemaking process would be one way of rebuilding trust.

Given that it may be just too hard for existing agencies to embrace such a radical model, a new enterprise may have to be created for Open-Source sensemaking and collaboration. (It could build on the National Intelligence Council’s Global Trends project, for example, but with a much more dynamic and inclusive approach.)  Such an agency might begin with a narrow mandate—perhaps exploring just a few less controversial issues, if such exist. It could then grow as it gained experience and confidence with its sensemaking processes.

One of the traps that befall changemakers is the Athena complex. The birth myth of Athena, the Goddess of Wisdom, is that she emerged fully formed from the forehead of Zeus. And so new ideas are expected to emerge fully formed from the foreheads of change agents. But that’s not how difficult new things get started. They begin unevenly, nervously, saddled with objections and reservations. But the key thing is to take the first step, to move on with the new, because there is no going back.

For the Intelligence Community there can be only one direction: Forward.

Thinking in the Time of Coronavirus–Part 2

The previous post discussed three important thinking dynamics relevant to our analysis of coronavirus:

  • Actions reveal intentions and motivations;
  • Ideology often colors how we think; and
  • Worst case scenarios are always considered unlikely. (I’d amend that now to say almost always.)

…but there are many more.

Since the crisis began in January, I’ve come across many commentators—scientists, non-scientists, experts in other fields, jacks-of-all-trades—speculating about coronavirus and attempting to predict its course. Many statements were similar to this one by Dr. Fauci on January 26: “It’s a very, very low risk to the US.” I could not comprehend at the time the evidentiary or logical basis for such statements. Did the individuals making them believe the Chinese Government was engaged in some weird overreaction, or that the virus would prosper only in China? Did they assume that the hundreds of thousands of people who would come to the US in 2020 after visiting China (or, in a few weeks’ time, Italy) would all be free of the disease, or that we would somehow detect them as they entered the country? Were they just making a linear projection from the minuscule number of cases then in the US?

One cognitive pathology at work here is that INDIVIDUALS, EVEN TRAINED SCIENTISTS, ARE REALLY BAD AT DRAWING APPROPRIATE CONCLUSIONS FROM AVAILABLE EVIDENCE. Because I worked as an analyst at CIA for 32 years, I am familiar with this phenomenon. Policymakers are always demanding judgments from analysts, and we often feel obliged to provide them even when the evidentiary basis is insufficient. At any moment regarding any situation, how accurately does the evidence available to us reflect reality? Today as I write this, how much do we really know about coronavirus: 50% of reality, 30%, 10%? The answer at this point is unknowable. Therefore, predictions concerning its future course are tenuous.

Two other realities about thinking are worth mentioning here. First, OUR ABILITY TO KNOW IS A FUNCTION OF OUR TOOLS FOR KNOWING. We can only know what our tools reveal to us. Breakthroughs, revolutions in thinking in so many fields have been the result of inventions/discoveries of new knowledge tools. In cosmology, for example, our understanding of the universe expanded when we learned to build great observatories and combined cameras with telescopes. The deployment of orbital platforms such as the Hubble have further revolutionized our knowledge.

Our understanding of coronavirus has been diminished not just by its novelty but also because China may not have revealed all it has learned about the disease. Another tool problem is the lack of comprehensive testing of populations. Some of my Texas friends have claimed that Texas must be doing a great job containing coronavirus (or that there really isn’t a threat) because of its relatively low rates of infections and deaths. But Texas, as of April 15, has one of the three lowest rates of testing in the country. We don’t really know what’s going on there. And we won’t comprehend critical attributes of the virus, such as fatality and contagion rates, until we have tested a large and random sample of our population. This inherently incomplete nature of our knowledge should make us more humble about our predictions and expectations concerning the course of the disease. For many questions, we still do not have sufficient information to make a firm determination and thus need to err on the side of caution and resilience.

But instead we have a tendency when confronted with limited information to succumb to THE STREETLIGHT EFFECT. The joke is that a policeman runs across an individual, usually described as inebriated, looking for car keys under a street lamp. When the policeman asks if this is where the keys were lost, the seeker answers “No, but this is the only place I can see.”

When we make confident predictions based on insufficient or flawed evidence, we are succumbing to the streetlight effect. One vivid example is how people jumped on the hydroxychloroquine bandwagon after just a couple of positive reports. At the start of the pandemic, many argued (and some still do) that covid-19 would be no worse than a bad seasonal flu. Those arguments were based on deaths up to that point (a few hundred or thousand) and I’m not exactly sure what else. There are so many flaws in that argument it’s hard to know where to begin. First, the number of flu deaths is a total for an entire year, while the number of covid-19 deaths covered just a few weeks; the comparison assumes a lot about how the disease (and people…) will perform over the course of an entire year. Second, the argument assumed linear growth, which of course is not what happens during uncontrolled epidemics. Third, it implied that the Chinese stupidly and inexplicably closed down their economy because of the seasonal flu. (Actions reveal intentions and motivations.)
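
The flaw in the linear projection is easy to see with a toy calculation. A minimal sketch, with an invented starting number and doubling time (these are illustrative figures, not real covid-19 data):

```python
# Illustrative only: compare a linear projection ("more of the same")
# with uncontrolled exponential growth (deaths doubling each week).
start = 500                                    # hypothetical weekly deaths
weeks = 12

linear = [start for _ in range(weeks)]         # the same 500 every week
doubling = [start * 2 ** w for w in range(weeks)]

print(sum(linear))    # 6000 total over 12 weeks
print(sum(doubling))  # 2047500 total over 12 weeks
```

The two projections agree in week one and diverge by three orders of magnitude a few months later, which is why extrapolating a few weeks of early deaths to a full year tells you almost nothing.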

Another flaw in the argument that covid-19 is just another flu is captured by the aphorism: QUANTITY HAS A QUALITY ALL ITS OWN. Mistakenly attributed to Joseph Stalin, the observation appears to have become popularized instead by the US military-industrial complex. It attacks the logic behind linear projections—it’s just more of the same thing and therefore we can handle it. At some point, more of the same thing evolves into a different plant; we can pull out a few weeds by hand but not an entire yard-full. And quantity is not the only factor in play; pacing and tempo have significant impacts as well. One million cases of covid-19 during the course of a year may be manageable but half a million cases in 8 weeks not so much.

When I’m asked to recommend a book for aspiring intelligence analysts, I always mention Daniel Kahneman’s Thinking, Fast and Slow. One of his famous findings is that humans are bad at comprehending exponential numbers. (If you start with a penny and double it every day, at the end of a 30-day month you will have more than $5 million; if the month has 31 days, you end up with more than $10 million.)
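
The penny arithmetic is easy to verify; a quick sketch:

```python
# Start with a penny and double it every day:
# after n days you hold 0.01 * 2**(n - 1) dollars.
def penny_total(days: int) -> float:
    return 0.01 * 2 ** (days - 1)

print(f"{penny_total(30):,.2f}")  # 5,368,709.12
print(f"{penny_total(31):,.2f}")  # 10,737,418.24
```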

I like to extend that idea by observing that HUMANS FIND IT HARD TO DEAL WITH EXPONENTIAL CAUSALITY. Exponential causality is one of the characteristics of complex systems. Any one event can have a cascade of consequences in unpredictable directions and time frames. Feedback can even travel backwards in time in the sense that a development today can reveal the unappreciated causal importance of some past event. Because exponential causality confounds humans, we like to pretend it doesn’t exist; a popular way to do that these days is by subscribing to conspiracy theories. So many factors contribute to today’s reality that there’s always a stray thread or two that can be pulled to create a conspiracy-based explanation. If you yearn for a simpler, linear world, then you’re happy to accept that Bill Gates and 5G technology have combined to cause the coronavirus. It’s a particularly dangerous cognitive trap.

One of my first bosses at CIA, John, impressed me with a story from his early days as an analyst. He was following a particular insurgent group in southeast Asia in the 1960s, and had calculated that because of supply disruptions the group would literally use up its ammunition by a date certain. John’s boss advised him to rethink his analysis because YOU NEVER RUN OUT OF BULLETS. In other words, linear predictions are always flawed because (1) our knowledge of any situation is incomplete; (2) we never know the exact dimensions of our ignorance; and (3) shit happens.


Which brings us to the topic of coronavirus models. I’m sure statisticians will beat me up for this, but I often think of models as compilations of hundreds of linear projections. The modeler tries to include every possible variable in her model and stipulates the tens of thousands of relationships among the variables—which is, like, really hard. As the model runs, every possible combination of variables is instantiated. This can be helpful to policymakers by representing a complex set of possibilities in a more digestible fashion. But models always simplify the complex—they make more linear that which is random. In my experience, models are particularly bad at accounting for the variations and peculiarities of human psychology—one of the most important factors determining the course of covid-19. Indeed, the failings of models will luckily keep human intelligence analysts employed for years to come.

Another useful aspect of models is that they bring into focus the most dangerous possible outcomes and identify the levers policymakers and individuals can pull to avoid them. Which brings us to the PARADOX OF WARNING. The world has moved smartly to limit the worst consequences, although the ones we’re left with are still pretty dire; it turns out the Chinese were not crazy to lock down entire cities to prevent further spread of the disease. But as we succeed in lowering the final number of deaths and infections, we start hearing from critics who claim the crisis was exaggerated from the start. Aaaargh! The only point of warning is to avoid the bad outcomes. No one should be rooting for maximum coronavirus. Effective warners always want to be wrong.
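To show what "pulling a lever" looks like inside a model, here is a deliberately toy SIR epidemic sketch—not any real forecasting model, and every parameter value is invented for illustration. Sweeping the contact rate (beta) shows how an intervention that lowers it blunts the peak:

```python
# Toy discrete-time SIR model: a caricature of the epidemic models
# discussed above. All parameter values are made up for illustration.
def peak_infections(beta: float, gamma: float, days: int,
                    n: int = 1_000_000, i0: int = 100) -> float:
    """Run the model and return the highest simultaneous infection count."""
    s, i, r = n - i0, i0, 0
    peak = float(i)
    for _ in range(days):
        new_infections = beta * s * i / n  # contacts that transmit today
        new_recoveries = gamma * i          # infections that resolve today
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# Sweep one "lever": the contact rate beta (e.g., via distancing).
for beta in (0.5, 0.3, 0.2):
    print(f"beta={beta}: peak infections ~{peak_infections(beta, gamma=0.1, days=365):,.0f}")
```

Even this cartoon captures the policy-relevant shape: lowering the contact rate drives the peak down nonlinearly. It also illustrates the limitation in the text—the human behavior behind beta is stipulated as a single number, which is exactly where real models struggle.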

The coronavirus pandemic illustrates that good thinking is more than an academic exercise. It can be a matter of life and death. I’ve seen too many friends on social media using poor arguments to justify bad decisions. Please everyone, just put on your thinking caps.


Thinking in the Time of Coronavirus–Part 1

I’ve been wanting to comment on all the examples of bad thinking and cognitive traps that I’ve seen regarding coronavirus for a while now, well since early February for sure, but I’ve hesitated to put them down in writing because there is already too much content drawing spurious links to this horrible pandemic. But as we see signs that the infection curves are beginning to flatten in some countries (although certainly not all), it strikes me that good thinking will be just as critical as we work to recover our economies and manage the continuing threat of disease. So what follows is a compilation of some of the best and worst thinking practices revealed so far this year. (There are many so expect at least two posts.)

I was convinced the reports of a new, SARS-like disease in China were significant by mid-January. On 16 January I spoke at a conference that had a sizable contingent of attendees from Seattle and I remember fretting that Seattle would likely be one of the first American cities to get hit by coronavirus given the Chinese population on the West Coast and the travel patterns associated with Lunar New Year. I started tweeting and posting on Facebook about the disease in the second half of January and by late February it dominated my posts. Friends have asked me why I was so sure the disease would pose such a threat and I answered with one of my favorite heuristics from my CIA years: ACTIONS REVEAL INTENTIONS AND MOTIVATIONS.

When you’re trying to figure out a government or actor’s intentions, it’s always best to start with their actions. Pay attention to what they are doing. Given China’s obsession with economic growth and how the Communist Party’s legitimacy rested on delivering prosperity, I could not imagine why China would have closed down one of its most important cities out of an “abundance of caution”—a good name for a new rock band. The coronavirus had scared the shit out of the Chinese Government and the most reasonable explanation was that it was contagious and dangerous.

When we began to see reports of massive disinfection campaigns and attacks on the Chinese doctors who issued the first warnings, I began to wonder what Beijing was trying to hide, if anything. Of course there was immediate speculation that coronavirus was some type of bioweapon; I’m no expert on this issue so I have to accept the judgment that the virus is not man-made. But the possibility that coronavirus leaked because of an industrial mishap or accidental discharge remains credible to me. Recent reports that the Chinese Government is controlling research into the origins of coronavirus just further pique my suspicions. Actions reveal intentions and motivations.

When I actually shared this view on social media a few weeks ago, several friends criticized me for going there. Why, I wondered. It wasn’t like the Chinese Government was known for its transparency and complete honesty. Why couldn’t these ideas be entertained? My answer in part is that IDEOLOGY OFTEN COLORS HOW WE THINK. There are so many examples of this dynamic spanning the ideological spectrum.

  • Advocates of globalization loath to admit that China might have deceived other countries.
  • Supporters of the international system reluctant to criticize the World Health Organization.
  • Proponents of American exceptionalism insisting, against a lot of evidence, that the US has had the best response to the coronavirus.
  • Backers of the President condemning any suggestion that the US could have acted more quickly to contain the disease.
  • Critics of the President attacking his decision to limit travel from China in late January, although it was clearly the right thing to do. The more valid criticism is that it didn’t go far enough and there were too many loopholes.

And countless other examples we could mention. Because this is such a terrifying disease, it’s natural for people to fall back upon their values and ideological beliefs to interpret events. It’s natural but not helpful. In fact, it’s dangerous. Our beliefs lead us to ignore facts that don’t fit our ideology and overamplify developments that do. Unfortunately this thinking weakness will haunt our recovery efforts, particularly in the US where our politics have become exceptionally poisonous.

One important caveat: our ideology and values will play an unavoidable role going forward as we think about levels of acceptable risk. To my knowledge there is no objective way to measure the value of a human life. In the months to come we will be trading hundreds if not thousands of lives for decimals of economic growth. Your values are what will determine how you solve that equation. Less-polarized societies will find it easier to agree on the solution. The math will be difficult for the US. (And let me add that the very idea that this can be thought of as a math problem is anathema to many.)

I spoke at a conference in D.C. on 6 February about cognitive traps and used the emerging disease for my examples. The one cognitive bias that was most evident then is that WORST-CASE SCENARIOS ARE ALWAYS CONSIDERED UNLIKELY. In early February few people were expecting the disease to ravage Western Europe and the US, and any such thinking was painted as a worst-case scenario. Indeed, the first deaths did not occur in Italy until the last week of February. And yet it was reasonable to assume, I thought, that the disease could easily flare up in any country with connections to China, which was basically any place on the planet.

If you’re an analyst responsible for warning, remember that when you paint the most dangerous scenarios as worst-case, you make it easier for the decision-maker to dismiss them. And that’s what appears to have happened in the US government. Impact and probability need to be thought of as independent variables. Some category of “worst-case” scenario happens every year; the only “unlikely” aspect of “worst-case” scenarios is the ability to predict their timing. We are unable to know with precision when a dangerous development will occur, but we are sure to experience several in our lifetimes.
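One way to see the independence of impact and probability: an event that is "unlikely" in any single year becomes close to a sure thing over a career or a lifetime. The 2% annual figure below is purely hypothetical:

```python
# Hypothetical: some "worst-case" event with a 2% chance in any given year.
p_annual = 0.02

for years in (1, 10, 50):
    # Probability the event happens at least once over the horizon.
    p_at_least_once = 1 - (1 - p_annual) ** years
    print(f"P(at least once in {years:2d} years) = {p_at_least_once:.1%}")
```

With these numbers the 50-year probability comes out near two-thirds—"unlikely this year" is not "unlikely in a lifetime," which is the warner's whole point.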

Humans have been flourishing on this planet for tens of thousands of years, solving many problems (and, of course, creating others). We can assume that almost all the easy problems have been solved and many of the hard ones as well. Going forward, most of our problems will be difficult to handle and few, if any, will have clear-cut solutions. Only good thinking will help.

The Ten Habits of Non-Conventional Thinkers

One of the things I do a couple of times a year is lead a discussion on conventional wisdom. It wasn’t my idea to do this. I was asked a few years ago by someone teaching a class on intelligence who wanted to hold a session on conventional wisdom. He thought I was the perfect person to lead it. Whatever…

Conventional wisdom is like the monster under the bed for intelligence analysts. We’re all afraid of it but we can’t quite describe what it looks like. Some worry very much that they are actually guilty of it themselves. But we’d rather not talk about it.

I struggled with the assignment because I felt that to talk about conventional wisdom I needed to use examples. Otherwise it wouldn’t be meaningful. But of course my example of conventional wisdom might be someone else’s strongly-held beliefs. I finally hit upon the idea of talking about conventional wisdom in the context of cosmology. If you study the history of man’s understanding of the universe and its origins, you become aware that it is actually the story of conventional wisdoms (plural intentional). The prevailing theory is replaced by a new theory that sooner or later becomes conventional wisdom ripe for replacement by the next new theory. Second verse same as the first.

The lessons I draw could also be reversed and thought of as best practices for people who don’t want to be conventional thinkers. And so here they are. The 10 habits of non-conventional thinkers just in case you want to be one too.

1. Non-conventional thinkers are very suspicious of what anyone says they “know”. They consider knowledge a pretty slippery character, largely the creation of whatever sensemaking tools are popular at the moment. When we develop new tools, we develop new knowledge that often topples all previous architectures of knowledge.

2. Non-conventional thinkers eschew tidy, neat thinking. They like messy ideas. They go looking for them. They are not taken in by common human crutches such as the desire for symmetry.

3. This is hard, but non-conventional thinkers try to avoid falling in love with their ideas. They are mean to them, even abusive. (Or at least they should be!)

4. Non-conventional thinkers don’t censor themselves. They try to say out loud or write down everything they’re thinking. I think too many people don’t even offer up the good ideas inside their heads.

5. They talk to and listen to very diverse people. They enjoy reading about ideas that are way out there. Last night on Netflix I watched a fascinating biography of William Burroughs. My Netflix Horoscope right now says I enjoy watching cerebral biographical documentaries.

6. They don’t think much is sacred. Not even Albert Einstein. Albert Einstein was a brilliant man. It seems like he was also a kind person. But he famously hated the idea of an expanding universe. Because astronomers and physicists were so in awe of him, they spent about 20 years trying to explain away data that pointed to a big bang.

7. Non-conventional thinkers love to attack disciplines and ideas that have been static for a long time. They like even better to attack truths.

8. They love to look at things from completely different angles. They want to see the very finest details. They actually prefer to know exactly how things work. Non-conventional thinkers take no perspectives for granted and expect to find an element of truth even in the most outlandish points of view.

9. Non-conventional thinkers like to stimulate their thinking with sillinesses. They will engage in little rituals that fertilize their brains. Today I colored.

10. Non-conventional thinkers never stop looking.

Thinking Ain’t What It Used to Be

I’ve been reading a great book the last couple of weeks, Thinking, Fast and Slow by Daniel Kahneman. (I read books like I watch TV: I dip in and out, watching (or reading) several things at one time.) I recommend Kahneman’s book to everyone; it is not as hard to read as you might think–in fact the prose style is very pleasant, although the illustrative mental puzzles do take a bit of effort sometimes. I think most people will react like I have, reflecting on the implications of Kahneman’s findings for how I lead my life, how we make decisions. His major message so far is that all of us need to be aware of the shortcuts (fast thinking) we use and the likelihood that these shortcuts will lead us astray.

Thinking about thinking has ended up being a huge part of my life’s work. That’s a lot of what I did for CIA. It is no reassuring task to write or edit reports intended for policymakers, even for the President, and then pause to ask yourself whether the prose before you actually provides anything useful or insightful to the intended reader. Because the real truth is that facts (outside of science, and even there you gotta wonder!) rarely speak for themselves.  (And it hardly matters at all if they are secret facts.) In those rare instances when they do speak for themselves, then you don’t need an analyst or interpreter to make sense of them. Do you? Most facts require context, invite assumptions, have a back story, and probably also have a future. That’s what the analyst, the real thinker needs to bring to the conveyance of the fact. But it is all too rarely accomplished.

I can’t help but think we have a humongous thinking deficit in today’s world. (We also have significant values confusion and volatility, but that would be another blog.) I don’t watch any of the political debates because I can’t stand to watch people embarrass themselves. But I sort of follow them on Twitter sometimes so I know the various candidates drop Fact Bombs; Speaker Gingrich engages in carpet bombing and Rick Perry often misses the target. It doesn’t matter whether I support the candidate or not: they all misuse facts and encourage the sloppy thinking habits that haunt us today.

Some of our thinking errors have been monumental. For example you would think that when the US and other Western nations began investing in large social welfare programs in the 1950s and 1960s they might have actually done some actuarial planning, plotting out demographic, social, and economic trends to develop projections for how long the economy could sustain such benefits. Now my bet is that in some governments someone actually did something like this, but that inconvenient scenarios were dismissed as worst case scenarios and thus relegated to the low probability trash can. 

This is one of the most common thinking errors I encountered professionally: the association of worst case scenarios with low probabilities. Think about it. I bet most of you do it all the time. You hear worst case, you think “unlikely to happen”. The two values–probability, severity–move independently of each other, of course. There is a variation to this thinking pathology: that’s when worst case is equated with high probability, or even treated as the only possibility. In the past ten years, this has manifested itself in obsessive hoarding of gold.

Another area where a little long-range thinking and contextual analysis might cast some interesting light is the immigration debate. Some of us know that immigrants are an increasingly important part of US economic growth and that economic growth is a good way to ameliorate deficits. But these points aren’t often made. If the point can’t be conveyed in a soundbite or factbomb, then it isn’t conveyed at all.

What follows are examples of sloppy–even meaningless–analysis I see all the time even in the most serious publications.

The negotiations will be difficult. Now I’m betting that if the situation was easy to resolve, you wouldn’t need negotiations in the first place. Negotiations are supposed to be difficult. Also lengthy, bitter, hard-fought. This is one of my faves, because you SEE IT ALL THE TIME. (Also see below for discussion on use of adjectives and adverbs.)

It is too soon to tell and its cousin Only time will tell. No elucidation necessary.

The transition will be difficult or The transition will be smooth. I distrust just about all adjectives and adverbs in analysis. These slippery little modifiers disguise many errors in thinking. What smooth means to the writer may not be what smooth means to me. I would rather have the elements of the transition discussed so that in addition to considering the analyst’s judgment, I can develop my own.

The elections are too close to call. Basically, I don’t understand why we spend so much analytic and journalistic energy trying to forecast elections in the first place. An election is actually a specific event in the near future. Its timing is known to all. When it ends, we will know the outcome. If the election is rigged, then our forecast doesn’t matter. I’m gobsmacked at how American journalists in particular have convinced so many people to hang on their every word about events concerning which they have little insight and over which they have little control.

X dynamic is not a problem in Society Y because it only has the support of 10% of the population. Another classic mistake that you see ALL THE TIME and which, at this point, is absolutely criminal. When you hear about a particular revolution not being anticipated, you can bet that this type of analytic statement was at the heart of the faulty thinking. This analytic statement is flawed because it tries to capture extremely complicated societal dynamics in a criminally simple mathematical statement. Before making such a statement, the analyst needs to examine his theory of social change. Is social change simply an arithmetic progression? Or do movements often gain support suddenly and/or exponentially?
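The arithmetic-versus-exponential question can be sketched with a toy logistic-growth model. The growth rate below is invented and not fitted to any real movement; the point is only the shape of the curve:

```python
def logistic_step(p: float, r: float = 1.0) -> float:
    """One period of logistic growth in support share p (between 0 and 1)."""
    return p + r * p * (1 - p)

p = 0.10  # "only 10% support"
for period in range(5):
    print(f"period {period}: {p:.0%} support")
    p = logistic_step(p)
```

Under these assumed dynamics, 10% support becomes majority support within a handful of periods—precisely the kind of sudden gain a static "only 10%" snapshot conceals.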

Anyway I think I’ll stop now. Really I’ve just been venting on many of the thinking errors that get under my skin. For a much better discussion, do read Thinking, Fast and Slow.

On CIA, typewriters, and sensemaking

Check out my guest blog post on IBM’s Smarter Planet blog.  http://asmarterplanet.com/blog/2011/09/10766.html

Revisiting Lessons from a CIA Heretic

Events in the Middle East have led me to reflect on the talk I gave in September of last year at the Business Innovation Factory. (You can read the prepared text here or see the video of my speech here.) I was noting how the world is changing and how that in turn requires a different sensemaking method. The key paragraphs:

If you think that the world is driven mostly by the secret deals and aspirations of powerful people—the Hitlers, the Communist Party of the Soviet Union, Mao Tse Tung, Idi Amin, Saddam Hussein, Osama bin Laden, I’m desperately trying to think of a likely woman here—then you will conclude that you need some kind of capability to figure out what these people are doing, to ferret out their secrets. To protect our nation from some very nasty ideas these individuals cook up. And you may also want an organization that can impede their plans, cross your fingers.

But if you think that most of the forces the US will need to navigate are not specifically man-made, or at least not specifically made by one man or a small group of them–then you need a different kind of organization. If what matters is that the US understand the trends in the world, like globalization or the emergence of new economies such as India and China and Brazil (which clearly no one is like trying to keep a big secret), then spending a lot of time digging out secrets seems not as important, and what you really want is to have your hand on the pulse of the world, to be out there sensing and in many ways just being part of the whole big ride.

(A little later in the talk.)  

Making sense of the world is so hard and so important that it demands collaboration with as broad a network as possible. It was around this time that this thought entered my mind: The CIA will end up being the last secret organization in the world. And being the last of anything is never a good thing.

And so back to the question. I actually think the answer to it is very complicated. But I do believe that more of what will be important to US prosperity in the future will lie in the second dynamic, and our success will depend on how well we understand these large shifts underway and are able to engage them. Here’s where the imbalance of the Intelligence Community really can hurt us. To deal with the first circumstance it’s important to be a closed network. But to understand and prosper in the second dynamic it’s best to be an open network. What we have here is a real innovator’s dilemma.

That’s why one of my passions now that I’ve retired from the Agency is to do what little I can to help Americans think about connecting, about working in open networks, about transparency. I believe as a successful multicultural society the US is poised to be innovative in this new world, and this time perhaps all out of proportion to our size. I love all social networks and in particular Twitter because of its power to spread ideas faster than the speed of light. Just think of it. One thought can reach a thousand people much faster than a single beam of light could physically touch those same individuals.