Category Archives: Thinking

In Search of Insight

When I was a manager of analysis at CIA, I would hear our customers, often senior policymakers, demand more INSIGHT in our analysis. And I would go back and tell the analysts they needed to produce more insight. Until one day an analyst asked me “Well, what is INSIGHT?” And I thought to myself, that’s a good question; a “good question” is ipso facto a question for which you do not have a ready answer.

I put on my tattered thinking cap and worked to come up with some type of answer—a “formula for INSIGHT” that was reproducible and generated a set of actions that analysts could actually perform. I asked many colleagues to describe how they thought. This is another good question. Almost nobody could describe their own thinking process.

“I read and then I write.”

What happens in between?

“I shake my head until some ideas fall out,” one analyst offered.

Eventually I came up with a formula—the steps of analysis—that I thought enough of to share with others. Like everything I do, it’s imperfect but hopefully it offers a starting point.

  1. COMPREHENSION. When we’re thinking about a problem, first we try to comprehend it. We assemble relevant information and consume it however we prefer.

  2. CATEGORIZATION. Once we’ve achieved some comfort in our level of understanding, the next step is categorization. We sort what we know into various categories and patterns. (Actually, this starts to happen organically during the Comprehension stage. This is unavoidable and can be the place where cognitive biases take root. Some information you consume early on colors how you think about every subsequent report, and you fall victim to the anchoring bias. I’ve always wanted to run an experiment where the same 100 pieces of information were presented to analytic teams, but in different orders. Would their analysis differ? My bet is yes!)

    The categories can be as simple as Old Information, New Information, but they eventually evolve into a complex taxonomy that forms the backbone of your Analytic Lines. These Analytic Lines are powerful beings and resist change. This is usually very bad.

  3. PRIMARY INSIGHT. INSIGHT occurs when you see things you’ve never seen before or in ways that are new to you. When an individual takes an item of information and argues that it belongs in a different category, you have produced a moment of INSIGHT. Recategorization of information is a way of generating INSIGHT. Is President Xi’s third term in China an indicator of his strength or of China’s weakness? The conventional wisdom probably is to categorize the event as the former but making a credible argument that it is the latter generates INSIGHT. The INSIGHT argument doesn’t have to be convincing; just provoking others to take a second look is useful.

  4. PROFOUND INSIGHT. A harder but more powerful way to generate INSIGHT is to renovate and/or rebuild your categorization schema. For example, analysts realize that a significant amount of information remains uncategorized—it doesn’t easily fit the current taxonomy. Do you ignore it, or do you begin to experiment with new categories that might better explain the information? And at some point you can experiment with rethinking your categorization scheme—your Analytic Line—from scratch. To return to the China example, how best should we think of it—as an emerging Superpower, as a declining power, or as a country destined for the middle-income trap?  Each of these options can generate significantly different categorization schemes. (When your Analytic Line is long in the tooth, lots of information will no longer easily fit your existing categories. This is a “sell signal” for how you currently think about your problem, but not enough analysts recognize it as such.)

Analytic teams need to be hawk-like in policing their categorization schemes because they often sneakily embed themselves in the back-office processes of the organization. Take, for example, the reading profiles of an analytic team—the algorithms that determine which information flows into their inboxes. Ask your analysts how often these reading profiles are updated. You will not be happy with their answers.

What inspired me to natter on about analysis and insight on this beautiful fall day? Reading Adrian Wolfberg’s excellent monograph In Pursuit of Insight: The Everyday Work of Intelligence Analysts Who Solve Real World Novel Problems. It’s not a quick read but luckily there’s a shorter version available here. Based on extensive interviews with analysts of varying experience, Wolfberg seeks to unpack how insight actually happens from a cognitive, neurological perspective. It tackles the all-important step that my all-too-neat process completely ignores: how does the idea for new categories enter your brain? What leads to its emergence?

Wolfberg writes that the insight process begins with a trigger phase, “an initiating event that, seemingly by chance, brings the analyst’s attention to a particular problem to address; alternatively, after an analyst has been working on a given problem, a random event contributes to focusing their attention more intently on the problem. Entering into and navigating through the trigger phase takes cognitive and emotional courage on the analyst’s part.”

After the trigger phase, Wolfberg identifies emergence as the next step. Two activities promote the emergence of insight: internalized tensions and priming. Quoting from the shorter paper:

Internalized Tensions: As analysts start working on a novel problem, they become aware of inconsistencies that can be cognition-based (i.e., inconsistencies between pairs of thoughts or ideas) or emotion-based (i.e., inconsistencies between an analyst’s action and interpretation of others’ reactions). Tensions induced by these inconsistencies can originate within the individual (i.e., self-initiated) or in the individual’s social environment (i.e., related to organizational structure and the behavior of others). An analyst who concludes that exploring a diversity of ways to represent a problem would lead to the most accurate assessment, while others judge that a standardized process would be best, is an example of cognition-based tension at the individual level. An analyst who presents a unique methodology in a detailed, transparent way to address skeptical concerns, while worried this could lead to being discredited in a production system that values standard product lines, is an example of emotion-based tension at the social level.

Priming: Analysts draw upon memories of past experiences unrelated to their present novel problem in order to make progress toward overcoming tensions and reaching insight. Priming sources also occur across the intersection of the emotion-cognition and individual-social dimensions. In an example of cognition-based priming at the individual level, an analyst who studied in graduate school how physical factors in the environment could trigger social or political outcomes applies that cause-and-effect knowledge to a national security novel problem. In an example of emotion-based priming at the social level, an analyst who had lived in a foreign country appreciates that even in countries where the same language is spoken, cultures can be very different.

What’s clear is that insight emerges from a rich casserole of experiences, emotions, and feelings in an analyst’s mind. Our intuition, what Daniel Kahneman calls our System 1, is the primary custodian of these insight-generating prompts. Wolfberg notes that “although these past experiences were unrelated to the problem at hand, an aspect of these past experiences brought forth a combination of emotional and cognitive meaning that informed how the analysts thought…” Every analyst interviewed by Wolfberg reflected on past experiences unrelated to the problem.

Clarity and INSIGHT are the most sought-after products from intelligence analysts. Clarity is getting the facts straight, sorting out complicated situations. Intelligence organizations usually do clarity well, but not always (think Iraq WMD). INSIGHT requires going beyond the facts to consider hidden causes and motivations, anticipate unexpected changes in trajectory, and appreciate the nonlinearity of events. The work processes of most intelligence teams are suited more to producing clarity than generating INSIGHT. Analysts often describe having to buck the established way of doing things to explore their emerging INSIGHT.

As Wolfberg notes, leaders of intelligence organizations need to appreciate the conditions necessary for the generation of INSIGHT and work to allow the time and space necessary for its emergence. Many of the work processes of the Intelligence Community emphasize order and consistency over thoughtfulness and contemplation. Working 8+ hours a day in a cubicle is also not ideal. As the science writer Annie Murphy Paul notes in her excellent book The Extended Mind, human brains evolved to think best when we’re physically active. My favorite “structured analytic technique” has always been to take a walk, preferably with a favorite thinking partner.


Wolfberg’s study has many other insights about INSIGHT. It’s a rewarding read for anyone wanting to make intelligence analysis better.

American Exceptionalism

I look forward to a return to civil political discussions. I do not demonize those with whom I disagree politically. In fact my views encompass many parts of the political spectrum, and I suspect this is true for many of us. Also, political and social views are constantly emerging and evolving.

Today’s snapshot:

1. More often than not, government regulations (all of them) do not entirely achieve their intended effects. Their unintended effects can be positive or negative. This is due to the world's and society's infinite complexity. Thus, I am skeptical of most grand efforts to "fix a problem."

2. Immigration is a net plus for societies and nations. Most closed and static systems wither and die. Illegal immigration is unfortunate, but the individuals involved are humans. If you can't stop illegal immigration, at some point the only good option is to normalize the immigrants' status. The world's population is stabilizing and will begin to decline in the second half of this century. This will undermine societies and economies whose systems assume population growth. All countries need vibrant young populations to pay taxes and support programs such as welfare safety nets. By 2050 all countries will be actively competing for immigrants.

3. Abortion is horrible. Forcing women to be prisoners of their bodies is horrible. Given that I cannot achieve clarity on this issue I think it’s best left to personal choice.

4. Climate change is real and it is currently driven by humans. Given that regulatory approaches are often flawed, solutions should be emergent and market and locally-based. (See point 1) Thirty years ago I was debating pollution and energy with a friend in an English pub. He was advocating a large government program. I asserted that the first successful electric car would be created by a private company.

5. Humans form associations to solve problems or take advantage of opportunities that can only be done at scale. Governments and large corporations are examples of such associations. Epidemic diseases need to be tackled at scale.

6. Human ingenuity and innovation are everywhere. So is human fecklessness, selfishness, and evil. Large organizations and societies try to maximize the first category and suppress the second. It’s hard. But the key point is that both large organizations and governments are staffed by the same species–adorably imperfect humans.

7. Decent health care should be available and/or affordable for everyone. Unfortunately designing an equitable health care system is particularly complex. Some nationalized health systems–such as the NHS in the UK–suffer from chronic underfunding because once you nationalize health care it has to be funded through taxes and you know how people feel about taxes.

8. Altruism is advantageous from an evolutionary perspective. Communities with altruistic individuals do better than communities without. The Golden Rule is just about the only philosophy we need.

9. Ideologies are unhelpful. They narrow your options and simplify complex reality.

10. Skin color is irrelevant to human performance.

11. America is the world’s most multicultural nation. That is its only true exceptionalism. We will prove to be either a successful example or a tragic one.

Thinking in the Time of Coronavirus–Part 2

The previous post discussed three important thinking dynamics relevant to our analysis of coronavirus:

  • Actions reveal intentions and motivations;
  • Ideology often colors how we think; and
  • Worst-case scenarios are always considered unlikely. (I'd amend that now to say almost always.)

…but there are many more.

Since the crisis began in January, I've come across many commentators—scientists, non-scientists, experts in other fields, jacks-of-all-trades—speculating about coronavirus and attempting to predict its course. Many statements were similar to this one by Dr. Fauci on January 26: "It's a very, very low risk to the US." I could not comprehend at the time the evidentiary or logical basis for such statements. Did the individuals making these statements believe the Chinese Government was engaged in some weird overreaction or that the virus would prosper only in China? Did they assume that the hundreds of thousands of people who would come to the US in 2020 after visiting China (or, in a few weeks' time, Italy) would all be free of the disease or that we would somehow detect them as they entered the country? Were they just making a linear projection from the minuscule number of cases then in the US?

One cognitive pathology at work here is that INDIVIDUALS, EVEN TRAINED SCIENTISTS, ARE REALLY BAD AT DRAWING APPROPRIATE CONCLUSIONS FROM AVAILABLE EVIDENCE. Because I worked as an analyst at CIA for 32 years, I am familiar with this phenomenon. Policymakers are always demanding judgments from analysts, and we often feel obliged to provide them even when the evidentiary basis is insufficient. At any moment regarding any situation, how accurately does the evidence available to us reflect reality? Today as I write this, how much do we really know about coronavirus: 50% of reality, 30%, 10%? The answer at this point is unknowable. Therefore, predictions concerning its future course are tenuous.

Two other realities about thinking are worth mentioning here. First, OUR ABILITY TO KNOW IS A FUNCTION OF OUR TOOLS FOR KNOWING. We can only know what our tools reveal to us. Breakthroughs, revolutions in thinking in so many fields have been the result of inventions/discoveries of new knowledge tools. In cosmology, for example, our understanding of the universe expanded when we learned to build great observatories and combined cameras with telescopes. The deployment of orbital platforms such as the Hubble have further revolutionized our knowledge.

Our understanding of coronavirus has been diminished not just by its novelty but also because China may not have revealed all it has learned about the disease. Another tool problem is the lack of comprehensive testing of populations. Some of my Texas friends have claimed that Texas must be doing a great job containing coronavirus (or that there really isn’t a threat) because of the relatively low rates of infections and deaths. But Texas, as of April 15, has one of the three lowest rates of testing in the country. We don’t really know what’s going on there. And we won’t comprehend critical attributes of the virus, such as fatality and contagion rates, until we have tested a large and random sample of our population. This inherently incomplete nature of our knowledge should make us more humble about our predictions and expectations concerning the course of the disease. For many questions, we still do not have sufficient information to make a firm determination and thus need to err on the sides of caution and resilience.

But instead we have a tendency when confronted with limited information to succumb to THE STREETLIGHT EFFECT. The joke is that a policeman runs across an individual, usually described as inebriated, looking for car keys under a street lamp. When the policeman asks if this is where the keys were lost, the seeker answers “No, but this is the only place I can see.”

When we make confident predictions based on insufficient or flawed evidence, we are succumbing to the streetlight effect. One vivid example is how people jumped on the hydroxychloroquine bandwagon after just a couple of positive reports. At the start of the pandemic, many argued (and some still do) that covid-19 would be no worse than a bad seasonal flu. Those arguments were based on deaths up to that point (a few hundred or thousands) and I'm not exactly sure what else. There are so many flaws in that argument it's hard to know where to begin. First, flu deaths are totals for an entire year, while the covid-19 deaths covered just a few weeks; we are assuming a lot about how the disease (and people…) will perform during the course of an entire year. Second, the statement assumed linear growth, which of course is not what happens during uncontrolled epidemics. Third, this argument implied that the Chinese stupidly and inexplicably closed down their economy because of the seasonal flu. (Actions reveal intentions and motivations.)
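A toy calculation (with purely hypothetical numbers, chosen only to show the shape of the problem) illustrates why a straight line fit to the early weeks of an uncontrolled epidemic badly understates the rest of the year:

```python
# Hypothetical epidemic toll that doubles every two weeks, versus a
# straight-line projection fit to the first four weeks of the same data.
def exponential_toll(week, start=100, doubling_weeks=2):
    """Cumulative toll assuming a fixed doubling time (hypothetical)."""
    return int(start * 2 ** (week / doubling_weeks))

# A linear fit to weeks 0-4 rises by (400 - 100) / 4 = 75 per week.
PER_WEEK = (exponential_toll(4) - exponential_toll(0)) / 4

def linear_projection(week, start=100):
    """Straight-line extrapolation from the first month of data."""
    return int(start + PER_WEEK * week)

for week in (4, 8, 12, 16):
    print(week, exponential_toll(week), linear_projection(week))
# week 4:  both give 400 (the fit matches the data it was fit to)
# week 16: 25,600 actual vs. 1,300 projected
```

By week 16 the linear projection is off by a factor of roughly twenty, even though it matched perfectly at the moment it was made. That is the whole trap in four lines of arithmetic.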

Another flaw in the argument that covid-19 is just another flu is captured by the aphorism: QUANTITY HAS A QUALITY ALL ITS OWN. Mistakenly attributed to Joseph Stalin, the observation appears to have become popularized instead by the US military-industrial complex. It attacks the logic behind linear projections—it’s just more of the same thing and therefore we can handle it. At some point, more of the same thing evolves into a different plant; we can pull out a few weeds by hand but not an entire yard-full. And quantity is not the only factor in play; pacing and tempo have significant impacts as well. One million cases of covid-19 during the course of a year may be manageable but half a million cases in 8 weeks not so much.

When I'm asked to recommend a book for aspiring intelligence analysts, I always mention Daniel Kahneman's Thinking, Fast and Slow. One of his famous findings is that humans are bad at comprehending exponential growth. (If you start with a penny and double it every day, at the end of a 30-day month you will have more than $5 million; if the month has 31 days, you end up with more than $10 million.)
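The penny arithmetic is easy to verify with a couple of lines of code:

```python
def penny_doubled(days):
    """Dollar value after starting with one penny and doubling it daily.
    Day 1 is the original penny, so `days` days means days - 1 doublings."""
    return 0.01 * 2 ** (days - 1)

print(f"${penny_doubled(30):,.2f}")  # $5,368,709.12
print(f"${penny_doubled(31):,.2f}")  # $10,737,418.24
```

One extra day doubles the total: that single fact is what our linear intuitions fail to internalize.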

I like to extend that idea by observing that HUMANS FIND IT HARD TO DEAL WITH EXPONENTIAL CAUSALITY. Exponential causality is one of the characteristics of complex systems. Any one event can have a cascade of consequences in unpredictable directions and time frames. Feedback can even travel backwards in time in the sense that a development today can reveal the unappreciated causal importance of some past event. Because exponential causality confounds humans, we like to pretend it doesn’t exist; a popular way to do that these days is by subscribing to conspiracy theories. So many factors contribute to today’s reality that there’s always a stray thread or two that can be pulled to create a conspiracy-based explanation. If you yearn for a simpler, linear world, then you’re happy to accept that Bill Gates and 5G technology have combined to cause the coronavirus. It’s a particularly dangerous cognitive trap.

One of my first bosses at CIA, John, impressed me with a story from his early days as an analyst. He was following a particular insurgent group in southeast Asia in the 1960s, and had calculated that because of supply disruptions the group would literally use up its ammunition by a date certain. John's boss advised him to rethink his analysis because YOU NEVER RUN OUT OF BULLETS. In other words, linear predictions are always flawed because (1) our knowledge of any situation is incomplete; (2) we never know the exact dimensions of our ignorance; and (3) shit happens.

 

Which brings us to the topic of coronavirus models. I'm sure statisticians will beat me up for this, but I often think of models as compilations of hundreds of linear projections. The modeler tries to include every possible variable in her model and stipulates the tens of thousands of relationships among the variables—which is like really hard. As the model runs, every possible combination of variables is instantiated. This can be helpful to policymakers by representing in a more digestible fashion a complex set of possibilities. But models always simplify the complex—they make more linear that which is random. In my experience, models are particularly bad at accounting for the variations and peculiarities of human psychology—one of the most important factors determining the course of covid-19. Indeed, the failings of models will luckily keep human intelligence analysts employed for years to come.

Another useful aspect of models is that they bring into focus the most dangerous possible outcomes and identify the levers policymakers and individuals can pull to avoid them. Which brings us to the PARADOX OF WARNING. The world has moved smartly to limit the worst consequences, although the ones we're left with are still pretty dire; it turns out the Chinese were not crazy to lock down entire cities to prevent further spread of the disease. But as we succeed in lowering the final number of deaths and infections, we start hearing from critics who claim the crisis was exaggerated from the start. Aaaargh! The only point of warning is to avoid the bad outcomes. No one should be rooting for maximum coronavirus. Effective warners always want to be wrong.

The coronavirus pandemic illustrates that good thinking is more than an academic exercise. It can be a matter of life and death. I’ve seen too many friends on social media using poor arguments to justify bad decisions. Please everyone, just put on your thinking caps.

 

 

Thinking in the Time of Coronavirus–Part 1

I've been wanting to comment on all the examples of bad thinking and cognitive traps that I've seen regarding coronavirus for a while now, well since early February for sure, but I've hesitated to put them down in writing because there is already too much content drawing spurious links to this horrible pandemic. But as we see signs that the infection curves are beginning to flatten in some countries (although certainly not all), it strikes me that good thinking will be just as critical as we work to recover our economies and manage the continuing threat of disease. So what follows is a compilation of some of the best and worst thinking practices revealed so far this year. (There are many, so expect at least two posts.)

I was convinced the reports of a new, SARS-like disease in China were significant by mid-January. On 16 January I spoke at a conference that had a sizable contingent of attendees from Seattle and I remember fretting that Seattle would likely be one of the first American cities to get hit by coronavirus given the Chinese population on the West Coast and the travel patterns associated with Lunar New Year. I started tweeting and posting on Facebook about the disease in the second half of January and by late February it dominated my posts. Friends have asked me why I was so sure the disease would pose such a threat and I answered with one of my favorite heuristics from my CIA years: ACTIONS REVEAL INTENTIONS AND MOTIVATIONS.

When you’re trying to figure out a government or actor’s intentions, it’s always best to start with their actions. Pay attention to what they are doing. Given China’s obsession with economic growth and how the Communist Party’s legitimacy rested on delivering prosperity, I could not imagine why China would have closed down one of its most important cities out of an “abundance of caution”—a good name for a new rock band. The coronavirus had scared the shit out of the Chinese Government and the most reasonable explanation was that it was contagious and dangerous.

When we began to see reports of massive disinfection campaigns and attacks on the Chinese doctors who issued the first warnings, I began to wonder what Beijing was trying to hide, if anything. Of course there was immediate speculation that coronavirus was some type of bioweapon; I'm no expert on this issue so I have to accept the judgment that the virus is not man-made. But the possibility that coronavirus leaked because of an industrial mishap or accidental discharge remains credible to me. Recent reports that the Chinese Government is controlling research into the origins of coronavirus just further pique my suspicions. Actions reveal intentions and motivations.

When I actually shared this view on social media a few weeks ago, several friends criticized me for going there. Why, I wondered. It wasn’t like the Chinese Government was known for its transparency and complete honesty. Why couldn’t these ideas be entertained? My answer in part is that IDEOLOGY OFTEN COLORS HOW WE THINK. There are so many examples of this dynamic spanning the ideological spectrum.

  • Advocates of globalization loath to admit that China might have deceived other countries.
  • Supporters of the international system reluctant to criticize the World Health Organization.
  • Proponents of American exceptionalism insisting, against a lot of evidence, that the US has had the best response to the coronavirus.
  • Backers of the President condemning any suggestion that the US could have acted more quickly to contain the disease.
  • Critics of the President attacking his decision to limit travel from China in late January, although it was clearly the right thing to do. The more valid criticism is that it didn’t go far enough and there were too many loopholes.

And countless other examples we could mention. Because this is such a terrifying disease, it’s natural for people to fall back upon their values and ideological beliefs to interpret events. It’s natural but not helpful. In fact, it’s dangerous. Our beliefs lead us to ignore facts that don’t fit our ideology and overamplify developments that do. Unfortunately this thinking weakness will haunt our recovery efforts, particularly in the US where our politics have become exceptionally poisonous.

One important caveat: our ideology and values will play an unavoidable role going forward as we think about levels of acceptable risk. To my knowledge there is no objective way to measure the value of a human life. In the months to come we will be trading hundreds if not thousands of lives for decimals of economic growth. Your values are what will determine how you solve that equation. Less-polarized societies will find it easier to agree on the solution. The math will be difficult for the US. (And let me add that the very idea that this can be thought of as a math problem is anathema to many.)

I spoke at a conference in D.C. on 6 February about cognitive traps and used the emerging disease for my examples. The one cognitive bias that was most evident then is that WORST-CASE SCENARIOS ARE ALWAYS CONSIDERED UNLIKELY. In early February few people were expecting the disease to ravage Western Europe and the US and painted any such thinking as worst-case scenarios. Indeed, the first deaths did not occur in Italy until the last week of February. And yet it was reasonable to assume, I thought, that the disease could easily flare up in any country with connections to China, which was basically any place on the planet.

If you’re an analyst responsible for warning, remember that when you paint the most dangerous scenarios as worst-case, you make it easier for the decision-maker to dismiss them. And that’s what appears to have happened in the US government. Impact and probability need to be thought of as independent variables. Some category of “worst-case” scenario happens every year; the only “unlikely” aspect of “worst-case” scenarios is the ability to predict their timing. We are unable to know with precision when a dangerous development will occur, but we are sure to experience several in our lifetimes.

Humans have been flourishing on this planet for tens of thousands of years, solving many problems (and, of course, creating others). We can assume that almost all the easy problems have been solved and many of the hard ones as well. Going forward, most of our problems will be difficult to handle and few, if any, will have clear-cut solutions. Only good thinking will help.

Merry Humanity to You!!

Today my mom and I engaged in what is becoming a Christmas Eve ritual for us: noshing on burgers and fries at Gott’s Roadside diner in the Napa wine region of California. We’ve spent Christmas week in Napa for the past four years, gently sipping and eating and wondering what it would be like to spend the other 360 days of the year doing the same. During previous visits I had noticed the custom at Gott’s to use pseudos for your pickup name: “Fang, your order is ready, Fang!” I’d been anticipating all year what I might call myself the next time I visited Gott’s. I settled on a category: Greek philosophers.

I’ll let my tweets record what happens next:

How did Aristotle become Eristonald, I wonder? The person taking my order didn’t flinch when I said Aristotle. Didn’t ask how to spell it; in fact just confidently keyed my chosen name into the computer. Now, as is the case in fast-food jobs, this person was quite young and it is entirely conceivable and understandable that Aristotle had yet to enter her consciousness. But why ERISTONALD?

Does she know an ERISTONALD?

How could she so confidently type in a name she was clearly only guessing at?

There's something lovely about her phonetics (or quirky about my pronunciation). The ERIST syllables seem to suggest another language.

Is correct spelling just an eccentricity these days? Or is it not a core Gott's competency?

The poor person who had to announce to the world: “Eristonald, your order is ready, Eristonald” looked at me for an explanation. When I told her it was supposed to be Aristotle, she was relieved but only shrugged.

And then it all struck me as charming. Just another lovely example of sweet human imperfection. And how silly it is for us to get caught up in conceits, however small.

Which gets me to Christmas and religion. This time of year some of us think more about religious and spiritual matters than we normally do. And the pleasure I got from the sweetness of “the mistake” made me think about one of my main problems with most religions–the insistence that the goal of existence is human perfection.

I can’t help but think how silly this idea is. Human perfection seems perfectly pointless. Our charm lies in our clumsiness. Our grace is that we forgive each other our mistakes–or at least we should. And our passion comes from the desire to improve. Without the desire to improve, I just don’t understand how we can be very human.

There’s nothing profound here, I’m sure; it’s just one of the ways that the emotional logic of mainstream religions escapes me.

Like the promise of eternal life. First, I REALLY hate being bribed into religious belief. Sure, just play upon my fears. Second, I can't think of a worse fate for humanity than eternal life. And if it's eternal, perfect life, I'm really trying to understand what could possibly be the point of that.

I’m much happier just trying to be a productive member of the human team, making sweet mistakes that in time others may learn from.

Merry Humanity to All.

Being Open to the Serendipity of Sharing

A good friend (almost 40 years younger than I am) asked me last week what I thought of the message in this video.

I wrote my friend back yesterday and what I’ve posted below is my response unedited.

“So as someone who has essentially lived by herself her entire adult life–I have absolutely no problem with being alone. At the same time there is nothing I value more than having good conversations with people I know well–and also with new people who bring some interesting new dimension to the way I think.

I have personally found social networks very enriching because I learn so much more about people, both the ones I know in real life–though truth be told most of them hardly use social networks–and the ones I have met NIRL. I don’t think I’m confused about the difference between conversation and connection; that said, I think some of my on-line relationships are quite substantial. These individuals appreciate the way I think and I appreciate the way they think, and we bring interesting ideas to each other’s attention. If I post something unusually negative for me, they notice and ask me if something is wrong. This is not something that replaces IRL friendship but is an interesting and developing complement to it. (It is very helpful when I’m sitting in an airport waiting for my flights, for example. I always have the best on-line conversations in that hour at the gate.) I’ve often heard the 150 number, and while I generally think there is a limit to how many people we can know, the 150 number is based, I think, on experiments done before the advent of these new technologies. I’d like to see research done about the conditions we find ourselves in now.

The video doesn’t talk about what I think is one of the great new phenomena today–how near or complete strangers can delight each other through things they share online. I share a slice of my inner dialogue on-line. I see something interesting that makes me think; now I post many of those in case someone else might find it interesting as well. Some great exchanges happen as a result of being open to the serendipity of sharing.

What I actually think has been much more corrosive to the quality of people’s lives, much more so than sharing and the online life, is the culture of entertainment, which long predates Facebook and Twitter. I’m really troubled when I see people seemingly living their lives through the entertainment they consume. It drives me nuts really. Living your life as if the purpose of it is to be entertained is my definition of hell on earth.

Hope you have a great weekend and thanks for asking me what I thought about the video.

Your IRL friend,

Carmen”

Rules I Try to Live By

So the idea for this post began last week when a GovLab fellow was telling me that he thought he was finally figuring me out. (GovLab is a leadership development/innovation program I work with at Deloitte Consulting; actually I think of myself as the GovLab Yoda. I was of course interested in anyone willing to talk about me for an extended period of time. Bring it on!) And he said that a phrase he associates with me now is “And another way to look at this is…” This pleased me as I pride myself on being a contrarian thinker–a natural rebel trait. (I was going to edit out the word thinker and just say contrarian but I actually believe there is a difference between a contrarian and a contrarian thinker. A contrarian will say NO to many things; a contrarian thinker just wants to always examine the other side before coming to a conclusion–if indeed a conclusion is appropriate.) (I can tell already this post will contain many parenthetical statements.)

Anyhoo, I said, “Well Yes. I think that’s right. But another thing I’m trying to impart is that…”

1. Nothing is insignificant. My 32-year career as an intelligence analyst taught me, at least, that anything and everything can matter. In the early 1990s I read a book called Complexity by Mitchell Waldrop, which pretty much changed my intellectual life forever. (If indeed it can be said that I have an intellectual life.) The book is an easily digestible introduction to the principles of Complexity science. What it taught me is that big changes can be started by little things, and ever since then I have thought of myself as an Analyst of Little Things.

2. You never run out of bullets. While we’re on lessons drawn from my analyst career, this phrase was told to me by a manager early on in my apprenticeship. He was recounting some work he had done as a young analyst on an insurgent or guerrilla group somewhere in the world. He had figured out, literally, how many bullets this particular group had and how many bullets they used per day, and therefore thought he knew exactly the date when the guerrillas would run out of bullets. My boss’s manager had saved him from this rookie mistake with the sage advice that “You never run out of bullets.” That is, something will happen, some contingency will occur, that will upend your careful projection. As you can tell, I never forgot that piece of advice. A more general and perhaps useful way of rephrasing it is: Linear Projections Ain’t So.

3. Everything stays the same…Until it changes. The last of my analysis-related rules. Change is a slippery rascal. It taunts you with false hope. (Or endless anxiety if you fear the change.) And then, many times when you’re least expecting it, it pounces on you like a cat. (All blog posts benefit from a Cat Gif.)


The world is just chock-full of rules, practices, conventions, and assumptions long past their Best By Dates. (It was even worse 30 years ago, I think.) Estimating when the change will occur is pretty much a loser’s game. Even guessing correctly just once will mark you as a genius forever.

This rule also draws upon elements of complexity thinking. Everything looks like it’s staying the same because the change energy is brewing underneath the status quo line. Up until the moment it breaks through, you probably won’t be aware of the change. It’s not unlike how little earthquakes presage huge volcanic eruptions.

Being able to anticipate the imminence of big change is the ultimate test of any analyst, I think. As I said, prediction is difficult, but understanding what is brewing below the status quo line should be the goal of every analyst. Always unpack claims that any kind of analysis is right 90% of the time. How much of that number is accounted for by correct predictions of continued stability?

4. The ends never justify the means because rarely do human projects reach their ends. So LIVE your Means. I don’t think this needs much explanation really. Life is what happens to you when you’re making other plans. Live your principles…all..the…time. 

5. The best thing God made is one day after the other. My grandmother used to say this. My grandmother’s name was Obdulia, but she was better known as Doña Yuya. (A friend of mine enjoyed that name so much that she called her car that.) As I write about my abuela I realize I never once heard her boast about herself.

6. Try to see the humor in everything. It is particularly important to see the “funny” in things that are making you mad, like bureaucracy, or people who aren’t thinking, or the Internal Revenue Service, or our current political system. (Come to think of it, the American political system has zoomed way past humor and is now making a strong bid for the absurd.)

7. Everything has meaning. It’s your problem if you don’t appreciate it.

Reality is the Land of Unintended Consequences

I was struck the other day by reports that President Obama last fall opposed the near unanimous recommendation of his advisors that the US arm Syrian rebels. (The USG has now decided, according to press reports, to provide some rebel groups with direct, non-lethal aid.) Although history may yet judge Obama’s reticence harshly, I couldn’t help but feel good that at least one senior US official–in this case the President–expressed concerns about the efficacy and unintended consequences of the traditional foreign policy “toolkit.” Certainly during my time in government and really just as a private citizen I’ve noticed that the actions of government don’t often seem to achieve their ends neatly, if at all.

As far as the world is concerned, we only have one “N”–there is only one earth and one history. We have no way of really judging the absolute efficacy of the grand schemes and decisions of government. We don’t really know how things would have turned out, for example, if we had never had the War in Vietnam. There is no John Madden-style sports simulation for foreign policy that would let us replay 100 times the US Government’s Asia policy in the 1950s and 1960s to determine statistically which game plan would have fared better.

I think actually it may be in part due to this “unknowingness” that decisionmakers not just in government but in many industries stress the importance of making confident and fast decisions–why being a “J” on the Myers-Briggs is such a highly valued executive characteristic. The individuals who want to think through the decision a little bit longer–let’s call them the Let’s-think-about-it Firsters–are almost always argued down.

No doubt this chart–inspired by the Cynefin framework–is a bit unfair to strong decision-makers, but it nevertheless captures how I, a charter member of the Let’s-think-about-it Firsters, see the dynamic.

As I hope the chart makes clear, even the Let’s-think-about-it Firsters miscalculate the reality algorithm.

At some point, months, perhaps years later, the decisionmakers begin to experience the miscalculation of their earlier solutions. (I use the verb “experience” here purposefully. It’s hard to change a decision you’re invested in until you FEEL the mistake you’ve made.) The levers they pulled didn’t deliver the causal punch they expected and–usually worse–produced unintended consequences that only appear to make things worse. That’s when the decisionmaking dynamic begins to look like this.

It’s at this point that a different, more nuanced, and more flexible set of decisions becomes possible. The bold decisionmakers and Let’s-think-about-it Firsters are closer in their appreciation of the dynamic they are trying to “solve.” (Even at this point, only the boldest would dare suggest that no solution might be immediately forthcoming.)

As is usually the case, I don’t have a “solution” for this predicament. There probably isn’t one. Perhaps the best approach is for everyone to be a bit more humble about their recommended courses of action. And always be ready to revisit decisions you made, even if you were positive they were the right thing to do.

The Fallacy of Worst Case Scenarios

Hurricane Sandy has led to some pretty foolish thinking. Just the other night some high school classmates were involved in some dreary silliness on Facebook about what might have caused Noah’s flood. One, a proud climate change denier and I’m sure strong Christian, opined that perhaps global warming back then was caused by dinosaur farts. (I told you it was dreary.)

Hard to take that conversation seriously, but there are other statements in the wake of Hurricane Sandy that strike me as almost as inane. Several times I’ve heard officials say some version of “no one thought something like this could happen.” What a strange excuse that is! First, of course, the Discovery and National Geographic channels have made some decent money the past decade entertaining disaster groupies with exactly how something like this could happen. Second, anticipatory thinking is an essential element of managing and leading. If you can’t generate vision, at least you might be able to anticipate.

So I think what people really mean by that statement is that no one wanted to think about the scenario that just happened. No doubt they put it in the category of Worst Case scenario. Once you assign a possible event to the category of worst case scenario, you are at risk of succumbing to a dangerous fallacy, one I saw with some regularity in government. To wit: the assumption that “worst case scenario” and “unlikely” are synonyms.

Think back on some of the meetings you’ve attended. Some Cassandra in the room starts warning of a frightful worst case scenario. Some other person, determined to avoid excess ruffling, says: “Oh you’re overreacting. That’s just a worst case scenario!” What they are also saying, of course, is “that’s very unlikely.”

(I am reminded here of something I once heard former Secretary of Defense Robert Gates say. Because he was a notorious “worst case” thinker he was often accused of being a Cassandra–the prophet who predicted the fall of Troy. At which point he would remind us that “Cassandra was right!”)

Severity of impact and likelihood are independent variables. It is entirely possible for a worst case scenario to be quite feasible, if not even probable. But the conflation of worst case with unlikely is quite common and potentially catastrophic. Unfortunately, things that we find unimaginable have a nasty way of becoming inevitable.

I imagine another factor was at play here, one for which I find more sympathy. When they say that they didn’t think something like Hurricane Sandy could happen, what they also might mean is that they didn’t believe the threat was likely enough to justify taking expensive preventative actions given other budget priorities. This strikes me as a much more honest statement and reflects a judgment that public officials must make on a regular basis. Nevertheless, consideration of “likelihood” must still be made independently of any other variable. We humans have a tendency to think we are bulletproof and to assume that bad things are unlikely to happen to us. When in fact the art of living, of government, of leadership in any sector is best captured by how well we deal with calamities.
For those who can, I am linking here to the donation form for AmeriCares, a charity I myself support.

Donate Now.

The Ten Habits of Non-Conventional Thinkers

One of the things I do a couple of times a year is lead a discussion on conventional wisdom. It wasn’t my idea to do this. I was asked a few years ago by someone who was teaching a class on intelligence who wanted to hold a session on conventional wisdom. He thought I was the perfect person to lead it. Whatever…

Conventional wisdom is like the monster under the bed for intelligence analysts. We’re all afraid of it but we can’t quite describe what it looks like. Some worry very much that they are actually guilty of it themselves. But we’d rather not talk about it.

I struggled with the assignment because I felt that to talk about conventional wisdom I needed to use examples. Otherwise it wouldn’t be meaningful. But of course my example of conventional wisdom might be someone else’s strongly-held beliefs. I finally hit upon the idea of talking about conventional wisdom in the context of cosmology. If you study the history of man’s understanding of the universe and its origins, you become aware that it is actually the story of conventional wisdoms (plural intentional). The prevailing theory is replaced by a new theory that sooner or later becomes conventional wisdom ripe for replacement by the next new theory. Second verse same as the first.

The lessons I draw could also be reversed and thought of as best practices for people who don’t want to be conventional thinkers. And so here they are. The 10 habits of non-conventional thinkers just in case you want to be one too.

1. Non-conventional thinkers are very suspicious of what anyone says they “know.” They consider knowledge a pretty slippery character who is largely the creation of whatever sensemaking tools are popular at the moment. When we develop new tools, we develop new knowledge that often topples all previous architectures of knowledge.

2. Non-conventional thinkers eschew tidy, neat thinking. They like messy ideas. They go looking for them. They are not taken in by common human crutches such as the desire for symmetry.

3. This is hard, but non-conventional thinkers try to avoid falling in love with their ideas. They are mean to them, even abusive. (Or at least they should be!)

4. Non-conventional thinkers don’t censor themselves. They try to say out loud or write down everything they’re thinking. I think too many people don’t even offer up the good ideas inside their heads.

5. They talk to and listen to very diverse people. They enjoy reading about ideas that are way out there. Last night on Netflix I watched a fascinating biography of William Burroughs. My Netflix Horoscope right now says I enjoy watching cerebral biographical documentaries.

6. They don’t think much is sacred. Not even Albert Einstein. Albert Einstein was a brilliant man. It seems he was also a kind person. But he famously hated the idea of an expanding universe. Because astronomers and physicists were so in awe of him, they tried for about 20 years to explain away data that pointed to a big bang.

7. Non-conventional thinkers love to attack disciplines and ideas that have been static for a long time. They like even better to attack truths.

8. They love to look at things from completely different angles. They want to see the very finest details. They actually prefer to know exactly how things work. Non-conventional thinkers take no perspectives for granted and expect to find an element of truth even in the most outlandish points of view.

9. Non-conventional thinkers like to stimulate their thinking with sillinesses. They will engage in little rituals that fertilize their brains. Today I colored.

10. Non-conventional thinkers never stop looking.