
Thinking in the Time of Coronavirus–Part 2

The previous post discussed three important thinking dynamics relevant to our analysis of coronavirus:

Actions reveal intentions and motivations;
Ideology often colors how we think; and
Worst-case scenarios are always considered unlikely.
(I’d amend that now to say almost always.)

…but there are many more.

Since the crisis began in January, I’ve come across many commentators—scientists, non-scientists, experts in other fields, jacks-of-all-trades—speculating about coronavirus and attempting to predict its course. Many statements were similar to this one by Dr. Fauci on January 26: “It’s a very, very low risk to the US.” I could not comprehend at the time the evidentiary or logical basis for such statements. Did the individuals making these statements believe the Chinese Government was engaged in some weird overreaction or that the virus would prosper uniquely in China? Did they assume that the hundreds of thousands of people who would come to the US in 2020 after visiting China (or, in a few weeks’ time, Italy) would all be free of the disease or that we would somehow detect them as they entered the country? Were they just making a linear projection from the minuscule number of cases then in the US?

One cognitive pathology at work here is that INDIVIDUALS, EVEN TRAINED SCIENTISTS, ARE REALLY BAD AT DRAWING APPROPRIATE CONCLUSIONS FROM AVAILABLE EVIDENCE. Because I worked as an analyst at CIA for 32 years, I am familiar with this phenomenon. Policymakers are always demanding judgments from analysts, and we often feel obliged to provide them even when the evidentiary basis is insufficient. At any moment regarding any situation, how accurately does the evidence available to us reflect reality? Today as I write this, how much do we really know about coronavirus: 50% of reality, 30%, 10%? The answer at this point is unknowable. Therefore, predictions concerning its future course are tenuous.

Two other realities about thinking are worth mentioning here. First, OUR ABILITY TO KNOW IS A FUNCTION OF OUR TOOLS FOR KNOWING. We can only know what our tools reveal to us. Breakthroughs and revolutions in thinking in so many fields have been the result of the invention or discovery of new knowledge tools. In cosmology, for example, our understanding of the universe expanded when we learned to build great observatories and combined cameras with telescopes. The deployment of orbital platforms such as the Hubble has further revolutionized our knowledge.

Our understanding of coronavirus has been diminished not just by its novelty but also because China may not have revealed all it has learned about the disease. Another tool problem is the lack of comprehensive testing of populations. Some of my Texas friends have claimed that Texas must be doing a great job containing coronavirus (or that there really isn’t a threat) because of the relatively low rates of infections and deaths. But Texas, as of April 15, has one of the three lowest rates of testing in the country. We don’t really know what’s going on there. And we won’t comprehend critical attributes of the virus, such as fatality and contagion rates, until we have tested a large and random sample of our population. This inherently incomplete nature of our knowledge should make us more humble about our predictions and expectations concerning the course of the disease. For many questions, we still do not have sufficient information to make a firm determination and thus need to err on the sides of caution and resilience.
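To make the testing point concrete, here’s a minimal sketch in Python; every number in it is invented for illustration. It shows why fatality rates depend on who gets tested, not just on how many tests are run: if we test only the very sick, the denominator is wrong.

```python
# Toy illustration (every number here is invented) of why fatality
# rates can't be pinned down without random-sample testing.

population = 10_000_000
deaths = 2_500

# Naive case fatality rate: deaths divided by CONFIRMED cases.
# If testing is limited to the very sick, confirmed cases badly
# undercount infections and the rate looks frighteningly high.
confirmed_cases = 50_000            # only the sickest got tested
print(f"Naive CFR: {deaths / confirmed_cases:.1%}")          # 5.0%

# A large random sample lets us estimate true prevalence and
# correct the denominator.
sample_size = 20_000
positives_in_sample = 1_000         # 5% of the random sample
prevalence = positives_in_sample / sample_size
estimated_infections = prevalence * population               # 500,000
print(f"Estimated IFR: {deaths / estimated_infections:.2%}") # 0.50%
```

Same deaths, a tenfold difference in the apparent lethality of the disease, all depending on how well we measured the denominator.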

But instead we have a tendency when confronted with limited information to succumb to THE STREETLIGHT EFFECT. The joke is that a policeman runs across an individual, usually described as inebriated, looking for car keys under a street lamp. When the policeman asks if this is where the keys were lost, the seeker answers “No, but this is the only place I can see.”

When we make confident predictions based on insufficient or flawed evidence, we are succumbing to the streetlight effect. One vivid example is how people jumped on the hydroxychloroquine bandwagon after just a couple of positive reports. At the start of the pandemic, many argued (and some still do) that covid-19 would be no worse than a bad seasonal flu. Those arguments were based on deaths up to that point (a few hundred or thousand) and I’m not exactly sure what else. There are so many flaws in that argument it’s hard to know where to begin. First, flu deaths are totals for an entire year, while the covid-19 deaths were counts from just a few weeks; we are assuming a lot about how the disease (and people…) will perform during the course of an entire year. Second, the statement assumed linear growth, which of course is not what happens during uncontrolled epidemics. Third, this argument implied that the Chinese stupidly and inexplicably closed down their economy because of the seasonal flu. (Actions reveal intentions and motivations.)
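The gap between a linear projection and uncontrolled exponential growth is easy to demonstrate. A minimal sketch, with hypothetical numbers: both scenarios start at 1,000 cases, one adding 1,000 per week, the other doubling weekly.

```python
# Linear projection vs. exponential growth (hypothetical numbers).
# Both start at 1,000 cases; one adds 1,000 per week, the other
# doubles weekly, as uncontrolled epidemics roughly do early on.

cases_linear = 1_000
cases_exponential = 1_000

for week in range(1, 11):
    cases_linear += 1_000        # linear: a constant increment
    cases_exponential *= 2       # exponential: constant doubling
    print(f"Week {week:2d}: linear {cases_linear:>7,} "
          f"vs. exponential {cases_exponential:>9,}")

# After 10 weeks the linear projection says 11,000 cases;
# weekly doubling says 1,024,000. Same starting point, a
# roughly hundredfold difference in the conclusion.
```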

Another flaw in the argument that covid-19 is just another flu is captured by the aphorism: QUANTITY HAS A QUALITY ALL ITS OWN. Mistakenly attributed to Joseph Stalin, the observation appears instead to have been popularized by the US military-industrial complex. It attacks the logic behind linear projections—it’s just more of the same thing and therefore we can handle it. At some point, more of the same thing evolves into a different plant; we can pull out a few weeds by hand but not an entire yard-full. And quantity is not the only factor in play; pacing and tempo have significant impacts as well. One million cases of covid-19 during the course of a year may be manageable, but half a million cases in 8 weeks, not so much.
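The pacing point yields to the same back-of-the-envelope arithmetic. In this sketch the hospitalization rate and weekly capacity are assumptions chosen purely for illustration:

```python
# Quantity vs. tempo (all numbers invented for illustration).
# Assume ~20% of cases need hospital care and hospitals can absorb
# roughly 10,000 new covid-19 admissions per week.

WEEKLY_HOSPITAL_CAPACITY = 10_000
HOSPITALIZATION_RATE = 0.20

def weekly_admissions(total_cases: int, weeks: int) -> float:
    """Average weekly hospital admissions implied by a caseload."""
    return total_cases * HOSPITALIZATION_RATE / weeks

# One million cases spread over a year:
print(weekly_admissions(1_000_000, 52))   # ~3,846 per week: under capacity
# Half a million cases compressed into 8 weeks:
print(weekly_admissions(500_000, 8))      # 12,500 per week: over capacity
```

Half the caseload, a fraction of the time, and the system breaks. Tempo matters as much as totals.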

When I’m asked to recommend a book for aspiring intelligence analysts, I always mention Daniel Kahneman’s Thinking, Fast and Slow. One of his famous findings is that humans are bad at comprehending exponential numbers. (If you start with a penny and double it every day, at the end of the month you will have more than $5 million; actually, if the month has 31 days, you end up with more than $10 million.)
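The penny arithmetic is easy to verify; here’s a minimal sketch:

```python
# Verify the doubling-penny arithmetic: one cent, doubled daily.
amount = 0.01
for day in range(2, 32):        # day 1 is the initial penny
    amount *= 2
    if day in (30, 31):
        print(f"Day {day}: ${amount:,.2f}")
# Day 30: $5,368,709.12
# Day 31: $10,737,418.24
```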

I like to extend that idea by observing that HUMANS FIND IT HARD TO DEAL WITH EXPONENTIAL CAUSALITY. Exponential causality is one of the characteristics of complex systems. Any one event can have a cascade of consequences in unpredictable directions and time frames. Feedback can even travel backwards in time in the sense that a development today can reveal the unappreciated causal importance of some past event. Because exponential causality confounds humans, we like to pretend it doesn’t exist; a popular way to do that these days is by subscribing to conspiracy theories. So many factors contribute to today’s reality that there’s always a stray thread or two that can be pulled to create a conspiracy-based explanation. If you yearn for a simpler, linear world, then you’re happy to accept that Bill Gates and 5G technology have combined to cause the coronavirus. It’s a particularly dangerous cognitive trap.

One of my first bosses at CIA, John, impressed me with a story from his early days as an analyst. He was following a particular insurgent group in Southeast Asia in the 1960s, and had calculated that because of supply disruptions the group would literally use up its ammunition by a date certain. John’s boss advised him to rethink his analysis because YOU NEVER RUN OUT OF BULLETS. In other words, linear predictions are always flawed because (1) our knowledge of any situation is incomplete; (2) we never know the exact dimensions of our ignorance; and (3) shit happens.

Which brings us to the topic of coronavirus models. I’m sure statisticians will beat me up for this, but I often think of models as compilations of hundreds of linear projections. The modeler tries to include every possible variable in her model and stipulates the tens of thousands of relationships among the variables—which is like really hard. As the model runs, every possible combination of variables is instantiated. This can be helpful to policymakers by representing in a more digestible fashion a complex set of possibilities. But models always simplify the complex—they make more linear that which is random. In my experience, models are particularly bad at accounting for the variations and peculiarities of human psychology—one of the most important factors determining the course of covid-19. Indeed, the failings of models will luckily keep human intelligence analysts employed for years to come.
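I won’t pretend to reproduce any actual coronavirus model, but a toy SIR (susceptible-infected-recovered) model, a standard textbook construct offered here purely as a sketch, shows both what it means for a modeler to stipulate relationships among variables and how sensitive the output is to a single assumed parameter:

```python
# Toy SIR model (a standard textbook construct, not any specific
# coronavirus model). Every relationship below is a stipulation by
# the modeler -- exactly the kind of simplification discussed above.

def sir_peak(population, infected0, beta, gamma, days):
    """Discrete-time SIR: beta = transmission rate, gamma = recovery rate.
    Returns the peak number of simultaneously infected people."""
    s, i, r = population - infected0, infected0, 0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# A small change in the assumed transmission rate (say, from shifts
# in distancing behavior, the human-psychology variable models
# handle worst) moves the peak dramatically.
for beta in (0.30, 0.25, 0.20):
    peak = sir_peak(population=1_000_000, infected0=100,
                    beta=beta, gamma=0.1, days=365)
    print(f"beta={beta:.2f}: peak infected ~{peak:,.0f}")
```

A few percentage points of difference in one stipulated parameter, and the projected peak swings by a factor of two. That is the fragility hiding under every confident model-based headline number.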

Another useful aspect of models is that they bring into focus the most dangerous, possible outcomes and identify the levers policymakers and individuals can pull to avoid them. Which brings us to the PARADOX OF WARNING. The world has moved smartly to limit the worst consequences although the ones we’re left with are still pretty dire; it turns out the Chinese were not crazy to lock down entire cities to prevent further spread of the disease. But as we succeed in lowering the final number of deaths and infections, we start hearing from critics who claim the crisis was exaggerated from the start. Aaaargh! The only point of warning is to avoid the bad outcomes. No one should be rooting for maximum coronavirus. Effective warners always want to be wrong.

The coronavirus pandemic illustrates that good thinking is more than an academic exercise. It can be a matter of life and death. I’ve seen too many friends on social media using poor arguments to justify bad decisions. Please everyone, just put on your thinking caps.


Thinking in the Time of Coronavirus–Part 1

I’ve been wanting to comment on all the examples of bad thinking and cognitive traps I’ve seen regarding coronavirus for a while now (since early February, for sure), but I’ve hesitated to put them down in writing because there is already too much content drawing spurious links to this horrible pandemic. But as we see signs that the infection curves are beginning to flatten in some countries (although certainly not all), it strikes me that good thinking will be just as critical as we work to recover our economies and manage the continuing threat of disease. So what follows is a compilation of some of the best and worst thinking practices revealed so far this year. (There are many, so expect at least two posts.)

I was convinced the reports of a new, SARS-like disease in China were significant by mid-January. On 16 January I spoke at a conference that had a sizable contingent of attendees from Seattle and I remember fretting that Seattle would likely be one of the first American cities to get hit by coronavirus given the Chinese population on the West Coast and the travel patterns associated with Lunar New Year. I started tweeting and posting on Facebook about the disease in the second half of January and by late February it dominated my posts. Friends have asked me why I was so sure the disease would pose such a threat and I answered with one of my favorite heuristics from my CIA years: ACTIONS REVEAL INTENTIONS AND MOTIVATIONS.

When you’re trying to figure out a government or actor’s intentions, it’s always best to start with their actions. Pay attention to what they are doing. Given China’s obsession with economic growth and how the Communist Party’s legitimacy rested on delivering prosperity, I could not imagine why China would have closed down one of its most important cities out of an “abundance of caution”—a good name for a new rock band. The coronavirus had scared the shit out of the Chinese Government and the most reasonable explanation was that it was contagious and dangerous.

When we began to see reports of massive disinfection campaigns and attacks on Chinese doctors who issued first warnings, I began to wonder what Beijing was trying to hide, if anything. Of course there was immediate speculation that coronavirus was some type of bioweapon; I’m no expert on this issue so I have to accept the judgment that the virus is not man-made. But the possibility that coronavirus leaked because of an industrial mishap or accidental discharge remains credible to me. Recent reports that the Chinese Government is controlling research into the origins of coronavirus just further pique my suspicions. Actions reveal intentions and motivations.

When I actually shared this view on social media a few weeks ago, several friends criticized me for going there. Why, I wondered. It wasn’t like the Chinese Government was known for its transparency and complete honesty. Why couldn’t these ideas be entertained? My answer in part is that IDEOLOGY OFTEN COLORS HOW WE THINK. There are so many examples of this dynamic spanning the ideological spectrum.

  • Advocates of globalization loath to admit that China might have deceived other countries.
  • Supporters of the international system reluctant to criticize the World Health Organization.
  • Proponents of American exceptionalism insisting, against a lot of evidence, that the US has had the best response to the coronavirus.
  • Backers of the President condemning any suggestion that the US could have acted more quickly to contain the disease.
  • Critics of the President attacking his decision to limit travel from China in late January, although it was clearly the right thing to do. The more valid criticism is that it didn’t go far enough and there were too many loopholes.

And countless other examples we could mention. Because this is such a terrifying disease, it’s natural for people to fall back upon their values and ideological beliefs to interpret events. It’s natural but not helpful. In fact, it’s dangerous. Our beliefs lead us to ignore facts that don’t fit our ideology and overamplify developments that do. Unfortunately this thinking weakness will haunt our recovery efforts, particularly in the US where our politics have become exceptionally poisonous.

One important caveat: our ideology and values will play an unavoidable role going forward as we think about levels of acceptable risk. To my knowledge there is no objective way to measure the value of a human life. In the months to come we will be trading hundreds if not thousands of lives for fractions of a percentage point of economic growth. Your values are what will determine how you solve that equation. Less-polarized societies will find it easier to agree on the solution. The math will be difficult for the US. (And let me add that the very idea that this can be thought of as a math problem is anathema to many.)

I spoke at a conference in D.C. on 6 February about cognitive traps and used the emerging disease for my examples. The cognitive bias most evident then was that WORST-CASE SCENARIOS ARE ALWAYS CONSIDERED UNLIKELY. In early February few people expected the disease to ravage Western Europe and the US, and any suggestion that it would was painted as a worst-case scenario. Indeed, the first deaths did not occur in Italy until the last week of February. And yet it was reasonable to assume, I thought, that the disease could easily flare up in any country with connections to China, which was basically any place on the planet.

If you’re an analyst responsible for warning, remember that when you paint the most dangerous scenarios as worst-case, you make it easier for the decision-maker to dismiss them. And that’s what appears to have happened in the US government. Impact and probability need to be thought of as independent variables. Some category of “worst-case” scenario happens every year; the only “unlikely” aspect of “worst-case” scenarios is the ability to predict their timing. We are unable to know with precision when a dangerous development will occur, but we are sure to experience several in our lifetimes.

Humans have been flourishing on this planet for tens of thousands of years, solving many problems (and, of course, creating others). We can assume that almost all the easy problems have been solved and many of the hard ones as well. Going forward, most of our problems will be difficult to handle and few, if any, will have clear-cut solutions. Only good thinking will help.