Thinking in the Time of Coronavirus–Part 2

The previous post discussed three important thinking dynamics relevant to our analysis of coronavirus:

Actions reveal intentions and motivations;
Ideology often colors how we think; and
Worst-case scenarios are always considered unlikely.
(I’d amend that now to say almost always.)

…but there are many more.

Since the crisis began in January, I’ve come across many commentators—scientists, non-scientists, experts in other fields, jacks-of-all-trades—speculating about coronavirus and attempting to predict its course. Many statements were similar to this one by Dr. Fauci on January 26: “It’s a very, very low risk to the US.” I could not comprehend at the time the evidentiary or logical basis for such statements. Did the individuals making these statements believe the Chinese Government was engaged in some weird overreaction or that the virus would prosper uniquely in China? Did they assume that the hundreds of thousands of people who would come to the US in 2020 after visiting China (or, in a few weeks’ time, Italy) would all be free of the disease or that we would somehow detect them as they entered the country? Were they just making a linear projection from the minuscule number of cases then in the US?

One cognitive pathology at work here is that INDIVIDUALS, EVEN TRAINED SCIENTISTS, ARE REALLY BAD AT DRAWING APPROPRIATE CONCLUSIONS FROM AVAILABLE EVIDENCE. Because I worked as an analyst at CIA for 32 years, I am familiar with this phenomenon. Policymakers are always demanding judgments from analysts, and we often feel obliged to provide them even when the evidentiary basis is insufficient. At any moment regarding any situation, how accurately does the evidence available to us reflect reality? Today as I write this, how much do we really know about coronavirus: 50% of reality, 30%, 10%? The answer at this point is unknowable. Therefore, predictions concerning its future course are tenuous.

Two other realities about thinking are worth mentioning here. First, OUR ABILITY TO KNOW IS A FUNCTION OF OUR TOOLS FOR KNOWING. We can only know what our tools reveal to us. Breakthroughs and revolutions in thinking in so many fields have resulted from the invention or discovery of new knowledge tools. In cosmology, for example, our understanding of the universe expanded when we learned to build great observatories and combined cameras with telescopes. The deployment of orbital platforms such as the Hubble has further revolutionized our knowledge.

Our understanding of coronavirus has been diminished not just by its novelty but also because China may not have revealed all it has learned about the disease. Another tool problem is the lack of comprehensive testing of populations. Some of my Texas friends have claimed that Texas must be doing a great job containing coronavirus (or that there really isn’t a threat) because of the relatively low rates of infections and deaths. But Texas, as of April 15, has one of the three lowest rates of testing in the country. We don’t really know what’s going on there. And we won’t comprehend critical attributes of the virus, such as fatality and contagion rates, until we have tested a large and random sample of our population. This inherently incomplete nature of our knowledge should make us more humble about our predictions and expectations concerning the course of the disease. For many questions, we still do not have sufficient information to make a firm determination and thus need to err on the side of caution and resilience.
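To see why limited, skewed testing matters, consider a back-of-the-envelope sketch. All the numbers below are hypothetical, chosen only to illustrate the mechanism: when scarce tests go mostly to the very sick, the fatality rate we observe can be many times the true one.

```python
# Illustrative sketch (all numbers hypothetical): how scarce, skewed testing
# inflates the apparent case-fatality rate.
true_infections = 100_000   # actual infections in a population (unknown to us)
true_deaths = 500           # deaths among those infections
true_ifr = true_deaths / true_infections   # infection-fatality rate: 0.5%

# Suppose testing is scarce and skewed toward the very sick,
# so only 10% of infections are ever confirmed as cases.
confirmed_cases = int(true_infections * 0.10)
apparent_cfr = true_deaths / confirmed_cases

print(f"True infection-fatality rate: {true_ifr:.1%}")      # 0.5%
print(f"Apparent case-fatality rate:  {apparent_cfr:.1%}")  # 5.0%
```

The same arithmetic cuts the other way for contagion: the fewer people we test, the more infections we miss, and the less we know about either rate.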

But instead we have a tendency when confronted with limited information to succumb to THE STREETLIGHT EFFECT. The joke is that a policeman runs across an individual, usually described as inebriated, looking for car keys under a street lamp. When the policeman asks if this is where the keys were lost, the seeker answers “No, but this is the only place I can see.”

When we make confident predictions based on insufficient or flawed evidence, we are succumbing to the streetlight effect. One vivid example is how people jumped on the hydroxychloroquine bandwagon after just a couple of positive reports. At the start of the pandemic, many argued (and some still do) that covid-19 would be no worse than a bad seasonal flu. Those arguments were based on deaths up to that point (a few hundred or thousand) and I’m not exactly sure what else. There are so many flaws in that argument it’s hard to know where to begin. First, the number of flu deaths is a total for an entire year while the number of covid-19 deaths covers just a few weeks; we are assuming a lot about how the disease (and people…) will perform during the course of an entire year. Second, the statement assumed linear growth, which of course is not what happens during uncontrolled epidemics. Third, this argument implied that the Chinese stupidly and inexplicably closed down their economy because of the seasonal flu. (Actions reveal intentions and motivations.)
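The difference between those two growth assumptions is easy to make concrete. Here is a toy projection; the starting figure and weekly doubling time are assumptions picked for illustration, not covid-19 estimates:

```python
# Toy comparison (hypothetical numbers): projecting cumulative deaths
# linearly vs. with uncontrolled exponential growth (doubling every week).
weeks = 12
start = 200          # deaths observed in week 0
linear_step = 200    # linear projection: the same increment every week

linear = [start + linear_step * w for w in range(weeks)]
exponential = [start * 2 ** w for w in range(weeks)]

for w in range(0, weeks, 4):
    print(f"week {w:2d}: linear {linear[w]:>7,}   exponential {exponential[w]:>9,}")
# week 0: 200 vs 200; week 4: 1,000 vs 3,200; week 8: 1,800 vs 51,200
```

The two projections agree at the start, which is exactly why early death counts tell you so little about which world you are in.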

Another flaw in the argument that covid-19 is just another flu is captured by the aphorism: QUANTITY HAS A QUALITY ALL ITS OWN. Mistakenly attributed to Joseph Stalin, the observation appears to have been popularized instead by the US military-industrial complex. It attacks the logic behind linear projections—it’s just more of the same thing and therefore we can handle it. At some point, more of the same thing evolves into a different plant; we can pull out a few weeds by hand but not an entire yardful. And quantity is not the only factor in play; pacing and tempo have significant impacts as well. One million cases of covid-19 during the course of a year may be manageable, but half a million cases in 8 weeks, not so much.
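A quick calculation shows why tempo matters. Assume, purely for illustration, a health system that can absorb 25,000 new cases a week (a made-up capacity figure):

```python
# Sketch of why pacing matters, using a made-up weekly capacity figure.
weekly_capacity = 25_000   # hypothetical: cases the system can absorb per week

def weekly_load(total_cases, weeks):
    """Assume cases arrive evenly; return load per week and share of capacity."""
    load = total_cases / weeks
    return load, load / weekly_capacity

for total, weeks in [(1_000_000, 52), (500_000, 8)]:
    load, ratio = weekly_load(total, weeks)
    print(f"{total:,} cases over {weeks} weeks -> {load:,.0f}/week "
          f"({ratio:.0%} of capacity)")
# 1,000,000 over 52 weeks -> 19,231/week (77% of capacity)
# 500,000 over 8 weeks    -> 62,500/week (250% of capacity)
```

Half the caseload, two and a half times the strain: the smaller number is the worse problem.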

When I’m asked to recommend a book for aspiring intelligence analysts, I always mention Daniel Kahneman’s Thinking, Fast and Slow. One of his famous findings is that humans are bad at comprehending exponential numbers. (If you start with a penny and double it every day, at the end of the month you will have more than $5 million; actually, if the month has 31 days, you end up with more than $10 million.)
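The penny arithmetic checks out, and it is worth seeing just how abruptly the total explodes:

```python
# Verifying the penny-doubling arithmetic: a penny doubled once a day.
penny = 0.01
for day in (15, 30, 31):
    total = penny * 2 ** (day - 1)   # day 1 is the original penny
    print(f"day {day}: ${total:,.2f}")
# day 15: $163.84
# day 30: $5,368,709.12
# day 31: $10,737,418.24
```

Halfway through the month you have the price of a nice dinner; the final day alone adds over $5 million. That is the intuition our linear minds refuse to supply.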

I like to extend that idea by observing that HUMANS FIND IT HARD TO DEAL WITH EXPONENTIAL CAUSALITY. Exponential causality is one of the characteristics of complex systems. Any one event can have a cascade of consequences in unpredictable directions and time frames. Feedback can even travel backwards in time in the sense that a development today can reveal the unappreciated causal importance of some past event. Because exponential causality confounds humans, we like to pretend it doesn’t exist; a popular way to do that these days is by subscribing to conspiracy theories. So many factors contribute to today’s reality that there’s always a stray thread or two that can be pulled to create a conspiracy-based explanation. If you yearn for a simpler, linear world, then you’re happy to accept that Bill Gates and 5G technology have combined to cause the coronavirus. It’s a particularly dangerous cognitive trap.
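To make the cascade idea concrete, here is a minimal branching-process sketch (the offspring numbers are assumptions for illustration, not a model of any real epidemic or event): every run starts from a single identical event, yet the outcomes diverge wildly.

```python
import random

# Minimal branching-process sketch of exponential causality: each event
# triggers a random number of downstream consequences (0-3, average 1.5).
random.seed(1)

def cascade_size(generations=10):
    current, total = 1, 1                 # start from a single event
    for _ in range(generations):
        children = sum(random.randint(0, 3) for _ in range(current))
        total += children
        current = children
        if current == 0:                  # a cascade can also fizzle out early
            break
    return total

sizes = sorted(cascade_size() for _ in range(20))
print(f"smallest cascade: {sizes[0]} events, largest: {sizes[-1]} events")
```

Same starting conditions, radically different histories; no conspiracy required.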

One of my first bosses at CIA, John, impressed me with a story from his early days as an analyst. He was following a particular insurgent group in southeast Asia in the 1960s and had calculated that, because of supply disruptions, the group would literally use up its ammunition by a date certain. John’s boss advised him to rethink his analysis because YOU NEVER RUN OUT OF BULLETS. In other words, linear predictions are always flawed because (1) our knowledge of any situation is incomplete; (2) we never know the exact dimensions of our ignorance; and (3) shit happens.


Which brings us to the topic of coronavirus models. I’m sure statisticians will beat me up for this, but I often think of models as compilations of hundreds of linear projections. The modeler tries to include every possible variable in her model and stipulates the tens of thousands of relationships among the variables—which is, like, really hard. As the model runs, every possible combination of variables is instantiated. This can be helpful to policymakers by representing a complex set of possibilities in a more digestible fashion. But models always simplify the complex—they make more linear that which is random. In my experience, models are particularly bad at accounting for the variations and peculiarities of human psychology—one of the most important factors determining the course of covid-19. Indeed, the failings of models will luckily keep human intelligence analysts employed for years to come.
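For readers who have never poked at one, here is what the skeleton of such a model can look like: a minimal SIR (susceptible-infected-recovered) sketch, with parameters invented for illustration rather than fitted to covid-19. Note how much it stipulates: fixed transmission and recovery rates, a perfectly mixed population, and no human psychology at all.

```python
# Minimal discrete-time SIR epidemic model. All parameters are
# illustrative assumptions, not estimates for covid-19.
def sir(population=1_000_000, infected=100, beta=0.30, gamma=0.10, days=365):
    """beta: daily transmission rate; gamma: daily recovery rate."""
    s, i, r = population - infected, infected, 0
    peak_day, peak_cases = 0, i
    for day in range(1, days + 1):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        if i > peak_cases:
            peak_day, peak_cases = day, i
    return peak_day, peak_cases

day, cases = sir()                   # baseline contact rate
day_d, cases_d = sir(beta=0.15)      # transmission halved (say, by distancing)
print(f"baseline: peak of {cases:,.0f} active cases on day {day}")
print(f"halved transmission: peak of {cases_d:,.0f} on day {day_d}")
```

Change one stipulated number and the projected epidemic transforms. That sensitivity is both what makes models useful for exploring levers and what makes their point predictions so fragile.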

Another useful aspect of models is that they bring into focus the most dangerous possible outcomes and identify the levers policymakers and individuals can pull to avoid them. Which brings us to the PARADOX OF WARNING. The world has moved smartly to limit the worst consequences, although the ones we’re left with are still pretty dire; it turns out the Chinese were not crazy to lock down entire cities to prevent further spread of the disease. But as we succeed in lowering the final number of deaths and infections, we start hearing from critics who claim the crisis was exaggerated from the start. Aaaargh! The only point of warning is to avoid the bad outcomes. No one should be rooting for maximum coronavirus. Effective warners always want to be wrong.

The coronavirus pandemic illustrates that good thinking is more than an academic exercise. It can be a matter of life and death. I’ve seen too many friends on social media using poor arguments to justify bad decisions. Please everyone, just put on your thinking caps.


2 responses to “Thinking in the Time of Coronavirus–Part 2”

  1. Thanks for writing these, Carmen. You offer food for thought and reawaken memories of what can happen when policymakers determine that the worst case has come true (e.g., the run-up to the Iraq war). In defense of Dr. Fauci’s “very, very low risk” statement in January, when there were only two known cases of the virus in the US, his remark appears designed to discourage the American citizenry from panicking while emphasizing that the public health community was taking the virus very seriously. In other words, his statement was less an analytic one and more of a public service announcement.

  2. Great explanation of exponential causality and the flaws of linear predictions. I also like the title of this article, which looks to me like a takeoff on Gabriel García Márquez’s classic, “Love in the Time of Cholera”. Cheers
