As commentators, politicians and academics struggle to make sense of the recent financial crisis and its ramifications, many of their accounts seek to identify a root cause or the “beginning of the story.”

Theories abound: Former Federal Reserve Board Chairman Alan Greenspan’s “easy money” for banks; the blindness of the credit-rating companies; strategies that encouraged low-income Americans to own homes; the invention of high-risk investment instruments; highly leveraged borrowing; short-sighted executives; greedy investment bankers; lying real-estate dealers; and so on.

We know that no single cause or event set the crisis in motion and that the truth is complex and multicausal. So why do we keep seeking easy answers? It may be that we are hard-wired to do so.

The human brain is designed to support two modes of thought: visual and narrative. These forms of thinking are universal across human societies throughout history, develop reliably early in individuals’ lives, and are associated with specialized regions of the brain. What isn’t universal or natural are the highly structured cognitive processes that underlie logical and mathematical thinking -- the kinds of analysis that produce the most remarkable human cultural products, especially scientific achievements such as interplanetary travel, electronic devices and genetic engineering. These same processes enable the analysis needed to design effective economic policies and business strategies.

Visual Thinking

Nonetheless, we shouldn’t underestimate the powers of our innate visual and narrative cognitive systems. The human visual system is a remarkably adept interpreter of the physical environment. It still beats any machine-vision software at object recognition. And we navigate complex three-dimensional environments, even when moving at speeds far beyond those afforded by our physical bodies, with amazingly few accidents.

And compared with all other animals, we are endowed with remarkable capacities for causal discovery and causal reasoning, the skills that underlie the narrative habit. The divide between humans and our nearest primate cousins in causal cognition is as dramatic as our advantage in language use.

The trouble is that narrative thinking often supplants scientific thinking in domains of analysis and policy where we should look for more than a good story. Narrative thinking is easier for the thinker than its less natural analytic alternatives, and it is often persuasive when used to make arguments to others.

For one thing, narratives give us a false sense of understanding and control, when they are really mere redescriptions of selected subparts of the events to which they refer. Once we have a good narrative summary, we have the illusion that we could have intervened and controlled outcomes, or could have predicted what in hindsight seems to be an obvious outcome. But unlike valid causal explanations, narratives do not support informative forecasts or suggest ways to change events further down the causal stream -- the basic properties of true causal explanations.

Narratives also tend to be dominated by a few major actors, and their faux explanatory power derives from simplistic interpretations of those actors’ characters and motives. And because of the universal human illusion that our consciously accessible thoughts are in the driver’s seat, controlling our actions, we attribute more information and stronger incentives to the salient actors in a narrative than is warranted.

New Model

The mathematics of causal reasoning has recently experienced a major change, with the widespread acceptance of Bayesian Causal Networks as a normative, rational model for causal induction and reasoning. Now, analytic approaches to causal discovery and causal analysis are becoming the gold standard and replacing expert intuitions about causes in important scientific and policy situations. (Acceptance is not universal, of course, and the Causal Networks approach is new and still “under development.”)

My University of Chicago colleague Benjamin Rottman and I recently reviewed research on intuitive human causal reasoning. We compared human reasoning habits against the “rational” prescriptions from the Bayesian Causal Networks model and found that, as with vision, humans are remarkably well-adapted and close to rational in most regards.

One of the most subtle causal inferences involves reasoning about effects or outcomes that can be produced by several “cause events.” If the multiple causes are independent of one another -- what mathematicians call a “noisy-OR gate” -- the occurrence of each causal event increases the likelihood that the effect will occur, independent of the contributions of whatever other causes may or may not be present. Under this condition, knowing that the effect has occurred and that any one of the possible causes has also occurred should rationally lower our belief in the occurrence of any of the other causes.

This subtle result is called “discounting” or “explaining away.” That is because, once the effect and one sufficient cause are known to have occurred, it becomes less likely that another cause also occurred or is needed to “explain” the effect.

For example, if an outcome, such as a forest fire, is most likely produced by any one of several independent causes, then when we are sure that one cause is arson, we should rationally reduce our belief that it was caused by something else, such as lightning or a careless camper.
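To see the arithmetic behind discounting, here is a minimal sketch of “explaining away” in a noisy-OR model of the forest-fire example. The prior probabilities and cause strengths below are made-up numbers chosen only for illustration, and brute-force enumeration is just one simple way to compute the relevant conditional probabilities (in Python):

```python
from itertools import product

# A toy noisy-OR model of a forest fire with three independent causes.
# All numbers are hypothetical, chosen only to illustrate discounting.
priors = {"arson": 0.1, "lightning": 0.1, "camper": 0.1}    # P(cause occurs)
strength = {"arson": 0.9, "lightning": 0.9, "camper": 0.9}  # P(fire | that cause alone)

names = list(priors)

def joint_with_fire(assignment):
    """P(this pattern of causes) * P(fire | causes) under the noisy-OR gate."""
    p_causes = 1.0
    p_no_fire = 1.0
    for cause, occurred in assignment.items():
        p_causes *= priors[cause] if occurred else 1 - priors[cause]
        if occurred:
            # Each active cause independently fails to start the fire
            # with probability 1 - strength.
            p_no_fire *= 1 - strength[cause]
    return p_causes * (1 - p_no_fire)

def p_lightning(given_arson=None):
    """P(lightning | fire), or P(lightning | fire, arson) if given_arson is set."""
    numerator = denominator = 0.0
    for bits in product([False, True], repeat=len(names)):
        assignment = dict(zip(names, bits))
        if given_arson is not None and assignment["arson"] != given_arson:
            continue
        weight = joint_with_fire(assignment)
        denominator += weight
        if assignment["lightning"]:
            numerator += weight
    return numerator / denominator

print(p_lightning())                  # ~0.37: the fire raises belief in lightning above its 0.1 prior
print(p_lightning(given_arson=True))  # ~0.11: learning of arson "explains away" the lightning
```

Learning that a fire occurred raises the probability of lightning well above its prior; additionally learning that arson occurred pushes it back down toward the prior. That is the discounting pattern described above.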

It appears, however, that humans are a bit too eager to “discount.” Discounting is an error when a conjunction of causes is the correct explanation, as it usually is, and this habit can get us into trouble when we’re reasoning about complex, multiply caused events. Perhaps the single-factor explanations for the recent financial recession reflect a bias produced by over-discounting.

First Causes

Another subtle irrationality in causal reasoning stems from our quest for a coherent, well-formed story. The legendary theorists of decision-making Amos Tversky and Daniel Kahneman illustrated this habit with the following pair of judgment questions: One group of respondents was asked, “What is the probability that a massive flood will occur sometime in the next year and drown more than 1,000 Americans?” The typical estimate was low (less than 20 percent). But, when another comparable sample of respondents was asked, “What is the probability that an earthquake in California will be followed by a flood in the next year that drowns at least 1,000 Americans?” the estimates were significantly higher.

The irrationality is that the second question describes a much more specific event: an earthquake is only one of several possible reasons for the flood referred to in the first question. Because a conjunction of two events can never be more probable than either event alone, it is logically impossible for the second probability to be higher than the first. But because the second question provides a plausible scenario for the unlikely outcome in the first query, our innate preference for a good story trumps our logical thinking skills.
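The underlying logic is just the conjunction rule of probability. A tiny sketch, again with made-up numbers rather than real flood or earthquake statistics, makes the point:

```python
# A minimal illustration of the conjunction rule. All probabilities are
# hypothetical, chosen only so the numbers are mutually consistent.
p_flood = 0.02              # P(catastrophic flood in the next year)
p_quake = 0.10              # P(major California earthquake in the next year)
p_flood_given_quake = 0.15  # P(flood | earthquake): a quake raises flood risk

# P(earthquake AND flood) = P(earthquake) * P(flood | earthquake)
p_quake_and_flood = p_quake * p_flood_given_quake  # = 0.015

# The conjunction is necessarily no more probable than the flood alone,
# because "earthquake and flood" is only one of the ways a flood can happen.
assert p_quake_and_flood <= p_flood
print(p_quake_and_flood, "<=", p_flood)
```

Even though the earthquake scenario feels more vivid and plausible, the specific story can never be more probable than the bare outcome it helps explain.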

So the next time you hear a good story about why the financial recession, or any other economically significant event, was caused by a single collection of bad actors -- or how a simple linear narrative “explains” an important event -- remember this: Just as we are wired to like a diet rich in fats and sugars, we have an appetite for simple, coherent narratives. Neither habit is good for our long-term health.

(Reid Hastie, a professor of behavioral science at the University of Chicago Booth School of Business, is a contributor to Business Class. The opinions expressed are his own.)


To contact the writer of this article: Reid Hastie at reid.hastie@chicagobooth.edu

To contact the editor responsible for this article: Max Berley at mberley@bloomberg.net