One can ask what most accurately describes the essence of intelligent, objective, public service-oriented politics. Is it primarily an honest competition among the dominant ideologies of our times, a self-interested quest for influence and power, or a combination of the two? Does it boil down to understanding the biological functioning of the human mind and how it sees and thinks about the world? Or is it something else entirely?
Turns out, it isn’t even close. Superforecasting comes down squarely on the side of getting the biology right. Everything else is a distant second.
Superforecasting: The Art & Science of Prediction, written by Philip E. Tetlock and Dan Gardner (Crown Publishers, September 2015), describes Tetlock’s ongoing research into what factors, if any, contribute to a person’s ability to predict the future. In Superforecasting, Tetlock asks how well average but intellectually engaged people can do compared to experts, including professional national security analysts with access to classified information.
What Tetlock and his team found was that the interplay between dominant, unconscious, distortion-prone intuitive human cognitive processes (“System 1” or the “elephant” as described before) and less-influential but conscious, rational processes (“System 2” or the “rider”) was a key factor in how well people predicted future events.
Tetlock observes that a “defining feature of intuitive judgment is its insensitivity to the quality of the evidence on which the judgment is based. It has to be that way. System 1 can only do its job of delivering strong conclusions at lightning speed if it never pauses to wonder whether the evidence at hand is flawed or inadequate, or if there is better evidence elsewhere. . . . we are creative confabulators hardwired to invent stories that impose coherence on the world.”
It turns out that with minimal training and the right mindset, some people, “superforecasters,” routinely trounce the experts.
Based on a 4-year study known as the “Good Judgment Project,” funded by the U.S. government’s Intelligence Advanced Research Projects Activity (IARPA), about 2,800 volunteers made over a million predictions on topics that ranged from potential conflicts between countries to currency fluctuations. Those predictions had to be, and were, precise enough to be analyzed and scored.
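The book describes scoring forecasts with Brier scores, which measure the gap between the probabilities a forecaster stated and what actually happened. A minimal sketch in Python for binary yes/no questions (the tournament’s actual scoring was more involved, handling multiple answer categories and forecasts revised over time):

```python
def brier_score(forecasts, outcomes):
    """Mean squared gap between probability forecasts and 0/1 outcomes.
    Lower is better: 0.0 is perfect; always saying "50%" earns 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Three yes/no questions; the forecaster leaned the right way each time:
print(round(brier_score([0.9, 0.1, 0.8], [1, 0, 1]), 4))  # 0.02
```

Because the score punishes the squared error, confident wrong answers are costly, which is why the cautious, probability-minded style described below pays off.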
About 1% of the 2,800 volunteers turned out to be superforecasters who beat national security analysts by about 30% at the end of the first year. One even beat commodities futures markets by 40%.
The superforecaster volunteers did whatever they could to get information, but they nonetheless beat professional analysts who were backed by computers and programmers, spies, spy satellites, drones, informants, databases, newspapers, books, and whatever else lots of money can buy.
As Tetlock put it:
“…these superforecasters are amateurs forecasting global events in their spare time with whatever information they can dig up. Yet they somehow managed to set the performance bar high enough that even the professionals have struggled to get over it, let alone clear it with enough room to justify their offices, salaries, and pensions.”
What Makes Them So Good?
The top 1-2% of volunteers were carefully assessed for personal traits. In general, superforecasters tended to be people who were eclectic about collecting information and open-minded in their worldview. They were also able to step outside of themselves and look at problems from an “outside view.”
To do that they searched out and aggregated other perspectives, which runs counter to the human tendency to seek out only information that confirms what we already know or want to believe. That tendency is an unconscious bias called confirmation bias.
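The aggregation habit can be made concrete with a toy example. A hypothetical sketch (the function name and numbers are illustrative, not from the book; the Good Judgment Project itself used more sophisticated weighted aggregation):

```python
def pooled_forecast(estimates):
    """Average several independent probability estimates of the same event.
    Pooling diverse views tends to cancel out individual forecasters' biases."""
    return sum(estimates) / len(estimates)

# Four people with different information each give a probability:
print(round(pooled_forecast([0.2, 0.5, 0.35, 0.4]), 4))  # 0.3625
```

The design point is simple: errors that lean in different directions partially cancel when averaged, so the pooled estimate is usually closer to the truth than a typical individual one.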
Superforecasters tended to break complex questions down into component parts so that relevant factors could be considered separately, which also tends to reduce unconscious bias-induced fact and logic distortions. In general, superforecaster susceptibility to unconscious biases was significantly lower than it was for other participants. That appeared to be due mostly to their capacity to use conscious System 2 thinking to recognize and then reduce unconscious System 1 biases.
Most superforecasters shared 15 traits including (i) cautiousness rooted in the conviction that little or nothing is certain, (ii) being reflective, i.e., introspective and self-critical, (iii) being comfortable with numbers and probabilities, and (iv) being pragmatic and not wedded to any particular agenda or ideology. Unlike political ideologues, they were pragmatic and did not try to “squeeze complex problems into the preferred cause-effect templates [or treat] what did not fit as irrelevant distractions.”
What the best forecasters knew about a topic and their political ideology was far less important than how they thought about problems, gathered information, and changed their minds based on new information. The best engaged in an endless process of information and perspective gathering, weighing information relevance and questioning and updating their own judgments when it made sense.
It was work that required effort and discipline. Political ideological rigor was detrimental, not helpful.
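One way to formalize the update step is Bayes’ rule, which the book discusses as the superforecasters’ habit of shifting an estimate in proportion to how much more likely new evidence is under one outcome than the other. A hedged sketch (the scenario and numbers are invented for illustration):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Revise the probability of an outcome after seeing new evidence."""
    hit = prior * p_evidence_if_true        # ways the evidence appears if true
    miss = (1 - prior) * p_evidence_if_false  # ways it appears anyway if false
    return hit / (hit + miss)

# Start at 30% that some event occurs; news arrives that is twice as likely
# if the event is coming (60% vs. 30%):
print(round(bayes_update(0.30, 0.6, 0.3), 3))  # 0.462
```

Note the restraint built into the arithmetic: ambiguous evidence (equally likely either way) leaves the estimate unchanged, so updates stay proportional to what the evidence actually supports.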
Regarding common superforecaster traits, Tetlock observed that “a brilliant puzzle solver may have the raw material for forecasting, but if he also doesn’t have an appetite for questioning basic, emotionally-charged beliefs he will often be at a disadvantage relative to a less intelligent person who has a greater capacity for self-critical thinking.”
Superforecasters have a real capacity for self-critical thinking. Political, economic, and religious ideology is mostly beside the point.
Why This Is Important
The topic of predicting the future might seem to some to have little relevance to politics and political policy. That belief is wrong. Tetlock cites an example that makes the situation crystal clear.
In a 2014 interview, General Michael Flynn, then head of the Defense Intelligence Agency (the DoD’s 17,000-employee equivalent of the CIA), said, “I think we’re in a period of prolonged societal conflict that is pretty unprecedented.” A quick Google search of the phrase “global conflict trends” and some reading was all it took to prove that belief was wrong.
Why did Gen. Flynn, a high-ranking, intelligent and highly accomplished intelligence analyst, make such an important, easily avoided mistake? The answer lies in System 1 and its powerful but unconscious “what you see is all there is” (WYSIATI) bias. He succumbed to his incorrect belief because he spent 3-4 hours every day reading intelligence reports filled with mostly bad news. In Gen. Flynn’s world, that was all there was.
In Flynn’s unconscious mind, his knowledge had to be correct and he therefore didn’t bother to check his basic assumption. Most superforecasters would not have made that mistake. They train themselves to relentlessly pursue information from multiple sources and would have found what Google had to say about the situation.
Tetlock asserts that partisan pundits opining on all sorts of things routinely fall prey to the WYSIATI bias for the same reason. They frequently don’t check their assumptions against reality, or they knowingly lie to advance their agendas. Simply put, partisan pundits are frequently wrong because of their ideological rigidity and the intellectual sloppiness it engenders.
Superforecasting concludes with an open invitation to participate in the Good Judgment Project. Since Superforecasting was published in September 2015, it is reasonable to assume that anyone interested in testing their ability to see the future can still be screened and participate if they pass the initial battery of cognitive tests. For those interested, go to www.goodjudgment.com and check it out.
For more information on the book, DP’s reading notes are here. Additional book review comments on the (i) limits of forecasting, (ii) criticisms of forecasting, and (iii) comparing the nearly identical approaches and attitudes that Tetlock and DP have toward making politics more rational or objective are here.