Think how useful it would be to make accurate forecasts of political events. Will there be a ceasefire in the Yemen war in the next six months? Will the Chinese government allow uncensored access to the Internet in the next year? Will there be a successful military coup in Mali in 90 days?
With accurate estimates of the likelihood of such events, activists could be more effective in targeting their campaigns. However, there’s a complication. Activist campaigns could change the outcome being forecast. Set this aside for now.
Philip Tetlock is the world’s leading researcher on political forecasting. In studies beginning in the 1980s, he sought to see whether anyone could accurately predict events. Certainly there are all sorts of people making predictions: pundits, politicians, political scientists. The problem with many of them is that their predictions are too loose to be tested. When a prominent figure like New York Times columnist Thomas Friedman says there’s a possibility of a default by the Italian government, the wording usually leaves an escape hatch. There’s nothing that can be rigorously evaluated.
When Tetlock actually tested political predictions rigorously, he found that most were woeful, usually no better than chance. Furthermore, commentators and political experts who were highly knowledgeable were no better at forecasting political events than anyone else. Tetlock said, memorably, that the average expert was about as accurate as a dart-throwing chimpanzee.
If political forecasting were a hopeless task, that would be the end of the story: Don’t trust commentators who make predictions, no matter how authoritative they might sound. But Tetlock continued his studies with a new goal. Although it’s true that most forecasting is little better than random guessing, there are some individuals who can do better. Tetlock reports on his findings in a new book titled “Superforecasting: The Art and Science of Prediction,” co-authored by writer Dan Gardner.
One of the big problems in explaining forecasting is helping people to understand probabilities. Suppose the weather forecast says there’s an 80 percent chance of rain tomorrow. If it doesn’t rain the next day, most people say the forecasters got it wrong. Actually, though, a single outcome like this doesn’t prove that a forecast is right or wrong or prove that a forecaster is good or bad.
Think of it this way. On 100 different days, the weather forecaster says the chance of rain is 80 percent. If it rains on 80 of the days, the forecast is accurate as a prediction of the chance of rain. If it rains on 70 days or 90 days, the forecast is not so accurate. The point is that accuracy can only be judged using a sequence of forecasts. Knowing whether it rains on any particular day is not enough.
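The calibration test described above can be sketched in a few lines of Python. The data here is invented purely for illustration: 100 hypothetical days, each forecast at 80 percent.

```python
def observed_frequency(outcomes):
    """Fraction of days on which the forecast event actually occurred.

    outcomes: list of 1s (it rained) and 0s (it didn't) for all days
    on which the forecaster gave the same probability, e.g. 80 percent.
    """
    return sum(outcomes) / len(outcomes)

# 100 hypothetical days, all forecast at 80 percent chance of rain.
outcomes = [1] * 80 + [0] * 20

# A well-calibrated forecaster's 80 percent calls come true about
# 80 percent of the time; 70 or 90 would indicate miscalibration.
print(observed_frequency(outcomes))  # 0.8
```

The point the code makes concrete: no single day's outcome appears anywhere in the judgment; only the frequency across many same-probability forecasts does.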
Most people don’t grasp this point. For example, they blame U.S. intelligence agencies for not predicting the 9/11 attacks, or perhaps for saying the chance of a terrorist attack is high when actually there is no attack. But these assessments are unfair, because they use a single instance when actually what’s needed is examination of a large number of predictions and outcomes.
To judge forecasts, they need to be precise. If you say, “It’s going to rain,” you’re bound to be right eventually. Likewise, if you say, “The regime will collapse,” you almost certainly will be right, but maybe you’ll have to wait a century. So weather forecasts are for the next day or the next week, and likewise your forecast about regime stability needs to include a time frame: “There’s a 70 percent chance the regime will collapse in the next year.” But what exactly does it mean to say, “the regime will collapse”? That the president will step down? That the regime’s repressive system will be replaced? That the government will break down into warring groups? For a precise forecast, try something like “There’s a 70 percent chance that the president will leave office by September 30.”
If the president does leave office by September 30, does that mean you’re a good forecaster? It doesn’t prove a lot, because you might have been lucky. So to show your skills, you need to make lots of precise forecasts and show you can do better than uninformed guessing.
Tetlock’s research involved setting up a system that invited people to make large numbers of specific forecasts about a wide range of political and economic events, and then measured how well they did. He used a formula to score the accuracy of the forecasts. Quite a few people joined and did better than chance. Then there were a few who did especially well. Tetlock calls them superforecasters.
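The scoring formula Tetlock’s project relied on is the Brier score, which rewards both calibration and confidence. Below is a minimal sketch of its common binary form, with invented example numbers; Tetlock’s published variant differs slightly in scaling, but the idea is the same.

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and outcomes.

    forecasts: probabilities (0.0 to 1.0) assigned to the event occurring
    outcomes:  1 if the event occurred, 0 if it did not
    Lower is better: 0.0 is perfect; always guessing 50 percent earns 0.25.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hedging at 50 percent on everything scores 0.25 no matter what happens,
# so a skilled forecaster must beat that by committing to probabilities.
print(brier_score([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 1]))  # 0.25
```

This is why vague punditry can’t be scored: the formula needs an explicit probability and a checkable outcome for every forecast.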
Superforecasters tend to be intelligent, to understand probabilities, and to be heavy readers of the media and other information sources. Even after making a forecast, they stay on the lookout for new information and then revise their forecasts slightly in light of it.
Tetlock also put groups of forecasters together and found they could do better than individuals. But superforecasters could do better than groups of regular forecasters, and teams of superforecasters better still. However, performance depended on the ways teams were set up and operated.
The implication of Tetlock’s studies is that forecasting is a skill that can be learned. Some people, due to their ways of thinking, may have a head start, but anyone can get better. Tetlock even provides a list of 10 guidelines for becoming better, such as to break difficult problems into manageable sub-problems and to strike a balance between underconfidence and overconfidence.
Some of Tetlock’s studies are funded by the U.S. military. The surprising finding is that superforecasters, relying entirely on public sources, can make better predictions of political events than experienced intelligence analysts with access to classified information. One advantage superforecasters have is intellectual humility: They don’t have status or reputations to defend.
There are several possible reactions to the finding that unheralded citizen superforecasters can do better than the large agencies paid to do the job. One is to express glee that intelligence agencies are so hopeless. Another is to express alarm that the agencies will now have access to superforecasting techniques and skills. Yet another is to investigate how forecasting can be used to aid nonviolent struggles. That is my focus here.
Tetlock describes the ways that superforecasters think, and it is reasonable to believe that few activists think the same way. Many activists are driven by a sense of outrage over injustice or by a feeling of duty to take a stand. Also, they need to believe their efforts will make a difference. These beliefs and emotions are not conducive to the calm, rational, probabilistic approach used by superforecasters.
Nevertheless, it is possible to become better at forecasting. Few people have trained systematically at it. Now that the skills are better understood and there are ways of obtaining feedback, it should be possible for many more people to realistically aspire to become superforecasters.
Some activists might want to do this themselves. It might also be possible to find people who, although not involved in activism themselves, are sympathetic to nonviolent methods and goals and willing to provide their insights to movements. In fact, because personal involvement can bias predictions, sympathetic yet independent forecasters might have an advantage.
Consider this scenario. Movement-sympathetic forecasters give a 30 percent chance that there will be a successful military coup against the elected government in a given country within the next six months. The forecasters are then provided with information that local campaigners will put a major effort into raising awareness and fostering anti-coup skills, and revise their estimate of the chance there will be a successful coup to 20 percent.
Activists now have a decision to make: should they make this major effort to prevent a coup? On the one hand, it seems worthwhile to reduce the probability of the coup succeeding from 30 percent to 20 percent: that’s a significant one-third reduction. On the other hand, it’s only a 10 percentage point change overall. There’s a 70 percent chance the coup wouldn’t succeed even without any activist preparation, and a 20 percent chance the coup will succeed despite the preparation. This makes it seem like a lot of effort for a small prospect of making much of a difference.
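The two ways of framing the same change can be checked with a few lines of arithmetic. The probabilities are the hypothetical ones from the scenario above.

```python
# Hypothetical forecasts from the coup scenario.
p_without = 0.30  # chance of a successful coup with no anti-coup campaign
p_with = 0.20     # revised chance given a major anti-coup campaign

# Absolute reduction: how much probability mass the campaign removes.
absolute_reduction = p_without - p_with

# Relative reduction: the same change as a fraction of the original risk.
relative_reduction = absolute_reduction / p_without

print(round(absolute_reduction, 2), round(relative_reduction, 3))  # 0.1 0.333
```

Both numbers describe the identical forecast revision; which one feels decisive depends on the framing, which is exactly the activists’ dilemma.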
Then there’s another thought. Perhaps there might not have been a coup anyway — the 70 percent chance — but the activist effort will pay dividends later on, past the six-month forecast horizon. And perhaps even if the coup is successful despite activist efforts — the 20 percent chance — those efforts will contribute to limiting the damage due to the coup and reversing it later.
To add to the complications, activists need to consider the opportunity costs of an anti-coup effort. They might instead put their efforts into something quite different, say into community building or opposition to the arms trade. To make a better informed decision about what to do, forecasters could help by offering predictions about the impacts of the different efforts, assuming again that a precise formulation of the outcomes is possible.
Then there is the additional complication of activist interests, morale and momentum. Few activist groups have the capacity to reassign their passion and energy to a different issue or campaign simply because some analyst says it offers a better chance of having an impact. It is likely that many activists will resist the cold rationality associated with forecasting, preferring their own judgments about what is worth doing.
If it is possible to combine activist drive with improved forecasting, this could make a difference, and it could be a virtuous cycle. When activists believe they have a greater chance of success and of making a difference, they are more likely to continue their efforts, making success more likely.
Bill Moyer developed the Movement Action Plan, which lays out a set of stages through which many social movement campaigns proceed. A crucial stage is when the movement is on the verge of a breakthrough, yet activists become unaccountably demoralized, anticipating failure. The misperception at this stage might be countered by listening to independent forecasters sympathetic to the movement. Forecasting, though it has to be anchored in realities, can sometimes offer real hope, as opposed to the artificial hope offered by so many predictions.
As Tetlock well recognizes, most political forecasting is not much better than chance, and even those few who can do better can say little beyond a limited time horizon of at most a few years. However, his research shows what can be achieved and gives the best available guide on how to do better. Furthermore, Tetlock’s research shows that some independent individuals, not paid for their efforts, can do better than intelligence agencies with access to classified information and supplied with vast resources.
Some members of nonviolent movements should be following the research, learning the skills to become better forecasters and recruiting superforecasters to serve the goals of the movements. Such an effort might result in only a small increase in the effectiveness of nonviolent campaigns, but even a small increase could make a huge difference to people’s lives.