Long-time readers know that I don’t have a lot of patience with amateurs working on critical problems, but now NewScientist bids to set me on the straight path. In “Work the crowd: How ordinary people can predict the future” (NewScientist, 24 February 2018, paywall), Aaron Frood reports on experiments involving teams of amateurs working on hard problems:
The answer surprised even the US intelligence officials behind the experiment. It turns out crowds really can make accurate predictions – so accurate, in fact, that they promise to permanently change how states analyse intelligence.
We have known some of the benefits of collective wisdom since Aristotle, but a slightly more recent example features in the 2004 book The Wisdom of Crowds by journalist James Surowiecki. The opening pages tell the story of the day Charles Darwin’s cousin Francis Galton went to a country fair. Galton, a formidable scientist himself, asked people to guess the weight of an enormous ox. Most got it absurdly wrong, but the median guess of the 800-strong crowd was just 1 pound off the true weight of the ox, which for the record was 1198 pounds, or 543 kilograms.
The wisdom of crowds is an integral part of life today. We try suspected criminals by jury. We use crowdfunding websites to back new products. We follow the throng to popular restaurants. Now it even seems it may be possible to predict the future using the masses.
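Galton’s trick above is just robust aggregation: individual guesses scatter widely, but a central measure like the median cancels the noise. A minimal sketch (the guesses below are invented for illustration, not Galton’s actual data):

```python
# Illustrative sketch of Galton-style crowd aggregation.
# The individual guesses here are made up; only the 1198 lb true weight
# comes from the article.
from statistics import median

guesses = [900, 1050, 1100, 1150, 1200, 1210, 1250, 1300, 1450, 1600]

# Individual errors are large (hundreds of pounds)...
worst_error = max(abs(g - 1198) for g in guesses)

# ...but the median of the crowd lands close to the truth.
crowd_estimate = median(guesses)
print(worst_error)     # 402
print(crowd_estimate)  # 1205, within 7 lb of the true 1198
```

The median, unlike the mean, also shrugs off the few absurd outliers the quote mentions, which is part of why it worked so well for Galton’s 800-strong crowd.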
The underplayed element? It took me a couple of days to bring it to the fore of my mind. From a sidebar:
Are you a superforecaster?
Crowds of ordinary people can be good at predicting the future. But the most accurate predictions come when you identify the best 2 or 3 per cent of a crowd and team them up. Nearly all these superforecasters have a university degree, a wide range of interests and a curious mind. They also tend to have a few other key characteristics. [Intelligence, Shrewdness, Motivation and commitment, Teamwork are the headers.]
So we’re not talking about average folks making extraordinary forecasts; we’re talking about extraordinary folks working together to come up with extraordinary forecasts. This isn’t crowd-sourcing, despite the advertising, but the employment of top-flight people who just don’t happen to hold advanced expertise in the subject area. In fact, what did happen to the experts in the competitions?
… in 2005, a book called Expert Political Judgement brought crowd predictions to the fore again. The author Philip Tetlock, a psychologist now at the University of Pennsylvania, had studied expert predictions for two decades. In one experiment, he surveyed about 300 professional political and economic forecasters, asking them a series of questions about the future and getting them to pick answers from a range of options. He also asked them to assign a probability to their stance. He amassed tens of thousands of predictions and compared them with what really happened. The experts performed terribly: worse than if they had assigned equal odds to each outcome every time.
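The scoring behind that “worse than equal odds” verdict can be made concrete. The article doesn’t name the metric, but a standard choice for probabilistic forecasts of this kind is the Brier score, which penalises the squared gap between a stated probability and what actually happened (this is a hedged sketch, not Tetlock’s exact method, and the numbers are invented):

```python
# Sketch of scoring probabilistic forecasts against outcomes, in the
# spirit of Tetlock's study. Forecast values and questions are hypothetical.
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and 0/1 outcomes; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1]            # what really happened on three yes/no questions
expert   = [0.9, 0.8, 0.2]      # a confident expert, wrong-footed twice
uniform  = [0.5, 0.5, 0.5]      # "equal odds to each outcome every time"

print(round(brier_score(expert, outcomes), 2))   # 0.43
print(round(brier_score(uniform, outcomes), 2))  # 0.25
```

Assigning flat 50/50 odds always scores 0.25 on binary questions, so an expert scoring above that, as in the sketch, is doing measurably worse than knowing nothing at all, which is exactly the indictment in the quote.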
Along with the failure of the experts, notice one other thing: the subject areas. We’re not talking about a wide range of subjects, but about two of the vaguest and hardest, politics and economics. It’s unfortunate that the nature of the test areas wasn’t played up in the article, as it might have revealed more about the problems of experts in these areas, as well as how the teams of top-flight amateurs solved their problems. A differential comparison of the experts against the teams might have yielded (and perhaps it did) key insights into what goes wrong for the experts, and how they might compensate in the future.
So, in the end, I feel justified in sticking with my judgment that we need experts in government, not a pack of amateurs. Not only that, but the debacle occurring in Congress is enough to leave me content in that judgment.