When it comes to climate change, here’s a notable problem for climate scientists, reported a couple of months ago:
There are dozens of climate models, and for decades they’ve agreed on what it would take to heat the planet by about 3° Celsius. It’s an outcome that would be disastrous—flooded cities, agricultural failures, deadly heat—but there’s been a grim steadiness in the consensus among these complicated climate simulations.
Then last year, unnoticed in plain view, some of the models started running very hot. The scientists who hone these systems used the same assumptions about greenhouse-gas emissions as before and came back with far worse outcomes. Some produced projections in excess of 5°C, a nightmare scenario.
Wait for it …
The scientists involved couldn’t agree on why—or if the results should be trusted. Climatologists began “talking to each other like, ‘What’d you get?’, ‘What’d you get?’” said Andrew Gettelman, a senior scientist at the National Center for Atmospheric Research in Boulder, Colorado, which builds a high-profile climate model. [Bloomberg Green]
Sadly, I’ve never worked on computer models of anything. But I can see how such models might find it difficult to explain to their human creators just why they came out with a particular result—particularly models working in a Bayesian-type mode, in which real-world results are iteratively fed into the model, which then adjusts its internal calculations so that its earlier predictions would have matched observed reality. (Think of it as a psychic going back and revising their claims after the murder victim’s body is found.)
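To make that calibration idea concrete, here’s a minimal toy sketch of Bayesian-style parameter tuning. Everything in it—the one-parameter “model,” the observation values, the noise level—is invented for illustration and bears no relation to any real climate model; it only shows the mechanism of a model adjusting an internal parameter until its past predictions fit observed data.

```python
import numpy as np

# Toy "climate model": warming = sensitivity * log2(CO2 ratio).
# The parameter values and observations below are entirely made up.

def predict(sensitivity, co2_ratio):
    return sensitivity * np.log2(co2_ratio)

# Prior belief over the sensitivity parameter, on a coarse grid.
sens_grid = np.linspace(1.0, 6.0, 501)
posterior = np.ones_like(sens_grid) / len(sens_grid)  # start uniform

# Hypothetical (co2_ratio, observed_warming) pairs fed in over time.
observations = [(1.2, 0.8), (1.4, 1.5), (1.5, 1.9)]
sigma = 0.3  # assumed observation noise

for co2_ratio, observed in observations:
    # How well does each candidate sensitivity explain this observation?
    residual = observed - predict(sens_grid, co2_ratio)
    likelihood = np.exp(-0.5 * (residual / sigma) ** 2)
    posterior *= likelihood          # Bayesian update
    posterior /= posterior.sum()     # renormalize

best = sens_grid[np.argmax(posterior)]
print(f"posterior-mode sensitivity: {best:.2f} °C per CO2 doubling")
```

Note what the procedure gives you at the end: a parameter value that fits the data, but no human-readable account of *why* that value won—which is roughly the explanatory gap described above.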
Something for modelers to consider in the future.
Meanwhile, the modelers are trying to work out whether the predictions are wrong, or whether there’s something to them. Let’s hope they’re just wrong.