How To Tell An Apple From An Orange

Kenneth Anderson on Lawfare wades into the problem of statistical comparisons:

What’s “The Bathtub Fallacy,” according to [Justin] Fox? Following a terrorist incident or government counter-measure, he says (quoting a recent Financial Times (paywalled) column by its principal political columnist, Janan Ganesh), statistics are “dug out to show that fewer Westerners perish in terror attacks than in everyday mishaps. Slipping in the bath is a tragicomic favourite. We chuckle, share the data and wait for voters and politicians to see sense.” …
The conclusion that terrorism is different relies importantly on Fox’s characterization of terrorism risks as a “fat-tailed distribution.” Fox cites Nassim Nicholas Taleb’s (of Black Swan fame) extensive writing on this, but then moves on to an interview Fox conducted with Carnegie Mellon University professor Baruch Fischhoff in researching this column. Who’s Baruch Fischhoff, you ask? Well, among (many) other things, Fischhoff is a “past president of the Society for Risk Analysis, past member of multiple national and international commissions on the risks of terrorism and other bad stuff, and author of lots of books with ‘risk’ in the title.” Also, Fox adds, he was Daniel Kahneman’s former research assistant at the Hebrew University of Jerusalem in the early 1970s, and thus someone “present at the creation of the school of psychological research that has shown how bad we humans can be at processing probabilities.”

“People who just look at the average are doing the analysis wrong,” Fischhoff tells Fox. Nor does Fischhoff think that “it’s irrational to fear terrorism more than falling in the bathtub.” Why? It’s different in terms “of the uncertainty and the shape of the distribution, how well we understand it and the possibility of these large-scale events.” Moreover, Fischhoff adds, in another deceptively simple observation, “people tolerate risks where they see a benefit.”

Seems to me that when comparing statistics this way, the difference between the two standard deviations, each taken as a ratio to its own average, is a measure of how incommensurable the comparison is.

That is, a big standard deviation (relative to the average) indicates that the average has very little value as a predictive tool; a small standard deviation indicates that the average predicts the actual value quite well. As a species, we do value predictability, so when the deviation is high, we need to be more concerned about the phenomenon in question.
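A minimal sketch of this idea in Python, using made-up annual death counts (the numbers are illustrative, not real data): a steady, thin-tailed series like bathtub falls has a low standard-deviation-to-average ratio, while a fat-tailed series dominated by one large-scale event has a high one.

```python
import statistics

# Hypothetical annual death counts, for illustration only.
# Bathtub falls: roughly the same every year (thin-tailed).
bathtub = [420, 415, 430, 425, 410, 435, 420, 418, 428, 422]
# Terrorism: near zero most years, one large-scale event (fat-tailed).
terrorism = [5, 2, 0, 8, 1, 2995, 3, 0, 6, 4]

def coeff_of_variation(xs):
    """Standard deviation as a ratio of the average: a high value
    means the average is a poor predictor of any single year."""
    return statistics.stdev(xs) / statistics.mean(xs)

cv_bathtub = coeff_of_variation(bathtub)
cv_terrorism = coeff_of_variation(terrorism)
print(f"bathtub ratio:   {cv_bathtub:.2f}")
print(f"terrorism ratio: {cv_terrorism:.2f}")
```

With these invented numbers the bathtub ratio comes out tiny while the terrorism ratio is large, so the two averages are, in the sense above, incommensurable.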

And I’m sure all the risk analysis and statistical nerds are already chanting, “We knew that already.” Ah, but I didn’t.


About Hue White

Former BBS operator; software engineer; cat lackey.
