AI Risks Fallacies Deep Dive #1
The Airplane Analogy and How Ambiguity and Innumeracy Shape the Debate
On the #AIXrisks series: Artificial Intelligence (AI) risks have quickly replaced COVID-19 as the most prominent domain of deeply misguided yet widely embraced Risk Judgment and Decision-Making (RJDM). As such, it presents a unique opportunity to shed light on the inadequacies of our risk discourse. Safe-esteem will not become an AI-focused publication, but we will address this topic as it relates to our central themes. For a summary of my thoughts on existential AI risks, see the following:
Tristan Harris and Aza Raskin of the Center for Humane Technology have been particularly active in shaping the AI risk debate and its alarmist perspective. As of today, thousands, if not millions, of people around the world have heard or seen Tristan and many of his followers invoke his airplane-passenger analogy on YouTube, in interviews, and in articles:
Imagine that as you are boarding an airplane, half the engineers who built it tell you there is a 10 percent chance the plane will crash, killing you and everyone else on it. Would you still board?
It's a perfect example of how highly questionable and misleading risk arguments enter the public debate and win acceptance even among senior decision-makers. Anecdotes and analogies are indeed the most effective way to communicate risk, outperforming any statistical information regardless of their validity. Most of us will take a story about how dangerous San Francisco has become over the statistics that place it below the national violent-crime average, any day.
Let's look at how this particularly popular argument exploits our RJDM incompetence: the 10 percent refers to the results of the AI Impacts 2022 Expert Survey on Progress in AI.
Tristan presents the results as follows: "50% of A.I. researchers believe there's a 10% or greater chance that humans go extinct from our inability to control AI." It's an outrageous misrepresentation.
Just about 50% of respondents gave AI a 10% chance of an 'extremely bad' impact. The 'extremely bad' category includes 'human extinction' only as one possible example, so extinction is indistinguishable from other degrees of impact severity within it. Moreover, this is a perfect example of lexical ambiguity, where the risk language can be interpreted so inconsistently by participants as to have little, if any, practical value.
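To see why this matters numerically, here is a toy decomposition in Python. Every number in it is an assumption made purely for illustration, not survey data; it simply shows that the headline figure attaches to the whole 'extremely bad' bucket, of which literal extinction is only one possible member.

```python
# Toy decomposition. All numbers are illustrative assumptions, not AI Impacts data.
# The headline "10%" is the probability respondents gave to the whole
# "extremely bad" bucket, not to human extinction specifically.
p_extremely_bad = 0.10           # the eyeballed answer to the survey question
share_meaning_extinction = 0.5   # assumed fraction of that bucket meant as literal extinction

implied_p_extinction = p_extremely_bad * share_meaning_extinction
print(f"Probability attributable to extinction specifically: {implied_p_extinction:.0%}")  # 5%
```

Under these made-up numbers, the figure for extinction specifically is half the headline; the survey itself gives us no way to know the real split.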
If we consider the novelty and relative immaturity of the technology, the uncertainty of any long-horizon forecast, and the above-average numeracy of these respondents (engineers are, by and large, familiar with concepts like probability and confidence intervals), it would in fact be absurd to expect anything other than a rough, 'eyeballed' five-to-ten-percent answer from a good number of them.
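A minimal simulation makes the 'eyeballing' point concrete. The response behaviour below is entirely invented (the menu of round-number guesses and the sample size are assumptions, not the actual AI Impacts responses); it only shows how coarse, rounded answers to a vague, far-horizon question reproduce a headline like "about half assign a 10% or greater chance" without implying any considered consensus.

```python
import random

random.seed(0)

# Assumed behaviour: respondents eyeball a convenient round number for a vague,
# decades-out question. The menu of guesses below is invented for illustration.
ROUND_GUESSES = [0.01, 0.02, 0.05, 0.05, 0.10, 0.10, 0.15, 0.20]

def simulate_survey(n_respondents: int = 700) -> list[float]:
    """Draw one eyeballed probability per hypothetical respondent."""
    return [random.choice(ROUND_GUESSES) for _ in range(n_respondents)]

responses = simulate_survey()
share_at_least_10pct = sum(p >= 0.10 for p in responses) / len(responses)

print(f"Simulated respondents answering 10% or more: {share_at_least_10pct:.0%}")
# With these assumed guesses, roughly half the answers land at or above 10%,
# matching the shape of the headline statistic through coarse rounding alone.
```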
Consider a different, and more honest, question and analogy in light of these survey results. Imagine yourself in 1903, the night before the Wright brothers' first flight. Suppose you surveyed numerous engineers to get their predictions on the future of aviation and flight technology. Would it be reasonable to expect half of them to assign a probability of five to ten percent to 'extremely bad' outcomes?
Fear sells, and it's infuriating to witness people who are seemingly committed to remediating the direction of technologies that have handsomely enriched them, while turning entire segments of the world's population into a proverbial Pavlovian experiment, resort yet again to manipulative and distorting messaging to elevate themselves.