The Prism Of Power

Mhairi Aitken apparently sees the pronouncements of the AI warning community as simply hypocritical:

The loudest voices shouting about existential risk are coming from Silicon Valley. This may seem at odds with big tech’s stake in driving innovation and investment in AI, yet these narratives protect its interests by diverting attention away from big tech’s current actions to instead speculate about hypothetical future abilities of AI.

By suggesting AI might develop its own intelligence, the focus shifts to how we might hold the technology accountable in the future, rather than how we can hold big tech accountable today. All of this creates an illusion of inevitability, suggesting we are observing the evolution of AI, rather than a series of conscious and controllable decisions by organisations and people.

It is important to consider not just what is being said, but who is being listened to. We have seen Altman being warmly greeted by regulators across Europe, while the voices of people who are negatively affected by AI are barely heard. This is symptomatic of what academic Kate Crawford has termed AI’s “white guy problem”: claims of existential risk are largely coming from affluent white men in positions of power, those least likely to experience the harms of AI today (but with most responsibility for causing them). [NewScientist]

But it’s important not to mistake congruency for proof. I cannot speak for the businessman community, which is too large to characterize homogeneously, but the computer scientist community can often be almost painfully earnest. Remember prominent member Bill Joy, co-founder and chief scientist of the late and lamented Sun Microsystems? He warned of the gray goo of nanobots, and is no longer prominent.

And there’s been no gray goo of humanity-ending nanobots. My point is that the paranoid people spreading the warnings may be entirely honest in their concerns, even if those concerns are silly, as I’ve expressed elsewhere.

But this is all something of a nit, because Aitken’s larger point is that those negatively affected by the large language model systems now being applied (see previous link) will be ignored for as long as their replacement doesn’t, quite literally, crash and burn. Only when we have some artifact in ruins, with human bodies flung here and there, will we finally return to the question of Should we, and apply as the primary metric not profit, but accuracy, or excellence.
