It’s been nearly three years since my last entry on this topic, but Secretary of Defense Hegseth is making alarming news:
Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei for a meeting at the Pentagon on Tuesday as the Defense Department attempts to pressure the artificial intelligence company to loosen its restrictions around the use of its technology to spy on Americans and to enable weapons to fire without human involvement. The Defense Department is currently heavily reliant on the AI tool, Claude, and Anthropic’s executives have thus far resisted the pressure campaign. Needless to say, Anthropic acquiescing to Hegseth on this matter would open up the likelihood of yet more powerful AI tools being deployed against people abroad and living within U.S. borders. [Ja’han Jones, MS NOW]
Even if the possibility of faux-AI[1] systems turning on their keepers is mostly delusional, this is still disturbing: it sounds more like the slaughter of civilians and the safeguarding of power than, say, the use of AI to negotiate peaceful relations.
The sad part? The nature of warfare and its weapons does not lead to stable scenarios; escalation is the unfortunate norm, as any historian will attest, along with the consequent increase in casualties, both military and civilian. Protests against the same, however exciting for the protesters, may lead to results such as invaders winning their objectives, i.e., taking over your peaceful little community.
But these “killer robots” make me very uncomfortable.
Almost as much as the next section at the link above:
Donald Trump’s sons have invested in an Israeli-backed drone company that’s slated to do business with the Trump administration and has boasted about its ability to kill people at low cost.
My bold.
1 An “AI” system lacking self-agency is not intelligent; it’s just a machine-learning (ML) system, and, as many pundits far better versed in the relevant technologies than I will tell the reader, there’s no evidence of such a system existing, yet. Such systems don’t think; they just follow rules deduced from data fed to them by people.
That last sentence is far more important than I can emphasize.
