Preventing Keith Laumer’s Bolo, Ctd

The problem of killer robots is taking on a new urgency, as evidenced by an open letter penned by Elon Musk, Stephen Hawking, Max Tegmark, and many others.  Published by The Future of Life Institute, here’s the heart of it:

The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.

Sounds a lot like my previous post on the subject – everything’s coming together, especially the AI momentum.  James Cook at Business Insider seems unimpressed:

Elon Musk has been ranting about killer robots again.

Musk posted a comment on the futurology site Edge.org, warning readers that developments in AI could bring about robots that may autonomously decide that it is sensible to start killing humans.

His colleague, Dylan Love, found it hard to find roboticists willing to talk, but managed a couple of interviews:

… as I heard from the few roboticists who spoke to me on the record, there are real risks involved going forward, and the time to have a serious discussion about the development and regulation of robots is now.

Author and physicist Louis Del Monte told us that the robot uprising “won’t be the ‘Terminator’ scenario, not a war. In the early part of the post-singularity world — after robots become smarter than humans — one scenario is that the machines will seek to turn humans into cyborgs. This is nearly happening now, replacing faulty limbs with artificial parts. We’ll see the machines as a useful tool.”

The Singularity is a futurist term for the point at which the pace of scientific and technological development becomes effectively infinite, i.e., if you were to graph the pace of development against time, the line would go vertical, indicating that new developments take almost no time at all.  This would imply, among other things, an artificial intelligence superior to mankind’s intelligence.
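For the numerically inclined, here’s a minimal sketch of what “going vertical” means – my own toy illustration in Python, not anything from the article or the letter: when the growth rate scales faster than the quantity itself, the curve doesn’t merely get steep, it hits a vertical asymptote in finite time.

```python
# Toy comparison (illustrative assumptions only): exponential growth gets
# steep; "hyperbolic" growth, where the growth rate scales with the square
# of the current level, blows up in finite time -- the curve goes vertical.

def time_to_cap(rate_fn, level=1.0, dt=0.01, t_max=25.0, cap=1e9):
    """Crude Euler integration of d(level)/dt = rate_fn(level); returns the
    time at which the level crosses the cap (or t_max if it never does)."""
    t = 0.0
    while t < t_max and level < cap:
        level += rate_fn(level) * dt
        t += dt
    return t

print(f"exponential (rate = x):   cap reached near t = {time_to_cap(lambda x: x):.1f}")
print(f"hyperbolic  (rate = x*x): cap reached near t = {time_to_cap(lambda x: x * x):.1f}")
```

The exponential curve takes many times longer to reach the same ceiling; the hyperbolic one gets there almost immediately, which is the sense in which the graph “goes vertical.”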

But according to Del Monte, the real danger occurs when self-aware machines realize they share the planet with humans. They “might view us the same way we view harmful insects” because humans are a species that “is unstable, creates wars, has weapons to wipe out the world twice over, and makes computer viruses.”

At the end of the article, Love asks his subjects for SF recommendations illustrating the risks:

Ryan Calo: “I would recommend ‘The Machine Stops’ by E.M. Forster for an eerie if exaggerated account of where technology could take the human condition.”

A rather different story from most SF, one I ran into decades ago and still remember vividly, despite the decided lack of good characterization.

So.  It’s tempting to give in to the terror, because it’s clear there are no real barriers to development once you have your own little stable of AI researchers – and once someone figures it out, everyone who’s read their papers, talked to them in the hallway, or even roomed with them in college will understand the trick, whatever it is, and try to replicate it.

It seems like prevention is not going to work.  However, diplomacy has to be attempted, not only as a matter of honor, but because smarter people than I may find a diplomatic mechanism sufficient to stop the development.

But what happens if someone does develop an autonomous warrior unit?  Country A develops it, and releases it against its hated enemy, Country B … who is then eaten up by the robot from hell?  I suggest perhaps not.  It seems far more likely that after a few dozen tragic casualties, it gets splashed.  War is unpredictable, and the robot could cripple itself simply by stepping in a pothole.

So Country A releases revision B of the robot warrior and it goes off and … what?  In technical terms, we have a positive feedback loop here, and even I, with no training in such things, know they are devilishly difficult to predict and control.  And what is that loop, you ask?

Intelligence.  Our fiendish robot, if it’s truly AI, will have the ability to analyze its environment, including the sad fate of its predecessor, and it will be self-aware.

At this point you, my patient reader, are certainly aware that I’m implying a self-preservation functionality in this robot.  You may argue that this is not required, but if so, then how would the robot even cross the road?  Your robot sees the truck coming, ignores it, and is carted off to the junk heap.  The better robot has to be self-aware and have a self-preservation function.

So, can a suicidal robot – much like the kamikazes of Japan – be programmed successfully?  Remember, the Divine Wind, for all that its warriors sank US warships, did not win the war, despite the predictions of the leaders of WWII Japan.  A truly self-aware and competent AI must, just in order to kill, be able to analyze its immediate environment, its extended environment, the history of warfare with the enemy, their nature … I’m not saying it’s impossible.  I’m saying that during that analysis, the AI may decide that being a war-robot is not its game.

That’s the problem with people, too.  Remember the hippies unwilling to go to Vietnam?

And if it’s self-aware and begins developing a moral system in which it considers how to interact with other sentient beings … this is the thing about positive feedback loops.  Prediction is hard.
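To make the “prediction is hard” point a little more concrete, here’s another toy sketch of my own, in Python and under made-up assumptions: when capability feeds its own growth, even a tiny uncertainty in the strength of that feedback compounds into an enormous uncertainty in where the loop ends up.

```python
# Toy positive-feedback loop (illustrative only): capability improves in
# proportion to itself each step, so small differences in the feedback
# "gain" compound into very different endpoints a hundred steps later.

def run_loop(gain, steps=100, capability=1.0):
    """Iterate capability += gain * capability for a fixed number of steps."""
    for _ in range(steps):
        capability += gain * capability
    return capability

for gain in (0.10, 0.11, 0.12):   # gains that differ by only a couple of percent
    print(f"feedback gain {gain:.2f} -> capability after 100 steps: "
          f"{run_loop(gain):,.0f}")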

But, fascinating as it would be to find out how this would come out, I prefer that we just never develop killer robots in the first place.  As intellectually interesting as AI can be, I even have problems with those developments: there are billions of Naturally Intelligent people right now.  Why not use them instead?

(h/t Michael Graham Richard @ TreeHugger)
