Preventing Keith Laumer’s Bolo, Ctd

It appears the name of this thread is increasingly becoming a lost cause. Via Lawfare I find Defense One has a report by Patrick Tucker on a study of current AI use on the battlefield:

At their smartest, our most advanced artificially intelligent weapons are still operating at the level of insects … armed with very real and dangerous stingers.

So where does AI exist most commonly on military weapons? The study, which looked at weapons in military arsenals around the world, found 284 current systems that include some degree of it, primarily standoff weapons that can find their own way to a target from miles away. Another example would be Aegis warships that can automatically fire defensive missiles at incoming threats.

An interesting, but inexact, analogy. Are we talking individual insects – or swarms? An ant colony can do interesting things, as this NewScientist (10 September 2016, paywall) article reports:

But then she [a hunting ant of the Eciton species] stops. The ground has dropped away in front of her. There is no scent trail, just empty space. Other members of the colony that were following begin to climb over her. Now, instead of walking in a line, they grip hold of one another using hooks on their feet, adding body after body to build an impromptu bridge. More and more join in, until they traverse the gap. And there they remain until the entire foraging party, numbering hundreds, has crossed. Then, as suddenly as it came into being, the bridge disperses, and the ants continue on their way.

How do these creatures achieve such an impressive feat of coordination with very limited brainpower and no overview of the situation? That’s the question a group of researchers working on Barro Colorado Island set out to answer. Their efforts have revealed how ants use simple cues to organise themselves into complex living structures. It’s a wonder of nature, and it could offer insights for engineers, mathematicians and robot designers. What’s more, it might even shed some light on our own interactions.

Individually stupid, collectively smart – without the integrated consciousness with which we operate. How about those AIs: do they work together as simply? Researchers theorize the ants have evolved a simple set of behaviors (rules) that automatically kick in when cued by their surroundings. Have the AIs been built similarly? While evolutionary programming is no longer new, it does require an environment capable of repetition … I think. One of my regrets as a programmer is having read about, yet never tried, evolutionary programming. My point being that evolving behaviors is a time-consuming business – just look at how long it’s taken the ants.
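For readers who haven’t run into it, here’s a minimal sketch of the idea, a toy I’m supplying for illustration rather than anything from the study: a population of candidate “behaviors” is scored against an environment over and over, and the better scorers survive to reproduce with small mutations. That repeated evaluation is the “environment capable of repetition” I mean, and it’s why the process is slow.

```python
import random

# Toy evolutionary program: evolve a bit-string "behavior" toward all 1s.
# The fitness function stands in for the environment; each generation is
# one more round of repetition against it.

LENGTH = 20           # bits per candidate behavior
POP_SIZE = 50         # candidates evaluated per generation
MUTATION_RATE = 0.02  # chance each bit flips during reproduction
GENERATIONS = 200     # rounds of selection allowed

def random_individual():
    return [random.randint(0, 1) for _ in range(LENGTH)]

def fitness(ind):
    # Count of 1s; a real system would score performance in a simulation.
    return sum(ind)

def mutate(ind):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in ind]

def crossover(a, b):
    point = random.randrange(1, LENGTH)
    return a[:point] + b[point:]

population = [random_individual() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == LENGTH:
        break
    # Keep the top half; refill by recombining and mutating survivors.
    survivors = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print(f"best fitness {fitness(population[0])} after {generation + 1} generations")
```

Even this trivial target takes dozens of generations; evolving battlefield-worthy behaviors would take vastly more, which is the point.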

Back to the Defense One article:

But even if the U.S. military “wins the competition” by producing the best autonomic systems, other nations may yet put AI to unexpected and even destabilizing effect. “It should be noted that the technological incorporation of autonomy will not necessarily come only from the world’s strongest powers, and the balancing effect that may have will not likely be stabilizing. Regional powers with greater abilities in autonomous weapons development, such as Israel, may destabilize a region through their use or through their export to other nations,” says Roff.

A clear recognition that software design and innovation are much more non-linear activities than hardware design. Mathematicians of transformational character can come from anywhere, even backward countries, because all they need is their minds … or, more traditionally, paper and pencil. While computers are advanced technology, it’s not difficult to acquire them surreptitiously – and programming is much like mathematics: it takes only a few gifted individuals to make tremendous progress.

That non-linearity can also work against you. One missed edge case, whether in the programming or in the machine learning of the AI, and disaster could occur on the battlefield. But this is nothing new – from Napoleon to the Battle of Guadalcanal, mistakes by one side are used by the other to win battles, and sometimes those mistakes can seem minor at the time.

Finally, in view of the current Presidential election, Patrick notes this in regard to an ethical position that might result in defeat:

The observe, orient, decide, and act cycle, sometimes called the OODA loop, is today in the hands of humanity when it comes to warfare. But in other areas of human activity, like high-frequency trading, it’s moved to the machines. William Roper, the head of the Pentagon’s Strategic Capabilities Office, discussed his concerns about that acceleration at the recent Defense One Technology Summit.

“When you think about the day-trading world of stock markets, where it’s really machines that are doing it, what happens when that goes to warfare?” Roper asked. “It’s a whole level of conflict that hasn’t existed. It’s one that’s scary to think about what other countries might do that don’t have the same level of scruples as the U.S.”

It’s also scary to think about what the United States might do if its leaders woke up in a war where they were losing to those countries.

Poorly phrased, but the point is clear and relentless, and exploring it is essential. It is my position that a philosophical or ethical system – or even just a single point – that leads to the destruction of those who advocate it is an unsuccessful and unworthy system or point. But is it logical to examine the consequences of an ethical position or point and, based on the calculated results, abandon it? In this case, the ethical position is that if a human life is to be taken, a human should make that decision, not an AI. But if that imperils your ability to win a war … is it good ethics, or bad ethics? Are ethics based on reasoning, a priori, or on results, ex post facto?
