It Isn’t Intelligence If It Can Be Marketed

Trevor Paglen is a geographer and artist who, in the latter capacity, works with what is still called artificial intelligence, and so I found his viewpoint on AI, expressed in a recent New Scientist review of his work (12 October 2019, paywall), to be interesting:

Paglen fears the way the word intelligence implies some kind of superhuman agency and infallibility to what are in essence giant statistical engines. “This is terribly dangerous,” he says, “and also very convenient for people trying to raise money to build all sorts of shoddy, ill-advised applications with it.”

Long-time readers know that I’ve often expressed strong skepticism about the use of the term artificial intelligence in connection with what passes under that rubric. Intelligence is not an easy thing to define, and it’s become more and more clear that there is a wide spectrum of behaviors that may be counted as intelligence, such as recognizing oneself in a mirror, a capability which extends beyond humans to certain cetaceans and others. Anyone with a dog or a cat, or an alpaca, knows there’s some intelligence in the critter.

But it seems to me that the use of the phrase artificial intelligence is, in itself, somewhat specious. Is there really a point to distinguishing the substrate of the intelligence? Does this accomplish anything beyond noting that it’s not biological, and thus possibly inferior? That would certainly be congruent with my sneaking hunch that people and corporations would still prefer to work with enslaved creatures that can hardly fight back, and that AI fills the bill.

I once touched, in passing, on an observation about when something is or isn’t this advanced form of programming, in the context of ranked-choice voting in Maine, and I’ll reiterate it:

When a programmer is given a task to solve, typically the steps that we’re encoding for the computer to follow are either well-known at the time of the assignment, or they can be deduced through simple inspection, or they can be collected out in the real world. An example of the last choice comes from the world of medicine, where early attempts at creating a diagnosis AI began with collecting information from doctors on how to map symptomology to disease diagnosis.

These steps may be laborious or tricky to code, either due to their nature or the limitations of the computers they will be run on, but at their heart they’re well-known and describable.
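To make that concrete, here’s a minimal sketch of this first style in Python. The symptom-to-diagnosis rules are invented purely for illustration; the point is only that the programmer writes the recipe down explicitly, in advance.

    # A toy, hand-coded "recipe": the mapping from symptoms to a diagnosis
    # is specified entirely by the programmer ahead of time.
    # These rules are invented for illustration only, not real medicine.
    RULES = {
        frozenset({"fever", "cough"}): "influenza",
        frozenset({"sneezing", "runny nose"}): "common cold",
    }

    def diagnose(symptoms):
        """Return the first diagnosis whose required symptoms are all present."""
        observed = set(symptoms)
        for required, diagnosis in RULES.items():
            if required <= observed:  # subset test: every required symptom observed
                return diagnosis
        return None

    print(diagnose(["fever", "cough", "fatigue"]))  # -> influenza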

My observation of ML, on the other hand, is that ML installations are coded in such a way as not to assume that the recipe is known. At its heart, ML must discover the recipe that leads to the solution through observation and feedback from an authority entity. To take this back to the deferment I requested a moment ago, the encoding of the discovered recipe is often opaque and difficult to understand, as the algorithms are often statistical in nature.
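Here’s the same toy problem in the second style, again just a sketch with invented data: the program is handed labeled examples, the feedback from an authority entity, and has to infer the mapping itself.

    # A toy learner: the rules are not given; the program is given labeled
    # examples instead and must infer the symptom-to-diagnosis mapping.
    # The training data is invented for illustration only.
    from collections import Counter, defaultdict

    training_data = [
        ({"fever", "cough"}, "influenza"),
        ({"fever", "cough", "fatigue"}, "influenza"),
        ({"sneezing", "runny nose"}, "common cold"),
        ({"runny nose", "cough"}, "common cold"),
    ]

    # Count how often each symptom co-occurs with each diagnosis.
    counts = defaultdict(Counter)
    for symptoms, diagnosis in training_data:
        for symptom in symptoms:
            counts[symptom][diagnosis] += 1

    def diagnose(symptoms):
        """Score each diagnosis by the learned co-occurrence counts."""
        scores = Counter()
        for symptom in symptoms:
            scores.update(counts.get(symptom, Counter()))
        return scores.most_common(1)[0][0] if scores else None

    print(diagnose({"fever", "fatigue"}))  # -> influenza, learned rather than hand-coded

What it ends up with is a table of co-occurrence counts rather than a legible rule, which is exactly the sort of opaque, statistical encoding I mean.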

I think that digital historians (and this dude claims he’s working on digital archaeology, so don’t laugh) will eventually classify algorithms based on whether the rules directing the program were concocted and encoded by the programmers or users, or whether the program itself had to deduce the rules from behaviors and feedback supplied by humans or other entities that can validate the deduced rules. Note that the colloquial definitions of artificial intelligence, which require at least an approach to self-agency, even if it’s never achieved, aren’t really relevant to this classification.

And this is important, as Paglen notes, because true self-agency, if it ever occurs, won’t just be a game-changer: it’ll be a positive feedback loop. Engineers know that such loops amplify initially small effects in ways that are often out of control and destructive, while negative feedback loops damp down undesirable behaviors through detection and suppression.
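The arithmetic is easy to sketch. The gain and the starting perturbation below are arbitrary, chosen only to show the shape of the two curves:

    # Positive feedback amplifies a small perturbation; negative feedback damps it.
    # The gain of 0.5 and the starting value of 0.01 are arbitrary illustrations.
    x_pos = x_neg = 0.01
    gain = 0.5
    for _ in range(10):
        x_pos += gain * x_pos  # positive feedback: the output reinforces itself
        x_neg -= gain * x_neg  # negative feedback: the output suppresses itself
    print(x_pos, x_neg)  # roughly 0.58 versus 0.00001 after ten steps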

An angry self-aware computer isn’t something we want to face, I suspect. Not only are there strong ethical conundrums to worry about, but, if it has access to weaponry, the concerns become existential.

Back to Paglen for another choice observation:

Asked what concerns him more, intelligent machines or the people who use them, Paglen answers: “I worry about the people who make money from them. Artificial intelligence is not about making computers smart. It’s about extracting value from data, from images, from patterns of life. The point is not seeing. The point is to make money or to amplify power.”

And they’re not concerned about ethics, I suspect, although occasionally a bit of noise about ethics leaks through. There’s gold up in those hills to be collected, and the slowpoke gets nothing. Ethics, shmethics.
