The Future of Smart Robots

Ross Douthat brings up a subject that’s been bothering me for years:

One of the anxieties haunting the 21st century is a fear that technological change will soon make many human lives seem essentially superfluous.

It’s a fear as old as the Luddites, but the promise of computing, robotics and biotechnology has given it new life. It suddenly seems plausible that a rich, technologically proficient society will no longer offer meaningful occupation to many people of ordinary talents, even as it offers ever-greater wealth, ever-widening powers and, perhaps, ever-longer life to the elite.

Then he veers off to his own specific concerns in the area of religion; I think he might have pushed a little further to discover that underlying this anxiety are robots and artificial intelligence (AI) – the great enablers of the technology he is concerned about. The usurpation of today’s jobs by creatures of our own making is what concerns me.

New Scientist magazine regularly covers the new capabilities of robots and artificial intelligence; and, unlike most of the articles I read in that venerable pop-sci magazine, articles on these two topics do not excite me. Unlike astronomy, cosmology, even medicine, these two fields carry an unpredictable ethical component in their future that leaves me pondering.

Not being a creature of any great faith, I can hope there will be new jobs, new endeavours, hell, new adventures for future generations – with robots and even AI right alongside them – but, honestly, I have no certainty in that. And yet, having been a subscriber to REASON Magazine for 20+ years (I let it lapse several years ago when it abruptly converted into a cheerleading squad for the GOP), I know the Libertarians would no doubt suggest exactly that. Their thesis, at least back when I was reading their monthly output, would be that the robots are simply freeing us to find new endeavours, which in turn will improve the general lot of mankind. And perhaps they’re right. Foreseeing the future is rarely mankind’s forte.

My reaction when reading about AI is mixed: an interest in the technique, but a real feeling of WHY? This planet positively crawls with nearly 8 billion people, most of them fairly smart and capable of doing the same work asked of an AI-based program, in most cases much better. In the future, however, that may become less and less true.

Of course, the AI can generally do its job much more cheaply than a human; after all, a human is an active agent in deciding to trade their labor for (generally) money, and, if they’re not an active agent, we may strip away the euphemism and call them slaves.

Which leads to the next question: when does an AI become a slave? Professor Nick Bostrom begins to address the question in this paper (page 8) with his Principle of Substrate Non‐Discrimination:

Principle of Substrate Non‐Discrimination
If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.

I prefer a more informal approach: if it’s self-aware, capable of self-analysis, and has the drive to survive (and that seems axiomatic on the face of it), then it’s a creature worth our respect, by which I mean our relationship with it should be governed by a moral code – not by our use of the power button.

So what do we owe to a new life form?  Assuming a classic computer, it’s easy enough to replicate; if we assume a new, magical piece of computing machinery, maybe not so much.  Where one robot can be built, so can a slew.  Are we bound to create more sentient creatures once we’ve built one?  That’s the ethical question that bothers me – if we can build something self-aware, then must we make more?  The potential is there, as the Catholic Church might assert; whether we must fulfill that potential is not clear.  Bostrom suggests the question may be out of our hands; the AI may replicate itself, assuming sufficient resources.  I do have to wonder, though, whether an AI will have the drive to replicate.

On an entirely different tangent, whenever I read some gushing over the latest robot, it always crosses my mind: are we just looking for the next slave? Well, the answer is actually yes: every time we automate some process and take a human out of the loop, we’ve once again tried to accomplish some purpose at a minimal outlay. I can’t help but note how the ancient evil of slavery relates to labor-saving machinery; but I shall not even consider equating the two. My suspicion is that if a machine does become self-aware, those humans profiting from it will deny it, and then fight any liberties which it may demand for itself, or others will demand for it.

Circling back to Ross: which world does that lead us to? One immeasurably richer, as our silicon-and-metal slaves do all the labor and we sit back, eating grapes? Or one of a few extremely rich people surrounded by the jobless masses with no means of making a living?

Perhaps this is what we’ll value in the medium-term future: products with the label “Made by Human Hands.”


About Hue White

Former BBS operator; software engineer; cat lackey.

Comments are closed.