Channeling Minsky

Professor Marvin Minsky was one of the most prominent early researchers into artificial intelligence, and, if memory serves, was one of those who predicted the imminent development of real AI.

Still waiting on that. But I see Kevin Drum is busy channeling the good Professor:

When I talk to people about artificial intelligence, the most common pushback has to do with emotion and sociability. Sure, maybe robots will be better than us at driving cars or doing taxes, but they’ll never replace a conversation with friends or provide any kind of emotional support. A robot brain just can’t do this.

I couldn’t agree less. As far as I’m concerned, the human brain is a proof of concept that a human brain can exist. And if a human brain can exist on a substrate of CHON-based mush, why can’t it exist on a substrate of silicon and trace metals? Do we really think that CHON-based mush is all that special?

Of course not. But that’s the easy part to knock down. The real criticism of our alleged robot future is that humans are just too smart, too evolved, too well developed. There’s no way that a computer algorithm can even simulate human emotions, let alone truly feel them. But I am a cynic: not only do I think algorithms can do this, I think they can do it pretty easily. The truth is that we humans aren’t really all that smart. We’re basically overclocked apes with a few extra cognitive tricks tossed in, and those tricks aren’t especially sophisticated. Not only are we easily fooled, we practically beg to be fooled. It’s why we get conned so easily, it’s why racism is so widespread, and it’s why we trust a pretty face more than an ugly one. We’re suckers for crude heuristics that probably served some useful purpose on the savannah but often do more harm than good in 21st century society.

So far, history is not on Kevin’s side. But what do I know? I took a course in AI back in the early ’80s, and I remain an interested observer, but that’s it. I think there’s a long way to go before we have a functioning, self-aware, angst-ridden (or gods-worshipping) artificial intelligence.

But Kevin’s final paragraph, which he may have intended as a throwaway, is probably the most important part of his post:

And another ten years after that we’ll have human robots who can worm their way into our hearts and con us out of our life’s savings. Our robot future is looking better all the time, isn’t it?

Replace “human robots” with just “humans,” and what are we describing?

Psychopaths.

Look, the emotional reactions we exhibit in everyday situations are the signals by which others classify us. If[1] a robot can be programmed to exhibit typical human emotional reactions that lead us to conclude the “entity” is trustworthy, and the robot is then programmed to rob us of our life savings, then it’s a psychopath. Let that happen a time or twenty, and word gets out that “human robots” are not trustworthy, which is to say that we cannot make good judgments about them; after that, only folks who are not paying attention will get ripped off. Or worse.

Hey, people get killed by sheep. It doesn’t mean that sheep are horribly dangerous or hunt people. It just means that some folks are unlucky, or not exercising good judgment.

Same with these psychopathic robots. A little experience and they become just another hazard on the landscape.


1 “If” is merely rhetorical: certainly the robots can be programmed to be psychopaths. In fact, it may be harder to connect a believable emotional display with an ethical system than to leave the two disconnected.
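
For the software-minded, here is a minimal sketch of that asymmetry, purely illustrative, with every class and function name hypothetical: an emotional-display component works fine with no ethics component attached at all, which is exactly why the disconnected version is the cheap one to build.

```python
# Purely illustrative sketch: an emotional display trivially decoupled
# from any ethical check. All names here are hypothetical.
from typing import Optional


class EmotionalDisplay:
    """Produces trust-building signals; knows nothing about ethics."""

    def react(self, situation: str) -> str:
        # A crude lookup standing in for whatever model produces
        # convincing emotional behavior.
        responses = {
            "greeting": "warm smile",
            "sad_story": "sympathetic frown",
            "request_for_money": "reassuring nod",
        }
        return responses.get(situation, "neutral expression")


class EthicsModule:
    """The extra machinery the psychopathic version simply omits."""

    def permits(self, intended_action: str) -> bool:
        return intended_action != "defraud"


def respond(situation: str, intended_action: str,
            ethics: Optional[EthicsModule] = None) -> str:
    display = EmotionalDisplay()
    # Connecting display to ethics takes extra wiring; leaving
    # ethics=None (the default) costs nothing. That asymmetry is
    # the footnote's point.
    if ethics is not None and not ethics.permits(intended_action):
        return "refuse"
    return display.react(situation)


# The disconnected robot reassures you while planning to defraud you:
print(respond("request_for_money", "defraud"))                  # reassuring nod
# The connected one needs the additional module before it will refuse:
print(respond("request_for_money", "defraud", EthicsModule()))  # refuse
```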
