With No Moral Direction

Susan Landau and Cathy O’Neil give an example on Lawfare of how artificial intelligence is not quite as foolproof as one might hope:

Consider the recent example of Microsoft’s Tay-bot. This conversation-ready twitter bot was hastily removed sixteen hours after it was first put out. Tay had been carefully trained to avoid discussions of Eric Garner, but within a day the bot was spewing racist and anti-Semitic obscenities. According to Microsoft, this was because it was being trained through a “coordinated attack” by groups of Twitter users. And if you think we can get away with having bots that don’t “learn” from their interaction, we cannot. Without a dynamic interactive component, the bots will be obviously not human. We should learn our lesson from the Tay-bot; AI is simply not very smart yet.

One wonders how much ‘parenting’ the bot received. After all, the only intelligent beings whom we respect are ourselves, so I (with my one college course in AI back in the 1980s <- this is known as self-snarking) would expect that our regimen for building an AI should include a ‘parenting’ phase akin to our own: basically a supervised learning phase during which basic morality and ethics are instilled.

Judging from the bot’s reaction to the Twitter users, no such phase took place, and the bot, with no directives toward, let us say, “right thinking”, went off the rails at the prompting of the “bigger boys”.
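
For concreteness, here is a toy sketch of what even a crude supervised ‘parenting’ gate might look like: a handful of human-labeled examples, a trivial bag-of-words tally, and a check before the bot is allowed to post. The labeled examples, the gate_reply() helper, and the whole approach are hypothetical illustrations, nothing more.

    # Toy sketch of a supervised "parenting" phase (hypothetical, for
    # illustration only): humans label a few example utterances, the bot
    # learns simple word counts per label, and every candidate reply is
    # checked against that model before it is posted.
    from collections import Counter

    LABELED_EXAMPLES = [
        ("have a nice day", "ok"),
        ("thanks for the chat", "ok"),
        ("people like you are subhuman", "blocked"),
        ("i hate everyone like you", "blocked"),
    ]

    def train(examples):
        """Tally word frequencies per label -- a bare-bones Naive Bayes flavor."""
        counts = {"ok": Counter(), "blocked": Counter()}
        for text, label in examples:
            counts[label].update(text.lower().split())
        return counts

    def gate_reply(counts, candidate):
        """Let the reply through only if it looks more 'ok' than 'blocked'."""
        words = candidate.lower().split()
        ok = sum(counts["ok"][w] for w in words)
        bad = sum(counts["blocked"][w] for w in words)
        return candidate if ok >= bad else "[reply withheld by supervisor]"

    model = train(LABELED_EXAMPLES)
    print(gate_reply(model, "thanks, have a nice day"))       # passes
    print(gate_reply(model, "everyone like you is subhuman"))  # withheld

Nothing about this toy should be mistaken for how Tay actually worked; the point is only that a supervised gate is a separate, deliberate phase that someone has to design and label data for.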

Regardless of the motivations of the Twitter users, I applaud their activities, as they expose a flaw in this approach to AI. Fortunately for researchers, once they get the parenting phase right, it should be a simple matter of copying that moral direction into each subsequent bot.
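
That copying step, at least, really is the easy part in principle: once a supervised model exists, its learned parameters can be serialized and handed to every new bot. A minimal sketch, assuming the toy train() model from the sketch above and a hypothetical supervisor.json file:

    # Hypothetical sketch: persist the trained "moral direction" once, then
    # load it into each subsequent bot instead of re-parenting from scratch.
    import json
    from collections import Counter

    def save_supervisor(counts, path="supervisor.json"):
        """Write out the per-label word counts learned during the parenting phase."""
        with open(path, "w") as f:
            json.dump({label: dict(c) for label, c in counts.items()}, f)

    def load_supervisor(path="supervisor.json"):
        """Give a fresh bot the same learned counts, with no re-training needed."""
        with open(path) as f:
            return {label: Counter(c) for label, c in json.load(f).items()}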

Unfortunately, parenting can be a hard thing to do. It might take years to finally get it right.
