Elon Musk, techie and founder of Tesla, SolarCity, and SpaceX, is wary of artificial intelligence, as Nathaniel Scharping notes on D-brief. Musk is calling for proactive regulation of artificial intelligence systems, fearing that otherwise killer robots may be cruising down Main Street before we know it. Scharping helpfully interviews some artificial intelligence experts concerning Mr. Musk’s worries. Most thought the call was premature, but I found Martin Ford’s response unsettling:
Calls to immediately regulate or restrict AI development are misplaced for a number of reasons, perhaps most importantly because the U.S. is currently engaged in active competition with other countries, especially China. We cannot afford to fall behind in this critical race.
This is the sort of response built on a false assumption – that regulation, or even the discussion of regulation, will slow down the development of the product[1]. The fact of the matter is that regulation, at its best, should be an attempt to amalgamate the judgment of multiple experts, working in independent and mutually non-communicative contexts, into a coherent set of rules which will help increase the safety factor[2] in our work. Ford characterizes China as pulling ahead if we work within a regulatory framework while China does not; why doesn’t he characterize it as China taking greater chances by not regulating this work, perhaps even losing an entire city to a wayward AI system?
All that said, several of the other researchers seemed to feel it wasn’t a big deal. Here’s researcher Toby Walsh:
And I’m not too worried about what happens when we get to super-intelligence, as there’s a healthy research community working on ensuring that these machines won’t pose an existential threat to humanity. I expect they’ll have worked out precisely what safeguards are needed by then.
Apparently he hasn’t paid attention to what intelligent entities have done to each other throughout history – despite all those safeguards. Hell, I’ll bet the first dozen “kill switches” deployed as a preventive against rogue AIs fail because the AIs figure out how to disable them.
No matter how smart the kill switch inventors consider themselves.
[1] I find the idea of equating artificial intelligence with a “product” to be unsettling, but that’s irrelevant to the topic.
[2] I deliberately avoid such misleading words and phrases as “assure,” “ensure,” or “optimize,” as they imply some end point beyond which no more improvement can be made. Of course improvement can be made; our language is imprecise.