Sally Adee throws some cold water on the fears of Elon Musk, et al., in the pages of New Scientist (16 July 2016, paywall), suggesting that even the phrase “artificial intelligence” is misleading – because the computers aren’t really thinking:
“The black magic seduction of neural networks has always been that by some occult way, they will learn from data so they can understand things they have never seen before,” says Mark Bishop at Goldsmiths University of London. Their complexity (157 layers in one case) helps people suspend disbelief and imagine that the algorithms will converge to form some kind of emergent intelligence. But it’s still just a machine built on rule-based mathematical systems, says Schank.
In 2014, a paper that could be seen as the successor to the Lighthill report punctured holes in the belief that neural networks do anything even remotely akin to actual understanding.
Instead, they recognise patterns, finding relationships in data sets that are so complex that no human can see them. This matters because it disproves the idea that they could develop an understanding of the world. A neural network can say a cat is a cat, but it has no concept of what a cat is. It cannot differentiate between a real cat or a picture of one.
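The distinction the article draws – pattern matching versus understanding – can be made concrete with a minimal sketch (my own illustration, not any system from the article): a toy classifier that labels an input purely by its distance to stored examples. The feature vectors and labels below are invented for the example.

```python
from math import dist

def nearest_label(example, training_data):
    """Return the label of the training example closest to `example`.

    The "model" has no concept of a cat; it only measures numeric
    similarity between feature vectors.
    """
    best_label, best_dist = None, float("inf")
    for features, label in training_data:
        d = dist(example, features)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Toy "pixel statistics" standing in for learned image features.
training = [
    ([0.9, 0.1, 0.8], "cat"),
    ([0.1, 0.9, 0.2], "dog"),
]

# The same feature vector gets the same label whether it came from a real
# cat or a photograph of one -- nothing here can tell the difference.
print(nearest_label([0.85, 0.15, 0.75], training))  # -> cat
```

A deep network replaces the stored examples with millions of learned weights, but the point stands: the output is a label produced by numeric similarity, with no concept behind it.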
The paper isn’t the only thing giving people deja vu. Schank and others see money pouring into deep learning and the funnelling of academic talent.
“When the field focuses too heavily on short-term progress by only exploring the strength of a single technique, this can lead to a long-term dead end,” says Kenneth Friedman, a student at the Massachusetts Institute of Technology, who adds that the AI and computer science students around him are flocking to deep learning.
So the suggestion is that the AI field may be pursuing another dead-end approach in the search for actual thinking machines. Not everyone agrees on this point, but Roger Schank at Northwestern University puts it bluntly:
“The beginning and the end of the problem is the term AI,” says Schank. “Can we just call it ‘cool things we do with computers’?”
In other news, also from New Scientist, comes word of new EU regulations that will impact the more mysterious computing systems:
Soon, you may have the right to ask the inscrutable algorithms involved to explain themselves.
In April this year, the European parliament approved the General Data Protection Regulation (GDPR), a new set of rules governing personal data. Due to go into effect in 2018, it introduces a “right to explanation”: the opportunity for European Union citizens to question the logic of an algorithmic decision – and contest the results.
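What a “right to explanation” might mean in practice is still being worked out, but one hypothetical sketch is a decision procedure that returns its result together with the factors behind it, so the person affected has something concrete to contest. The function, thresholds, and field names below are invented for illustration and are not drawn from the GDPR text.

```python
def loan_decision(income, existing_debt):
    """Return (approved, reasons) so the applicant can question the logic.

    Hypothetical example rules -- the thresholds are illustrative only.
    """
    reasons = []
    if income < 20_000:
        reasons.append("income below 20,000 threshold")
    if existing_debt > income * 0.5:
        reasons.append("debt exceeds half of income")
    approved = not reasons  # approve only if no rule was triggered
    return approved, reasons

approved, reasons = loan_decision(income=18_000, existing_debt=12_000)
print(approved)  # -> False
print(reasons)   # -> both rules triggered, so both reasons are listed
```

For a rule-based system like this, producing the explanation is trivial; the hard cases are exactly the deep-learning systems discussed above, whose “reasons” are distributed across millions of weights.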
In life some things can be controlled and some cannot, and a key to sanity is having at least some control. The more we are at the mercy of the unfeeling vortex, the less happy we become, and the more bad decisions we make. The software engineers may whine about it, but I’ll happily applaud this decision and hope the idea makes its way across the Atlantic.