New Scientist (26 August 2017) notes that researchers have observed a strong correlation between whether monkeys are viewing images of familiar or unfamiliar objects and activity in their perirhinal cortex. I thought this was interesting:
[Yasushi Miyashita at the University of Tokyo] says perirhinal neurons help convert the perception of an object – what it is – into its meaning.
A better understanding of this conversion process could help improve machine learning, says David Sheinberg at Brown University in Providence, Rhode Island. “Any realistic robot would not only need to know what it’s looking at, but also need to know if it’s seen that thing before,” he says. “The computer vision world is still stuck looking at that first part.”
A step towards self-awareness? How do we know that we know something about an object? This has both practical and philosophical facets.
We look in a mirror, and we see something we recognize as ourselves. How do we attach meaning to that image when it’s about ourselves? Is this a deeply metaphysical question, or just a matter of storing information? I think it’s the former, because of the profound, yet paradoxically practical, questions it raises about that image: Can we trust it, since it’s a secondary source of information? What does it mean in comparison to other members of the tribe? And so on.
And a key difference between us and the artificial intelligences about which we so freely speculate is that we are, quite naturally, encased in these physical entities, one to a customer. They are mobile and distinguishable. This is not necessarily true for artificial intelligences, which could be spread across numerous geographically distributed processing units; the machines that might function as a body are certain to be manufactured, once the problems are shaken out; and so on. Will this shape an artificial intelligence’s reactions to stimuli in ways that differentiate them from a human’s?
You betcha.