New Scientist (29 August 2015, paywall) presents a short interview with Simon Colton of Falmouth University about his programs that discover things:
Can computers make breakthroughs?
I think we will only see computers making true discoveries when software can program itself. The latest version of HR [a program to discover things] is specifically designed to write its own code. But it’s a challenge; it turns out that writing software is one of the most difficult things that people do. And, ultimately, there are mathematical concepts that you can’t turn into code, especially ones dealing with infinity.
I’ve mentioned Noson S. Yanofsky and his book THE OUTER LIMITS OF REASON: WHAT SCIENCE, MATHEMATICS, AND LOGIC CANNOT TELL US in prior posts (here and here); one of his subjects was the problem of paradoxical statements, which he attributed to languages capable of self-reference. This part of the interview strikes me as related: a program that can write itself is, in a sense, self-referential, because it must understand, in some sense, that it exists and that it can modify itself in order to achieve its goals. (That a program exists is a strange thought in itself for what is basically an arrangement of bits in a computer, and it becomes stranger still once you dig into operating system implementations and realize that this arrangement of bits can be partitioned and moved around as the operating system pages programs in and out … but I digress.) I wonder whether such a program could formulate paradoxical statements and goals, and whether that would eventually constitute a certain amount of consciousness / intelligence. (Is intelligence the ability to express & comprehend a paradox?)
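To make the self-reference point concrete, here is the classic toy example: a quine, a program whose only output is its own source code. (This is just my illustration, not anything from Colton’s HR.)

    # A quine: a Python program whose only output is its own source.
    # It never reads its own file from disk; its own text is encoded
    # inside itself, which is about as literal as software
    # self-reference gets.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Run it and the two lines it prints are exactly the two lines above.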
Also of interest is the problem of representing certain mathematical concepts, such as infinity. That suggests, once again, a limitation of artificial intelligence (at least on current computing architectures) that may leave it forever unable to match us in certain competitions … or it may point to a problem with our mathematical assumptions.
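For what it’s worth, here is a small Python sketch (my own, and only a sketch) of how programs usually cope with infinity: they hold a rule that can generate values forever, or a token that merely names infinity, but never the completed infinite object itself.

    import itertools

    # A *rule* that yields endlessly many values, lazily, on demand:
    naturals = itertools.count(0)
    print([next(naturals) for _ in range(5)])   # [0, 1, 2, 3, 4]

    # A *token* that merely names infinity, with some useful ordering:
    inf = float('inf')
    print(inf > 10**100)                        # True

    # What no program can do is enumerate the whole set, or decide a
    # property quantified over all of it by checking cases: any loop
    # over `naturals` simply never terminates.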
And goals!
How do you make software discover things?
You give it data that you want to find something out about, but rather than looking for known unknowns – as with machine learning, where you know what you’re looking for but not what it looks like – it tries to find unknown unknowns.
We want software to surprise us, to do things we don’t expect. So we teach it how to do general things rather than specifics. That contradicts most of what we do in computer science, which is to make sure software does exactly what you want. It takes a lot of effort for people to get their heads round it.
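As an illustration of the difference (my own toy sketch, not Colton’s HR, though HR famously re-invented the primes in roughly this spirit), here is a program that is given no target concept at all; it composes new concepts from a background one and reports what the compositions pick out:

    # Toy concept formation: no labels, no target, just composition.
    def divisors(n):
        return [d for d in range(1, n + 1) if n % d == 0]

    numbers = range(1, 31)

    # Invented concept 1: tau(n), the *count* of an existing concept.
    tau = {n: len(divisors(n)) for n in numbers}

    # Invented concept 2: the set of numbers where the count equals k.
    for k in (2, 3):
        extension = [n for n in numbers if tau[n] == k]
        print(f"tau(n) == {k}: {extension}")

Nobody told the program to look for primes, yet tau(n) == 2 picks out exactly the primes, and tau(n) == 3 the squares of primes. A machine-learning system, by contrast, would have needed “prime” as a label before it could start.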
I’ll just say it’d be fascinating to see more on this subject. It also reminds me of the story of a friend of mine from, oh, thirty years ago, who claimed he’d put together a symbolic logic program, given it some facts, and told it to start making deductions. Occasionally it would ask him a question. Once it asked him whether the famous little jerk in Mercury’s orbit had actually been observed. And once it asked him whether a platypus was a mammal or a bird.
I’ve never been sure if he was pulling my leg or not.
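True or not, the mechanism the story describes is real and quite simple: forward-chaining over facts and rules, with an “ask the user” fallback when a needed fact is missing. A minimal sketch, with made-up rules and facts:

    # Forward-chaining with an "ask the user" fallback (illustrative
    # only; not my friend's program, whatever that was).
    rules = [
        # (premises, conclusion)
        ({"lays_eggs", "has_fur"}, "monotreme"),
        ({"monotreme"}, "mammal"),
        ({"lays_eggs", "has_feathers"}, "bird"),
    ]

    facts = {"lays_eggs"}                  # what we know about the platypus
    askable = {"has_fur", "has_feathers"}  # facts the user can supply

    def ask(fact):
        q = fact.replace("_", " ")
        return input(f"Is it true that the animal {q}? (y/n) ") == "y"

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion in facts:
                continue
            # Ask about any unknown premise, but ask each only once.
            for p in premises - facts:
                if p in askable:
                    askable.discard(p)
                    if ask(p):
                        facts.add(p)
            if premises <= facts:
                facts.add(conclusion)
                changed = True

    print("Derived facts:", facts)

The questions come out when the engine is stuck on an askable fact, which is exactly the platypus moment in the story.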
Finally, something in what I use for a brain keeps pinging me with “DNA” in connection with this entire post. Can it be said that DNA is self-referential in some meaningful sense, since it … sort of … creates itself, including the self-creation aspect? I can’t quite make myself believe it, but the pattern match is occurring and demanding to be revealed.