I thought this was interesting:
Sounding the alarm on the growing opacity of advanced AI reasoning models. Fortune reporter Beatrice Nolan reported this week on a group of 40 AI researchers, including contributors from OpenAI, Google DeepMind, Meta, and Anthropic, who are sounding the alarm on the growing opacity of advanced AI reasoning models. In a new paper, the authors urge developers to prioritize research into “chain-of-thought” (CoT) processes, which provide a rare window into how AI systems make decisions. They warn that as models become more advanced, this visibility could vanish. [Fortune]
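For anyone who hasn’t seen one, here is a minimal sketch of what a chain-of-thought prompt and trace look like. The question and the “model output” below are invented purely for illustration and don’t come from any particular system; the point is that the intermediate steps are currently visible text that researchers can read.

```python
# Minimal illustration of a chain-of-thought (CoT) prompt and trace.
# Everything here is made up for illustration; no real model is called.

prompt = (
    "Q: A train leaves at 14:10 and arrives at 16:45. How long is the trip?\n"
    "Think step by step before giving the final answer."
)

# A hypothetical CoT trace: the visible intermediate reasoning that the
# paper's authors want to keep legible as models grow more capable.
model_output = (
    "From 14:10 to 16:10 is 2 hours.\n"
    "From 16:10 to 16:45 is 35 minutes.\n"
    "Final answer: 2 hours 35 minutes."
)

print(prompt)
print("---")
print(model_output)
```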
So, is Machine Learning (ML) really that much like how humans reason? Mind you[1], there’s formal reasoning, which is difficult and deeply dependent on an in-depth understanding of context and reality, which in realistic situations is almost impossible; informal reasoning, not guaranteed to lead to correct conclusions but much easier and less dependent on context, since much of it is heuristics; and intuitive reasoning, the “I just know” that is so often bloody wrong – but has you running from the tiger just because that fern flapped in the breezeless afternoon.
So, if Elon Musk’s worst nightmare arrives, that of an actually super-intelligent AI system, can we understand its thought processes? Or will its reasoning work in such a different manner from the above that we’ll stare at the CoT and consider it gibberish?
Hard to say. The fact that I can’t imagine another way to reason doesn’t mean it’s not possible; it only means I’m not imaginative. Or, as with calculus, that I’m not trying hard enough.
[1] And I’m pulling this out of my ass from observation, not from any study of psychology, which I always find a frustrating experience.

