I See A Ship In My Wake

How it got into the hotel bar, I’ll never know. With apologies to Groucho Marx.[1]

Sorry. Not really. What triggered some pleasant silliness in this time of shock & unhappiness?

Indeed, many AI scientists are increasingly of the view that LLMs aren’t a route to the lofty goal of artificial general intelligence (AGI), capable of matching or exceeding anything a human can do – a machine that can think, as Turing would have put it. For example, in a recent survey of AI researchers, about 76 per cent of respondents said it was “unlikely” or “very unlikely” that current approaches will succeed in achieving AGI.

Instead, perhaps we need to think of these AIs in a new way. Writing in the journal Science this week, a team of AI researchers says they “should not be viewed primarily as intelligent agents but as a new kind of cultural and social technology, allowing humans to take advantage of information other humans have accumulated”. The researchers compare LLMs to “such past technologies as writing, print, markets, bureaucracies, and representative democracies” that have transformed the way we access and process information. [“Should governments really be using AI to remake the state?” NewScientist (22 March 2025)]

As long-time readers may recall, I’ve been expressing the opinion for quite a while that generative AI is little more than a party trick. It may be useful as a summarizer in certain situations – although generative AI’s propensity for making things up must give users pause.

But I also wonder why this took so long. Could it be the marketing magic of the artificial intelligence industry? Ask most any AI researcher, and reportedly they will tell you that what is popularly called artificial intelligence isn’t. It’s better known as machine learning. It even has a well-known abbreviation: ML.

In case you’re wondering, the mythical artificial intelligence is that which can pass the famous Turing Test,

… is a test of a machine’s ability to exhibit intelligent behaviour equivalent to that of a human. In the test, a human evaluator judges a text transcript of a natural-language conversation between a human and a machine. The evaluator tries to identify the machine, and the machine passes if the evaluator cannot reliably tell them apart. The results would not depend on the machine’s ability to answer questions correctly, only on how closely its answers resembled those of a human. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic). [Wikipedia]

Or, in other words, think of your friends and being able to depend on an AI to be as conceited and self-centered as them while spouting mathematical proofs of arcane propositions.

ML? Sure, here’s the Wikipedia definition, but I like mine better. Problem solving is the process of moving from state A (“my car won’t start!”) to state B (“my car started!”). If you know how to get from A to B, you don’t need ML (or AI, either). But if you don’t know how to get from A to B, nor does anyone else, then ML may be applicable. ML has proven reasonably good at discerning rules for taking a set of data (“This set of data represents a breast X-ray”) and discovering its implications (“You have breast cancer!”), if shown what similar sets of data imply.
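For the curious, the “shown what similar sets of data imply” part can be sketched in a few lines. What follows is a toy nearest-neighbour classifier – one of the simplest ML techniques, not what any real diagnostic system uses – and the feature numbers and labels are entirely invented for illustration:

```python
# A minimal sketch of learning from labeled examples: classify a new sample
# by the label of its nearest labeled neighbour. Data is hypothetical.

def classify(sample, examples):
    """Return the label of the labeled example closest to `sample`."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Find the (features, label) pair whose features are nearest to `sample`.
    _, label = min(examples, key=lambda ex: squared_distance(sample, ex[0]))
    return label

# Labeled training data: (features, label) pairs. Purely made-up numbers.
training = [
    ((0.1, 0.2), "benign"),
    ((0.2, 0.1), "benign"),
    ((0.9, 0.8), "malignant"),
    ((0.8, 0.9), "malignant"),
]

print(classify((0.15, 0.15), training))  # near the "benign" cluster
print(classify((0.85, 0.85), training))  # near the "malignant" cluster
```

The point of the sketch: nobody wrote a rule saying what “benign” looks like; the rule is implicit in the labeled examples. Real systems scale this idea up to millions of parameters and examples, but the A-to-B framing is the same.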

I’m a bit off the beaten track, so let me get back to marketing. I’ve been somewhat fascinated by the thought that some of our current social chaos is caused by sloppy thinking. For example, those who want socialized medicine, aka single-payer healthcare, will occasionally make the argument that the citizenry deserves free healthcare. They make that statement with great sincerity.

And it’s just total bullshit. If my reader discards the well-meaning but lazy Well, I know what they mean filter, and turns on their literal interpretation capability, the “argument,” such as it is, collapses in a cloud of dust.

And if I continue along this abstract track, I soon ask whether such bad rhetorical devices actually infect their users’ reasoning, with bad consequences. Our example is certainly unconvincing, but rather than spurring a search for more effective arguments, it functions as a placeholder, a checkmark: I made an argument, one approved by my fellow enthusiasts. In that sense, the bad rhetorical argument binds a disparate collection of individuals into a group, which implies social bonds, social hierarchies, and the whole apparatus Western Civilization pursues. It accomplishes an unstated goal while actually hindering the putative one.

As I said, it’s better to think of AI as ML. But if we call it AI, a phrase freighted with implications drawn from its constituent words, then certain expectations, beneficial to the patrons of the marketers, come into play, while the limitations (or advantages, if you think about it) of ML fade from consciousness.

And so the subtle contamination of the reasoning mind continues.


1 Yeah, yeah, I know, mine doesn’t depend on the ill-advised lack of punctuation to bring forth the ambiguity; for me, of late, the odd, unrelated multiple meanings of many English words have been striking. Why they went on strike … English is all about confusion masquerading as communication. Yes, the cat tricked me into getting up early this morning, and I’m grumpy about it.

About Hue White

Former BBS operator; software engineer; cat lackey.
