Adam Aleksic talks about how feedback loops involving ChatGPT and other generative AI tools built on large language models (LLMs) are affecting language use.
Linguistically, there’s nothing wrong with [the emphasis of certain words over their synonyms]. The word “delve” isn’t any worse for your brain than its synonyms. But language is a harbinger of greater social shifts. There are many other, far more insidious misrepresentations that are also coded into LLMs. Racial biases, gender biases and political biases — all of these are probably trained into the models much like linguistic biases, but these are harder to definitively measure. We need to remember that these aren’t neutral tools: They hold the power to subtly reshape our thinking. [WaPo]
And they are at least one step removed from reality, probably several. We're not talking about creatures that experience reality directly through their senses. No, all they get is text input.
A good reason to be wary of such programs. Don't accept their outputs without checking them, and please don't fall in love with them.
