Motivations For AI

Microsoft Research’s Kate Crawford has been studying AI for close to twenty years, and in this NewScientist (27 March 2021) interview she discusses her observations. NS provides a helpful introduction:

[Crawford] argues that AI, far from being something abstract and objective, is both material and intrinsically linked to power structures. The way it is made involves extracting resources from people and the planet, and the way it is used reflects the beliefs and biases of those who wield it. Only when we come to terms with this, says Crawford, will we be able to chart a just and sustainable future with AI.

Or, perhaps, reject it. But how to define AI?

You say in your new book [Atlas of AI] that AI is neither artificial nor intelligent. What do you mean?

Often when people think about artificial intelligence, they’ll think about binary code and math, or something that’s ethereal and in the cloud, or they might think about a series of corporate products like Alexa, Siri or Google’s search algorithm. But none of these things are artificial – in fact they are profoundly material. They only function because of large amounts of data scraped from the internet and an enormous extraction of resources, including minerals, energy and the human labour that is necessary to label the data used by AI systems. In this sense, AI is a material system that is very much coming from humans, created by humans, and more widely from the earth.

The implication is that artificial and material are opposites, but, at least in common usage, they are not. I think Crawford needs to reconsider her communications strategy: I'm a software engineer from the 1980s onward who, while not involved in AI R&D, has given the ethics of AI some thought, as noted on this blog, and I don't quite understand the point she's trying to make.

Artificial Intelligence from the movies!

I can – and do – accept the operational aspects of that paragraph, and I think they have enormous implications. I also reject the notion that we've nearly achieved AI in the literal sense – that is, as long-time readers know, the emergence of self-agency: an entity that selects its own goals and evaluates its hypothetical actions, in reaction to a stimulus, in terms of those self-selected goals.

What computer science departments and corporate marketing call AI is nothing more than a set of exceedingly sophisticated tools developed by and for humans.

This exchange, too, is somewhat puzzling:

You say that it is inherently political too. How?

Artificial intelligence is politics all the way down. From the way in which data is collected, to the automated classification of personal characteristics like gender, race, emotion or sexual identity, to the way in which those tools are built and who experiences the downsides.

Time and time again we’ve seen that people who are already marginalised are the ones who experience the worst harms from large-scale artificial intelligence systems. We’ve seen communities of colour targeted by predictive policing systems, immigrants surveilled and tracked by deportation tools, and people with disabilities cut off from support services due to poorly designed healthcare algorithms.

I’m optimistic when I see people starting to demand greater justice, transparency and accountability. We’ve seen widespread student protests in the UK over algorithmic mismanagement in the education system and we’ve seen substantial public pushback around facial recognition in the US.

Politics is an intentional activity, and I'm not convinced that the data collection practices that produce poor results in areas such as facial recognition are the result of politics. I've been in the trenches long enough to know that, hey, people make mistakes. That doesn't make them deliberately political; it just means we are humans operating in a world which we've constructed, but for which we have not evolved. When the big picture means the populations of seven continents in all of their eccentricities, and you've never left the suburb of Philly where you grew up, except for that one big trip to NYC, it's not hard to see mistakes being made.
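As a toy illustration of that point (entirely synthetic – my sketch, not anything from Crawford or the interview): a model trained on data that underrepresents one group tends to perform worse on that group, with no political intent anywhere in the pipeline, just a skewed collection process. The `make_group` helper and the shift values below are hypothetical.

```python
# A minimal, synthetic sketch: unequal data collection alone can produce
# unequal error rates, no deliberate politics required.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate 2-D points; each group's class boundary differs via `shift`."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Group A dominates the training sample; group B was barely collected.
Xa, ya = make_group(1000, shift=0.2)
Xb, yb = make_group(50, shift=1.5)
X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])

model = LogisticRegression().fit(X, y)

# Evaluate on fresh, equal-sized samples from each group.
Xa_test, ya_test = make_group(500, shift=0.2)
Xb_test, yb_test = make_group(500, shift=1.5)
print("accuracy on group A:", model.score(Xa_test, ya_test))
print("accuracy on group B:", model.score(Xb_test, yb_test))
```

The model scores noticeably worse on group B, not because anyone intended it to, but because the sample was gathered unevenly – which is exactly the kind of mistake I mean.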

But I'm picking nits – important nits – with her argument. Because "AI" lacks self-agency, it also necessarily lacks an independent ethics[1]. It does what it's told – and if that's to track Chinese citizens to make sure they behave in the manner prescribed by the Chinese Communist Party, then that's what it'll do. And that is political.

Given the proliferation of these tools – better called machine learning (ML) tools – I think Crawford is absolutely asking and answering the right questions, and, whether or not I understand those answers, I think she's doing important work. The interview is a good place to start.


1 Not that intelligence necessarily gives rise to a sense of ethics! But I think it's a necessary precursor, whether that sense of ethics is self-generated (collaboratively or not) or taught. People like serial killer Ted Bundy show that intelligence doesn't automatically lead to ethics, and to not recognize that is to make the digital mistake – treating ethics as simply on or off – with human intelligence. Even this postscript makes that mistake to a degree: I could see an argument that all humans come with a code of ethics; it's just that some codes are agreeable to building a stable, peaceful society, and some are so self-centered that society rejects them. The malleability of the concept is treacherous.


