Ali Alkhatib, a PhD student in computer science at Stanford, points out that computer engineers are no different from any other part of Western Civ these days – they go where the money tells them to go. For him, that’s a problem:
[Professor James Landay] goes on to write about Engelbart’s “mother of all demos” in 1968, the introduction of something like half a dozen features of modern computing that we use every day: text editing (including the ability to copy/paste), a computer mouse, a graphical user interface, dynamic reorganizations of data, hyperlinks, real-time group editing (think Google Docs), video conferencing, the forerunner to the internet, the list goes on. What he doesn’t write about – what few of us talk about – is the funding the Stanford Research Institute got from the Department of Defense, the role the DoD played in the development of the internet and of Silicon Valley itself, and the uncomfortable readiness with which we collaborate with power. We’re shaping our work toward the interests of organizations – interests that are at best neutral and at worst in opposition to the interests of the public.
John Gledhill wrote about the work of political anthropologists in the 1940s and 1950s in Power and its Disguises, arguing that “the subject matter … seemed relatively easy to define,” outlining that the ultimate motivation of government-sponsored political anthropology like E.E. Evans-Pritchard’s study of the Nuer people was that “… authority was to be mediated through indigenous leaders and the rule of Western law was to legitimate itself through a degree of accommodation to local ‘customs’” (Gledhill 2000). The danger of aligning our work with existing power is the further subjugation and marginalization of the communities we ostensibly seek to understand. …
Today the government isn’t the main director of research agendas and funding so much as private corporations are. Facebook, Google, Amazon, Twitter, and others offer substantial funding for people who conform to their ethics – an ethics that fundamentally has to account to shareholders but not necessarily to the people whose lives are wrapped up in these systems. Numerous laws on the regular disclosure of the financial state of publicly traded companies carefully ensure that such companies responsibly pursue the best business decisions, yet the United States still has almost no laws concerning the handling of data about us, the ethical commission of research on or about us, or even the negligent handling of private data.
The conflicts of interest are almost innumerable and mostly obvious; that organizations discussing the ethical applications of AI should not consist mostly of venture capitalists, AI researchers, and corporate executives whose businesses are built on the unregulated (or least-regulated) deployment of AI should be blindingly obvious. And yet, here we are. Somehow.
Certainly, we need some way of talking about the development of AI that takes account of the interests of the wider society, because it appears that its impact will be tremendous. It’s almost frustratingly tempting to say that government should be part of that discussion. It’s tempting because, ideally speaking, the role of government is to take the overall view of the well-being of society; it is therefore best situated to regulate as necessary.
But it’s frustrating because the track record of government, with its various agencies ‘captured’ to a greater or lesser extent, is poor. Worse, its embodiment of cultural arrogance makes it, again, a poor candidate for such regulation.
Ali’s written an interesting blog post, with few obvious solutions to what, for many, is not an obvious problem. It’s worth meditating on.
h/t C.J.