Last January, Backchannel published an article relevant to this thread concerning Google's establishment of an AI (Artificial Intelligence) ethics board. One of the leadoff paragraphs links to a number of associated organizations:
Earlier this month, the MIT Media Lab joined with the Harvard Berkman Klein Center for Internet & Society to anchor a $27 million Ethics and Governance of Artificial Intelligence initiative. The fund joins a growing array of AI ethics initiatives crisscrossing the corporate world and academia. In July 2016, leading AI researchers discussed the technologies’ social and economic implications at the AI Now symposium in New York City. And in September, a group of academic and industry researchers organized under the One Hundred Year Study on Artificial Intelligence — an ongoing project hosted by Stanford University — released its first report describing how AI technologies could impact life in a North American city by the year 2030.
And I’ll get back to this. The Backchannel article also covers the coalition Google helped found:
Perhaps the most significant new project, however, is a Silicon Valley coalition that also launched in September. Amazon, Google, Facebook, IBM, and Microsoft jointly announced they were forming the Partnership on AI: a nonprofit organization dedicated to matters such as the trustworthiness and reliability of AI technologies. Today, the Partnership announced that Apple is joining the coalition, and that its first official board meeting will be held on February 3, in San Francisco.
Think of this group as a United Nations-like forum for companies developing AI — a place for self-interested parties to seek common ground on issues that could do great good or great harm to all of humanity. …
The real issue — though it doesn’t have the same ring as “killer robots” — is the question of corporate transparency. When the bottom line beckons, who will lobby on behalf of the human good?
That should be the responsibility of the governments in question. After all, corporations are ill-equipped for such concerns; even the good corporations have a preoccupation with corporate survival, not societal survival, which is the explicit concern of government. Of course, given the clownish attributes of the current government, I’m not certain I want them holding those reins.
There aren’t many more relevant facts about the Google effort, as it seems to be a secret undertaking (or was in January).
One of the links referenced above is to an effort by venerable MIT to explore the topic.
The Media Lab and the Berkman Klein Center for Internet and Society will leverage a network of faculty, fellows, staff, and affiliates who will collaborate on unbiased, sustained, evidence-based, solution-oriented work that cuts across disciplines and sectors. This research will include questions that address society’s ethical expectations of AI, using machine learning to learn ethical and legal norms from data, and using data-driven techniques to quantify the potential impact of AI, for example, on the labor market. …
“Artificial Intelligence provides the potential for deeply personalized learning experiences for people of all ages and stages,” says [Cynthia] Breazeal, who emphasizes the need for AI to reach people in developing nations and underserved populations. But she adds that it is also “a kind of double-edged sword. What should it be learning and adapting to benefit you? And what should it do to protect your privacy and your security?”
It’s an introductory document, not really meant for analysis. However, why let that stop me? There appears to be an assumption that the Artificial Intelligences of the future will be of what I’ll call the non-autonomous variety, by which I mean they will not be making decisions about their own tasks, futures, desires, and fates, but will instead be exceptionally advanced hammers in our hands. That’s a worthy limitation, but it rather avoids the ultimate suite of questions, doesn’t it? Namely: if we’re in a position to give birth to an entirely different sentient species, do we have responsibilities associated with that event and what comes afterward, or are they more like a brand new batch of … slaves?