The Future of Smart Robots, Ctd

Anders Sandberg gets it.  That is, he understands that ethical questions arise from the attempt to create an artificial intelligence.  He writes in New Scientist (12 September 2015, paywall):

It is the third problem that really interests me. Would emulations feel pain? Do we have to care for them like we do for animals or humans involved in medical research?

Exactly.  If you achieve your goal – creating an artificial intelligence – is it ethical to deactivate the program and turn off the hardware at the end of the day?  Does the fact that we created that intelligence – depending on how you define “create,” as it’s very much a team enterprise – also give us the right to inflict pain upon it and end its existence?

The answer may technically be YES, but it would be a measure of our maturity and intelligence to realize that causing anguish to a living, thinking being – one that may feel and think on our level – is a moral hazard.  Anders agrees:

My suggestion is that it is better to be safe than sorry: assume that any emulated system could have the same mental properties as the organism or biological system it is based on, and treat it accordingly. If your simulation just produces neural noise, you have a good reason to assume there is nothing in there to care about. But if you make an emulated mouse that behaves like a real one, you should treat it like you would treat a lab mouse.

And then he continues on to even more interesting questions, ones that may be unique to emulated minds:

What about euthanasia? Living organisms die permanently, and death means the loss of their only chance at being alive. But an emulated brain could be restored from a backup: Lab Rat 1.0 would awake in the same way no matter how many copies had been tested in the past. The only thing lost when restoring it would be the memories of the previous experiment. There may still be pleasures and pains that count. In some ethical views, running a million supremely happy rat simulations in the background might be a “moral offset” for doing something painful to one.

Maybe.  But if this copy of the AI is aware of its imminent extinction, and that awareness causes it anguish, is that a problem?
