Does artificial intelligence deserve the same ethical protections we give to animals?

In the HBO show Westworld, robots designed to display emotion, feel pain, and die like humans populate a sprawling western-style theme park for wealthy guests who pay to act out their fantasies. As the show progresses, and the robots learn more about the world in which they live, they begin to realize that they are the playthings of the person who programmed them.

Viewers might conclude that humans need to afford robots with such sophisticated artificial intelligence—such as those in Westworld—the same ethical protections we afford each other. But Westworld is a fictional TV show, and robots with the cognitive sophistication of humans don't exist.

Yet advances in artificial intelligence by universities and technology companies mean that we're closer than ever to creating machines that are "approximately as cognitively sophisticated as mice or dogs," says John Basl, an assistant professor of philosophy at Northeastern University. He argues that these machines deserve the same ethical protections we give to animals involved in research.

"The nightmare scenario is that we create a machine mind, and without knowing, do something to it that's painful," Basl says. "We create a conscious being and then cause it to suffer."

Animal care and use committees carefully scrutinize scientific research to ensure that animals are not made to suffer unduly, and the standards are even higher for research that involves human stem cells, Basl says.

As scientists and engineers get closer to creating artificially intelligent machines that are conscious, the scientific community needs to build a similar framework to protect these intelligent machines from suffering and pain, too, Basl says.

"Usually we wait until we have an ethical catastrophe, and then create rules afterward to prevent it from happening again," Basl says. "We're saying we need to start thinking about this now, before we have a catastrophe."

Basl and his colleague at the University of California, Riverside, propose the creation of oversight committees—composed of cognitive scientists, artificial intelligence designers, philosophers, and ethicists—to carefully evaluate research involving artificial intelligence. And they say it's likely that such committees will judge all current research permissible.

But a philosophical question lies at the heart of all this: How will we know when we've created a machine capable of experiencing joy and suffering, especially if that machine can't communicate those feelings to us?

There's no easy answer to this question, Basl says, in part because scientists don't agree on what consciousness actually is.

Some people have a "liberal" view of consciousness, Basl says. They believe all that's required for consciousness to exist is "well-organized information processing," and a means by which to pay attention and plan for the long term. People who have more "conservative" views, he says, require robots to have specific biological features, such as a brain similar to that of a mammal.

At this point, Basl says, it's not clear which view might prove to be correct, or whether there's another way to define consciousness that we haven't considered yet. But if we use the more liberal definition, scientists might soon be able to create machines that can feel pain and suffering, and that deserve ethical protections, he says.

"We could be very far away from creating a conscious AI, or we could be could be close," Basl says. "We should be prepared in case we're close."
