
Hollywood actor Diedrich Bader, known for his comedy roles, says, “I always have my kids say ‘thank you’ to Alexa so that hopefully in the future our robot overlords will remember their civility.” Jokes aside, pondering the many moral implications of a more robotic future might be worthwhile. Societal choices will largely determine whether artificial intelligence is ultimately beneficial.

AI is everywhere. Today, there are virtual assistant technologies like Alexa, internet platforms that micro-target users, computer programs that play chess at superhuman levels, and data-mining programs that track personal purchases to produce "pregnancy-prediction scores."

Artificial intelligence systems that learn on their own are advancing medicine and science in miraculous ways. Such systems can identify skin cancer or eye disease. AI is even being used to read brain signals: electrodes transmit those signals to a type of artificial intelligence that recognizes words, helping stroke victims speak. AI can also be used to analyze climate change data and create models that anticipate natural disasters.

Not all uses of artificial intelligence are so benign, however. In fact, AI is leading to real-life instances of the "trolley problem"—the classic ethics thought experiment in which an onlooker can save five people who are about to be hit by a trolley by diverting it to kill just one person. Dilemmas like these are prompting everyone from driverless car manufacturers to makers of smart home devices to search for ethical principles that cover such emerging situations. Likewise, incorporating AI into human bodies raises complex questions about what happens if the body becomes 80 percent machine and 20 percent human.

Making moral decisions is not yet a high-tech experience. But imagine taking the Bible, the papal encyclicals, and the writings of the Early Church Fathers as algorithmic input—and boom, artificial intelligence can do the rest. Or can it?

Machines are soulless, and so far AI can’t transform memories into useful plans. But can AI develop a moral compass to drive each decision? Can AI ever replace the morals and sense of conscience that drive human decision-making?

Call for AI Ethics

These concerns are shared by the Vatican, and Pope Francis has been weighing in. The Pontifical Academy for Life signed the “Rome Call for AI Ethics” in February 2020, just as the pandemic hit. The document calls for an ethical approach to artificial intelligence; the signatories hope to promote a sense of responsibility among organizations, governments, and the private sector. Its key idea is that technological progress should serve humanity rather than replace it.

So far IBM, Microsoft, the Food and Agriculture Organization of the United Nations, and the Italian Ministry of Innovation have signed on, and more signatories are expected. On April 14, the U.S. Department of Commerce appointed members to a National AI Advisory Committee to advise President Joe Biden and the National AI Initiative Office. The panel may advise the president about adopting the Rome Call.

The endorsers of the Call for AI Ethics agree to an “algorethics” that does not have as its sole goal the creation of greater profit or the replacement of people in the workplace. Rather, in the words of the Call: “The development of AI in the service of humankind and the planet must be reflected in regulations and principles that protect people – particularly the weak and the underprivileged – and natural environments.”

Going far beyond the three laws of robotics introduced in Isaac Asimov’s 1942 short story “Runaround,” the Rome Call for AI Ethics contains six basic principles:

  • Transparency: AI systems must be understandable to all.
  • Inclusion: These systems must not discriminate against anyone because every human being has equal dignity.
  • Accountability: There must always be someone who takes responsibility for what a machine does.
  • Impartiality: AI systems must not follow or create biases.
  • Reliability: AI must be reliable.
  • Security and Privacy: These systems must be secure and respect the privacy of users.

In February Archbishop Vincenzo Paglia, president of the Pontifical Academy for Life, met again with Microsoft President Brad Smith to reaffirm support for the Rome Call for AI Ethics and to reinvigorate its advancement.

US-focused Catholic organizations are already working in similar areas. A priest-led group called Optic studies AI and offers consultations to tech leaders in the US. Created in 2012 by the Dominican Order, Optic describes itself as “a research and action network that prioritizes human values in the development of new technologies.” The Notre Dame-IBM Technology Ethics Lab, which promotes human values in technology, is also focused on the ethics of automation.

In his video message on his prayer intention for November 2020, Pope Francis said that technological progress “can make a better world possible if it is joined to the common good.” He warned, however, that it has the potential to increase inequalities in society if it is misused. In a humorous play on words, his prayer intention asks “that the progress of robotics and artificial intelligence may always serve humankind… we could say, may it ‘be human.’”


Image: Sapienza University signs the Rome Call for AI Ethics. Photo from Flickr, CC BY-NC-SA 2.0




Kathleen Murphy is a journalist who has worked for CQ Roll Call, Stateline.org and Internet World. She was the inaugural journalist in residence at Marymount University. Early in her career she was Marco Island bureau chief and columnist for the Naples (Fla.) Daily News. Murphy earned a B.S. from Northwestern University’s Medill School of Journalism and a master’s degree from Yale Divinity School.
