The fight between the Trump administration and the artificial intelligence company Anthropic over the ethical uses of the firm’s AI just officially got theological.
A group of 14 Catholic moral theologians, ethicists and philosophers has filed briefs in federal court supporting Anthropic's effort to limit certain military uses of its AI chatbot, Claude, citing concerns it could be used for mass domestic surveillance or to power autonomous weapons whose targets and firing decisions are made by AI, not humans.
The filings come with the arrival of a new American pope who calls the ethical challenges of AI one of his top priorities. Pope Leo XIV picked a name to follow Pope Leo XIII, a pivotal Catholic figure in addressing social challenges of the Industrial Revolution. Days after Leo was elected last year, he called AI “another industrial revolution … that poses new challenges for the defense of human dignity, justice, and labor.” He is expected to release a major teaching on the subject this spring, and so far has warned priests not to use AI for sermons and called for media to label anything made by AI.
While humans have always debated the ethics of war and technology, the scholars say cutting-edge AI technology that pulls humans further from moral decisions creates a different set of questions.
“In order for a violent act to be justified under the conditions of a just war … a particular judgment by a human must be made,” the scholars wrote. Catholic tradition “has consistently emphasized that decisions affecting human life, freedom and dignity must remain the responsibility of human actors.”
Charlie Camosy, a moral theologian at Catholic University and one of the brief’s authors, told The Post that concepts such as fully autonomous, massive drone swarm attacks turn war “into something totally different, morally speaking. In fact, it isn’t clear that ‘war’ is even the right word for it given how different it is.”
Tensions between Anthropic and the Defense Department have been building this year, primarily over the government’s rejection of limits the AI firm has asked be imposed on the use of its technology. Disagreements have flared over issues including whether the Pentagon hypothetically could deploy Claude to shoot down a nuclear strike against the United States. U.S. officials cited Claude’s use in the capture of Venezuelan leader Nicolás Maduro as another flashpoint. The military is continuing to rely on Claude to help carry out the assault on Iran.
Anthropic was the first AI company to make a deal to work with the U.S. on classified military networks. The contract is for up to $200 million.
But the Defense Department this month essentially blacklisted Anthropic, forbidding military contractors from partnering with it. Anthropic responded by suing the administration, saying it had violated the company’s First Amendment rights to speak about its views of the limits of AI’s military applications.
The Catholic scholars filed briefs supporting Anthropic in that lawsuit, which was filed in federal courts in San Francisco and D.C. on March 9.
Secretary of Defense Pete Hegseth and President Donald Trump have said the government, not Anthropic, should be the sole decision-maker about the use of the company’s technology. In a late February post on X, Hegseth said the company had exhibited “arrogance and betrayal.”
“As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives,” Hegseth wrote. He called the company’s concerns “the sanctimonious rhetoric of ‘effective altruism.’”
The Pentagon has said it has never considered using autonomous weapons or mass domestic surveillance, but it is also not willing to prohibit them in its contract with Anthropic. Hegseth’s X post said his agency must have freedom to use tech for “every LAWFUL purpose.”
The Defense Department did not respond Wednesday to requests for comment about the theologians’ brief and how the agency views the ethical issues.
The Pentagon adopted ethical principles for AI in 2020 covering both combat and non-combat uses. Those principles said the department would exercise judgment and care in the use of AI; ensure AI did not develop unintended bias; and continuously test for safety and security.
The Catholic scholars noted in their brief that their position is not identical to Anthropic’s.
In a February statement, Anthropic CEO Dario Amodei wrote that there is a “narrow set of cases” when AI “can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.” Two of those, he wrote, were the use of mass surveillance for intelligence-gathering and fully autonomous weapons.
Anthropic responded to questions about the Catholic scholars’ brief Wednesday by pointing to Amodei’s statement.
The Catholic Church, the scholars wrote, opposes autonomous weapons “on principle,” and the Vatican last fall called for a global moratorium on autonomous weapons. A Vatican representative urged the U.N. Security Council to “recognize that certain applications, such as technology that replaces human judgment in matters of life and death, cross inviolable boundaries that must never be breached.”
On the topic of mass domestic surveillance, the Catholics’ brief cited their faith’s teachings on privacy as key to human relationships and dignity.
Human dignity views relationships “as a sacred space, and guards communications within those relationships,” they wrote.
“For the government (and especially the military) to intrude in this space, and use private communications for some other end, undermines the good of human relationships, and ultimately, the dignity of persons involved in those relationships.”
Privacy, they wrote, is not an “absolute right” in Catholic teaching. But mass surveillance by the Pentagon “clearly oversteps privacy as described in Catholic thought.” Totalitarian governments, they wrote, treat humans as objects and sources of data.
The scholars also cited Catholic teaching on “subsidiarity,” which is the idea that decisions should be made on the most local level, from individuals and families to neighborhoods and towns. Mass surveillance, they wrote, “concentrates the power to monitor and judge individuals in the hands of a remote central authority.”
In response to the clash between Anthropic and the Defense Department, a new interfaith group called Faith Family Technology Network also weighed in, issuing a statement calling the moment “a grave test.” Dozens of Muslim, Christian and Jewish leaders signed the statement, which said the administration violated the consciences of Anthropic leaders.
“The dispute before us, however, is not between our government and a rogue company using its market power to impose unreasonable conditions on our public representatives,” they wrote. “It is not even a dispute over a matter of deep moral ambiguity. At stake are questions of fundamental morality that transcend political and theological differences, and that should concern every believer and every person of good will.”