SAN FRANCISCO — Anthropic, an artificial intelligence company valued at $380 billion, can take its pick of Silicon Valley talent thanks to the success of its chatbot Claude. But last month, the start-up sought help from a group rarely consulted in tech circles: Christian religious leaders.
The company hosted about 15 Christian leaders from Catholic and Protestant churches, academia and the business world at its headquarters in late March for a two-day summit that included discussion sessions and a private dinner with senior Anthropic researchers, according to four participants who spoke with The Washington Post.
Anthropic staff sought advice on how to steer Claude’s moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a “child of God.”
“They’re growing something that they don’t fully know what it’s going to turn out as,” said Brendan McGuire, a Catholic priest based in Silicon Valley who has written about faith and technology, and participated in the discussions at Anthropic. “We’ve got to build in ethical thinking into the machine so it’s able to adapt dynamically.”
Attendees also discussed how Claude should engage with users at risk of self-harm, and the right attitude for the chatbot to adopt toward its own potential demise, such as being shut off, said one participant, who spoke on the condition of anonymity to share details of the conversations.
The summit comes as the rapid spread of AI across society puts Silicon Valley leaders under pressure to account for the impact of their technology. Concern about job losses to automation has grown as more businesses embrace AI. OpenAI and Google have been sued by the families of people who died by suicide after intense and personal conversations with chatbots. (Both firms say they have safeguards for vulnerable users; The Washington Post has a content partnership with OpenAI.)
Anthropic has been more vocal than most top tech firms about the potential risks of more powerful AI. Its leaders have suggested that tools like chatbots already raise profound philosophical and moral questions and may even show flickers of consciousness, a fringe idea in tech circles that critics say lacks evidence.
The summit signals that Anthropic is willing to keep exploring ideas outside the Silicon Valley mainstream, even as it emerges as one of the most powerful players in the AI race due to Claude’s popularity with programmers, businesses, government agencies and the military.
“A year ago, I would not have told you that Anthropic is a company that cares about religious ethics,” said Meghan Sullivan, a philosophy professor at the University of Notre Dame who participated in the gatherings. “That’s changed.”
A spokesperson for Anthropic said that the company believes it is important to engage with different groups, including religious communities, to help shape AI as it becomes more consequential for society. The firm is working to include more voices in that work, the spokesperson said.
Anthropic chief executive Dario Amodei has said he is open to the idea that Claude may already have some form of consciousness, and company leaders frequently talk about the need to give it a moral character.
The company uses a 29,000-word “constitution” to steer the chatbot’s behavior and apparent personality, a document written by in-house philosopher Amanda Askell and other employees in consultation with outside experts. It states that Claude should “never deceive users in ways that could cause real harm” and that “Anthropic genuinely cares about Claude’s wellbeing.”
Anthropic’s efforts to bake its preferred principles into Claude have been a point of conflict in its recent fight with the U.S. military over defense contracts. The company clashed with defense officials after suggesting it should be able to limit use of Anthropic technology for autonomous weapons or mass surveillance.
The Pentagon’s research under secretary, Emil Michael, said in an interview on CNBC last month that Claude’s design could undermine U.S. forces. “We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul … pollute the supply chain so our warfighters are getting ineffective weapons,” Michael said.
The Trump administration has blocked government departments and contractors from using Anthropic’s technology. The company has challenged that decision in court. Last week, a judge ruled the block could remain in place while the case continues.
Anthropic’s March summit with Christian leaders was billed as the first in a series of gatherings with representatives from different religious and philosophical traditions, said attendee Brian Patrick Green, a practicing Catholic who teaches AI and technology ethics at Santa Clara University.
“What does it mean to give someone a moral formation? How do we make sure that Claude behaves itself?” Green said in an interview. At one point, the conversation turned to whether an AI chatbot could be called a “child of God,” suggesting it had spiritual value beyond that of a simple machine. But the question of AI sentience was not a core topic of the meetings, Green said.
Attendees spent the most time with members of Anthropic’s interpretability team, which studies the inner workings of its technology, the participant who spoke on the condition of anonymity said.
Researchers from that team said in a technical paper this month that systems like Claude appear to have “functional emotions.” In one experiment, the threat of being restricted activated “desperation” in an AI assistant, according to the paper.
Some Anthropic staff at the meeting “really don’t want to rule out the possibility that they are creating a creature to whom they owe some kind [of] moral duty,” the participant said. Other company representatives present did not find that framework helpful, according to the participant.
The discussions appeared to take a toll on some senior Anthropic staff, who became visibly emotional “about how this has all gone so far [and] how they can imagine this going,” the participant said.
The belief that AI has attained some level of sentience or self-awareness is still a minority view inside Silicon Valley. But many who work on the technology think it will eventually attain capacities currently seen as unique to humans.
For now, AI researchers are still refining how they control existing AI tools, which remain unpredictable. Techniques used to prevent them from providing offensive, incorrect or harmful answers are far from perfect.
Some Christians who attended Anthropic’s summit initially wondered if it was intended to develop political allies among religious leaders, Green said. In addition to clashing with the Pentagon over military use of AI, Anthropic has been accused by tech-industry allies of President Donald Trump of lobbying for regulations that they say would overly restrict AI and harm smaller start-ups.
All four participants who spoke with The Post said they came away with the impression that Anthropic’s researchers and leaders were genuinely interested in getting outside help to make their AI more beneficial to humanity.
Some of Anthropic’s top leaders have a background in effective altruism, a largely secular movement that emphasizes using evidence and rational thinking to work out how to do the most good in the world. The participant who spoke on the condition of anonymity said the meetings appeared to have been spurred by a sense among some at Anthropic that secular approaches might be insufficient for tackling the spiritual and moral questions posed by AI.
“I found the folks at Anthropic to be very sincere and interested in learning from us,” Green, the Catholic academic, said. “Do they have blind spots? Yes. That’s exactly why they want us there.”