OpenAI, Anthropic join Faith-AI Covenant talks

OpenAI and Anthropic discussed how faith leaders could shape AI ethics via a new global covenant, amid debate over whether it distracts from regulation.
AI’s surge into daily life is triggering an unusual kind of outreach: tech leaders are seeking guidance from religious authorities on how to embed ethics into systems that are moving fast and influencing society.
In New York, Misryoum reports that representatives from OpenAI and Anthropic met faith leaders for the inaugural “Faith-AI Covenant” roundtable. The discussions focused on how morality and ethical safeguards could be reflected in developing technology, with the initiative framed as a response to concerns that regulation may not keep pace with AI’s rapid deployment.
This matters for markets and policy alike because “ethics” has become a competitive and reputational battleground in AI. When companies align with influential community voices, it can shape public trust, risk perception, and the direction of future product decisions.
The roundtable was organized by the Interfaith Alliance for Safer Communities, which also aims to address broader social threats such as extremism, radicalization, and human trafficking. According to Baroness Joanna Shields, a key partner in the initiative, the underlying idea is that faith leaders possess established experience in guiding communities through questions of moral safety. Shields argued that ongoing dialogue between builders and moral authorities is more useful than relying on later-stage fixes.
A key ambition is to develop a set of norms or principles that reflect multiple traditions, from Christian to Sikh and Buddhist communities, among others. Misryoum reports that the initiative is expected to expand beyond New York, with plans for additional roundtables in other major cities worldwide.
However, turning shared values into workable AI guidelines is inherently complex. Different faith communities prioritize different issues, meaning that creating broad common ground may be achievable at the level of principles, while implementation could vary widely.
The meeting also drew attention to how some religious groups have already issued their own guidance on AI. Misryoum notes that the Church of Jesus Christ of Latter-day Saints has published qualified approval of AI in its handbook, describing it as a tool that can support learning and teaching rather than replace inspiration. Meanwhile, other groups have pushed for proactive engagement with emerging technologies instead of responding only after harm occurs.
Still, not everyone views the Faith-AI Covenant approach as straightforward progress. Critics question whether these efforts are substantive or simply a public relations strategy, warning that the conversation about “moral AI” could divert attention from more urgent debates about AI’s role in society and the need for stronger oversight.
At the same time, the presence of skeptics highlights a broader market reality: ethical positioning alone may not satisfy regulators, consumers, or internal governance teams unless it is translated into clear, testable controls. If Misryoum’s reporting reflects a wider shift, the real signal to watch is whether these discussions lead to measurable changes in how AI systems are designed, restricted, and monitored.
Misryoum’s bottom line: the Faith-AI Covenant represents a bid to add moral guardrails to AI from the outside-in, but the impact will ultimately depend on whether those principles can survive the translation from dialogue into operational decisions.