
By Brian Grim, Ph.D.
In recent conversations with leaders from a major international business who were exploring how to support employee faith groups, I was asked a practical question: what kinds of topics have other companies addressed in this space? I mentioned that I had worked with the Thomson Reuters Interfaith Employee Network on a program about faith and artificial intelligence, “From Values to Value: Interfaith Leadership in the Age of AI.” That answer sparked immediate curiosity, but also some confusion. “What could faith have to say about AI?” they asked. Even among people already interested in faith, the connection was not obvious.
I began with the most tangible concerns. Artificial intelligence is already shaping how religion is experienced and understood in society. In some contexts, such as China, AI-driven surveillance technologies are used to monitor and exert control over ethnic and religious minorities, including Uyghur Muslims in Xinjiang. Elsewhere, generative AI systems produce content that demeans or distorts religious beliefs, reinforcing stereotypes, spreading misinformation, or presenting inaccurate portrayals of religious communities. These developments raise serious questions about religious freedom, human dignity, and the integrity of belief in a digital age.
But as the conversation unfolded, it became clear that these issues, while important, were not what most intrigued these leaders. What captured their attention was something more fundamental: not simply how AI affects religion, but what AI reveals about what it means to be human in the first place.
This question is not new. More than two decades ago, theologian and computer scientist Noreen Herzfeld explored it in her work on artificial intelligence and the image of God. Long before today’s AI boom, she argued in 2002 that the way we design intelligent machines reflects our assumptions about human nature. Her insight is strikingly relevant today, particularly for business leaders navigating the rapid integration of AI into the workplace.
Herzfeld describes three primary ways of understanding what it means to be human.
- The first sees humanity in terms of properties we possess, especially intelligence or reason.
- The second defines us by what we do: our functions, capabilities, and productivity.
- The third understands humanity as fundamentally relational, grounded in our capacity to form meaningful relationships with others.
These frameworks are not merely philosophical distinctions; they shape how we approach AI itself.
Modern business culture has overwhelmingly embraced the second view. We define people by their roles, their output, and their measurable contributions. It is no accident that one of the first questions we ask upon meeting someone is, “What do you do?” This functional understanding of human identity aligns closely with how organizations deploy AI: to improve efficiency, automate tasks, and optimize performance. In this paradigm, AI is a powerful tool precisely because it can perform functions once reserved for humans, often faster, cheaper, and at greater scale.
Yet Herzfeld warns that this functional definition carries significant implications. If human beings are defined primarily by what they do, then machines that can do those things more effectively begin to look less like tools and more like replacements. The anxiety surrounding AI — whether about job displacement or longer-term existential concerns — flows naturally from this assumption. In a purely functional framework, the question is not whether machines will compete with humans, but how long it will take before they surpass them.
Faith traditions offer a different perspective. Across them runs a consistent insistence that human beings cannot be reduced to intelligence or productivity. Instead, they are understood as relational beings, defined by their capacity for connection, responsibility, and moral agency. Herzfeld draws particularly on this relational understanding, arguing that it provides a richer account of human dignity, one that cannot be replicated by machines. Even within the field of artificial intelligence, this insight emerges in unexpected ways. The Turing Test, long considered a benchmark for machine intelligence, evaluates not computational accuracy but the ability to engage in meaningful conversation. In other words, it measures relational capacity rather than mere functional performance.
This distinction matters deeply for business leaders. AI systems do not simply execute tasks; they embody assumptions about what counts as valuable, meaningful, and human. When those assumptions are limited to efficiency and output, the risks are significant. Systems may amplify bias, misrepresent cultural and religious identities, or be deployed in ways that erode trust and dignity. These are not merely technical failures; they are failures of understanding what human beings are.
Recognizing this, a growing number of voices from faith communities are engaging directly with the development of AI. A recent multi-faith initiative, bringing together Jewish, Christian, and Muslim leaders, issued a statement calling for “moral guardrails” in artificial intelligence, emphasizing that what is at stake includes “the sanctity of human life, the right to privacy and dignity, and the freedom of conscience.” Their concern is not abstract. It reflects a recognition that the ethical boundaries shaping AI today will define the kind of society we inhabit tomorrow.
For organizations, this has practical implications. A workplace that is open to faith perspectives is not simply more inclusive; it is better equipped to grapple with the deeper questions AI raises. Religious traditions bring long-standing reflections on human identity, responsibility, and the limits of human creation, resources that are often missing from purely technical or economic discussions. Without these perspectives, companies risk operating with an incomplete understanding of the very people their technologies are meant to serve.
Ultimately, the question facing business leaders is not only how to use AI, but what vision of humanity their use of AI reflects. Herzfeld frames this as a choice between building systems that replace human beings and those that support and enhance human relationships. That choice is already being made, often implicitly, in decisions about how AI is designed, deployed, and governed.
The conversations I had with those business leaders began with uncertainty about what faith might contribute to discussions about AI. They ended with a clearer recognition that faith does not merely offer commentary on technology; it speaks to the foundational question technology now forces us to confront. As AI continues to expand what machines can do, it simultaneously presses us to reconsider what humans are—and what they are for.
That is why this moment calls for more than reflection; it calls for engagement. Business leaders, technologists, and policymakers alike should take seriously the emerging efforts to articulate ethical boundaries for AI grounded in human dignity. One practical step is to read the Moral Guardrails in Artificial Intelligence statement issued by the Faith Family Technology Network, consider its principles, and determine whether to add your voice.
The Moral Guardrails statement emerges amid a high-profile standoff between the AI company Anthropic and the U.S. Department of War, in which Anthropic has refused to remove safeguards that bar uses such as autonomous lethal weapons and mass surveillance. The standoff highlights a growing tension among technological capability, national security demands, and moral responsibility.
At a time when the trajectory of AI is still being shaped, silence is itself a decision.
Reference: Herzfeld, Noreen. 2002. “Creating in Our Own Image: Artificial Intelligence and the Image of God.” Zygon: Journal of Religion and Science 37(2): 303–316.

