In this article, I summarize the memo delivered on March 23, 2026, by the Faith Family Technology Network (FFTN) to the Department of Justice’s Religious Liberty Commission in preparation for the 250th anniversary of America’s founding. FFTN, a network of over 130 experts across Judaism, Christianity, Islam, Hinduism, and Buddhism, offered this memo on behalf of the communities whose freedoms the Commission exists to protect.
The article “Religious Freedom in the Age of Artificial Intelligence: FFTN’s Memo to the Religious Liberty Commission” presents a compelling argument that artificial intelligence is not simply a technological development but a cultural force reshaping the conditions under which religious freedom is exercised. As AI becomes more embedded in communication systems, governance, and everyday decision-making, it introduces both opportunities and risks that demand careful ethical and legal consideration. The memo stresses that society is at a pivotal moment: decisions made about AI now will directly influence whether religious liberty is strengthened or diminished in the years ahead.
A central claim of the article is that AI systems are not neutral tools. They are designed by humans and reflect the values, assumptions, and biases of their creators. As the memo explains, “AI systems increasingly act as gatekeepers of speech and access,” which means they play a powerful role in determining which religious perspectives are seen, shared, or suppressed. This is particularly important in a digital age where much religious expression takes place online. Worship services, theological discussions, and community engagement are now frequently mediated through platforms that rely on AI-driven algorithms. While this can expand access and create new opportunities for connection, it also places significant control in the hands of technology companies.
The risks associated with this shift are substantial. One major concern highlighted in the article is the potential for algorithmic bias. AI systems may misinterpret religious language or practices, especially those from minority traditions, leading to content being wrongly flagged or removed. The memo warns that “without proper safeguards, AI could inadvertently discriminate against religious viewpoints,” creating an uneven playing field in the digital public square. This raises serious questions about fairness and the protection of diverse beliefs in an increasingly automated environment.
Another critical issue discussed in the article is surveillance. AI technologies have the capacity to collect and analyze vast amounts of personal data, including information about individuals’ beliefs and religious practices. The memo notes that “the ability of AI to track and infer deeply personal convictions presents new challenges for protecting the freedom of conscience.” In contexts where governments or corporations misuse this data, the consequences could include discrimination, coercion, or even persecution. This represents a direct threat to one of the core principles of religious freedom, which is the right to hold and express beliefs without fear.
The article also explores the deeper philosophical implications of AI in religious life. While AI can generate religious content or simulate spiritual conversations, it cannot replicate the lived and relational nature of faith. The memo emphasizes that religious freedom is not only about access to information but about meaningful human experience. If AI begins to mediate or replace aspects of that experience, it could subtly alter how people understand and practice their beliefs. This highlights the need for caution in adopting AI tools within religious contexts.
Beyond these concerns, the article calls for proactive engagement. It argues that policymakers, technologists, and religious communities must work together to ensure that AI systems respect fundamental rights. As the memo states, “the development and deployment of AI must be guided by principles that uphold human dignity and freedom.” This includes greater transparency in how algorithms function, as well as accountability for their outcomes. It also requires the inclusion of diverse voices, particularly those from religious communities, in shaping the future of AI.
Business Implications
This discussion has significant implications for the business world. Companies are at the forefront of AI development and deployment, which places them in a position of great responsibility. Businesses that design or use AI systems are not only making technical decisions but also ethical ones. The memo’s concerns about bias, censorship, and surveillance are directly relevant to corporate practices. If companies fail to consider religious freedom, they risk alienating customers, damaging their reputations, and contributing to broader social harm.
At the same time, there are opportunities for businesses to lead in this area. By adopting ethical AI frameworks that respect religious diversity, companies can build trust and demonstrate social responsibility. This might include auditing algorithms for bias, ensuring that content moderation policies are fair and transparent, and protecting user data from misuse. Businesses can also play a positive role by creating technologies that support religious expression rather than restrict it. In doing so, they not only comply with legal standards but also contribute to a more inclusive and respectful society.
In conclusion, the FFTN memo highlights that the rise of artificial intelligence is a defining moment for religious freedom. The choices made today about how AI is designed and governed will have lasting consequences. While AI offers powerful tools for connection and innovation, it also introduces new risks that cannot be ignored. Protecting religious liberty in this context requires vigilance, collaboration, and a commitment to ethical principles. As the article makes clear, the future of freedom in the digital age will depend not on technology itself, but on how humanity chooses to shape and use it.