
Faith, Ethics and Technology


What Faith & Belief have to say about Artificial Intelligence

by Kent Johnson

Today, business leaders are directing programmers and technologists to build Artificial Intelligence systems with global ethical impact. “AI” is enabling enormous medical, economic and social advances – and also enabling some alarming incivility and intolerance. Its impact on society is huge and expanding rapidly. The problem is that AI itself lacks the capacity for moral or spiritual discernment. We need perspectives of faith and belief in the rooms where AI decisions are being made.

Like all transformative technologies, AI capabilities originally intended for good can be diverted to serve destructive purposes. Computer algorithms designed to relentlessly improve efficiency and short-term profitability, if not also informed by ethical considerations, can lead to moral, ecological and social disaster. Some AI has been designed to steer popular search engines so that sensational and divisive content is amplified, in order to increase “hits” and advertising revenue at the expense of civility. AI is increasingly being leveraged to “automatically” monitor, target, investigate, censor and even punish people who make presumably undesirable statements, including expressions deemed by those in power to be “hate speech” or “fake news.” In some countries, government officials use AI to monitor and control publicly accessible discourse by broad categories of people whom they presume to be “suspect,” without any avenue for appeal. AI has been commandeered to enable hacking, theft and fraud, and to facilitate hate crimes. The threat to freedom and civility is real.

In an effort to advance thinking worldwide on how to navigate the emerging world of AI, the Religious Freedom & Business Foundation worked with David Brenner of AI and Faith to bring several highly qualified experts on the topic of AI and faith to speak at its second Faith@Work ERG Conference in February 2021. Our purpose in the conference (and on this webpage) is not to advocate any particular faith’s perspective on the ethical ramifications of AI, but rather to draw attention to the work already underway to connect faith to work in this crucial arena, and to encourage leaders throughout commerce to purposefully and systematically seek out and thoughtfully consider the perspectives of religiously diverse people on the ethical implications of AI.

Below are a few glimpses of the experts’ remarks, which you can watch in full in the videos on this page.


At the conference, Rear Admiral Margaret Grün Kibben, Chaplain of the US House of Representatives and a founding member of the influential group AI and Faith, set the stage and led an incisive discussion on the need to consult faith perspectives on AI. Speakers and participants included:

Brian Green, Director of Technology Ethics at the Markkula Center for Applied Ethics at Santa Clara University and a founding member of AI and Faith, described the broad conversation occurring around the country and the world concerning AI ethics, and the importance of people of faith and faith institutions using their influence and deep understanding of values essential to beneficial uses of the technology, as the Vatican is illustrating.

Michael Paulus, PhD, Director and Associate Professor of Information Studies at Seattle Pacific University and a founding member of AI and Faith, noted that though mankind has been negotiating technology and ethics for generations, what we’re going through with AI now is profoundly different and extraordinarily transformational, in that AI changes how we think of ourselves. He spoke of efforts with the World Economic Forum to consider “Ethics by Design,” an organizational approach to responsible use of technology. He elaborated on shared end goals, congruence and divergence, cultural exegesis and practices to incentivize ethical behavior. Pointing out that digital devices can prompt proactive and reactive reflection, he advocated the encouragement of personal mission statements and ethics audits. And he concluded that faith perspectives are needed in the dialogue about “What is the future we want to shape?”

Patricia Shaw, Esq., CEO and Founder of Beyond Reach Consulting Ltd, which supports organizations in the delivery of ‘ethics by design’ across the AI lifecycle, spoke about the need for ethics advisory boards, consisting of stakeholders across the reporting chain, in order to understand AI’s cultural impact. She cited examples of the disproportionately large effects that can flow from seemingly innocuous AI decisions.

Frank Torres, Director of Public Policy in Microsoft’s Office of Responsible AI, also weighed in on the question “How do you create ethical AI?” He described how it’s possible to have alignment among faith leaders, business leaders and others on a principled approach to AI. He spoke of the Aether Committee, a Microsoft group consisting of a diverse spectrum of employees, that considers questions like: Is it helpful to mankind to pursue particular research, or to create a particular product? He spoke of an internal governance structure, with people embedded in teams able to answer questions about the possible impact of different AI implementations. He affirmed the need to engage with faith communities and give them a voice in decisions.

Michael Quinn, PhD, Dean of the College of Science and Engineering at Seattle University, Director of Seattle University’s Initiative in Ethics and Transformative Technologies and author of the seminal book Ethics for the Information Age, led a workshop on real-life situations where faith-based values should be translated into ethical propositions, and made a strong case for people of faith to engage in the architecting of AI.

Cory Andrew Labrecque, PhD, Director of the Master of Arts in Bioethics Program at the Center for Ethics at Université Laval, and also a founding member of AI and Faith, related an experience with the Pontifical Academy for Life, which hosted the President of Microsoft, an IBM SVP and many other influential technologists and ethicists in a careful analysis of AI ethics.

Nicoleta Acatrinei, PhD, an economist and Project Manager at Princeton University’s Faith and Work Initiative and also a founding member of AI and Faith, spoke about how an individual can bring his or her faith to work to impact AI. She focused especially on how individuals’ relationships with God relate to their relationship with technology; and how AI can be a complement or a substitute. She raised the probing question: “Will people be replaced by algorithms?”

Paul Taylor, a founding member of AI and Faith, a teaching pastor and elder at Peninsula Bible Church in Palo Alto who holds an industrial engineering degree from Stanford and blogs on tech and theology, gave an encouraging report of faith being applied in the companies that dominate social media. He offered a balanced perspective on AI, neither fearful of it nor overly enamored with its possibilities, and he encouraged people of faith to engage in social media to bless the world.

Yaqub Chaudhary, PhD, a founding member of AI and Faith and until recently a Research Fellow in AI, Philosophy and Theology at Cambridge Muslim College, who has researched AI, cognitive science and neuroscience in connection with Islamic thought, said that sacred writings have much to say about the ethics of AI. Noting that the Koran teaches that work is an act of worship, he lamented the absence of an overriding moral context in companies where short-term profitability is the predominant value. As a case in point, he cited scandals like the VW emissions debacle, which persisted over nine years and involved scores of workers and executives. He said that Islam teaches that one should consider the unintended consequences of decisions, and that this principle applies to technology’s potential use for destructive purposes. He lamented the use of search tools and algorithms to target “suspect” groups and to create and strengthen echo chambers that may be profitable for some, but that reinforce deep and unwarranted divisions of distrust and hatred in society. He called for companies to pursue moral vision and ethical frameworks derived from core underlying principles, and proposed specifically that Muslims are uniquely situated to speak to ethics.

Deborah Rundlett, DMin, founder of Poets & Prophets, a global community of change leaders, spoke of The Pivot Project, in which IBM leaders and engineers wrestled with how to come out of Covid with a better world, and how faith and belief were part of that. She described how people of diverse religious traditions grappled with interdependency as they sought to nurture moral thought and imagination in pursuit of human flourishing. In this context she spoke of systems thinking, ways to foster personal agency for social responsibility, and trust, nurtured by storytelling, wisdom and spirituality.

Zahra Jamal, PhD, Associate Director of the Boniuk Institute at Rice University, described how algorithms are being used to target people of various faiths and to enable and exacerbate hate crimes against them. She called for dialogue among people of faith, drawing from their sacred writings, in efforts to devise ways to guard against such abuses.

We at RFBF hope these eminent speakers’ reflections will influence companies to think more deeply about the ramifications of their use of AI, and to purposefully and systematically enlist the perspectives of their religiously diverse workforces.

We stand at a turning point for humanity. People’s faith and core beliefs carry much wisdom to help navigate the world of Artificial Intelligence. We should tap into that wisdom.