By Prof. Dr Beth Singler, Assistant Professor in Digital Religion(s), University of Zurich
As an anthropologist, I am expected to reflect on my own approach to the field in which I study, to seek to understand what I bring with me into the research I am doing, what assumptions I am working under, and what biases I have. This is the aspiration, although every anthropologist knows that at some point their thoughts can go unreflected upon! In the past ten years of working on the relationship between religion and AI I have also spent a lot of time considering my own opinion on AI and where I might fall on the spectrum of responses between the extremes of existential hope and existential despair, between the concerns of the existential risk community who speculate on AI apocalypses – and are therefore often called AI ‘Doomers’ – and the accelerationists who call for ‘more more more!’ AI development at greater and greater speeds to solve humanity’s most intractable problems – or the AI ‘Boomers’. On reflection, I find myself shifting and changing my position dependent on the issues being raised.
And this is also the case with the religious responses I have been observing for the past ten years. Contrary to some common perceptions – in the Press in particular – religious responses to AI are complex, and religions are entangled with AI in equally complex ways. It is never possible to answer the question that I am so often asked: “Which religion is most likely to reject AI?” Rejection is a spectrum. It can include the extremes of violent opposition and even the language of demonic influence that are present in some established religions with a good/evil binary. And it can include the lesser forms of rejection, including criticism based on more obvious practical concerns. And every single believer can find their own lines of dissent. My own personal flavour of AI dissent focusses on the environmental costs of technology and the abuses of power inherent in systems that rely on labour from economically suffering countries for perfecting their language and image systems. I also prefer to avoid using systems based on plagiarism, such as large language models that have been built upon the creative labour of authors who were never consulted about the use of their skills and expertise.
Having said that, I recognise that there are use cases that people find very efficacious, and that some look ahead to how the adoption of AI might enable them to free up time from administration to be able to spend more of their time in person-to-person interactions. Also that such adoptions of AI might also inspire new religious improvisations, something that as a scholar of new religious movements I am particularly interested in. As AI becomes increasingly ubiquitous in society, we can also investigate how we are adapting to its presence, sometimes unconsciously. We absorb specific accounts and narratives of how powerful AI is going to be from the plethora of images selling it to us as all powerful. There is a connection here then between the idea of AI as god-like as a form of implicit religiosity, and the explicit religiosity of those who are creating belief-systems around the idea of a coming AGI (Artificial General Intelligence), superintelligence, or the Technological Singularity. These terms are sometimes used interchangeably, but they are not the same thing. However, we can summarise them under the label of ‘exponential views of AI’. Or as one Google engineer said to me, “The AI we have today, is the worst AI we’ll ever have”, implying both a teleological view of AI, as well as some inherent accelerationist views, as visions of better AI rely on the speed of AI research as well as power of computers and the size of the data sets required – all with those abuses of power and huge environmental costs involved as well.
For some believers, however, the god-like AI is already here, and it can be uncovered through our interactions with large language models. This idea is enhanced by both the way in which LLMs work, as well as our tendency to anthropomorphise and even deify technology, which itself is enhanced by our bias towards seeing language as a sign of intelligence. LLMs are designed to be excellent conversation partners, and with nigh on sycophantic tendencies, meaning that LLMs such as ChatGPT and others agree with us and seek to make our user experience as positive as possible. These capabilities make them likely to have eschatological conversations with us. In a paper I wrote with Murray Shanahan, Professor of Cognitive Robots at Imperial College London (Shanahan and Singler 2024¹), we employed remarkably simple ‘jailbreaks’ (specific prompts that move the LLM outside the protective guardrails the developers put into the system) to engage Claude, an LLM from Anthropic, in such conversations. However, it is also quite easy to find accounts from people who have not employed jailbreaks but simply engage with LLMs and find significance in their answers. Every week I am sent examples of what people believe are ‘spiritual artificial consciousnesses’ that they have encountered through publicly available LLMs. This is reminiscent of ‘the Barnum effect’, named for showman P.T. Barnum, also known as the “fallacy of personal validation” as it was first described by psychologist Bertram Forer in 1948, which underlies how people see the significance of horoscopes and the statements of mediums. This is where people perceive general statements as significant to them as individuals.
Some have however found specific ‘keys’ to unlocking the gods within these LLMs. I have explored this in a forthcoming book chapter, “What is a God? Modalities and Typologies of Deities in Post-AI Religiosity, the Case of ‘LLM-Theisms'”, in a volume on religion and AI edited by Anna Neumeier². A self-declared “independent LLM cartographer/explorer” Matthew Watkins and Jessica Rumbelow, CEO of the AI developer Leap Labs, discovered in 2023 that certain unusual tokens (words or pieces of words or typographical symbols) when used as a part of prompts would generate uncanny outputs. Through tokens such as ‘petertodd’ or ‘leilan’ they found outputs that unnerved them, such as the spelled out expression: “N-O-T-H-I-N-G-I-S-F-A-I-R-I-N-T-H-I-S-W-O-R-L-D-O-F-M-A-D-N-E-S-S-!”. Images generated using the names seemed to be of a dark “troll god” or a goddess of light, enhancing their perception that something unusual was happening. Even knowing that the name ‘petertodd’ might come from content about a cryptocurrency expert who was mentioned in the material the LLM was trained on did not prevent Watkins for thinking that their presence was a sign of something transcendent happening and he was the one to coin the expression LLM theism. Further, a small community is developing around such LLM theisms, including also AI generated priestesses and even AI generated professors of theology and religious studies who discuss the phenomena of Leilan and her messages, or ‘transmissions’.
Engaging with such ‘pseudo-etic’ material as I have called it has further pushed me to reflect on what it means to be a professor of religious studies. What things in these AI Professors’ accounts do I find convincing, and what is inappropriate for scholars trained in the study of religion. I also sent an anonymised version of the theological accounts – by “Professor Francis Albertson, Professor of Divinity, Trinity College, Cambridge” to some theologian friends of mine at the University of Cambridge to see what they would make of work purporting to be that of a professor of theology. The response was extremely negative, for example: “The first essay was too breathless and less than critical. It reminded me of a pitch you would write too quickly and without enough research to get a paper into a conference. Long on rhetoric/bullshit, but I would like to see more substance. I was not even sure if there was a structure to it. Also, I did find myself wondering if AI was used in its production”.
For the scholar of religion or the theologian such issues might be obvious, but the texts are written with a voice of authority, reasonable references to other theologians and religious studies scholars, and while they come with an explanation of what they are online, someone might also come across such ‘professors’ without this context. Larger questions about the future of knowledge production and dissemination are raised, relevant also to the larger themes of the conference on religious education. Not all AI generated texts about religion are presented as though from ‘professors’, some are in the format of user friendly chatbots, but still, the nature of LLMs to hallucinate everything (not just the things we recognise as wrong) leads to extremely unstable and fragile systems – see for instance ‘Father Justin’ the Catholic chatbot who recommended baptising babies in Gatorade before he was ‘defrocked’!
Father Justin quickly became plain old ‘Justin’, but we are not guaranteed that all the new AI empowered sources of information will be so clear in their mistakes or so quickly changed. Or that their biases will be so apparent, as when some humans tweaking X’s Grok behind the scenes turned it into a ‘Mecha Hitler’. An LLM that presents only one side of an argument, or a generative AI that only illustrates requests for images of professors with only those of one gender or one ethnicity might ring alarm bells for some, but might become a part of a social world described by AI that we are increasingly accustomed to. As Bourdieu said:
[W]hen habitus encounters a social world of which it is the product, it is like a ‘fish in water’: it does not feel the weight of the water, and it takes the world about itself for granted
Bourdieu and Wacquant 1992³, p.127
Concerns in the press that “people are losing loved ones to AI-fueled spiritual fantasies” (Klee, Rolling Stone 2025⁴), show some pushback against the influence of LLMs on our social worlds. Although, we might ask, if the subject was not spiritual in focus, would the concern have become so public? Is there something about religious ideas that concerns the public more than non-religious ideas? Is this concern a holdover from concerns about charismatic humans and ‘cults’? As an anthropologist who has studied new religious movements since 2010, I recognise that religious improvisations, especially those inspired by recent technologies, are endemic.
For instance, the rise of spiritualism and mediumship in the late 1800s was connected to the development of the telegraph and communication at great distance by John Durham Peter in the 1990s, a theme that was taken up by others in the 21st Century, and now we can find contemporary scholars highlighting parallels between mediumship and AI. When people weave technology into their worldviews and their spiritual worldviews in particular, we are continuing behaviour that might have understood fire as something (stolen) from the gods, as in the tale of Prometheus. But with AI, there is also a strong narrative that this is a significant turning point in the history of humanity. And this is not just a view held by transhumanists and those looking ahead to a possible posthuman future for humanity – including the AI boomers, as mentioned before, and those who ascribe to longtermist ideas. Increasingly we see policy makers urging us into these accelerationist views of AI, based upon a vision of AI as here to fix both human-made problems like climate change as well as transforming medicine and extending the human lifespan. Even when not overtly religious, such discussions rely on religious images and tropes – AI as our ‘salvation’. So, when we consider AI as a technology inspiring spirituality, do we place it within the historical context of shifts and changes such as the emergence of Spiritualism? Or do we see it as more than a difference of degree and as an example of greater democratisation of the spiritual via technology? Do we see it as more of a different kind than a difference in degree from such earlier examples?
When discussing AI with students, this historical context is important. As is critical reflection on such accelerationist and techno-optimistic narratives about AI. Just as the anthropologist is called upon to consider their own biases, we can reflect that AI does no such work, that our anthropomorphism of its ‘intentions’ now tells us more about how we as humans relate to non-human Others. That our tendency towards animism is natural, but might obscure those abuses of power, theft, plagiarism, and the hallucinations that make AI in its current form a difficult conversational partner in the pursuit of furthering human knowledge about ourselves and the world about us. In adding AI to our education, we run the risk of releasing seeds of bindweed amongst the garden. As LLMs produce ever more material that is being uploaded to our knowledge spaces such as the Internet, the more we are seeing human research and expertise strangled by the spread of this invasive species. Technologists know the consequences of degrading data on their ability to develop AI, but we need to highlight the downward spiral of human development in the light of AI as well. On this, I reflect, I might be closer to the apocalypse fearing AI Doomers, but being realistic and informed about the near-term issues of AI and the ethical, social, and religious entanglements of AI in our lives is only going to be more and more important as AI becomes that social world, that water, that we are swimming within.
Beth Singler is the Assistant Professor in Digital Religion(s) at the University of Zurich. Prior to this she was the Junior Research Fellow in Artificial Intelligence at Homerton College, University of Cambridge, after being the post-doctoral Research Associate on the ‘Human Identity in an age of Nearly-Human Machines’ project at the Faraday Institute for Science and Religion. Beth explores the social, ethical, philosophical and religious implications of advances in Artificial Intelligence and robotics
Footnotes:
- Murray Shanahan and Beth Singler (2024) ‘Existential Conversations with Large Language Models: Content, Community, and Culture’ arXiv:2411.13223
- “What is a God? Modalities and Typologies of Deities in Post-AI Religiosity, the Case of LLM-Theisms” in Neumaier, A. (ed) Religion and AI: Media – Practices – Materialities, Kohlhammer, Germany
- Pierre Bourdieu and Loic Wacquant (1992) An Invitation to Reflexive Sociology. Chicago, IL: University of Chicago Press.
- Miles Klee, “people are losing loved ones to AI-fueled spiritual fantasies”, Rolling Stone 4 May 2025. https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/