AI Ethics and the Humanities: A Perspective from Buddhist Studies

This contribution is based on a presentation given at The Digital Orientalist’s Virtual Conference 2025 (AI and the Digital Humanities) and was written by keynote speaker Elaine Lai (Civic, Liberal, and Global Education (COLLEGE) at Stanford University). The recording of the presentation is available here.

Introduction

Recent reports from PwC, McKinsey, and the World Economic Forum predict that AI will fundamentally transform the global workforce by 2050, with about 60% of current jobs requiring significant adaptation due to AI (Kelly 2025). Even this timeline may prove too generous: Anthropic’s CEO, Dario Amodei, recently predicted massive unemployment in entry-level white-collar jobs within five years, urging the U.S. government to regulate AI carefully (Binder 2025). Regardless of which statistics we consult, the dominant narrative is that AI is an inevitable reality: learn to adapt or be left behind.

As humanities scholars, we must critically consider how to ethically integrate AI—as a methodology or a topic of research—in order to meaningfully connect with the rest of the world. The premise of this keynote stems from two aspects of my own professional identity: 1) my role as a Buddhist studies scholar who uses the digital humanities to make my work accessible to a wider public, and 2) my role as a Lecturer at Stanford University, in the heart of Silicon Valley, where I often teach undergraduates about technology ethics.

This presentation is split into three sections. Part 1, “Buddhism and AI: risks and benefits” provides an overview of the ways in which Buddhist communities and translators have used AI to create religious chatbots and translation assistants. Part 2, “AI tools for humanities scholars,” provides some suggestions for how humanities scholars might ethically integrate AI into their workflow. Part 3, “Broader conversations around AI ethics,” considers eleven ethical issues related to AI that humanities scholars might further engage in.

Part 1: Buddhism and AI: Risks and Benefits

In 2019, a $1 million venture between the roboticist Hiroshi Ishiguro and Kohei Ogawa, an associate professor of engineering at Osaka University, created Mindar, a human-sized robot that preaches Buddhist sermons inside the 400-year-old Kodaiji temple (Hardingham-Gill 2019). This robot was modeled after Kannon, a Buddhist bodhisattva who embodies compassion. Although Mindar did not yet use AI, it serves as a precursor to how Buddhists would later leverage this technology.

In 2023, an AI chatbot called Roshibot was created by Jiryu Rutschman-Byler, a Soto Zen Buddhist priest and co-abbot of the San Francisco Zen Center. The chatbot was trained on the writings and recorded talks of an actual Zen Buddhist teacher named Shunryu Suzuki Roshi, as well as internet data. Rutschman-Byler created Roshibot as an experiment to explore the potential of AI to benefit Buddhist teachings and practice (Rutschman-Byler 2023). Before using the chatbot, the user is required to acknowledge the limitations of the technology, including the fact that the chatbot may hallucinate or say something harmful. These warnings echo many of the ethical apprehensions connected to Buddhism and AI. That same year, a Buddhist entrepreneur and teacher from Malaysia named Lim Kooi Fong showcased his own Buddhist chatbot in Sarnath, India. NORBU, or Neural Operator for Responsible Buddhist Understanding, is built upon ChatGPT technology.

While the intentions behind the deployment of AI technology for Buddhism may be noble, it is important to ensure that the input of Buddhist communities and teachers is included in the process of creation, testing, and implementation. Buddhabot Plus, a 2025 collaboration between Bhutan’s Central Monastic Body and Japan’s Kyoto University, is perhaps an example of a more inclusive approach to AI development (Lewis 2025). Like NORBU, Buddhabot Plus was modeled using the generative AI from ChatGPT. Buddhabot Plus will undergo monitoring by monks from the Central Monastic Body of Bhutan and a rigorous safety assessment process before being released to the broader public. Although this may result in a slower release, engaging in these community-centered approaches may result in a greater net benefit and be less prone to disaster.

The second major development at the intersection of Buddhism and AI is the introduction of new translation tools. The examples below focus on translation tools from Tibetan into English. Tech companies in Silicon Valley like Anthropic and OpenAI have recently improved their translation models in Tibetan. In India, the platform Monlam AI, headed by Geshe Monlam and funded by the Tibet Fund and USAID, is developing translation, text-to-speech, speech-to-text, and OCR technologies from Tibetan into English. Most recently, a privately funded platform called Dharmamitra launched a machine translation platform from Classical Chinese, Sanskrit, Pali, and Tibetan into English, Korean, and other languages. Dharmamitra’s main goal is to enable multilingual search for Buddhist source languages; for example, finding Pali sentences based on queries in Chinese, or using English to find relevant passages in the Tibetan canon.

The reception of these translation technologies has been mixed. At one extreme of the spectrum, some want to outlaw the use of AI in translation altogether. For example, SuttaCentral, a project for early Buddhist texts based in Australia, published a viral blog post titled “Let’s make SuttaCentral 100% AI-free forever” (Sujato 2024). Other Buddhist platforms, like 84000 and the Tsadra Foundation, are more receptive to this technology and are trying to create guidelines for the ethical use of AI (Tsadra Foundation 2024; 84000 2024). Based on the reception of these translation technologies, I would summarize the risks and benefits in the following way.

Benefits:

  1. Speed: If streamlined in a proper way, AI could speed up the process of translation. Text alignment tools are especially helpful for translators working between several languages.
  2. Enhances Accessibility: AI broadens access to textual sources by providing rough translations that convey a text’s overall structure and content.
  3. Democratizes Education: In the best-case scenario, using this technology will attract more people (practitioners, scholars, translators) to Buddhism.

Risks:

  1. Missing Human Contact: AI might replace actual teachers or other forms of human interface that have historically been central to Buddhist communities of practice.
  2. Misinformation: Without receiving transmissions and explanations of teachings, Buddhist scriptures may be misinterpreted, mistranslated, or even worse, misused.
  3. Missing Process: What is lost in increased speed and access? Does the information gained even feel valuable anymore? From a Buddhist perspective, more knowledge does not necessarily lead to more wisdom—becoming a more aware and ethical being.

Part 2: AI Tools For Humanities Scholars

As someone who has headed several digital humanities projects (Yurek 2024), I have noticed that digital humanities methods are not yet widely adopted amongst Buddhist studies scholars. Part of the reason for this may stem from the fact that our formal training does not typically include technology literacy, which makes the prospect of engaging in technology projects feel like extra work. Here, I would like to introduce my humanities colleagues to AI-enabled software development, sometimes known as “vibe coding,”[1] as a low-barrier way to familiarize themselves with coding (Williams 2025). Rather than manually write your code, you can now enter a natural language prompt and have the AI coding platform generate the code for you.

Platforms include lovable.dev, replit.com, bolt.new, and cursor.com. They are beneficial for first-time coders who want to create a simple prototype of a project to share with others before fully committing to it. They are also learning tools that help users familiarize themselves with different coding languages and become better prompt engineers. However, there are many limitations as well (Butcher 2025). Especially in projects requiring domain-specific knowledge, AI-enabled coding can fail at handling complexity, debugging, maintenance, or providing high-quality code. There are also potential security risks, such as API key leakage and data privacy issues. Finally, these technologies still require basic programming knowledge—problem-solving from the perspective of computer science.
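To make the API key leakage risk concrete, here is a minimal Python sketch of the kind of check a human reviewer would look for in generated code: secrets should be read from the environment rather than hardcoded in the source file. The variable name `MY_SERVICE_API_KEY` is a hypothetical placeholder, not tied to any particular platform.

```python
import os

# A pattern AI coding tools sometimes produce: the secret is written
# directly into the file, and leaks the moment the code is shared.
# API_KEY = "sk-..."  # unsafe: never hardcode credentials

def load_api_key(env_var: str = "MY_SERVICE_API_KEY") -> str:
    """Read the key from an environment variable so it never appears in source code."""
    key = os.environ.get(env_var)
    if key is None:
        raise RuntimeError(f"Set the {env_var} environment variable before running.")
    return key
```

Spotting the difference between these two patterns is exactly the kind of basic programming knowledge that vibe-coding platforms still presuppose.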

Part 3: Broader Conversations on AI Ethics

This last section considers how we, as humanities scholars, can contribute to broader conversations around AI ethics. I currently work in a university-wide initiative that aims to ensure that Stanford first-year students, most of whom will eventually become STEM (science, technology, engineering, and mathematics) majors, receive some kind of humanistic education with a grounding in ethics. For many of my students, my class is the one humanities course in their schedule.[2] Very few of my students have an interest in Buddhist studies, and few will pursue another humanities class unless it is mandatory. Under these circumstances, I have felt a responsibility to address topics at the intersection of humanities and technology ethics in a way that might positively impact students in their lives beyond the classroom. This work has also led to a broadening of my own research foci to consider how Buddhist studies can help shape AI ethics (Lai 2025a; Lai 2025b). The guiding questions I have used to structure my classes on technology ethics are: Who benefits, or has benefited from, technology? Who is harmed by it? Are there unintended consequences?

One 2016 World Economic Forum article enumerates 9 major ethical issues related to AI development (Bossmann 2016): 1) unemployment, 2) inequality, 3) machines affecting human behavior, 4) hallucinations, 5) racist robots, 6) security, 7) unintended consequences, 8) singularity (the point at which humans are no longer the most intelligent), and 9) robot rights. To this list, I would add two other pertinent ethical considerations: 10) environmental costs, which I discuss below, and 11) IP rights—referring to the copyrighted works of creative artists which are used to train large language models without the consent of these artists.

Given all these ethical issues, one is left to wonder: what, exactly, is the goal of AI? Stanford’s Human-Centered AI institute offers the following mission statement, echoed by many AI enthusiasts: “We believe AI should be guided by its human impact, inspired by human intelligence, and designed to augment, not replace, people” (Stanford Human Centered AI). While such mission statements sound relatively innocuous and even altruistic in intention, I observe two main issues with the idea of human-centered AI: 1) The category of “human” has never represented all humans, and 2) centering the human problematically suggests that humans are the primary moral agents in our world.

The Category of “Human” Has Never Represented All Humans

From predictive policing to algorithms for credit loans, to search engines that reinforce racist visual associations, our technology continues to discriminate against historically marginalized communities. Scholars like Meredith Broussard (2024) and Safiya Noble (2018) have demonstrated how algorithmic bias in AI design, often stemming from biased and unrepresentative data sets, perpetuates harmful histories of Othering that are rooted in a larger global history of colonialism. The bottom line is that the category of “human” has always been a contested category of exclusion—where one group self-presents as the fully human moral agent, at the expense of labeling the “Other” as less-than, or even non-human. Most recently, Karen Hao (2025) has argued that AI companies like OpenAI have become the newest instantiation of colonialism and empire, not just in the ways that they exploit labor and resources, but also in the self-proclaimed, God-like power they profess to exercise through technology (Cornish 2025).

Beyond algorithmic bias, there is also the issue of exploited, and often invisible, labor behind AI. OpenAI relies on Kenyan workers paid between $1.32 and $2 an hour to make ChatGPT less toxic (Perrigo 2023). Workers all over the Global South, from countries like Venezuela, Bulgaria, India, and the Philippines, label and sift through all the disturbing content on the web used to train current AI models. In the process, these workers are forced to endure depraved images of murder, child abuse, and pornography—leading to long-term and devastating psychological trauma (Regilme 2024; Dzieza 2023). In these examples, it is clear that AI does not serve all humans despite claims of universal human benefit; rather, the humans used to train AI are treated like guinea pigs or lab rats, unjustly bearing the harmful consequences of this technology for the benefit of a minority of privileged humans.

Given the above history and the current labor practices around AI, it is important to sound the alarm on the ideal of “human-centered AI.” Until our algorithms and labor practices change, the “human” in “human-centered AI” will fail to reflect a meaningful reality.

Humans are not the primary moral agents in our world

To help my students think about the intersection of environmental and technology ethics, I introduce them to the concept of anthropocentrism. According to Oxford Bibliographies, anthropocentrism “refer[s] to the point of view that humans are the only, or primary holders, of moral standing. Anthropocentric value systems thus see nature in terms of its values to humans…Questions of anthropocentrism and its alternatives emerge in part from the nature/culture divide, a fault line of Western philosophy and environmental thought” (Padwe 2013).

Here, too, if we look back on history, one of the main reasons we find ourselves in our current climate crisis is that the limited definition of the human has been used to justify the exploitation and subjugation of the natural world and its resources. Scholars like Amitav Ghosh (2021) and Tao Leigh Goffe (2025) have argued that the extractive logic of colonialism ultimately spawned our climate crisis. AI is not an exception to these historical trends of environmental devastation either. To accommodate the massive amounts of data used to run AI, data centers are being built in the U.S., the United Arab Emirates, China, and India. Globally, data centers consume about 560 billion liters of water annually, with one article estimating that the number could rise to about 1,200 billion liters by 2030 (Skidmore 2025). One 2023 study estimates that every 10 to 50 ChatGPT queries consume half a liter of water to cool the servers powering these computational demands; this figure is likely an underestimate (Leffer 2024). Given how much energy AI requires, and the populations that will be disproportionately affected by the water-draining data centers, it would behoove us all to consider the larger environmental and social impacts of our AI consumption and whether the benefits outweigh the costs.

Conclusion

As scholars in the humanities, we offer important perspectives that could help to steer AI ethics in alternative directions. The Center for the Study of Apparent Selves (CSAS), inspired by Buddhist philosophy and the Buddhist concept of a bodhisattva—a being who, due to the view of interdependence, acknowledges that their liberation is bound up with the liberation of all others—has redefined intelligence through the lens of “care.” CSAS defines care as the ability to detect, respond to, and alleviate stress. They argue that if care is a marker of intelligence, then AI systems that exhibit care-like behaviors may be considered moral agents (Doctor et al. 2022). Whether or not we redefine intelligence according to these parameters is not the main point. Rather, I want to highlight how the category of intelligence is not set in stone, and our interpretation of what intelligence means will have far-reaching ethical consequences on a social and planetary level. Turning to other systems of thought that challenge human-centered agency is critical in this regard. If we consider AI as a possible moral agent, existing interdependently with humans, other life-forms, and our greater ecosystem, new possibilities for ethical relationality arise.

In conclusion, AI is a reality: how do we want to be in relationship with it? In this talk, I have suggested that we can determine the ethical use of AI tools on a case-by-case basis. Collaborations with diverse stakeholders, including technologists, academics, and different communities, will steer our projects in the right direction and help us avoid potential disasters. Humanities scholars can also contribute to broader ethical conversations around AI through our own research and humanities training. Finally, I have suggested that when we consume or use AI, we should always be aware of the larger impacts: social, environmental, potential biases, and more.


Notes

[1] Andrej Karpathy, a cofounder of OpenAI, coined this term.

[2] One statistic notes that the number of humanities degrees has dropped nearly 25% from 2012 to 2020 (Mahto 2024).


References

84000. “84000’s Position on AI and the Machine Translation of Canonical Literature.” 84000, September 9, 2024. https://84000.co/documentation/84000s-position-on-ai-and-the-machine-translation-of-canonical-literature.  

Binder, Matt. “Anthropic CEO Warns AI Will Destroy Half of All White-Collar Jobs.” Mashable, May 29, 2025. https://mashable.com/article/anthropic-ceo-warns-white-collar-unemployment-ai.

Bossmann, Julia. “Top 9 Ethical Issues in Artificial Intelligence.” World Economic Forum, October 21, 2016. https://www.weforum.org/stories/2016/10/top-10-ethical-issues-in-artificial-intelligence/.

Broussard, Meredith. More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. MIT Press, 2024.

Butcher, Mike. “TurinTech Reveals $20M in Backing to Fix Problems in ‘Vibe Coding.’” TechCrunch, March 18, 2025. https://techcrunch.com/2025/03/18/turintech-reveals-20m-in-backing-to-fix-problems-in-vibe-coding/.

Cornish, Audie. “Is OpenAI Building an Empire or a Religion?” June 5, 2025. https://www.cnn.com/audio/podcasts/the-assignment/episodes/90409e9c-4b94-11ef-ba2a-23c5b7b86337.

Doctor, Thomas, Olaf Witkowski, Elizaveta Solomonova, Bill Duane, and Michael Levin. “Biology, Buddhism, and AI: Care as the Driver of Intelligence.” Entropy 24, no. 710 (2022). https://doi.org/10.3390/e24050710.

Dzieza, Josh. “AI Is a Lot of Work As the Technology Becomes Ubiquitous, a Vast Tasker Underclass Is Emerging — and Not Going Anywhere.” Intelligencer, June 20, 2023. https://nymag.com/intelligencer/article/ai-artificial-intelligence-humans-technology-business-factory.html.

Ghosh, Amitav. The Nutmeg’s Curse: Parables for a Planet in Crisis. University of Chicago Press, 2021.

Goffe, Tao Leigh. Dark Laboratory: On Columbus, the Caribbean, and the Origins of the Climate Crisis. First edition. Doubleday, 2025.

Hao, Karen. Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. 1st ed. Penguin Publishing Group, 2025.

Hardingham-Gill, Tamara. “The Android Priest That’s Revolutionizing Buddhism.” CNN, August 28, 2019. https://edition.cnn.com/travel/article/mindar-android-buddhist-priest-japan.

Kelly, Jack. “These Jobs Will Fall First As AI Takes Over The Workplace.” Forbes, April 25, 2025. https://www.forbes.com/sites/jackkelly/2025/04/25/the-jobs-that-will-fall-first-as-ai-takes-over-the-workplace/.

Lai, Elaine. “An Intertextual Heatmap: Tantra of the Sun’s Reception in 14th Century Tibet.” The Digital Orientalist, November 5, 2024. https://digitalorientalist.com/2024/11/05/an-intertextual-heatmap-tantra-of-the-suns-reception-in-14th-century-tibet/.

Lai, Elaine. “It Is Becoming Easier to Create AI Avatars of the Deceased − Here Is Why Buddhism Would Caution against It.” The Conversation, July 29, 2025. https://theconversation.com/it-is-becoming-easier-to-create-ai-avatars-of-the-deceased-here-is-why-buddhism-would-caution-against-it-261445.

Lai, Elaine. “Will the next Dalai Lama Be a Machine?” Religion News Service, June 24, 2025. https://religionnews.com/2025/06/24/will-the-next-dalai-lama-be-a-machine/.

Leffer, Lauren. “Generative AI Is an Energy Hog. Is the Tech Worth the Environmental Cost?” Science News, December 9, 2024. https://www.sciencenews.org/article/generative-ai-energy-environmental-cost.

Lewis, Craig. “AI: Japanese-Developed ‘BuddhaBot Plus’ to Debut in Bhutan.” Buddhistdoor Global, February 17, 2025. https://www.buddhistdoor.net/news/ai-japanese-developed-buddhabot-plus-to-debut-in-bhutan/.

Mahto, Neil. “The Humanities Are Being Neglected in American Universities.” The Johns Hopkins News-Letter, March 14, 2024. https://www.jhunewsletter.com/article/2024/03/the-humanities-are-being-neglected-in-american-universities.

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, 2018.

Padwe, Jonathan. “Anthropocentrism.” In Oxford Bibliographies. Oxford University Press, 2013. https://doi.org/10.1093/obo/9780199830060-0073.

Perrigo, Billy. “Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic.” Time, January 18, 2023. https://time.com/6247678/openai-chatgpt-kenya-workers/.

Regilme, Salvador Santino F. “Artificial Intelligence Colonialism: Environmental Damage, Labor Exploitation, and Human Rights Crises in the Global South.” SAIS Review of International Affairs 44, no. 2 (2024): 75–92. https://dx.doi.org/10.1353/sais.2024.a950958.

Rutschman-Byler, Jiryu Mark. “Can a Chatbot Share True Dharma?” Lion’s Roar, March 27, 2023. https://www.lionsroar.com/can-a-chatbot-share-true-dharma/.

Skidmore, Zachary. “AI Data Center Growth Deepens Water Security Concerns in High-Stress States – Report.” Data Center Dynamics, May 12, 2025. https://www.datacenterdynamics.com/en/news/ai-data-center-growth-deepens-water-security-concerns-in-high-stress-states-report/.

Stanford Human Centered AI. “Our Mission (Stanford Human Centered AI).” https://hai.stanford.edu/about.

Sujato, Bhante. “AI-1: Let’s Make SuttaCentral 100% AI-Free Forever.” SuttaCentral, April 2024. https://discourse.suttacentral.net/t/ai-1-let-s-make-suttacentral-100-ai-free-forever/33374.

Tsadra Foundation. “Exploring AI Tools for Buddhist Translation.” November 2024. https://www.tsadra.org/ai-tools-for-tibetan-buddhist-translation/.

Williams, Rhiannon. “What Is Vibe Coding, Exactly?” MIT Technology Review, April 16, 2025. https://www.technologyreview.com/2025/04/16/1115135/what-is-vibe-coding-exactly/.

Yurek, Eren. “Interview with Elaine Lai and Aftab Hafeez: Exploring Digital Humanities Projects in Buddhist Studies.” Stanford Center for Spatial and Textual Analysis, December 9, 2024. https://cesta.stanford.edu/news/interview-elaine-lai-and-aftab-hafeez-exploring-digital-humanities-projects-buddhist-studies.
