This contribution is based on a presentation given at The Digital Orientalist’s Virtual Conference 2025 (AI and the Digital Humanities) by Shuohong Lyu (University of East Anglia). The recording of the presentation can be found here.
Artificial Intelligence Hegemony (AIH) is a phenomenon that has become increasingly visible in public discourse, yet remains little theorised as a hegemonic form in its own right. AI is frequently depicted, especially in popular media, as a competition for national gain, productivity, or geopolitical dominance. This framing is prominent in accounts of technological competition between China and the United States as “AI superpowers” (Lee 2018), and in policy discourse advocating strategic competition and national innovation capacity (National Security Commission on Artificial Intelligence 2021). Some scholarship does pursue this direction, focusing on geopolitical competition and economic restructuring (Crawford 2021), but the question I am asking is different: what happens when AI systems become the organisers of the conditions of meaning-making, credibility, and intellectual labour in a manner resembling cultural hegemony (Gramsci 1971; Lears 1985), yet operate through automated patterning rather than conscious ideological leadership?
In this article, first, I reexamine “hegemony” to demonstrate what shifts when algorithmic systems become engaged in cultural leadership. Second, I address research practice and argue that AIH has a particular impact on humanities and cultural research, which are characterised by independent judgment, interpretive sensitivity, and conceptual risk-taking.
1. Reconsidering Hegemony
1.1 Traditional Hegemony
In a Gramscian framework, hegemony is not mere domination. It is the stabilisation of power through consent, whereby particular meanings and values are made to appear natural, obvious, and “common sense” (Gramsci 1971). It operates through culture: narratives, symbols, classifications, routines, and institutions that subtly define what is normal, what is reasonable, and what is thinkable. Lears (1985) describes this as the process by which cultural forms harden existing power relations by recasting them as natural consequences rather than as something produced.
This is important for the culture of research. Academic communities engage in arguments about evidence while also inheriting implicit assumptions about what constitutes an appropriate question, what qualifies as a legitimate theory, and what counts as “rigour.”
1.2 AI and Its Impact on Hegemonic Structure
Generative AI accelerated rapidly in 2023, with a remarkable impact on the intersection between computation and cultural production. Large language models and diffusion-based image systems, for example, moved from assisting with computation or retrieval to playing an active role in writing, designing, translating, ideating, and simulating discourse at scale (OpenAI 2023; Stability AI 2022; Midjourney 2023). The divide between “the real” and “the produced” became more difficult to police, not because humans stopped caring about truth, but because the sheer volume and fluency of synthetic output changed the informational environment.
This shift matters for hegemony because cultural leadership increasingly depends on visibility and plausibility under platform conditions. Generative systems can now churn out “evidence-like” goods swiftly and cheaply: photorealistic images, confident summaries, persuasive scripts. These outputs then circulate through ranking and recommendation systems that reward attention capture. In other words, AI systems do more than simply transmit ideology. They transform the speed, scale, and form with which ideological narratives can be produced and shared.
1.3 Evidence of AIH From 2024
The “evidence” here is not that AI builds a monolithic ideology. Rather, it shows that AI systems, by manufacturing plausibility and redistributing communicative power, can accelerate ideological contests.
First, in the context of the 2024 US election, AI-generated pictures of Donald Trump in the company of smiling Black supporters circulated online. Such images can mislead voters by appearing to show demographic endorsement and by generating a “lifelike” form of persuasion that verification practices struggle to keep pace with (Brown and Klepper 2024).
Second, in the UK, reporting on far-right mobilisation described how bots and AI-generated content spread inflammatory narratives and fanned the flames of protest, particularly around the Southport stabbings. Its importance lies not only in the falsification of particular claims but in how such AI-driven content ecosystems can amplify grievance narratives and coordinate attention at moments of high emotional volatility (Quinn and Milmo 2024).
Third, in France, a review of the 2024 snap legislative election identified far-right use of AI-generated images and videos to promote anti-immigration narratives, often without clear labelling even when political actors had committed to transparency. This matters because it demonstrates how synthetic imagery can serve as inexpensive “cultural infrastructure” for political messaging, bypassing traditional gatekeepers and saturating social platforms with emotive scenes that feel documentary (Tual 2024).
A fourth case shows AIH at work outside electoral politics, within popular culture and memetic circulation. The AI-produced Japanese meme song “YAJU&U,” made by a pseudonymous uploader (mochimochi), exemplifies how AI-generated content can become dominant through platform dynamics, dance routines, and algorithmic spread. Press-Reynolds (2025) tracks how the song spread massively via social media circulation and choreography, arguing that AI-produced cultural artefacts can rival, and sometimes outpace, human production in speed, volume, and memetic stickiness.
1.4 Oddly New Type of Hegemony
These cases highlight what I describe as a strange new hegemonic formation: the “director” of culture and ideology can shift, subtly, from human institutions to machine-mediated infrastructures. This doesn’t mean that AI has ideology, consciousness, or intent. AI systems are trained on large datasets using architectures that optimise statistical prediction; they don’t have subjective agency. Yet the absence of intention does not prevent hegemonic effects.
The key is that AI can function “without conscious will” while still reorganising what becomes visible, convincing, and repeatable. A small human intent, such as selecting a target, choosing a prompt, or strategically deploying content, can enable the machine to produce a vast amount of culturally consequential material. In this view, AIH is not “AI as political actor.” It is hegemony as automated reproduction and amplification: pattern-based generation plus platform distribution, creating a new kind of cultural normalisation that’s hard to detect precisely because of its proximity to ordinary content.
In my working definition, AIH is an unprecedented, unusual, and largely undetectable process increasingly shaped by algorithmic systems with minimal human intervention. It contrasts with traditional hegemonic dynamics by shifting key parts of cultural reproduction away from manipulation by human elites toward a largely machine-based process.
The emphasis on “undetectable” is not because AI is invisible, but because its influence can be ambient. Over time, algorithmically favoured narratives, aesthetics, and reasoning styles can become the default background of culture, shaping what feels natural and what feels marginal.
2. AIH in Research Practice
2.1 Algorithmic Productivities and Bias in Academia
Before criticising AI in research, one must first recognise its productivity benefits. Systematic review work has found that AI tools can support literature synthesis, draft and structure text, manage information, edit, and automate workflows in ways that reduce time burdens (Khalifa and Albadawy 2024). These affordances are real, and they matter in an academic economy of time constraints and publishing pressure.
However, productivity is not epistemic neutrality. Generative tools are shaped by their training data and often reproduce predominant frames of thought. This becomes problematic if academic writing and theorising start to merge with model-friendly forms of expression (smooth, balanced, conventional, and statistically central). The risk is a narrowing of academic voice and a soft standardisation of what counts as a “good” argument once early-stage thinking is handed over to the tool.
This narrowing has uneven consequences. For fields that study Asia, Africa, and Oceania, or any research tradition that relies on culturally specific concepts and interpretive nuance, the concern is that AI-generated “general explanations” may quietly prioritise Anglophone, Western, and high-frequency frameworks. Even when a model does not push one particular perspective, it is constrained by what is most heavily represented, clearest, and most consistently repeated in its training data. The result can be epistemic smoothing: difference gets “explained away” rather than inquired into and explored.
An example from cross-cultural privacy research makes the point concrete. The Mandarin Chinese word for privacy (yǐnsī 隐私) carries meanings and moral registers that do not translate straightforwardly into liberal Anglophone privacy discourse. In certain contexts, yǐnsī involves a moral code related to shame or improper concealment. If a researcher starts from an AI-supplied equivalence, for instance treating yǐnsī simply as the English “privacy”, they can flatten the culturally grounded tension that should inform the research question in the first place. In that flattening, AIH shows up in academic practice: not as censorship, but as the automated normalisation of conceptual frames.
What is more profound is the cognitive displacement at work. With repeated use of AI to suggest questions, summarise debates, and generate theoretical framings, the tool can begin to replace exactly those practices conducive to independent thinking: slow reading, discomfort with easy equivalence, and attentiveness to deviation and risk. Over time, sustained reliance on generative systems may contribute to a gradual attenuation of independent critical capacities, as academic environments increasingly reward speed, efficiency, and fluency over slower practices of conceptual struggle, interpretive hesitation, and sustained analytical engagement.
2.2 How Can AI Tools Be Used Properly in Research?
The point is not to reject AI in research, but to establish clear methodological boundaries that resist AIH. AI can serve as an auxiliary technology for retrieval, writing assistance, or exploratory mapping, while conceptual framing, question formation, and interpretive judgement remain guided by humans. Scholars must introduce epistemic friction by testing outputs against primary sources and diverse literatures and by verifying all citations. Generative fluency should not be treated as authority, particularly in culturally complex or multilingual inquiry, where nuance, context, and translation inevitably require sustained human analysis.
Conclusion
AIH represents a structural reconfiguration of cultural power whereby pattern-based generation intersects with platform amplification to normalise what appears plausible, credible, and repeatable, without any conscious intention to do so. In research practice, the principal risk is epistemic substitution: AI’s fluency can supplant slow reading, conceptual hesitation, and interpretive risk-taking, while convenience and optimisation pressures undermine critical thinking. Preserving intellectual autonomy requires methodological discipline: treating AI as an instrument, keeping inquiry human-led, and subjecting fluent outputs to rigorous verification.
References
Matt Brown and David Klepper, “Fake images made to show Trump with Black supporters highlight concerns around AI and elections,” AP News, March 7, 2024, https://apnews.com/article/deepfake-trump-ai-biden-tiktok-72194f59823037391b3888a1720ba7c2.
Kate Crawford, Atlas of AI: Power, politics, and the planetary costs of artificial intelligence (Yale University Press, 2021).
Antonio Gramsci, Selections from the prison notebooks, translated by Quintin Hoare and Geoffrey Nowell Smith (International Publishers, 1971).
Mohamed Khalifa and Mona Albadawy, “Using artificial intelligence in academic writing and research: An essential productivity tool,” Computer Methods and Programs in Biomedicine Update, no. 5 (2024): 100145, https://doi.org/10.1016.
Kai-Fu Lee, AI superpowers: China, Silicon Valley, and the new world order (Houghton Mifflin Harcourt, 2018).
T. J. Jackson Lears, “The concept of cultural hegemony: Problems and possibilities,” American Historical Review 90, no. 3 (1985): 567–593.
Midjourney, “Midjourney V5 model release notes,” 2023, https://docs.midjourney.com/.
National Security Commission on Artificial Intelligence, “Final report,” 2021, https://www.nscai.gov/2021-final-report/.
OpenAI et al., “GPT-4 technical report,” arXiv, 2023, https://arxiv.org/abs/2303.08774.
Kieran Press-Reynolds, “The baffling, X-rated story of the world’s most popular AI song,” Pitchfork, May 14, 2025, https://pitchfork.com/thepitch/the-baffling-x-rated-story-of-the-worlds-most-popular-ai-song/.
Ben Quinn and Dan Milmo, “How TikTok bots and AI have powered a resurgence in UK far-right violence,” The Guardian, August 2, 2024, https://www.theguardian.com/politics/article/2024/aug/02/how-tiktok-bots-and-ai-have-powered-a-resurgence-in-uk-far-right-violence.
Stability AI Ltd., “Stable diffusion 2.0 release,” Stability.ai, November 24, 2022, https://stability.ai/news/stable-diffusion-v2-release.
Morgane Tual, “Comment l’extrême droite a utilisé l’intelligence artificielle pour faire la campagne des législatives” [How the far right used artificial intelligence to campaign in the legislative elections], Le Monde, July 4, 2024, https://www.lemonde.fr/pixels/article/2024/07/04/legislatives-2024-comment-l-extreme-droite-a-utilise-l-intelligence-artificielle-pour-faire-la-campagne-des-legislatives_6246600_4408996.html.
Cover Image: “Vast & Complex Network of Blue & Green Nodes” by Easy-Peasy.AI.
