Global DH vs. Area Studies: Rethinking “China and the West (Rest)” Beyond Lenticular Logic

In her essay “Why Are the Digital Humanities So White? or Thinking the Histories of Race and Computation,” Tara McPherson argues that the very architecture and logic of computation, particularly as it developed in the post-WWII era in the United States, are deeply intertwined with and reflective of the era’s emergent racial logics, specifically the shift towards a “colorblind” yet still segregational racial formation.

McPherson’s is a structuralist argument: the modular, compartmentalized, and seemingly neutral design principles of systems like UNIX are homologous with how race has been managed and discussed in post-Civil Rights America. Just as UNIX breaks complex tasks into discrete, independent modules and separates the shell from the kernel, society treats racial issues as isolated incidents or individual prejudice rather than as systemic phenomena. This “lenticular logic” (so called after lenticular 3-D postcards, which show different images depending on the viewing angle, only one visible at any given moment) allows racial realities to coexist while obscuring their interconnectedness and the power systems that span them, thereby stifling collective struggle for justice.

Much has changed in the field since 2012, when the essay was first published in Debates in the Digital Humanities (ed. Matthew K. Gold; available open access). In particular, the global landscape of digital humanities (DH) has seen a rapid development of DH-infused area studies scholarship in East Asia, i.e., far beyond the anglophone heartlands of “white DH.” In this blog post, I want to use this new context to extend and ultimately reverse McPherson’s argument. While the academic field of area studies, with its modular division of the world, embodies the lenticular logic she describes, I suggest that the new wave of DH can actually help us bridge the divides that area studies’ own partitioning of the world has helped to generate and sustain.

I have just come back from Shanghai, where I had an opportunity to give a talk at Tongji University’s Center for German Studies (同济大学德意志联邦共和国问题研究所), a site of many burgeoning DH initiatives. A series of hands-on workshops that preceded the lecture introduced students to computational methodologies, including AI-enhanced topic modeling, collocation analysis, and multilingual sentiment classification. While Baidu AI Studio is still a few steps behind Google Colab and Kaggle, and PaddleNLP is unlikely to replace Huggingface Transformers any time soon, the willingness among scholars to apply DH methods to humanities research surpassed anything I have experienced outside China.
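To give a flavor of one of the workshop methods mentioned above, here is a minimal sketch of collocation analysis: scoring adjacent word pairs by pointwise mutual information (PMI). The scoring formula is standard; the toy corpus and the `collocations` helper are my own illustrative inventions, not material from the workshops.

```python
import math
from collections import Counter

def collocations(tokens, min_count=2):
    """Score adjacent word pairs by pointwise mutual information (PMI).

    PMI(w1, w2) = log2( P(w1, w2) / (P(w1) * P(w2)) ), estimated from
    unigram and bigram counts over the token stream. High PMI means the
    pair co-occurs far more often than chance predicts.
    """
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n_uni = len(tokens)
    n_bi = len(tokens) - 1
    scores = {}
    for (w1, w2), c in bigrams.items():
        if c < min_count:
            continue  # skip rare pairs, whose PMI estimates are unstable
        p_joint = c / n_bi
        p1, p2 = unigrams[w1] / n_uni, unigrams[w2] / n_uni
        scores[(w1, w2)] = math.log2(p_joint / (p1 * p2))
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy corpus for illustration only
corpus = ("digital humanities meets area studies and digital humanities "
          "meets computation while area studies meets the archive").split()
for pair, pmi in collocations(corpus):
    print(pair, round(pmi, 2))
```

On a real corpus one would tokenize properly and raise `min_count`; libraries such as NLTK ship ready-made collocation finders, but the underlying arithmetic is this simple.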

Tongji is just one example: all over the PRC, new area studies and digital humanities centers are emerging, staffed by scholars eager to employ AI and statistical tools to research cultural phenomena beyond China’s borders. With state-sponsored digitization of resources at a scale unmatched anywhere else in the world, large language models orders of magnitude cheaper than their Western counterparts, and active support for cross-disciplinary learning (all undergraduates at Peking University, for instance, learn to code), the institutional and scholarly support for digital initiatives is something that many non-China-based researchers can only dream of. It remains to be seen how much interpretive effort will be put into those massive resources, but already the enthusiasm is striking.

At the same time, this area studies boom in China, supported by its rapidly growing digital infrastructure, is not immune to the critiques of imperial complicity previously leveled against its Western counterparts. While the latter emerged without AI and petabytes of data, “foreign studies” in the PRC can forge an even more seamless alliance with the state and reduce the understanding of other peoples to statistics. It is also bound to produce what Willem van Schendel called “area lineages”: self-enclosed scholarly communities interacting solely within their academic fiefdoms. The kind of DH McPherson critiqued might reinforce this modular logic; I believe, however, that today’s computational tools also hold the promise of bringing those different modules (or areas) back into conversation.

This promise is embedded in the very architecture of the new computational tools themselves. For instance, modern language models consist of multiple layers and subnetworks, each aggregating different information and paying attention to different aspects of the data. Each word generated by a language model is the result of millions of neurons interacting with each other. The networked structure of LLMs invites us to reconsider cultural phenomena as multi-layered and poly-temporal, reversing the lenticular logic of segregation and challenging the East-West divide. In a recent article (“Poly-Temporal, Multi-Layered: A Techno-Cognitive Theory of Narrative Experience in Literature”), I analyze a passage from Zhang Xianliang’s 1985 novel Half of Man is Woman to explore this confluence, suggesting how computational approaches, informed by cognitive science, can help us trace such co-existing layers and their distinct, yet interwoven, temporal signatures. The novel is simultaneously an expression of authorial needs, shaped by Zhang’s immediate experience of the Cultural Revolution; a product of specific political-cultural backgrounds, reflecting the opening-up of new writerly interests (such as religion and sexuality) during the early post-Mao era; an articulation of bodily patterns anchored in evolutionary history, influencing neuroaffective responses and schematic understanding; and even a manifestation of the grammatical and linguistic properties of the language in which the novel was composed, which themselves have their own distinct historical trajectories.


Figure 1. The self-attention mechanism in BERT. Each token in the input sequence pays a different amount of attention to other tokens. This process is distributed across multiple layers and multiple “heads” within each layer.
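The mechanism Figure 1 depicts can be sketched in a few lines of numpy. This is a toy illustration, not BERT itself: the projection matrices are random rather than trained, and the “embeddings” are random vectors, but the structure (each head computing its own attention distribution over the same tokens) is the one described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, n_heads, rng):
    """Toy scaled dot-product self-attention over a token sequence X.

    Each head gets its own projection matrices (random here, learned in a
    real model), so each head distributes attention over the tokens
    differently -- the property Figure 1 illustrates for BERT.
    """
    seq_len, d_model = X.shape
    d_head = d_model // n_heads
    attn_maps = []
    for _ in range(n_heads):
        Wq = rng.normal(size=(d_model, d_head))
        Wk = rng.normal(size=(d_model, d_head))
        Q, K = X @ Wq, X @ Wk
        # (seq_len, seq_len) matrix: row i says how much token i
        # attends to every token in the sequence; rows sum to 1.
        attn_maps.append(softmax(Q @ K.T / np.sqrt(d_head)))
    return attn_maps

tokens = ["half", "of", "man", "is", "woman"]
X = rng.normal(size=(len(tokens), 8))       # stand-in embeddings
maps = multi_head_attention(X, n_heads=2, rng=rng)
for h, attn in enumerate(maps):
    print(f"head {h}: 'man' attends to", dict(zip(tokens, attn[2].round(2))))
```

Inspecting the attention maps of an actual trained model (e.g., via the Hugging Face Transformers library) follows the same logic, only with learned weights across a dozen layers instead of one random projection.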

The multi-layered, poly-temporal approach also speaks to the question of how to engage with the persistent Euro-Amerocentrism of theory. Should we develop altogether new models that generalize from the empirical distinctiveness of Asian histories? The “Asia as Method” strategy is easily appropriated by nativist and nationalist agendas, with some reviewers arguing that “fictionality” is not a Chinese concept and therefore should not be used to analyze PRC culture, and others claiming that “neurons” are not universal.

Unlike identity-based methodologies, techno-cognitive theory attempts to consider both native and non-native perspectives; such a model holds the promise of broadening interpretive authority and creating space for new thought, shifting our understanding of literature and culture from linear mappings between texts and History towards dynamic, recursive, and poly-temporal phenomena where here and there meet.

Or at least, such is my hope.
