Event details

Date:
10/06/2026
Time:
16:00 - 17:30
Venue:
Room 0.21, University Library City Centre
Drift 27, 3512 BR Utrecht
For:
All students (BA, MA, PhD)

Have you ever wondered how digital research methods could be useful for your studies? Or are you already using digital methods and curious about others' experiences? At The Digital Humanities Dialogue, fellow students and researchers who have used digital methods in their projects will share their experiences. There will be coffee, cookies, and plenty of room for discussion!

There are many digital methods out there that can enrich research in the humanities. Whether it’s for a thesis, another project, or even a small assignment, digital methods can be great tools to unlock new avenues for your research.

At the same time, digital methods can feel like a daunting topic to pick up: they can seem complicated and inaccessible, and it can be hard to imagine how to put them to use. Our speakers will demonstrate how they apply digital methods in their own humanities research projects.

The event offers plenty of room for discussion, so you can exchange ideas with other students and inspire each other to try out new methods.

Speakers

Aoife Buckley & Sofia Kusch – Modern and Contemporary History

‘What happens when political authority becomes the subject of online erotic parody? The paper presented uses quantitative text analysis to explore thousands of fanfiction works about Donald Trump, showing how digital communities transform political power into satire. We suggest that such fanfiction operates as a form of grassroots political critique that destabilizes hegemonic masculinity and reimagines the presidency through humor, intimacy, and excess. In doing so, we rethink the archive not as a static repository of knowledge, but as a participatory space where meaning is collectively produced and contested.’

Alissa Vavinova – Linguistics

‘To what extent can a persona-prompted Large Language Model align with domain-specific linguistic profiles in realistic online contexts? Using principal component analysis and linguistic feature extraction, we compared comments generated from OCEAN-based LLM personas and a neutral agent against real human Reddit comments. The results show that while LLM personas did adapt to different online domains, they consistently occupied more formal, lexically dense positions than human comments, meaning their stylistic alignment remains imperfect. These findings demonstrate how computational methods can evaluate the authenticity of AI-generated language, pointing toward more nuanced and reliable AI communication.’