Location: Zoom – Link provided upon registration.
Following the rapid evolution and diversification of computational communication research approaches, spurred by the increasing availability of digital trace data from platform Application Programming Interfaces (APIs) and other sources, the 2018 Cambridge Analytica scandal served as a convenient excuse for many social media platforms to severely curtail access to their APIs. This curtailment sharply restricted critical, independent, public-interest scrutiny of public communication on social media, and of the platforms’ management and moderation of such communication, at a time of increasing polarisation, disinformation, toxicity, and general dysfunction in public debate. At the time, four broad pathways appeared available to researchers seeking to address this ‘APIcalypse’: to walk away, to lobby for change, to accommodate and acquiesce, or to break the rules. All four paths have been explored in the contexts of different platforms, but the severe disruption to established research approaches has also led many of us to critically evaluate how and why we have used the social media activity data that APIs (used to) provide. In my contribution to this roundtable, I will review recent developments across a range of leading and emerging platforms, explore the current situation in our field, and discuss possible future developments.
Dr. Bernhard Rieder, Mediastudies Department, University of Amsterdam
For years, the application of computational methods in internet research was implicitly synonymous with quantitative analysis, with large datasets and a focus on counting and plotting. However, the arrival of transformer architectures like BERT and large pre-trained language and vision–language models (LLMs and VLMs) has begun to disrupt this assumption. Rather than merely counting units, these models parse semantic context, interpret latent meanings, and adapt to goal-driven tuning, effectively acting as qualitative lenses operating at scale. This shift forces us to re-evaluate traditional methodological boundaries. We are realizing that the quantitative–qualitative divide has not disappeared; rather, it has relocated inside computational methods. Today, researchers can apply either a deductive, quantitative logic, using prompts as rigid codebooks and establishing inter-coder reliability with AI coders, or an inductive, qualitative logic to explore emergent themes using similar computational toolchains. In my introduction to the roundtable, I will explore this “Post-Quantitative” era, examine how the quant–qual divide has resurfaced within computational tooling, and discuss how we can maintain rigor and transparency when delegating interpretive labor to powerful, yet opaque, models.
Dr. Annette N. Markham, Futures + Literacies + Methods Lab, Utrecht University
I will focus my contribution on the ethics of human-centered inquiry, as impacted or highlighted by the rising automation of many research procedures. As genAI replaces, shortcuts, or brings new capabilities to certain aspects of the research process, it influences how we think about the distinctions between machinic and human logics. When we witness acceptable and viable cognition from non-conscious entities, what strengths remain human? What does it mean to preserve the “human in the loop”? This is not a new question, but the idea of ‘complementary intelligences’ is useful in re-mapping how we conceptualize basic research processes, at a very practical level, drawing on the ethics-as-methods principles in the AoIR guidelines. Once mapped, we can more adequately highlight and scrutinize uniquely human decision-making and sensemaking practices. Many of these human elements remain invisible, dismissed as habits of practice rather than operationalized in research design and methodology. Regeneration of methods becomes a matter of remembering and reviving distinctly human ways of making sense of the world around us. Since internet studies methods have long attended to how different technologies impact our practices and vice versa, this is an apt moment to reflect on what we already know about these distinctly human practices and to build them more clearly into broader frameworks and discussions about the role of AI and humans in research design and methodologies.