Preconference Workshops

Wednesday, 15 October 2025

There are two different registrations for this conference – one for in-person attendees and one for remote attendees.
In person only: https://members.aoir.org/event-6216401
Online only: https://members.aoir.org/event-6216484

AoIR hosts several preconference workshops the day before the main conference. Attendees must register for the full conference to attend a preconference workshop. All workshops have a USD $8 fee charged during registration. Refunds on workshop fees are only available if the entire registration is cancelled, and are then subject to the organization's cancellation and refund policies. This fee does not apply to the Doctoral Colloquium.

If you are accepted into the Doctoral Colloquium, do not register for any preconference workshops: the Doctoral Colloquium and the preconference workshops occur simultaneously on 15 October 2025. To attend a workshop, you must register for the full conference.

  • AoIR Early Career Scholars Workshop
  • Chronic Problems. Getting to Terms with the Temporality of Algorithmic Media
  • Creative Labour In Rupture? Gen-AI And Future Research Directions
  • The Model and the Reactor: Artificial Intelligences Infrastructure from Public, Private and Beyond
  • Online Hate Speech in Brazil: Methodological and Conceptual Challenges
  • Rethinking AI from the Ground Up: Building Sustainable AI Ecosystems for Local Communities
  • Ruptures in Algorithmic Surveillance: How to Resist?
  • Undergraduate Teaching Workshop

 


AoIR Early Career Scholars Workshop

Morning Session – Wednesday, 15 October 2025

Organizers:

Purpose: These half-day workshops bring early career researchers together to address the unique issues they face, develop strategies to achieve career goals, and foster a professional network. We define early career scholars as people who have finished the requirements for their terminal degree but have not yet advanced to the next level in their field or industry (e.g., in North America this would be tenure). AoIR's strength is its community. This workshop fosters community among emerging scholars and bridges the divide between junior and senior scholars. We aim to continue working toward making this community as inclusive and representative as possible.

The workshop addresses both challenges and opportunities unique to early career scholars in the many fields and forms of scholarship represented at AoIR. First, we must negotiate the transition from graduate student to early career professional, which requires a higher level of autonomy and figuring out the pragmatic and social aspects of a new work environment. Second, we must work quickly to establish ourselves in our fields and, often, secure funding. Third, we have increased service responsibilities. Fourth, after being guided by our advisors and committees for several years, we transition into mentorship roles. Fifth, we must learn to navigate to the next level of our careers while managing various degrees of precarity and ensuring time with family and friends. Being a junior scholar also comes with unique opportunities that we will explore. While recognition of internet scholarship has come a long way since AoIR's inception, junior scholars may still face hurdles in gaining recognition for their research (e.g., its subject or methods) when it comes to promotion. In fact, some of the challenges we face are also opportunities to work towards changing the ways in which internet scholarship is perceived and valued within the academic structure.

The issues we will cover depend greatly on the participants and will be driven by your questions and concerns. AoIR is an international and diverse organization, and we know that our experiences as scholars and educators vary by country, institution type, and field and are framed by our own identities (race, gender, etc.). Our goal is to discuss shared challenges and opportunities while understanding differences so that we can build our own professional networks at the same time that we create a diverse and inclusive community of scholars who will eventually become future career mentors within AoIR.

Format: Based on feedback from previous workshops, we will maintain a three-session format. The first session will be a fish-bowl discussion for workshop participants, intended as a chance to get to know each other as well as an opportunity to discuss the issues and opportunities we face collectively. The second session will be a panel of established scholars who can share their insight and experiences. We define established scholars as individuals who have continued to research and publish within their field, and who have been promoted within their given professional system. In the final session, participants will form small groups with senior scholars to address topics relevant to them (type of institution, academic system, etc.). Time will be left for follow-up questions and group discussion. We are also planning an informal social activity following the workshop.

Audience: This workshop is geared toward early career scholars who have completed their doctoral degree. Please do not register for this workshop if you have not completed your degree.

Goals:

1) Provide a safe, inclusive, and accepting space for the next generation of AoIR scholars to start building strong ties with each other, more established researchers, and the AoIR community.

2) Promote understanding of the breadth of academic work, including our shared experiences and differences.

3) Connect with established academics to build a stronger community of support for our careers.

4) Develop strategies to build and maintain a junior scholar community outside of the annual conference.



Chronic Problems. Getting to Terms with the Temporality of Algorithmic Media

Afternoon Session – Wednesday, 15 October 2025

Organizers: Ludmila Lupinacci-University of Leeds, UK; Ignacio Siles-Universidad de Costa Rica, Costa Rica; Christian Pentzold-Leipzig University, Germany

Time is a troubled terrain. For algorithmic media and users alike, how much time is spent online is of the utmost importance. The algorithms of social media platforms configure users through patterns of timings and tempos. Their operations aim to deliver content at moments that feel right. This means they add time parameters to their calculations, and personalization also works in the pacing and ordering of content. Time on platforms neither happens naturally, nor is there a shared time for all. Categories like real-time or liveness are not simply given but exist in the interplay of algorithms and engagement data. To users, algorithmically sequenced content may sometimes be surprising and ingenious, sometimes boring and repetitive, sometimes overwhelming and frantic. It disrupts the sense of real-time connection and instantaneity, instead requiring perpetual synchronization, and it opens opportunities for dissent and resistance (Coleman, 2020; Lupinacci, 2024). Users care about the time they gain or waste, enjoy or fritter away online as much as they care about the messages and (mis)information they encounter.

Research has only begun to understand algorithmic media temporalities and how they condition the rhythms, timings, and tempo of digital lives (Kaun et al., 2020; Kitchin, 2023; Pentzold, 2018). How users experience and manage the temporalities of algorithmically produced synthetic media (e.g., deathbots) is underexplored, too. This gap becomes most pronounced when moving away from Western contexts and their chrononormativity (Freeman, 2010).

This half-day workshop opens up a forum to think theoretically and empirically about digitally mediated temporalities. It features contributions that examine how and under what conditions users experience, assess, and deal with the ambivalent opportunities and constraints of having time and losing time that are an innate element of their daily engagement with algorithmic – or, algo-rhythmic – media (Miyazaki, 2012).

Goals of the Workshop

The workshop aims to bring together and connect scholars of temporality and algorithmic media so as to identify common ground, stimulate exchange, and initiate critical comparative research. Organizing the workshop in connection with this year's AoIR will allow us to particularly involve researchers from Latin America and interconnect their work and insights with research carried out by colleagues joining from other parts of the Global South as well as Europe, Australia, and North America.

With this setup, the workshop will enable participants to engage with chronometric inequalities and chrononormative concepts and how they play out in shaping the experience, assessment, and management of time in an environment of pervasive algorithmic media. This is important and urgently needed because time use is a critical factor for well-being, mental and physical health, and relationship quality (Giurge et al., 2020), with algorithmic media being blamed for eating up excessive amounts of screen time and thus inducing depression, anxiety, distress, and weak social connections (Vanden Abeele, 2021). Thus, next to fostering academic exchange and forging connections among researchers of time and algorithmic media, the workshop will serve to identify directions for advice and interventions on how to help users gain temporal agency vis-à-vis abundant, attention-hungry, and always-available algorithmic media.

Workshop Organization and Participants

The workshop is organized and will be facilitated by Ludmila Lupinacci (University of Leeds, UK), Christian Pentzold (Leipzig University, Germany), and Ignacio Siles (University of Costa Rica, Costa Rica). It includes 10 abstract-based short talks that alternate with two rounds of discussion and a world café format, guided by a set of questions shared by the facilitators and involving both workshop contributors and participants. The workshop will explore possibilities for multilingual discourse.

Workshop Contributions

The speakers and facilitators are scholars at different career stages (from early-career researchers to full professors) coming from and working in various parts of the world. Contributions address the social experience of and affective responses to mediated temporalities, the time dimension of AI and automation, and the management of time with and through algorithmic technologies.

The contributors are:

Rebecca Coleman (UK), Riccardo Pronzato (Italy), Rodrigo Muñoz-González (Costa Rica), Ludmila Lupinacci (UK), Vanessa Valiati (Brazil), & Felipe Soares (UK), Taylor Annabell (Netherlands), Peter Nagy (US) & Maria Goldshtein (US), Esther Weltevrede (Netherlands) & Anthony Burton (Canada), Bjørn Nansen (Australia), Anne Kaun (Sweden), Ignacio Siles (Costa Rica) & Christian Pentzold (Germany)



Creative Labour In Rupture? Gen-AI And Future Research Directions

Morning Session – Wednesday, 15 October 2025

Organizers: Tom Divon-The Hebrew University of Jerusalem, Israel; Zoë Glatt-Microsoft’s Social Media Collective; Rafael Grohmann-University of Toronto; M.E. Luka-University of Toronto

Since the spectacular launches of ChatGPT from OpenAI in 2022 and of DeepSeek R1 in 2025, we've seen an explosion of hype around––and financial, emotional, and creative investment in––generative AI systems that produce synthetic media in the form of text, images, videos, music, and voices. The growing adoption of these technologies in cultural spheres has widespread implications for the landscape of creative labour across legacy and platformized industries. Many fields are interrogating the impacts of GenAI on creativity and industries, including but not limited to creative intellectual property (IP) being scraped to train Large Language Models (LLMs), job loss and the transformation of the creative workforce, and the reproduction of existing global inequalities and biases by these systems across creative labour systems.

As these challenges unfold, urgent questions emerge about accountability, labour rights, and the broader cultural consequences of integrating GenAI into creative work. Without critical inquiry, these shifts risk deepening existing power imbalances while consolidating control in the hands of a few dominant tech entities. This pre-conference brings AoIR scholars together to shape future research on creative industries and labour in the GenAI era, fostering idea exchange, methodological refinement, and collaboration.

This half-day workshop will focus on two key areas of enquiry:

(1) Research Ruptures, Reactions, and Resistances

  • Ruptures: How is GenAI already reshaping creative workflows, from content production to audience engagement?
  • Reactions: What are the current local, national, and global policies, frameworks, and opportunities regarding AI for creative workers?
  • Resistances: How are creators and workers navigating these instabilities, forging new methods of bargaining, reappropriation, adaptation, solidarity and refusal?

(2) Shared Methodologies and Approaches

  • What methodological approaches enable nuanced and rigorous research on GenAI and cultural work and industries?
  • What can researchers learn from each other about experimental methodologies on GenAI and cultural work?
  • What are the methodological possibilities and challenges of conducting policy- and community-oriented research on GenAI and cultural work?

Format and Structure

This pre-conference is an interactive workshop designed to accommodate a maximum of 30 participants. The goal is to generate a shared vocabulary and to identify overlaps and collaborations across a growing field of research priorities. The workshop will conclude with a hands-on session where participants will collaboratively map out future research pathways and policy priorities.

Introductions: Research and Methodology

Each participant is invited to prepare a concise three-minute presentation outlining their current and anticipated methodological approaches to interdisciplinary AI research, framed within the workshop's themes: ruptures, reactions, and resistances. Those not currently engaged in AI research are welcome to bring a pressing question they are eager to explore in the future. The following confirmed participants—representing diverse perspectives from creative labour research—have agreed to offer their three-minute contributions first, to set the pace for the workshop: Nancy Baym, Arturo Arriagada, David Craig, Roseli Figaro, Alessandro Gandini, Daphne Idiz, Annette Markham, Vicki Mayer, David Nieborg, Caitlin Petre, Thomas Poell, Robert Prey, Godwin Simon, Jiaru Tang, and Julia Ticona.

Working Session: Roundtables

Participants will break into roundtables to explore shared interests from the first session in greater depth, with facilitators overseeing the discussions. Participants will discuss key concepts related to creative work and GenAI, including (but not limited to) methodologies, intellectual property and legislative shifts, influencer economies, algorithmic curation, and/or AI-generated content, with a focus on creative labour.

Plenary Discussion: Collaboration “Sprint”

Participants will reconvene to define how ruptures, reactions, and resistances to GenAI should shape future research. This session will focus on refining research agendas, developing projects and publications, sharing policy recommendations, and exploring innovative methodologies. This session will start with three-minute highlights from each roundtable. 

Target Audience

This workshop is designed for researchers working at the intersections of media, labour, and critical AI studies, digital platforms, and cultural industries. In particular, we welcome those grappling with the growing interest in AI within creative industries who are seeking ways to critically investigate these evolving intersections. Early-career researchers, PhD students, and those developing new methodologies are especially encouraged to participate.



The Model and the Reactor: Artificial Intelligences Infrastructure from Public, Private and Beyond

Full day session – Wednesday, 15 October 2025

Organizers: Fenwick McKelvey-Concordia University, Canada; Mónica Humeres-Universidad de Chile; Claudia López-Universidad Técnica Federico Santa María; Veridiana Domingos Cordeiro-University of São Paulo; Luciano Frizzera-University of Waterloo; Nicolas Chartier-Edwards – Institut national de la recherche scientifique

Big AI’s demands for this world are becoming clearer. Microsoft, in 2023, announced plans to build new data centers with nuclear power to fuel new energy-hungry models (Calma, 2023). Google and Amazon made similar announcements subsequently (da Silva, 2024; Olick, 2024). Plans to build nuclear-powered AI data centers clearly illustrate the scale and consequences of AI as a social blueprint — rendering clear “the choices (implicit or explicit) made in the course of technological innovation” and demanding reflection on “the grounds for making those choices wisely” (Winner, 1986, p. 18). Building on the material turn in Internet Studies (Hesmondhalgh, 2022; Sandvig, 2013), our preconference gathers scholars to explore ruptures against the growing cyberphysical project of “Big AI” (van der Vlist et al., 2024) or “AI as platform” (Mahnke & Bagger, 2024).

Our preconference has three objectives:

  1. Share findings and digital methods that expose AI's global technological footprint, with an emphasis on the Americas, or engaged and speculative research on alternative AI infrastructures, which may include local or regional infrastructure, the fediverse, frugal AI infrastructures, and decentralized and/or distributed infrastructures;
  2. Facilitate comparative policy research on measures to promote alternative AI infrastructures, as well as on the public interest and community benefits of these alternative infrastructures;
  3. Develop a joint statement with recommendations for a new infrastructure for AI, to be written collaboratively by discussants.

Together, our pre-conference seeks to cultivate an international research community dedicated to understanding AI's infrastructural impact and its alternatives. The conference offers international scholars a chance to develop collaborative projects as well as shape collective policy recommendations. Outputs directly advance this year's call for "strategies and tactics to address the ruptures caused by platformization", in this case of AI.

We focus our call on questioning what public interest infrastructure would look like for AI. Public interest AI refers to AI meant to "support those outcomes best serving the long-term survival and well-being of a social collective construed as a 'public'" (Public Interest AI, n.d.). The Paris Charter on Artificial Intelligence in the Public Interest (2025), published after the Paris AI Summit, aims to "encourage a more comprehensive and inclusive design of AI in the public interest, in terms of technology, organization and institutions that serve different jurisdictions and communities in attaining similar success." Public interest AI, however, is already a contentious term, not dissimilar to terms such as "AI for Good" or "Responsible AI" that can act as ethics washing (Bourne, 2024; Wagner, 2018). Scholarly attention is required to define public interest AI as a critical concept.

The preconference will be a full day to ensure there is time for presentations, networking, and collaborative activities. Participants will be selected through a peer-reviewed call with two tracks, one for presentations and a second for discussants and facilitators. Presentations will advance objectives 1 and 2 in the morning. The afternoon will leverage discussants and facilitators to develop collaborative research projects and synthesize research into a joint statement.

Our schedule can accommodate two parallel tracks to welcome 20 presentations and a total of 50 participants. A closing plenary will showcase key themes, with discussants offering reflections and key insights from the day.

Our preconference intends to be multilingual, with collaborators working in English, French, Portuguese, and Spanish. The steering committee will accept abstracts in all four languages.

The preconference is made possible by the Chaire de recherche du Québec sur l’intelligence artificielle et le numérique francophones and the Social Sciences and Humanities Research Council of Canada.

We welcome all attendees to participate in our preconference. To help us organize the pre-conference, please complete this survey to request a presentation slot, or to be a facilitator:
https://forms.cloud.microsoft/Pages/ResponsePage.aspx?id=hfFpVS_SE06YUM5bGrzS6BGNzWCmCiNHj1pdGOnUcTpUQlRIWlpDUkFWQjRaNDFJUUtFQVE2SFBHMS4u

We ask potential presenters and facilitators to please complete the form by 27 June 2025 to give us time to organize.



Online Hate Speech in Brazil: Methodological and Conceptual Challenges

Morning Session – Wednesday, 15 October 2025

Organizers: Gabriella Costa-ITS Rio, Brazil; Karina Santos-ITS Rio, Brazil; João Guilherme Santos-Instituto Democracia em Xeque; Leticia Sabbatini-Universidade Federal Fluminense; Tatiana Dourado-Instituto Nacional de Ciência e Tecnologia em Democracia Digital

The workshop will be conducted through thematic presentations, case studies, and interactive panel discussions. Facilitators will foster a collaborative environment for experience sharing and the development of practical solutions.

Target Audience

This workshop is intended for researchers, academics, technology experts, and other individuals interested in studying and combating online hate speech.

Abstract

This workshop will explore methodological challenges in the analysis and response to online hate speech in Brazil. It will discuss key conceptual issues and monitoring approaches related to this phenomenon, considering the national context and the specific characteristics of digital platforms. The event will bring together experts to share experiences and methodologies applied in research and initiatives aimed at mitigating hate speech.

The first challenge is conceptual: both international and Brazilian academic literature adopt a wide range of definitions to categorize harmful, uncivil, and dangerous speech. However, there is no consensus on the most appropriate terminology and classifications (Dourado et al., 2023; Da Silva & Neto, 2021; Silva & Francisco, 2021; Schäfer, Leivas, & dos Santos, 2015).

In this context, panelists Tatiana Dourado and Viktor Chagas are co-authors of an article proposing a typology of four types of harmful online discourse: uncivil speech, conspiratorial speech, hate speech, and dangerous speech (Dourado et al., 2024).

Additionally, the Pegabot Project, developed by the Institute for Technology and Society to support victims of online hate speech, has engaged in debates about the challenges of defining this phenomenon. The lack of a widely accepted academic definition hinders the development of effective solutions to ensure victims' access to justice.

Another important contribution to the conceptual debate comes from researcher Letícia Sabbatini, who conducts studies on gender-based political violence. Her work includes relevant definitions of hate speech, as presented in studies such as the Map of Gender Political Violence Against Women (Sabbatini et al., 2023). Thus, the workshop aims to foster debate on conceptual definitions, questioning whether the classifications currently used are the most appropriate.

The second major challenge is methodological. The discontinuation or monetization of data access by platforms such as X (formerly Twitter) and Meta has made it increasingly difficult to analyze online hate speech. Previously, it was possible to track, map, and visualize these occurrences with greater ease, but changes in data access policies have made monitoring more opaque. In light of this scenario, the workshop will promote a discussion on alternative methodologies, including data scraping techniques and new strategies for social media analysis.

Even on platforms that still provide some access to data, such as YouTube, challenges remain in efficiently mapping, categorizing, and filtering hate speech. To address this issue, facilitators Gabriella da Costa and Karina Santos, from the Pegabot project, along with researcher Fábio Malini, have been exploring the use of artificial intelligence for the automated analysis of hate speech.

Furthermore, the workshop will feature insights from João Guilherme Santos, director of Democracia em Xeque and author of various studies on social media platforms such as WhatsApp and YouTube. His work involves the use of large-scale data for analyzing online discourse. With these experts, the workshop will offer a collaborative space for developing innovative solutions to monitor online hate speech.

References

Da Silva, G. N. P., Silva, T. H. C., & Neto, J. D. C. G. (2021). Freedom of expression and its limits: An analysis of hate speech in the era of fake news. Revista Argumenta, (34), 415–437.

Dourado, T., Chagas, V., et al. (2024). Online disinformation in Brazil: A typology of discursive action of harmful political content on WhatsApp and Facebook. International Journal of Communication, 18, 25.

Sabbatini, L., et al. (2023). Map of Gender Political Violence on Digital Platforms. Niterói: coLAB/UFF.

Schäfer, G., Leivas, P. G. C., & dos Santos, R. H. (2015). Hate speech: From conceptual approach to parliamentary discourse. Revista de Informação Legislativa, 52(207), 143–158.

Silva, L. R. L., Francisco, R. E. B., & Sampaio, R. C. (2021). Hate speech on digital social networks: Types and forms of intolerance on Jair Bolsonaro’s official Facebook page.

 



Rethinking AI from the Ground Up: Building Sustainable AI Ecosystems for Local Communities

Afternoon Session – Wednesday, 15 October 2025

Organizers: Anjali Mazumder-The Alan Turing Institute, United Kingdom, University of Cambridge, United Kingdom; Nyalleng Moorosi-DAIR (South Africa, Global); Jatinder Singh-University of Cambridge, United Kingdom, RC-Trust, UA Ruhr, Germany; Luciana Benotti-University of Cordoba, Argentina, Fundación Vía Libre, Argentina; Nina Da Hora-Instituto da Hora, Brazil; Caleb Moses-Mila, Quebec, Canada; Shaimaa Lazem-City for Scientific Research and Technological Applications, Egypt

*The risk that ‘large-scale’ AI won’t serve local communities is real!*

Excitement over generative AI and ever-larger models, typically delivered through networked platform infrastructures, has overshadowed the importance of building systems for specific communities and local needs. That is, AI systems built for the "majority" often fail minority communities. With 88% of the global population living outside of the West, many transplanted systems will not work effectively and may harm the many diverse local communities within a country or across the "global south". Further, as states debate regulation and companies offer "assurance", these efforts are often far removed from, or exclude, minority communities and their contexts, needs, and other considerations.

Toward this end, we will convene technologists, researchers, civil society and others advocating for and building AI-related technologies for local communities. The goal is to share experiences, foster dialogue, and generate a corpus of lessons learned – from different perspectives, approaches, contexts and geographies – to explore what works and what doesn't in designing AI systems for and with local communities, and the roles of networked platforms within this space.

The workshop will include real-world stories, interactive exercises, and a "learnings from practice" fireside chat, with narratives from Brazil, Argentina, Oceania, South Africa and Egypt told by people who have built AI-driven systems for their local communities. We will facilitate exercises and discussion that get attendees to consider the approaches they would take regarding technology, considering specific roles, needs, barriers, and tensions (e.g. resources, power, system dependencies, political economy). The workshop culminates with insights from these different engagements, providing approaches and understandings of how to "build for the long tail" – the many different local contexts – and of the tensions involved in value sensitive design.

The workshop aims to “rupture” the dominance of the networked platforms of big tech and dominant Western views. We will do this by spurring collaboration, insights and perspectives across the range of participants to (a) re-think AI development and digital governance for local communities, (b) provide practical ways forward on approaches for understanding and building human-centred technology in and for local communities, (c) encourage coalitions including diverse voices in small, local contexts in AI and digital platform research, development and governance.

Session 1 (Interactive Storytelling & Surveying):

There will be 3 to 4 stories. For each story, encompassing a real-world situation, storytellers will first provide a brief synopsis of the situation in which digital technology was affecting their local communities. After, attendees will be surveyed on what they would do in that situation and why. Storytellers will then share what they did and why, bringing to light successes and challenges. Attendees will be re-polled to learn if they would change their approach, after having heard storytellers’ journeys.

Situations have inherent tensions between local values and Western views that may or may not be explicit, and may also be interpreted differently according to one’s sociocultural background, value systems, etc.

Session 2 (Facilitated Discussion):

We will have a facilitated discussion, identifying opportunities and challenges to building locally. In small groups, attendees will explore what is needed to build locally, rupturing the current ecosystem and supply chain of big tech and dominant Western views. A representative will report back to the larger group in an interactive discussion.

Exercises will help surface and interrogate discussions in an interactive participatory environment.

Wrap-up: Facilitators will summarise key takeaways and opportunities to rupture the status quo, fostering approaches for the AoIR community to collaborate with and build sustainable AI and supporting digital technologies for local communities.

The broader aim of this workshop is to produce a paper summarising findings and learnings that will be helpful for researcher and practitioner communities.



Ruptures in Algorithmic Surveillance: How to Resist?

Afternoon Session – Wednesday, 15 October 2025

Organizers: Kainen Bell-University of Illinois Urbana-Champaign, USA; Pablo Nunes-Centro de Estudos de Segurança e Cidadania (CESeC), Brazil; Gabriel Pereira- University of Amsterdam, Netherlands

Investment in algorithmic surveillance—ranging from facial recognition to Automated License Plate Recognition—has skyrocketed across the world over the past decade (see e.g., Pereira et al., 2025; Nunes et al., 2022). In response, a multitude of resistance strategies have emerged. Activists have mobilized public campaigns, proposed legislative reforms, and developed community-led interventions aimed at challenging and curtailing the expansion of these technologies. These efforts have created ruptures in algorithmic surveillance by exposing its limitations, harms (Redden, 2022), and biases (Browne, 2015; Benjamin, 2019), as well as the significant financial costs associated with its implementation (Nascimento, 2023).

While much of this resistance has been documented in specific regional contexts (see e.g. Young et al., 2019; Bell, 2023; Whitney et al., 2021), there is a growing need to understand how these struggles connect across geographies and how lessons from one region can inform resistance efforts elsewhere. This workshop departs from the Brazilian context to critically examine global resistance to algorithmic surveillance, drawing on experiences from multiple regions where activism, policy, and technological interventions have sought to mitigate surveillance harms.

By bringing together researchers, activists, and practitioners from diverse backgrounds, this workshop aims to map common challenges and strategies, facilitate cross-regional dialogue, and collectively envision pathways for strengthening resistance movements worldwide. Participants will explore the intersection of surveillance with race, class, and gender, highlighting how marginalized communities bear the brunt of these technologies while also leading innovative forms of resistance.

The session will be structured into three interactive components: (1) Roundtable Discussion, (2) Working Groups, and (3) Informal Networking. Through these interactive elements, the workshop will function as both a knowledge-sharing platform and an incubator for cross-regional collaboration, providing concrete takeaways that participants can apply to their advocacy and research. Ultimately, this pre-conference workshop offers a unique opportunity to consider the new meanings of resistance today and to explore our role as scholars in this context.

Part 1: Roundtable – Resistance to Algorithmic Surveillance in Latin America

The workshop will begin with a roundtable discussion featuring activists and researchers engaged in resistance efforts across Latin America. The organizers will join invited panelists, including Horrara Moreira, a lawyer, researcher, activist, and former coordinator of the national anti-surveillance campaign in Brazil, Tire Meu Rosto Da Sua Mira; and Débora Pio, a journalist and digital rights advocate based in Rio de Janeiro with expertise in exposing the socio-political impacts of surveillance in Brazil.

The discussion will explore: (1) What are the diverse and integrated tactics of resistance, and how do they lead to different forms of change? (2) How are activists creating and building upon networks to support their work? (3) How does resistance connect to broader issues of gender, race, and class in the Latin American context? (4) What strategies can be employed to combat the increasing use of Artificial Intelligence in public security? Following the panel, a Q&A session will allow participants to engage directly with speakers and contribute their perspectives.

Part 2: Small Working Groups – Strategizing Resistance

Participants will break into small groups to collaboratively discuss how key themes from the roundtable relate to their work and practice. Groups will brainstorm on the issues of: counter-surveillance strategies, legal challenges, and community mobilization. These discussions will be facilitated to encourage knowledge-sharing and the co-creation of actionable insights that are cross-regional.

Part 3: Informal Networking & Collaboration Building

The workshop will conclude with an informal networking session, providing space for attendees to connect, exchange ideas, and identify opportunities for collaboration beyond the session.

By fostering dialogue between scholars, activists, and policymakers, this workshop aims to advance ongoing discussions on algorithmic surveillance and resistance in Latin America, equipping participants with practical tools and networks to strengthen their work in the field.



Undergraduate Teaching Workshop

Afternoon Session – Wednesday, 15 October 2025

Organizers: Holly Kruse-Rogers State University, United States of America; Kelly Boudreau-Harrisburg University of Science and Technology, United States of America.

Teaching is central to many of our academic lives, whether we are graduate teaching assistants or junior or senior faculty members; tenure-track, tenured, or contingent; or experienced educators or instructors relatively new to teaching. In the classroom – on campus or virtual – our students' understandings of social media and internet use don't always align with broader press or research narratives. Moreover, and in response to this year's conference topic, new media technologies have been central to and within ruptures in undergraduate teaching environments and approaches. At no time in recent history was this disruption more evident than when classes moved online at the onset of the COVID-19 pandemic: a move that exposed ruptures between and among students – for instance, in access to broadband technology – that were not evident in the classroom.

This workshop brings educators together to discuss the difficulties and joys of teaching in, on, and around the internet. What do we learn from our students about the internet, how are we using the internet to teach, and what's the best way of bringing AoIR research into our classrooms? How do we use the internet in teaching when our students don't have broadband access or aren't digitally savvy, or when our institutions do not offer robust technical infrastructures or support? For what kinds of creative, information, or other industries are our students really prepared?

This workshop allows organizers and participants with a range of teaching experiences that span types of institutions, student populations, and institutional support roles to discuss issues like teaching loads, expectations of service to students and administration, and international institutional terminologies. For that reason, the workshop is discussion-based, with a different broad topic covered each hour.

Logistics

Prior to the workshop, participants fill out a questionnaire so that we have a sense of participants' teaching contexts and expectations. All registered participants can share their thoughts in a shared Google doc, which we use as a resource that participants can refer to after the event.

We tailor the workshop to focus on the experiences and resources brought forth in responses to the questionnaire and expand on them through discussion. The first hour focuses on introductions and on outlining the key concerns, questions, and issues resulting from questionnaire responses. The second hour focuses on sharing concerns and successes in teaching, along with strategies, assignments, or techniques that center digital media and internet research in a pedagogical setting. During the third hour, participants may work in smaller groups on topics determined by workshop participants, with each participant joining the group that best addresses their needs and expectations. The fourth hour includes a summation of the group work and of the afternoon's discussion, as well as plans for documenting and sharing strategies and materials mentioned during the workshop.

The organizers intend to adhere as closely to the structure described above as possible and to give participants substantive takeaways at the end of the workshop, while also being flexible so that the workshop best fits participants' needs. We hope that in Rio de Janeiro we will attract and benefit a range of people that represents the global reach of internet research and pedagogies.

This workshop adheres to AoIR’s Statement of Principles and Statement of Inclusivity (https://aoir.org/diversity-and-inclusivity), which is a commitment to academic freedom, equality of opportunity, and human dignity, and which supports at its conferences “A civil and collegial environment rooted in a belief of equal respect for all persons. Such an environment, among other things, should encourage active listening and awareness of inappropriate or offensive language.”


