Preconference Workshops

30 October 2024

Conference registration is available here.

AoIR hosts several preconference workshops the day before the main conference. Attendees must register for the full conference to attend a preconference workshop. All workshops have a US$15 fee charged during registration. Refunds on workshop fees are only available if the entire registration is cancelled, and are then subject to the organization's cancellation and refund policies. This fee does not apply to the Doctoral Colloquium.

If you are accepted into the Doctoral Colloquium, do not register for any preconference workshops. The Doctoral Colloquium and the preconference workshops occur simultaneously on 30 October 2024. To attend a workshop, you must register for the full conference.

 


AoIR Early Career Scholars Workshop

Morning Session – 30 October 2024

Organizers:

Purpose: These half-day workshops bring early career researchers together to address the unique issues they face, develop strategies to achieve career goals, and foster a professional network. We define early career scholars as people who have finished the requirements for their terminal degree but have not yet advanced to the next level in their field or industry (e.g. in North America, tenure). AoIR’s strength is its community. This workshop fosters community among emerging scholars and bridges the divide between junior and senior scholars. We aim to continue working toward making this community as inclusive and representative as possible.

The workshop addresses both the challenges and the opportunities unique to early career scholars in the many fields and forms of scholarship represented at AoIR. First, we must negotiate the transition from graduate student to early career professional, which demands a higher level of autonomy and the ability to work out the pragmatic and social aspects of a new work environment. Second, we must work quickly to establish ourselves in our fields and, often, secure funding. Third, we have increased service responsibilities. Fourth, after being guided by our advisors and committees for several years, we transition into mentorship roles. Fifth, we must learn to navigate to the next level of our careers while managing various degrees of precarity and ensuring time with family and friends. Being a junior scholar also comes with unique opportunities that we will explore. While recognition of internet scholarship has come a long way since AoIR’s inception, junior scholars may still find themselves facing hurdles in gaining recognition for their research (e.g. its subject or methods) in terms of promotion. In fact, some of the challenges we face are also opportunities to work towards changing the ways in which internet scholarship is perceived and valued within the academic structure.

The issues we will cover depend greatly on the participants and will be driven by your questions and concerns. AoIR is an international and diverse organization, and we know that our experiences as scholars and educators vary by country, institution type, and field and are framed by our own identities (race, gender, etc.). Our goal is to discuss shared challenges and opportunities while understanding differences so that we can build our own professional networks at the same time that we create a diverse and inclusive community of scholars who will eventually become future career mentors within AoIR.

Format: Based on feedback from previous workshops, we will maintain a three-session format. The first session will consist of a fish-bowl discussion for workshop participants. This discussion is intended as an opportunity to get to know other participants as well as to discuss the issues and opportunities we face collectively. The second session will be a panel of established scholars who can share their insights and experiences. We define established scholars as those individuals who have continued to research and publish within their field, and who have been promoted within their given professional system. In the final session, participants will form small groups with senior scholars to address topics relevant to them (type of institution, academic system, etc.). Time will be left for follow-up questions and group discussion. We are also planning an informal social activity following the workshop.

Audience: This workshop is geared toward early career scholars who have completed their doctoral degree. Please do not register for this workshop if you have not completed your degree.

Goals:

1) Provide a safe, inclusive, and accepting space for the next generation of AoIR scholars to start building strong ties with each other, more established researchers, and the AoIR community.

2) Promote understanding of the breadth of academic work, including our shared experiences and differences.

3) Connect with established academics to build a stronger community of support for our careers.

4) Develop strategies to build and maintain a junior scholar community outside of the annual conference.



AI, Ethics, and the University

Afternoon Session – 30 October 2024


Organizers: Sarah Florini, Alexander Halavais, Jaime Kirtz, Nicholas Proferes, Michael Simeone, Shawn Walker; Arizona State University, United States of America

Like other organizations, universities are actively investigating the ways in which generative AI and large language models can be integrated into their work. Initial concern over plagiarism and cheating has been joined by opportunities to personalize learning, to automate administrative and instructional processes, and, perhaps most importantly, to help individuals and organizations make use of these new technologies in ethical ways. Many universities, including the facilitators' own, are seeking to rapidly adopt and proliferate these nascent technologies, often in partnership with existing and emerging commercial providers. Earlier this year, Arizona State University entered into an agreement with OpenAI to provide a site license for ChatGPT Enterprise, and is actively implementing it in instruction, research, and administration. However, there are significant potential pitfalls in this new gold rush. For example, universities may fail to prioritize the safety and privacy of users (including those in vulnerable positions), or to consider the potential dangers or deleterious effects of experimenting with these approaches. Universities must contend with the complex and troublesome political economy of these tools, in addition to their environmental consequences. And universities must consider how the adoption of these technologies creates ontological problems in terms of regimes of truth and expertise. Given that the ways universities engage with these new technologies are likely to act as a template for wider adoption, getting it right in these contexts is important.

We are at an inflection point. There is a limited window during which technology scholars can shape the deployment of these tools before they become obdurate (Pfaffenberger, 1992, p. 498). During a period when there are more questions than answers about how the use of these technologies affects legal structures, government action, and the structure and function of industries and knowledge work, there is a desperate need for scholars of technology to contextualize these changes against a broader history of technology adoption, to weigh the ethical challenges they present, and to act as a counterweight to calls to, once again, move fast and break things. To help build a network of scholars interested in shaping this inflection point, we propose a half-day preconference on AI, Ethics, and the University at AoIR.

Attendees & Organization

Many AoIR attendees are likely already involved in how AI is being used at their own universities, or wish to play a more active part in that work. Our aim is to gather these voices to share experiences, stances, and aims. By the end of the preconference we hope to have established a set of shared core questions we should be addressing as scholars and public intellectuals, along with a way forward for establishing frameworks for adoption, appropriate restrictions on data collection and use, guidelines for use within the university, and ways universities may leverage their social position to shape how publics, governments, and industry use AI.

Attendance is open to all AoIR delegates. We will contact registrants ahead of the workshop and ask participants to provide answers to a short set of questions, along with a brief position statement. The facilitators will then use this initial information to organize a set of guided conversations.

We anticipate discussion points may include:

  • In what ways may stakeholders in AI-mediated contexts be better informed about the ways in which their creative efforts can be used and misused by generative AI systems?
  • How do we appropriately indicate our use of new AI tools in our own work as faculty, students, or administrators in ways that are easily discoverable?
  • How might those stakeholders be better represented in decisions related to the adoption of AI systems?
  • When should students or faculty be able to choose to use AI tools and what are the conditions under which they may choose not to use these tools?
  • If contracting with commercial suppliers of AI, to what degree can and should we insist on elements of transparency, portability, intellectual property, privacy, and control?
  • What role should universities play in promoting non-commercial alternatives to various forms of artificial intelligence?
  • How best might we explore the possibilities of new AI technologies within constrained spaces before adopting them at scale?
  • Should universities play an important role in modeling ethical adoption and non-adoption of AI tools, and how do we better document and communicate these processes and their value to industry and to policymakers?
  • How could universities take a leadership role in evaluating AI systems for more than simply perceived performance? What non-performance related success measures should be taken into account when evaluating AI systems?

AoIR provides an ideally situated space in which to share these efforts and engage in coordination with global partners. Our aim is to close the workshop with a roadmap to move toward a collective or consensus statement that may be shared more widely.



Alternative Platform Archives: Methods, Politics, Impact

Full Day Session – 30 October 2024


Organizers: Robyn Caplan-Duke University, US; João C. Magalhães-University of Groningen, The Netherlands.

We are at a critical juncture for the future study of platforms and other Internet infrastructure. Researchers who study platforms have always had difficulties navigating access. As noted by Bonini and Gandini (2020), it is not just the proprietary algorithms owned by technology companies that are “black boxed,” but the industry itself. While there was a brief period between 2016 and 2018 when technology companies appeared to open themselves up for external researchers to conduct ethnographic work or interviews, access to these companies remains rare and is increasingly limited. Quantitative data are also becoming scarcer, as major platforms such as X (formerly Twitter) and Reddit have made their platforms more difficult to study, severely limiting free API access for researchers (Gilbert and Geurkink, 2024) or eliminating free access altogether (Gotfredsen, 2023).

In this context, the need for building and maintaining alternative archives about platforms becomes urgent. Indeed, a number of scholars have been dealing with these problems by creating their own archives: circumventing platforms by asking people to donate their own data, as in Meredith Clark’s Archiving Black Twitter project (https://www.archivingtheblackweb.org/); keeping track of public statements, as in Michael Zimmer and team’s Zuckerberg Files (https://zuckerbergfiles.org/); and collecting, curating, and giving access to sets of platform public documents, as in the Platform Governance Archive (https://www.platformgovernancearchive.org/). For historians, meanwhile, questions of archives and access to documents have always been central. But these questions take on new meaning with born-digital sources.

This will be a full-day workshop on alternative archives for platform governance research. Workshop participants include creators of alternative archives, such as Meredith Clark (Northeastern University, Black Twitter Project), Laura Manley (Harvard University, Facebook Files), and Christian Katzenbach, Dennis Redeker, and Daria Dergacheva (University of Bremen, Platform Governance Archive). We will be joined by media historian Heidi Tworek (University of British Columbia) to guide us in thinking about what we need to be collecting now to study platforms ten, twenty-five, or fifty years from now.

The workshop will be split into two sessions of three hours each. In the first session, participants will explain their initiatives, unpacking the methods they developed to create their archives and to circumvent (or not) the roadblocks they faced while doing so. Then, participants will discuss two main topics. The first is the politics of archiving, that is, how decisions around archiving and enabling access intersect with gender, race, and class, and with researchers’ own positionality. The second is the impact of alternative archives. In reflecting on who has (and has not) used their archives, and for what, participants will be invited to consider the concrete steps needed to make their work useful for multiple audiences. Throughout the workshop, we will ask participants to imagine the new types of archives we could create using the data they are collecting for their own research, including policy documents collected over time, screenshots of platforms at various points in time (i.e. version histories), oral histories, public statements, and other media coverage.

We hope the workshop will not only educate scholars (particularly young scholars) about the types of alternative archives already available, but also train these scholars in how to create and share data as they progress through their work. The goal of the day is field-building and identifying a common need to share resources to foster the development of platform governance research in the future. We invite scholars at every level who work on existing platforms, as well as on “dead and dying” platforms that are being lost to Internet history as they “fail, decline, or expire” (McCammon & Lingel, 2022).

This workshop builds on an event organized in June 2023 by the Data & Society Institute on “Access, Archives, and Workarounds.” This new version of the workshop is hosted by the Platform Governance Research Network (PlatGovNet).



AoIR Ethics: Ethics & Literacies for AI Usage in the Research Process

Morning Session – 30 October 2024


Organizers: Michael Zimmer-Marquette University, USA; Ylva Hård af Segerstad-University of Gothenburg, Sweden; Kelly Quinn-University of Illinois at Chicago, USA; Heidi A. McKee-Miami University, USA.

Artificial intelligence (AI) is a pervasive phrase today, often used to describe a wide range of technologies, including large language models, deep-learning image processing, augmented reality, and systems that can generate text, imagery, audio, and synthetic data. Ubiquitous and yet novel, AI is rapidly advancing academic research, influencing not only its outcomes but also its praxis. What are the current uses of AI in internet research? What is the impact of AI on internet research practices? What are the ethical implications of such use, and how should we, as members of a research community, attend to these? This workshop addresses these questions by providing an opportunity to examine and discuss the use of AI in research and its impact on research practices and ethics. This is a stand-alone workshop, but its outcomes will partner well with the proposed afternoon workshop “AoIR Ethics: Where Do We Go from Here?”

AI-powered tools promise to advance all stages of the academic research process, from aggregating and summarizing extant literature, to generating and collecting data, to writing and visualizing results. Institutional guidance on using AI in each of these stages is slowly emerging, often in support of the use of such tools in a ‘responsible and ethical manner,’ but lacking detail on the research-specific considerations that should be made. In a similar vein, publishers have issued directives requiring acknowledgement of the use of AI in research reporting, but frequently do not opine on the ethical use of such tools. To address this lacuna, this workshop will focus on the ethics and literacies involved in the use of AI-powered tools in the research process.

This half-day workshop will be organized in three modules and will begin with opening remarks from Dr. Casey Fiesler (tentatively accepted invitation; https://caseyfiesler.com/about/), who will lay the groundwork for the session by describing current uses of AI in research and their associated ethical implications. The second module will proceed with small-group work in which participants address cases and provocations solicited in advance from members of the AoIR Ethics Working Committee and the broader AoIR community, covering such topics as the use of generative tools for writing and data visualization, developing AI models to detect patterns and features in data, the use of AI-generated synthetic data, the implications of AI tools for data privacy and property rights, and the use of virtual and augmented reality in experimental settings. Finally, the third module will offer guidance and strategies for enhancing AI literacy among researchers at all levels to aid in their understanding of the practical and ethical implications associated with the adoption and use of emerging AI technologies. Throughout the session, participants will be empowered to share their experiences with AI, along with any attendant questions and controversies they have encountered.

The workshop facilitators bring a range of experience with internet research and applying AI tools in research and classroom settings, designing programs for AI literacy, and addressing the ethical dimensions of new research tools and methodologies.



AoIR Ethics: Where Do We Go From Here?

Afternoon Session – 30 October 2024


Organizers: Michael Zimmer-Marquette University, USA; Ylva Hård af Segerstad-University of Gothenburg, Sweden; Kelly Quinn-University of Illinois at Chicago, USA; Aline Shakti Franzke-University of Tübingen, Germany.

Since the inception of the AoIR ethics guidelines, several adaptations have been made to accommodate emerging technological advancements and evolving research contexts (Ess, 2002; Markham & Buchanan, 2012; franzke et al., 2019; Zimmer, 2022). These revisions have incorporated important values of context sensitivity, cross-cultural awareness, and research-phase sensitivity into the guidelines, during a period in which internet-related technologies grew in prominence and research significance. More recently, internet researchers have faced new challenges stemming from the rise of generative artificial intelligence platforms, reduced access to data through more restrictive content-sharing/API policies, open hostility towards research on particular topics and communities, and threats of harm to researchers themselves. Such events require renewed discussion of how to strengthen and adapt the AoIR ethics guidelines to meet this developing environment.

In addition, more and more disciplines outside AoIR’s traditional communities rely on data and methods that would benefit from AoIR’s ethics guidance, but struggle with how best to apply our guidelines within their fields. Meanwhile, new regulatory frameworks for research have fostered expanded implementations of the AoIR ethics guidelines, along with some disturbing reports of ‘ethics washing.’

To foster further dialogue and action on the future directions of the AoIR ethics guidelines, both in terms of their content and how to disseminate them to broader communities, the AoIR Ethics Working Group proposes this half-day preconference workshop to bring together all interested members of the AoIR community to discuss and organize how this work might be carried out.

The goals of this workshop are to consider how the various AoIR ethics documents have been put into practice, to identify existing gaps and limitations, and to create an action plan for developing the next iteration of ethical guidance. The workshop will also contemplate the format of future guidelines: for example, whether they should be static documents or interactive decision-support tools, and whether they should be accompanied by instructional videos or case studies. We will consider the development of an outreach plan and accompanying materials. Modes of sharing resources related to teaching ethical approaches to the research and use of emerging technologies will also be discussed.

To facilitate the goals of the session, three distinct workshop blocks are planned. The first block will include a general discussion of the existing AoIR ethical guidance and its adequacy for the needs of today’s research environment. Time permitting, this discussion will also include a brainstorming segment on the research challenges that participants are currently facing. The second block will be a strategy session to deliberate and develop a plan for how the guidelines might be further adapted to address artificial intelligence technologies as they are used in research praxis. The third block will have a tactical focus: developing priorities for a new iteration of the AoIR ethics guidelines, developing an outreach plan along with accompanying materials, and identifying any additional extensions requiring development or elaboration. The primary outcome of this workshop will be an action plan for the next iteration of the AoIR ethics guidelines.



Generative Artificial Intelligence as a Method for Critical Research

Full Day Session – 30 October 2024


Organizers: Minna Vigren-Aalto University, Finland; Nabila Cruz-University of Sheffield, UK; Elina Sutela-University of Turku, Finland.

The development of artificial intelligence (AI) systems and their deployment in society have given rise to serious ethical dilemmas and existential questions. The previously unimaginable scale, scope, and speed of mass data harvesting, the black-boxed classification logics applied to the data, the exploitation of ghostworkers, and the discriminatory uses of the systems have raised concerns that AI systems reproduce and amplify social inequalities and reinforce existing structures of power (see e.g. Benjamin, 2019; Gray & Suri, 2019; Crawford, 2021; Heilinger, 2022). Moreover, AI contributes significantly to the planetary crises, from the mining of raw minerals and massive energy consumption to vast amounts of e-waste (Perkins et al., 2014; Crawford, 2021; Taffel, 2023; de Vries, 2023). AI-generated images have flooded social media feeds (Kell, 2023; Lu, 2023), deeply affecting access to information, knowledge generation, and the spread of misinformation (Partadiredja, Serrano & Ljubenkov, 2020; Whittaker et al., 2020). The lack of regulation and security concerns have led to policies curtailing GenAI use in news organisations and universities (Weale, 2023; WIRED, 2023). Given these problems, the adoption of GenAI in academia needs careful and critical deliberation.

In the workshop, we reflect on the ways generative artificial intelligence (GenAI) could be used as a method for critical research, and on the ethical and practical considerations this implies. We approach GenAI following Kate Crawford’s definition of AI as “a technical and social practice, institutions and infrastructures, politics and culture” (Crawford, 2021, p. 8). The focus of the workshop is on GenAI applications (such as ChatGPT, DALL-E, Midjourney, and Gemini), which use complex algorithms to learn from libraries of training data and, when prompted by users, produce media outputs such as text, images, and sounds.

The concerns for critical researchers interested in using GenAI as a method are numerous. The development of the systems and applications has been largely driven and controlled by the tech industry. The industry’s market dominance means that the companies profit not only from the services they sell but also from the technological knowledge they produce (Baradaran, 2024). At the same time, proprietary systems limit the choices available to researchers and users, making it almost impossible to investigate the social and ecological sustainability of the systems or the ethics of their technical construction. Given the massive sums the tech industry has spent in recent years and its exponential projections for the near future (e.g. Nienaber, 2024; Grace et al., 2024), it can be assumed that the mainstreaming of AI applications, from predictive technologies to GenAI, will continue their domestication into users’ everyday lives and into fields including administration, policing, health care, education, journalism, and academic research. There is thus a need for a deeper understanding of the entire AI system, and for us as researchers to engage thoroughly in self-reflection on what our role in it is and should be. In the workshop, we will address questions such as:

  • How does GenAI reflect and produce social relations and understandings of the world? How do we unpack what is and is not meaningful to understand in the datasets and classifications?
  • What are the political economies of the construction of AI systems? What are the wider planetary consequences? How should researchers address these issues when working with GenAI?
  • How can we resist the hegemonic and often naturalised narratives of the AI industry and provide alternatives with critical research? How can critical researchers engage in decolonising AI systems?
  • How can critical research interventions participate in the radical reimagining of AI’s technological development and role in society?

We invite everyone interested in the topic (no previous experience with GenAI is required) to come and explore the possibilities and concerns of using GenAI in research, share ideas, identify alternatives, experiment with a GenAI method, and network. The full-day workshop will consist of three interlinked parts. In the morning, we review some of the central questions collaboratively using the ‘world café’ method (https://theworldcafe.com), followed by a summary of the key concerns in the field of critical GenAI research. In the afternoon, participants have a chance to experiment with a workshop method developed by the facilitators, in which an AI image generator is used to imagine sustainable digital futures. The organisers will provide tablets with the GenAI application. In the final session, we share reflections on and further elaborate the method experiment, discuss the needs and concerns of critical research, and leave space for networking and exchanging ideas.



Hostile responses to research on online communities: how can we safeguard researchers?

Afternoon Session – 30 October 2024


Organizers: Helena Webb, Liz Dowthwaite, Virginia Portillo, Peter Craigon, Ephraim Luwemba, University of Nottingham, United Kingdom

Suggested audience: Researchers and research managers involved in the study of online communities. All levels of experience and expertise welcomed.

Workshop aims: To discuss the risks involved when researchers study potentially hostile online communities and to identify steps that research institutions can put in place to safeguard them.

We are researchers at the University of Nottingham conducting a project on practices for effective, safe, and responsible research on online communities. Our project is called ‘EFRESH’ and is being conducted in collaboration with the Internet Society. We believe that online communities are an essential focus of research. These communities provide a digital space for users with shared interests to meet. Researching how they are formed and organised helps us understand contemporary phenomena such as (online) identity formation, collective discourses and action, and the spread of (mis)information.

However, researchers in this area increasingly report hostile responses when researching online communities. This particularly occurs when the communities being studied share extreme or outsider viewpoints or behaviours: for instance, white supremacist groups, conspiracy theorist groups, cheating forums for online gamers, COVID denial forums, etc. Members of these communities may react negatively to the awareness that they are being studied and undertake actions such as:

  • sending abusive/threatening messages to the researcher and/or the researcher’s family members;
  • sharing personal information about the researcher online (doxing);
  • creating embarrassing Photoshopped images of the researcher and sharing them online;
  • making complaints to the researcher’s university;
  • posting false information about the researcher online.

These actions are most likely to occur during data collection and the dissemination of findings. They are also most likely to occur in environments in which negative behaviours are normalised. Some platforms (e.g. Kiwi Farms, 4chan) have reputations for minimal content moderation and some communities incorporate the harassment of outsiders as part of their group identity. Junior researchers, female researchers, researchers of colour, and researchers in the LGBTQIA+ community are most vulnerable to these kinds of hostile responses. Experiencing them can cause psychological trauma and fatigue, as well as reputational damage. They can also delay the research and publication process, potentially harming researchers’ career progression.

The presence of these risks means it is necessary to safeguard researchers when they study online communities. Current literature shows that researchers and research groups are often required to develop their own strategies for this, with little organised input or support from research institutions. In this interactive workshop we will explore steps we are taking in a current project to address this absence of institutional-level support.

Using a range of research activities, we seek to identify best practices for safeguarding researchers in this kind of work, particularly emphasising the need for institutions to acknowledge the extent of the problem and to develop proactive strategies to protect researchers. We will use our findings to prepare safeguarding guidance to share across research institutions.

We welcome all researchers and research managers with an interest in this issue to join this workshop. We want to share our project findings with the internet research community in order to receive feedback on them and maximise the contribution they can make. The workshop will be highly interactive. We will present our emerging findings and draft institutional guidance and open them up for discussion. Break-out discussions will provide an opportunity to identify improvements to the guidance and strategies to promote its adoption across research institutions. We will also assess interest in creating a joint report on the workshop for publication.

Proposed workshop outline

  • Introductions, project overview, workshop overview
  • Small group discussion activity: risks involved in the study of online communities. The scope and scale of the problem
  • Presentation: project findings and draft guidance
  • Small group discussion activity: reflecting on the guidance. What can be improved and what can be added?
  • Plenary discussion: how can we encourage research institutions to acknowledge the issue and put appropriate safeguarding in place? Next steps for progress in this area.



Undergraduate Teaching Workshop

Afternoon Session – 30 October 2024


Organizers: Holly Kruse-Rogers State University, United States of America; Emily van der Nagel-Monash University, Australia; Kelly Boudreau-Harrisburg University of Science and Technology, United States of America.

Teaching is central to many of our academic lives, whether we are graduate teaching assistants or junior or senior faculty members; tenure-track, tenured, or contingent; experienced educators or instructors relatively new to teaching. In the classroom (on campus or virtual), our students’ understandings of social media and internet use don’t always align with broader press or research narratives. Moreover, and in response to this year’s conference topic, as the mission of universities becomes ever more vocation-focused, our roles as educators often include preparing students for careers in “industry.” University marketing material highlights the career opportunities for which undergraduates will be prepared, and there is a push to include ‘industry experiences’ within degrees.

This workshop brings educators together to discuss the difficulties and joys of teaching in, on, and around the internet. What do we learn from our students about the internet, how are we using the internet to teach, and what is the best way of bringing AoIR research into our classrooms? How do we use the internet in teaching when our students don’t have broadband access or aren’t digitally savvy, and when our institutions do not offer robust technical infrastructure or support? For what kinds of creative, information, or other industries are our students really prepared?

As professors with teaching experience spanning types of institutions, student populations, and levels of institutional support, we understand that there are no one-size-fits-all solutions to teaching in ever-changing technological and social contexts. Building on the last three years of workshops, this year’s workshop also attends to our growing knowledge of how teaching loads, expectations of service to students and administration, institutional terminologies, and more differ around the world. For that reason, the workshop is discussion-based, so we can all learn from, and with, one another.

The workshop will focus on the experiences and resources brought forth in responses to a pre-workshop questionnaire and expand on them through discussion. The first hour focuses on introductions and on outlining the key concerns, questions, and issues emerging from the questionnaire responses. The second hour focuses on sharing strategies, assignments, and techniques for teaching centered on digital media and internet research. During the third hour, participants work in smaller groups on topics determined by the workshop participants; each participant joins the group that best addresses their needs and expectations. The fourth hour includes a summation of the group work and discussion of plans for documenting and sharing the strategies and materials discussed throughout the workshop.

The organizers intend to adhere strictly to the structure described above and to give participants substantive takeaways at the end of the workshop.

This workshop adheres to AoIR’s Statement of Principles and Statement of Inclusivity (https://aoir.org/diversity-and-inclusivity), which is a commitment to academic freedom, equality of opportunity, and human dignity, and which supports at its conferences “A civil and collegial environment rooted in a belief of equal respect for all persons. Such an environment, among other things, should encourage active listening and awareness of inappropriate or offensive language.”



Using Lego to Visualise Sociotechnical Challenges: A Creative Methodology Workshop

Morning Session – 30 October 2024


Organizers: Alexander Hardy-University of Liverpool, United Kingdom; Suzanne McClure-University of Exeter, United Kingdom; Simeon Yates-University of Liverpool, United Kingdom; Jeanette D’Arcy-University of Liverpool, United Kingdom; Gianfranco Polizzi-University of Liverpool, United Kingdom; Rebecca Harris-University of Liverpool, United Kingdom.

Our half-day, three-hour workshop employs methods developed over the last year by our team at the Universities of Liverpool and Exeter, working on attitudes to data across varying areas of the UK workforce and research landscape. As a team, our shared interests lie in the sociotechnical: the everyday but complex interactions between people, technical systems, and devices. By playing ‘seriously’ (de Saille et al., 2022) with LEGO to create models of complex sociotechnical phenomena, participants are encouraged to creatively visualise their relationships with data. This workshop may appeal to those interested in creative methodologies, interdisciplinary research, data ethics, and sociotechnical interaction. It is designed to demonstrate, through active participation and engagement, how this method produces rich qualitative research data that compares favourably with other qualitative approaches.

This approach builds upon the work of Coles-Kemp, Jensen, and Heath (2020), who held workshops for participants to outline their perceived cyber (in)securities. Other studies have focused on risk visualisation (Hall, Heath, & Coles-Kemp, 2015) and everyday data security. Asprion et al. (2020) similarly utilised LEGO Serious Play as an educational tool for visualisation; de Saille et al. (2022) used LEGO as a participatory method for health and social care; and Rashid et al. (2020) used LEGO as a tool for wargaming cyberattacks. LEGO has been used across numerous disciplines due to its flexibility and utility as a method for engaging diverse groups.

In this workshop, we will introduce our team’s research, which has focused on attitudes to data in the workplace to inform future policy direction in the UK government with our partners and funders DSTL. Additionally, colleagues have sought to illuminate enablers of better access to smart data services for researchers in their advisory role for the Smart Data Research UK (formerly Digital Footprints) programme, using creative workshops as a tool to engage with experts and laypersons alike. Our research emphasises the importance of sociotechnical factors in decision-making and highlights important attitudes to the use of data, the accessibility of data, levels of data awareness, and perceived security threats.

Participants in our workshop are given an introductory skills-based task, then encouraged to build, in groups of four or more, a larger model visualising sociotechnical challenges based on set research questions. This task is complex and multifaceted, requiring collaborative work and discussion within groups. Upon completion of the task, volunteers are sought from each group to narrate their model, exploring the meaning behind its key components.

Time is allowed for brainstorming, and participants will be asked to reflect on a series of prompts to assist with their task. Participants are provided with a colour-coding guide to help visualise their attitudes towards different types of data. The exercise should also demonstrate how these attitudes can vary based on the context of a particular data flow and on whether that data is for personal or professional consumption. Participants are encouraged to map their data use in their personal and professional lives, using the colour-coding guide and annotation to answer a series of research questions. This frames the exercise, and each group receives a different task, allowing for additional variety and experience sharing.

The goal is for each group to produce personal/professional visualisations highlighting attitudes to data. Groups are asked to share and discuss their models, which serves as a reflective process while also providing rich ethnographic data. The workshop lasts three hours, and all necessary equipment is provided. Overall, the workshop aims to explore data use among research participants and to share our reflections on the development of LEGO workshops and associated best practices.

Our workshop has a wider conceptual value. We aim to offer key contributions to debates around the flow of, and interactions between, personal and professional data; the use of data in the workplace; concerns about surveillance; everyday security dilemmas; and the accessibility and availability of data for innovation and beyond. Furthermore, we aim to demonstrate the value of creative methodologies in exploring complex phenomena and to provide an innovative experience for all participants.

References

Asprion et al. (2020). Exploring Cyber Security Awareness Through LEGO Serious Play Part I: The Learning Experience. Management, 20, p. 22.

Coles-Kemp, L., Jensen, R.B. and Heath, C.P. (2020). Too much information: Questioning security in a post-digital society. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-14).

de Saille, S., Greenwood, A., Law, J., Ball, M., Levine, M., Vallejos, E.P., Ritchie, C. and Cameron, D. (2022). Using LEGO® SERIOUS PLAY® with stakeholders for RRI. Journal of Responsible Technology, 12, p. 100055.

Hall, Heath and Coles-Kemp (2015). Critical visualisation and rethinking how we visualise risk and security. Journal of Cybersecurity, 1(1), pp. 93-108.

Rashid et al. (2020). Everything is Awesome! Or is it? Cyber Security Risks in Critical Infrastructure. In Critical Information Infrastructure Security: 14th International Conference, CRITIS 2019.