Jonathan Gray’s #AoIR2016 Presentation: Reshaping Data Worlds for AoIR 2016

Reshaping Data Worlds for AoIR 2016

By jwyg | Originally Published: October 5, 2016

The following is a short video on Reshaping Data Worlds prepared for the 17th annual meeting of the Association of Internet Researchers – AoIR 2016, themed “Internet Rules!” – which takes place in Berlin on 5–8 October 2016. It is part of a session on Big Data Meet Grassroots Activism organised by the DATACTIVE project.

 

Reshaping Data Worlds? from Jonathan Gray on Vimeo.

Original post and full video transcript available here.

Interested in more content like this? Join us for #AoIR2017 in Tartu, Estonia! Submissions now open!

Posted in Uncategorized

#AoIR2017 Keynote Speakers ~ We have 2 this year!!

We are doubly thrilled to announce the two keynote speakers at #AoIR2017 in Tartu, Estonia.

Keynote #1: Andrew Chadwick

Andrew Chadwick (PhD, London School of Economics) is Professor of Political Science in the Department of Politics and International Relations at Royal Holloway, University of London, where he founded the New Political Communication Unit in 2007. His latest book is The Hybrid Media System: Politics and Power (Oxford University Press, 2013), which won the 2016 International Journal of Press/Politics Book Award for an outstanding book on media and politics published in the previous ten years, and the 2014 American Political Science Association Information Technology and Politics Section Best Book Award. His book Internet Politics: States, Citizens, and New Communication Technologies (Oxford University Press, 2006) won the American Sociological Association Best Book Award (Communication and Information Technologies Section) and is among the most widely cited works in its field. Andrew is also Editor of the book series Oxford Studies in Digital Politics, which currently features 20 titles. At Royal Holloway, where he has been awarded two Teaching Excellence Prizes, he teaches courses on the internet and politics, digital political communication, and the politics of democracy. Andrew is currently writing his next book, Social Media and the Future of Democracy (Oxford University Press). You can visit his website and follow him on Twitter.

 

Keynote #2: Marju Lauristin

Marju Lauristin has been Professor of Social Communication at the Institute of Social Studies at the University of Tartu since 1995. Her main research interests cover the social and cultural transformations involved in moving from a post-communist society to an information society.

Prof. Lauristin was one of the founding members of ‘Rahvarinne’ in 1988, the first large-scale independent political movement in Estonia since the beginning of the Soviet occupation. She has since served as Chairman of the Estonian Social Democratic Party, deputy speaker of the Estonian parliament, and Minister of Social Affairs of Estonia. Since 2014 she has been a Member of the European Parliament, where she is involved as a rapporteur in the areas of data protection and the development of the European digital economy and society.

Posted in Uncategorized

Submissions for #AoIR2017 Open

We are pleased to open submissions for proposals for #AoIR2017: Networked Publics, to be held in Tartu, Estonia, 18–21 October 2017. To re-familiarize yourself with the call for proposals and the types of submissions solicited, please see here.

When submitting, please take the time to read the submission categories and topics carefully. In the interest of diversity and collegiality, each conference participant is limited to presenting one individual paper and one paper in a panel, and to participating in one roundtable. You can be a co-author on additional papers, but you must not be the scheduled presenter of these papers.

If you have any questions about the submission process, please email aoir2017 (at) aoir dot org. We look forward to seeing your proposals.
 
Click here to go to the submission site: https://www.conftool.com/aoir2017.
 
We look forward to your proposals and to a vibrant and stimulating conference in Tartu!

Posted in Conferences

#AoIR2016 – Your First AoIR Conference?

#AoIR2016 had 535 full conference attendees from 30 countries. This included 331 first-time attendees! Each year AoIR welcomes many new researchers.

Take a peek at the CFP for #AoIR2017.  Will we meet you this October?

Posted in Uncategorized

Facebook’s accidental ‘death’ of users reminds us to plan for digital death

(republished with permission from The Conversation.)

Tama Leaver, Curtin University

The accidental “death” of Facebook founder Mark Zuckerberg and millions of other Facebook users is a timely reminder of what happens to our online content once we do pass away.

Earlier this month, Zuckerberg’s Facebook profile displayed a banner which read: “We hope the people who love Mark will find comfort in the things others share to remember and celebrate his life.” Similar banners populated profiles across the social network.

After a few hours of users finding family members, friends and themselves(!) unexpectedly declared dead, Facebook realised its widespread error. It resurrected those affected, and shelved the offending posthumous pronouncements.

For many of the 1.8 billion users of the popular social media platform, it was a powerful reminder that Facebook is an increasingly vast digital graveyard.

It’s also a reminder for all social media users to consider how they want their profiles, presences and photos managed after they pass away.

The legal uncertainty of digital assets

Your material goods are usually dealt with by an executor after you pass away.

But what about your digital assets – media profiles, photos, videos, messages and other media? Most national laws do not specifically address digital material.

As most social networks and online platforms are headquartered in the US, they tend to have “terms of use” which fiercely protect the rights of individual users, even after they have died.

Requests to access the accounts of deceased loved ones, even by their executors, are routinely denied on privacy grounds.

While most social networks, including Facebook, explicitly state you cannot let another person know or log in with your password, for a time leaving a list of your passwords for your executor seemed the only easy way to allow someone to clean up and curate your digital presence after death.

Five years ago, as the question of death on social media started to gain interest, this legal uncertainty led to an explosion of startups and services that offered solutions from storing passwords for loved ones, to leaving messages and material to be sent posthumously.

But as with so many startups, many of these services have stagnated or disappeared altogether.

Dealing with death

Image: Drowning in social media (mkhmarketing/Flickr).

Public tussles with grieving parents and loved ones over access to deceased accounts have led most big social media platforms to develop their own processes for dealing with digital death.

Facebook now allows users to designate a “legacy contact” who, after your death, can change certain elements of a memorialised account. This includes managing new friend requests, changing profile pictures and pinning a notification post about your death.

But neither a legacy contact nor anyone else can delete older material from your profile. That remains visible forever to whoever could see it before you died.

The only other option is to leave specific instructions for your legacy contact to delete your profile in its entirety.

Instagram, owned by Facebook, allows family members to request deletion or (by default) locks the account into a memorialised state. This respects existing privacy settings and prevents anyone logging into that account or changing it in the future.

Twitter will allow verified family members to request the deletion of a deceased person’s account. It will never allow anyone to access it posthumously.

LinkedIn is very similar to Twitter and also allows family members to request the deletion of an account.

Google’s approach to death is decidedly more complicated, with most posthumous options being managed by the little-known Google Inactive Account Manager.

This tool allows a Google user to assign the data from specific Google tools (such as Gmail, YouTube and Google Photos) either to be deleted or to be sent to a specific contact person after a specified period of “inactivity”.

The minimum period of inactivity that a user can assign is three months, with a warning one month before the specified actions take place.

But as anyone who has ever managed an estate would know, three months is an absurdly long time to wait to access important information, including essential documents that might be stored in Gmail or Google Drive.

If, like most people, the user did not have the Inactive Account Manager turned on, Google requires a court order issued in the United States before it will consider any other requests for data or deletion of a deceased person’s account.

Planning for your digital death

The advice (above) is for just a few of the more popular social media platforms. There are many more online places where people will have accounts and profiles that may also need to be dealt with after a person’s death.

Currently, the laws in Australia and globally have not kept pace with the rapid digitisation of assets, media and identities.

Just as it’s very difficult to legally pass on a Kindle library or iTunes music collection, the question of what happens to digital assets on social media is unclear to most people.

As platforms make tools available, it is important to take note and activate these where they meet (even partially) user needs.

Equally, wills and estates should have specific instructions about how digital material – photos, videos, messages, posts and memories – should ideally be managed.

With any luck the law will catch up by the time these wills get read.


Tama Leaver, Associate Professor in Internet Studies, Curtin University

This article was originally published on The Conversation. Read the original article.

Posted in Uncategorized

AoIR’s YouTube Channel and Travel Scholarships

We are happy to announce the newest AoIR social media platform – YouTube! Subscribe to our YouTube channel to get first access to our videos.

 

As we approach the holiday season, a time of great generosity, we want to show you a video interview with one of #AoIR2016’s Travel Scholarship recipients – Chung-hong Chan, University of Hong Kong. Your past generosity enabled his conference attendance.

AoIR needs you to help scholars like Mr. Chan attend our conferences and benefit from our community. Please take a moment and click the donate link. Any amount helps!

Reminder: The Call for Proposals for #AoIR2017 is here.


If you are interested in sharing a Creative Commons-licensed video with us about all things related to the Internet, please send a brief description as well as a link to the content to ac (at) aoir dot org or prez (at) aoir dot org.

 

Posted in Community, Conferences, Publications, Uncategorized

Three ways Facebook could reduce fake news without resorting to censorship

(republished with permission from The Conversation.)

Jennifer Stromer-Galley, Syracuse University

The public gets a lot of its news and information from Facebook. Some of it is fake. That presents a problem for the site’s users, and for the company itself.

Facebook cofounder and chairman Mark Zuckerberg said the company will find ways to address the problem, though he didn’t acknowledge its severity. And without apparent irony, he made this announcement in a Facebook post surrounded – at least for some viewers – by fake news items.

Other technology-first companies with similar power over how the public informs itself, such as Google, have worked hard over the years to demote low-quality information in their search results. But Facebook has not made similar moves to help users.

What could Facebook do to meet its social obligation to sort fact from fiction for the 70 percent of internet users who access Facebook? If the site is increasingly where people are getting their news, what could the company do without taking up the mantle of being a final arbiter of truth? My work as a professor of information studies suggests there are at least three options.

Facebook’s role

Facebook says it is a technology company, not a media company. The company’s primary motive is profit, rather than a loftier goal like producing high-quality information to help the public act knowledgeably in the world.

Nevertheless, posts on the site, and the surrounding conversations both online and off, are increasingly involved with our public discourse and the nation’s political agenda. As a result, the corporation has a social obligation to use its technology to advance the common good.

Discerning truth from falsehood, however, can be daunting. Facebook is not alone in raising concerns about its ability – and that of other tech companies – to judge the quality of news. The director of FactCheck.org, a nonprofit fact-checking group based at the University of Pennsylvania, told Bloomberg News that many claims and stories aren’t entirely false. Many have kernels of truth, even if they are very misleadingly phrased. So what can Facebook really do?

Option 1: Nudging

One option Facebook could adopt involves using existing lists identifying prescreened reliable and fake-news sites. The site could then alert those who want to share a troublesome article that its source is questionable.

One developer, for example, has created an extension to the Chrome browser that indicates when a website you’re looking at might be fake. (He calls it the “B.S. Detector.”) In a 36-hour hackathon, a group of college students created a similar Chrome browser extension that indicates whether the website the article comes from is on a list of verified reliable sites, or is instead unverified.

These extensions present their alerts while people are scrolling through their newsfeeds. At present, neither of these works directly as part of Facebook. Integrating them would provide a more seamless experience, and would make the service available to all Facebook users, beyond just those who installed one of the extensions on their own computer.

The company could also use the information the extensions generate – or their source material – to warn users before they share unreliable information. In the world of software design, this is known as a “nudge.” The warning system monitors user behavior and notifies people or gives them some feedback to help alter their actions when using the software.
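
As a rough sketch of how such a nudge could work – hypothetical code, not Facebook’s or the extensions’ actual implementation, with invented domain lists and function names – the core check is simply looking up an article’s source against prescreened lists before the share goes through:

    from urllib.parse import urlparse

    # Hypothetical prescreened lists; a real system would draw on maintained
    # databases like those used by the browser extensions described above.
    VERIFIED_RELIABLE = {"apnews.com", "reuters.com"}
    KNOWN_UNRELIABLE = {"example-fake-news.com"}

    def nudge_before_share(article_url):
        """Return a gentle warning (the 'nudge') to show before a user shares a link."""
        domain = urlparse(article_url).netloc.lower()
        if domain.startswith("www."):
            domain = domain[4:]
        if domain in KNOWN_UNRELIABLE:
            return ("This article comes from a source that has been flagged as "
                    "unreliable. Do you still want to share it?")
        if domain not in VERIFIED_RELIABLE:
            return "This source has not been verified. You may want to check it before sharing."
        return ""  # no warning; the share proceeds as usual

    # Example: the user is about to share a link from an unverified site.
    print(nudge_before_share("http://www.example-fake-news.com/shocking-story"))

Crucially, as with any nudge, the warning only informs; the user still decides whether to share.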

This has been done before, for other purposes. For example, colleagues of mine here at Syracuse University built a nudging application that monitors what Facebook users are writing in a new post. It pops up a notification if the content they are writing is something they might regret, such as an angry message with swear words.

The beauty of nudges is the gentle but effective way they remind people about behavior to help them then change that behavior. Studies that have tested the use of nudges to improve healthy behavior, for example, find that people are more likely to change their diet and exercise based on gentle reminders and recommendations. Nudges can be effective because they give people control while also giving them useful information. Ultimately the recipient of the nudge still decides whether to use the feedback provided. Nudges don’t feel coercive; instead, they’re potentially empowering.

Option 2: Crowdsourcing

Facebook could also use the power of crowdsourcing to help evaluate news sources and indicate when news that is being shared has been evaluated and rated. One important challenge with fake news is that it plays to how our brains are wired. We have mental shortcuts, called cognitive biases, that help us make decisions when we don’t have quite enough information (we never do), or quite enough time (we never do). Generally these shortcuts work well for us as we make decisions on everything from which route to drive to work to what car to buy. But, occasionally, they fail us. Falling for fake news is one of those instances.

This can happen to anyone – even me. In the primary season, I was following a Twitter hashtag on which then-primary candidate Donald Trump tweeted. A message appeared that I found sort of shocking. I retweeted it with a comment mocking its offensiveness. A day later, I realized that the tweet was from a parody account whose handle looked identical to Trump’s, but with one letter changed.

I missed it because I had fallen for confirmation bias – the tendency to overlook some information because it runs counter to my expectations, predictions or hunches. In this case, I had disregarded that little voice that told me this particular tweet was a little too over the top for Trump, because I believed he was capable of producing messages even more inappropriate. Fake news preys on us the same way.

Another problem with fake news is that it can travel much farther than any correction that might come afterwards. This is similar to the challenges that have always faced newsrooms when they have reported erroneous information. Although they publish corrections, often the people originally exposed to the misinformation never see the update, and therefore don’t know what they read earlier is wrong. Moreover, people tend to hold on to the first information they encounter; corrections can even backfire by repeating wrong information and reinforcing the error in readers’ minds.

If people evaluated information as they read it and shared those ratings, the truth scores, like the nudges, could be part of the Facebook application. That could help users decide for themselves whether to read, share or simply ignore. One challenge with crowdsourcing is that people can game these systems to try to drive biased outcomes. But the beauty of crowdsourcing is that the crowd can also rate the raters, just as happens on Reddit or with Amazon’s reviews, to reduce the effects and weight of troublemakers.
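
A minimal sketch of the “rate the raters” idea – again hypothetical, with invented names and numbers rather than any actual Facebook feature – would compute an article’s truth score as a weighted average of user ratings, where each rater’s weight is itself set by how the crowd has rated that rater, so troublemakers count for less:

    # Each rating is (rater_id, score), where score runs from 0.0 (false) to 1.0 (true).
    ratings = [("alice", 0.9), ("bob", 0.8), ("troll", 0.0)]

    # Rater reputations come from the crowd rating the raters, as on Reddit or
    # in Amazon reviews; a low reputation shrinks that rater's influence.
    reputation = {"alice": 0.95, "bob": 0.80, "troll": 0.05}

    def truth_score(ratings, reputation, default_rep=0.5):
        """Reputation-weighted average of the crowd's ratings for one article."""
        total_weight = sum(reputation.get(rater, default_rep) for rater, _ in ratings)
        if total_weight == 0:
            return None  # no usable ratings yet
        weighted_sum = sum(reputation.get(rater, default_rep) * score for rater, score in ratings)
        return weighted_sum / total_weight

    print(round(truth_score(ratings, reputation), 2))  # 0.83: the troll's zero barely moves the score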

Option 3: Algorithmic social distance

The third way that Facebook could help would be to reduce the algorithmic bias that presently exists in Facebook. The site primarily shows posts from those with whom you have engaged on Facebook. In other words, the Facebook algorithm creates what some have called a filter bubble, an online news phenomenon that has concerned scholars for decades now. If you are exposed only to people with ideas that are like your own, it leads to political polarization: Liberals get even more extreme in their liberalism, and conservatives get more conservative.

The filter bubble creates an “echo chamber,” where similar ideas bounce around endlessly, but new information has a hard time finding its way in. This is a problem when the echo chamber blocks out corrective or fact-checking information.

If Facebook were to open up newsfeeds to more posts from a random set of people in a user’s social network, it would increase the chances that new information, alternative information and contradictory information would flow within that network. The average number of friends in a Facebook user’s network is 338. Although many of us have friends and family who share our values and beliefs, we also have acquaintances and strangers in our Facebook networks who hold diametrically opposed views. If Facebook’s algorithms brought more of those views into our newsfeeds, the filter bubble would be more porous.
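
As a sketch of what a more porous newsfeed could look like – a hypothetical illustration, not Facebook’s actual ranking code – the feed could reserve a fraction of its slots for posts sampled at random from across the user’s whole network rather than only from the connections the user already engages with:

    import random

    def build_feed(engagement_ranked, all_network_posts, size=20, random_share=0.25):
        """Fill most of the feed from the usual engagement-ranked posts, but
        reserve a share of slots for posts drawn at random from the full
        network, letting alternative and contradictory information through."""
        n_random = int(size * random_share)
        feed = list(engagement_ranked[:size - n_random])
        # Sample from posts not already chosen, surfacing less-engaged connections.
        remaining = [post for post in all_network_posts if post not in feed]
        feed += random.sample(remaining, min(n_random, len(remaining)))
        random.shuffle(feed)
        return feed

Raising random_share widens the mix of voices reaching a user; setting it to zero reproduces the engagement-only bubble.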

All of these options are well within the capabilities of the engineers and researchers at Facebook. They would empower users to make better decisions about the information they choose to read and to share with their social networks. As a leading platform for information dissemination and a generator of social and political culture through talk and information sharing, Facebook need not be the ultimate arbiter of truth. But it can use the power of its social networks to help users gauge the value of items amid the stream of content they face.


Jennifer Stromer-Galley, Professor of Information Studies, Syracuse University

This article was originally published on The Conversation. Read the original article.

Posted in Uncategorized

Trump and why emotion triumphs over fact when everyone is the media

(republished with permission from The Conversation)

Alfred Hermida, University of British Columbia

The playwright Arthur Miller mused in 1961: “A good newspaper, I suppose, is a nation talking to itself.” The assertion seems oddly quaint now – at a time when the US elected a president who was continually at odds with the press. Donald Trump intentionally positioned himself as an outsider of the established institutions of democratic deliberation.

Trump bypassed the media to connect directly with his supporters, while simultaneously benefiting from the media to spread his message. Supporters and opponents became the media themselves, spreading and amplifying subjective and emotional affective news – news designed to provoke passion, not inform.

The triumph of Trump signals the contested nature of the media due to tectonic shifts in the mechanisms and pathways for news. The once privileged position of media organisations as the primary gatekeepers of news flows to the public has been undermined by the industry’s economic woes, the emergence of digital information merchants, shifting audience practices and the spread of social media platforms.

The ability to decide “all the news that’s fit to print” is shared now between traditional and new media outlets, activist groups, celebrities, citizens and computer code. News exists in a contested, chaotic and circular environment where emotion often overrides evidence, fuelling the rise of polarised, passionate and personalised streams of information.

As newsrooms across “Middle America” are hollowed out, most new digital media outlets are concentrated along the blue-tinged coasts of east and west. The result is a media that only sees a wide swath of US voters from 35,000 feet, as it flies overhead from one coast to the other. These voters did not see themselves reflected in the mainstream media and instead identified with Trump’s outsider message of defiance.

The loss of influence is even more apparent given the high number of newspapers that endorsed Hillary Clinton. Endorsements do not define the outcome but can help to build momentum behind a candidate.

Clouds of dust

The waning authority of newspapers is unsurprising given that no more than 3% of Americans named local and national print outlets as the most helpful source for election news. News websites fared slightly better, cited by 13%. Instead, cable news and social media emerged as the two “most helpful” sources of election news. Arguably, they were also the worst.

Cable news is a misnomer. These networks are not in the business of evidence-based reporting. They are in the emotion business. And emotion sells. Ratcheting up anger and outrage on cable makes business sense. Trump’s fiery and obnoxious rhetoric was a ratings bonanza, spurring a growth in viewership for the first time in three years and, with it, rising revenues. Viewers tune into the channel that mirrors their personal political leanings, as audiences gravitate towards media that reflects and reinforces their biases and beliefs.

Chart: Pew Research Center analysis of Nielsen Media Research data, used under license (Pew Research Center, CC BY-SA). Note: In 2015, Nielsen provided Pew Research Center with weighted annual averages for each network starting with 2007; weighted averages better account for slight differences in the number of programming hours in each broadcast month.

Social media offers a space for voters to find, support and share facts, falsehoods or feelings. The impact of Facebook is remarkable given that more than 40% of Americans get their news from the social media behemoth. Facebook doesn’t just bring together audiences for the news. It shapes the news for audiences, drawn from the choices of their social connections and regurgitated by algorithms to match personal preferences. It is a space designed to envelop users in the cosy embrace of the familiar, not challenge misinformed views or address unsubstantiated rumours.

Image: Fake news – this never happened but plenty thought it did (Snopes.com).

Conspiracy theories about politics flourish on social media, where the currency is virality not truth. People will share false information if it fits their view of the world. Even if some don’t quite believe it, they will share an article with the aim of entertaining, exciting or enraging friends and acquaintances. Fake news spreads so fast that potentially hundreds of thousands of people could have seen it by the time it gets debunked. Facebook was criticised for failing to stem the rise of fake news before the election results came in, with even Barack Obama talking about a “dust cloud of nonsense”.

Frenzied groundswell

When everyone could be the media, both left and right sought to be the media. Sometimes it was through the use of automated propaganda bots on Twitter. One study found bots were behind 50-55% of Clinton’s Twitter activity. That’s nothing compared to the 80% for Trump. Such frenzied tweeting is intended to create the impression of a groundswell of public opinion.

At other times, it was engaged publics who took to social media to craft their own election narrative. For example, Clinton supporters appropriated the #nastywoman hashtag to show their support for a female candidate. Trump supporters took to #repeal19, referring to the 19th Amendment, which gave women the right to vote.

Such a media diet of affective news designed to stir up passions, feed prejudices and polarise publics is a far cry from the practices of institutional journalism, where reporting is kept separate from opinion and commentary, and facts are prized, with emotion finding its place in features rather than in the news. Looking back, facts never stood a chance.

Beyond the weaknesses and failings of the news industry, in a smackdown between emotion and evidence, emotion always wins. Audiences swim in a media blend of tumbling facts, comment, experience and emotion, resulting in a news cocktail tailored to individual tastes.


Alfred Hermida, Associate professor, Graduate School of Journalism, University of British Columbia

This article was originally published on The Conversation. Read the original article.

Posted in Uncategorized

Travel Scholarship Recipient – Suwana

Each year, through the generous donations of AoIR conference attendees, we are able to fund several travel scholarships for junior scholars to attend the conference. We want to recognize our scholarship recipients and share with you a little bit about them and their interests.

Fiona Suwana

 

Who are you?
Fiona Suwana (@fionasuwana)

Where are you from?
I am from Indonesia, but I am now doing postgraduate study at the Digital Media Research Centre at Queensland University of Technology, Brisbane.

What is your current area of study?
My current areas of study are civic engagement, democracy, digital activism, digital media, digital media literacy, digital methods, Indonesia, political participation, and young people.

Describe the research you will present at AoIR 2016.
My research title is Digital Media and Indonesian Young People: Building Sustainable Democratic Institutions and Practices. I am interested in doing research on digital media and democracy for my PhD, so this research focuses on the motivations, capacities, and barriers of Indonesian youth using digital media to support civic engagement and political participation in Indonesia. Indonesian young activists and university students have different preferences for digital platforms and digital content when undertaking online civic engagement and political participation, but they face similar barriers. Those findings can be useful not only for Indonesia, but also for other countries that still have problems with democratic practices and institutions.

Have you presented at AoIR in the past? If yes, what has been your experience? If #AoIR2016 Berlin is your first AoIR conference, what made you choose this conference?
#AoIR2016 Berlin is my first AoIR conference. I chose AoIR because it is one of the top associations in my research field, so attending and participating in AoIR 2016 is a precious opportunity for me to share my research and receive useful feedback from other participants. The conference also gives me the chance to connect with international researchers from Asia, Europe and the US and to gain other perspectives on democratic development supported by digital media. By attending and participating in AoIR 2016, I will be able to engage with other internet researchers and discuss my experiences of digital activism and the digital media used by Indonesian young people to support and maintain democracy in Indonesia. Moreover, I can find inspiration and do some networking to continue my research collaboration at the global level. The AoIR 2016 conference will therefore support me in my research journey and networks.

Posted in Awards, Conferences

Travel Scholarship Recipient – Pruchniewska

Each year, through the generous donations of AoIR conference attendees, we are able to fund several travel scholarships for junior scholars to attend the conference. We want to recognize our scholarship recipients and share with you a little bit about them and their interests.

Urszula Pruchniewska

Who are you?

Urszula Pruchniewska (@urszie on Twitter)

Where are you from?

I currently live in Philadelphia, USA, but I’m Polish by birth and grew up in South Africa and New Zealand.

What is your current area of study?

Gender, digital media, cultural production/labor.

Describe the research you will present at AoIR 2016.

At AoIR 2016, I will be presenting “Production Politics: Gender, Feminism, and Social Media Labor,” a paper I collaborated on with Dr. Brooke Erin Duffy (Cornell University). For this project, we interviewed female entrepreneurs and cultural producers about their experiences using social media for career promotion. We identify a series of tensions that structure how women working online articulate their experiences.  Our findings map onto a tension between feminist politics and post-feminist sentiment—one that has been rendered all the more salient by social media’s injunction to “put yourself out there.”

Have you presented at AoIR in the past? If yes, what has been your experience? If #AoIR2016 Berlin is your first AoIR conference, what made you choose this conference?

I have not presented at AoIR in the past, but I am very excited to take part in AoIR2016 Berlin. Mentors and peers have repeatedly recommended this conference to me, describing it as an event that boasts a supportive community of international scholars and that showcases innovative research in the fields in which I am interested.

Posted in Awards, Conferences