The Future of Eating Disorder Content Moderation

This guest post is the second in a series of writings on the state of content moderation from ATM participants and colleagues engaged in examining content moderation from a variety of perspectives. We welcome Dr. Ysabel Gerrard in the post below.


In recent months, there have been various public furores over social media platforms’ failures to successfully catch — to moderate — problematic posts before they are uploaded. Examples include two YouTube controversies: vlogger Logan Paul’s video of a suicide victim’s corpse, which he found hanging in a Japanese forest, and the disturbing videos found on the YouTube Kids app that feature characters from well-known animations in upsetting and frightening scenarios. By and large, it’s easy to see why these cases hit the headlines and why members of the public called on platforms to rethink their methods of moderation, however high their expectations may be. But other kinds of content, like the posts shared in pro-eating disorder (pro-ED) communities, are harder to straightforwardly categorise as ‘bad’ and exclude from social media.

In early 2012, Tumblr announced its decision to moderate what it called ‘blogs that glorify or promote anorexia’. Its intervention came five days after a widely-circulated Huffington Post exposé about the ‘secret world’ of Tumblr’s thinspiration blogs. Facing mounting public pressures about their roles in hosting pro-ED content, Instagram and Pinterest introduced similar policies. All three platforms currently issue public service announcements (PSAs) when users search for terms related to eating disorders, like #anorexia and #thinspo, and Instagram blocks the results of certain tag searches. They also remove users and posts found to be violating their rules.
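To make the mechanics of these interventions concrete, the sketch below imagines how a platform might wire a PSA and a tag block into hashtag search. It is a minimal illustration only: the term lists, the PSA wording and the `handle_tag_search` helper are hypothetical, and do not reflect any platform’s actual configuration or code.

```python
# Hypothetical illustration of hashtag-level interventions: some search terms
# trigger a public service announcement (PSA) while results stay visible;
# others have their results suppressed entirely. All lists are placeholders.

PSA_TERMS = {"anorexia", "thinspo"}   # hypothetical: results shown, PSA displayed first
BLOCKED_TERMS = {"proana", "proed"}   # hypothetical: results suppressed entirely

PSA_TEXT = (
    "If you or someone you know is struggling with an eating disorder, "
    "support is available."
)

def handle_tag_search(tag: str) -> dict:
    """Return a hypothetical search response for a hashtag query."""
    normalized = tag.lstrip("#").lower()
    if normalized in BLOCKED_TERMS:
        # Blanket block: the platform hides all results for this tag.
        return {"tag": normalized, "results_visible": False, "psa": PSA_TEXT}
    if normalized in PSA_TERMS:
        # Softer intervention: results remain visible, but a PSA is shown first.
        return {"tag": normalized, "results_visible": True, "psa": PSA_TEXT}
    return {"tag": normalized, "results_visible": True, "psa": None}

if __name__ == "__main__":
    for query in ("#thinspo", "#proana", "#recovery"):
        print(query, "->", handle_tag_search(query))
```

Even this toy version shows how blunt term lists are: every search for a listed tag is treated identically, whatever the intent behind the posts that use it, a tension the sections below return to.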

“Important questions remain unanswered.”

Almost six years later, important questions remain unanswered: why did platforms decide to intervene in the first place, given their hesitancy to remove other kinds of content despite public pressures? How often do moderators get it wrong? And, perhaps most crucially, what are the consequences of platforms’ interventions for users? I can’t answer all of these questions here, but I will address some of the issues with platforms’ current moderation techniques and propose some future directions for both social media companies and researchers.

The politics of the tag.

Platforms’ decisions to moderate hashtags imply that people use certain tags to ‘promote’ eating disorders. Hashtags tell moderators and users what a post is about, and moderated tags currently include ‘#proana’ and ‘#proed’, but they also include more generic phrases that aren’t explicitly pro-ED, like ‘#anorexia’ and ‘#bulimia’. A blanket ban on all eating disorder-related tags — ‘pro’ or otherwise — implies a disconnect between what platforms think people are doing when they use a certain tag and what they are actually doing and why they are doing it. In a forthcoming paper, I show how Instagram and Tumblr users continue to use ED-related tags despite their awareness of content moderation and of the volatility of ED hashtags. Without speaking to users (an ethically tricky but nonetheless important future research direction), it is difficult to know why they continue to use tags they know are banned. Perhaps their actions are not intended to ‘trigger’ or encourage other users — if indeed people can ‘catch’ an eating disorder solely by looking at images of thin women — but to find like-minded people, as so many other social media users do. After all, this is the ethos on which social media companies were built.

For example, one anonymised Tumblr user addressed one of her posts to users who report ‘thinspo’ blogs. She called these blogs a ‘safe place’ for people with eating disorders, warning them that reporting posts can be triggering and assuring them that she will always create replacement blogs. Her post was re-blogged over 10,000 times and seemed to echo the sentiment of other Tumblr users, ‘pro-ED’ or otherwise.

Tumblr said it would not remove blogs that are ‘pro-recovery’, but this user’s feed — along with many others’ — is a tangled web of pro-eating disorder, pro-recovery and other positionalities. Users do not always conform to a stereotypically and recognisably ‘pro’ eating disorder identity, if indeed platforms (and anyone, for that matter) would know how to recognise it when they saw it. If social media offer ‘safe’ spaces to people whose conditions are socially stigmatised and marginalised, then platforms need to exercise greater care to understand the content of social media posts. But as I explain below, this might not be feasible for platforms, whose moderation workforces are already pushed to their absolute limits.

Why blanket rules don’t work.

The fact is that blanket rules for content moderation do not work, and nor should we expect them to. They always and inevitably miss the subtleties: the users who sit somewhere in the murky middle ground between acceptability, distaste and, often, illegality. In social media’s eating disorder communities, various discourses — pro-ED, pro-recovery, not-pro-anything, amongst many others — are entangled with each other. But while platforms took measures to address eating disorders, they evidently did not adjust their own moderation mechanisms to suit such a complex issue. As Roberts explains, Commercial Content Moderators (CCMs) have only a few seconds to make a decision about moderation. It is concerning that individual posts can be de-contextualised from a user’s full feed — through no fault of a CCM’s, given the speed with which they must make decisions — and be removed.

For example, in its Community Guidelines, Pinterest includes an example of an image of a female body that would be ‘acceptable’ in a moderator’s eyes. The image’s overlaid text, ‘It’s not a diet, it’s a way of life. FIT Meals’, de-situates it from pro-eating disorder discourses:

[Image: the example pin from Pinterest’s Community Guidelines, showing a female body with the ‘FIT Meals’ text overlay]

(Image source: https://www.pinterest.co.uk/pinterestpolicy/self-injury-examples/).

“Perhaps we are expecting too much of moderators, but not enough of platforms.”

But in the absence of hashtags and a text overlay, would a content moderator know not to categorise this as ‘pro-ED’ and ban it? How do they know how to interpret the signals that ED communities have become infamous for? How can a moderator do all of this in only a few seconds? And what happens to users if moderators get it wrong, something Facebook recently admitted to in relation to hate speech posts? Perhaps we are expecting too much of moderators, but not enough of platforms.

The future of eating disorder content moderation.

If eating disorder content continues to be moderated, are the current approaches appropriate? Perhaps social media companies should not encourage users to ‘flag’ each other in their public-facing policies, given the historical and problematic surveillance of girls’ and women’s bodies. Maybe Instagram should not continue to chase hashtags and ban the ones that emerge in their place, given the minimal space they occupy at the margins of social networks. Platforms could also provide full and exhaustive lists of banned tags to help users navigate norms, vocabularies and cultures. In the follow-up to its original policy, Tumblr admitted that it was ‘not under the illusion that it will be easy to draw the line between blogs that are intended to trigger self-harm and those that support sufferers and build community’. This was admirable, but what if Tumblr became more transparent about how its moderators make these decisions?

Moving forward, one suggestion is for social media companies to foster collaborations with social scientists to research and better understand these cultures (a point made recently by Mary Gray). Those with an understanding of eating disorders know that their stigmatisation has historically prevented people from seeking treatment, a problematic and gendered issue that platforms’ panicked decisions to intervene have potentially worsened. An in-depth exploration of online eating disorder communities and a more open dialogue between researchers and platform policy-makers might help social media companies to promote a more nuanced view of eating disorders than simply treating them as ‘bad’.


Dr. Ysabel Gerrard is a Lecturer in Digital Media and Society at the University of Sheffield. She co-organises the Data Power Conference and is the current Young Scholars’ Representative for ECREA’s Digital Culture and Communication Section. One of the aims of her next project is to talk to social media workers who have been/are involved in making decisions about pro-eating disorder and self-harm content moderation. If you are, or know of, anyone who might want to share their experiences, please email her at: y.gerrard@sheffield.ac.uk.

Content Moderation and Corporate Accountability: Ranking Digital Rights at #ATM2017

This entry is the first in a series of posts recapping portions of #ATM2017 from the perspective of participants. More to come.



Who gets to express what ideas online, and how? Who has the authority and the responsibility to police online expression and through what mechanisms?

Dozens of researchers, advocates, and content moderation workers came together in Los Angeles this December to share expertise on what are emerging as the critical questions of the day. “All Things in Moderation” speakers and participants included experienced content moderators — like Rosalyn Bowden, who literally wrote the moderation manual for MySpace — and pioneer researchers who understood the profound significance of commercial content moderation before anyone else, alongside key staff from industry. After years of toiling in isolation, many of us working on content moderation issues felt relief at finally finding “our people” and seeing the importance of our work acknowledged.

While the idea that commercial content moderation matters is quickly gaining traction, there is no consensus on how best to study it — and until we understand how it works, we can’t know how to structure it in a way that protects human rights and democratic values. One of the first roundtables of the conference considered the methodological challenges to studying commercial content moderation, key among which is companies’ utter lack of transparency around these issues.

While dozens of companies in the information and communication technology (ICT) sector publish some kind of transparency report, these disclosures tend to focus on acts of censorship and privacy violations that companies undertake at the behest of governments. Companies are much more comfortable copping to removing users’ posts or sharing their data if they can argue that they were legally required to do it. They would much rather not talk about how their own activities and their business model impact not only people’s individual rights to free expression and privacy, but the very fabric of society itself. The data capitalism that powers Silicon Valley has created a pervasive influence infrastructure that’s freely available to the highest bidder, displacing important revenue from print journalism in particular. This isn’t the only force working to erode the power of the Fourth Estate to hold governments accountable, but it’s an undeniable one. As Victor Pickard and others have forcefully argued, the dysfunction in the American media ecosystem — which has an outsized impact on the global communications infrastructure — is rooted in the original sin of favoring commercial interests over the greater good of society. The FCC’s reversal of the 2015 net neutrality rules is only the latest datapoint in a decades-long trend.

The first step toward reversing the trend is to get ICT companies on the record about their commitments, policies and practices that affect users’ freedom of expression and privacy. We can then evaluate whether these disclosed commitments, policies and practices sufficiently respect users’ rights, push companies to do better, and hold them to account when they fail to live up to their promises. To that end, the Ranking Digital Rights (RDR) project (where I was a fellow between 2014 and 2017) has developed a rigorous methodology for assessing ICT companies’ public commitments to respect their users’ rights to freedom of expression and privacy. The inaugural Corporate Accountability Index, published in November 2015, evaluated 16 of the world’s most powerful ICT companies across 31 indicators, and found that no company in the Index disclosed any information whatsoever about the volume and type of user content that is deleted or blocked when enforcing its own terms of service. Indeed, Indicator F9 — examining data about terms of service enforcement — was the only indicator in the entire 2015 Index on which no company received any points.

We revamped the Index methodology for the 2017 edition, adding six new companies to the mix, and were encouraged to see that three companies — Microsoft, Twitter, and Google — had modest disclosures about terms of service enforcement. Though it didn’t disclose any data about enforcement volume, the South Korean company Kakao disclosed more about how it enforces its terms of service than any other company we evaluated. Research and company engagement for the 2018 Index are ongoing, and we continue to encourage companies to communicate clearly what kind of content is and is not permitted on their platforms and how the rules are enforced (and by whom), and to develop meaningful remedy mechanisms for users whose freedom of expression has been unduly infringed. Stay tuned for the release of the 2018 Corporate Accountability Index this April.

Our experience has proven that this kind of research-based advocacy can have a real impact on company behavior, even if it’s never as fast as we might like. Ranking Digital Rights is committed to sharing our research methodology and our data (downloadable as a CSV file and in other formats) with colleagues in academia and the nonprofit sector. The Corporate Accountability Index is already being cited in media reports and scholarly research, and RDR is working closely with civil society groups around the world to hold a broader swath of companies accountable. All of RDR’s methodology documents, data, and other outputs are available under a Creative Commons license (CC-BY) — just make sure to give RDR credit.
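For readers who want to explore the published data themselves, here is a rough sketch of how one might load an RDR export and group company scores by indicator. The file name and column names below are placeholders rather than RDR’s actual schema; check the downloadable files on the project site for the real structure.

```python
# Rough sketch: group company scores by indicator from a hypothetical RDR CSV export.
# "rdr-index-data.csv" and the column names "Company", "Indicator", "Score" are
# placeholders; consult the actual download from rankingdigitalrights.org.

import csv
from collections import defaultdict

def scores_by_indicator(path: str) -> dict:
    """Read a CSV of per-company indicator scores and group them by indicator."""
    scores = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            scores[row["Indicator"]].append((row["Company"], float(row["Score"])))
    return scores

if __name__ == "__main__":
    data = scores_by_indicator("rdr-index-data.csv")  # hypothetical file name
    # For example, inspect the terms-of-service enforcement indicator discussed above.
    for company, score in data.get("F9", []):
        print(company, score)
```

From there it is straightforward to filter for a single indicator, such as the terms-of-service enforcement indicator discussed above, and compare companies’ scores across editions of the Index.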


Nathalie Maréchal is a PhD candidate at the University of Southern California’s Annenberg School for Communication and Journalism. Between 2014 and 2017, she held a series of fellowships at Ranking Digital Rights, where she authored several white papers and scholarly articles, conducted company research for the Corporate Accountability Index, and spearheaded the expansion of the Index’s methodology to include mobile ecosystems starting in 2017.

Keynote and Plenary Videos Now Available

After wildfires gripped the area adjacent to UCLA in the early morning hours of ATM 2017, it was not clear that we could go on. Despite the unplanned and frightening conditions, we opened the conference the morning of December 6th to a full house, and to a fabulous keynote from UN Special Rapporteur David Kaye, one of four all-hands events that took place during the two days. We are excited to offer three of those events to you now, via video. They are:

David Kaye – UN Special Rapporteur on Freedom of Expression – Keynote, Wednesday, December 6 – All Things in Moderation – UCLA. Introductory remarks from Assistant Professor Sarah T. Roberts, Professor Jonathan Furner, and Dean Marcelo Suárez-Orozco, UCLA.

 

A plenary discussion featuring journalists who cover content moderation as a part of their journalistic beat. Panelists are David Ingram of Reuters news service, Olivia Solon of the Guardian, and Lauren Weber of the Wall Street Journal, in conversation with Safiya Noble (USC) and Sarah T. Roberts (UCLA) at All Things in Moderation, December 6, 2017.

 

A watershed panel featuring content moderators Roz Bowden, formerly of MySpace, and Rochelle LaPlante, who currently works freelance via Amazon Mechanical Turk (AMT), in conversation with Sarah T. Roberts (UCLA) and Safiya Noble (USC) at All Things in Moderation, December 7, 2017.

Note: Attorney Rebecca Roe delivered our Day 2 keynote but video for that event is not available due to the pending litigation in her client’s case. We thank her for her participation.