At All Things in Moderation 2017, a group of people came together in the final breakout session to brainstorm the way forward for content moderation and platform governance. Working at breakneck pace (and into the final plenary – we had to go get them so they wouldn’t miss it!), Jillian C. York (EFF), Sarah Myers West (USC), Tarleton Gillespie (Microsoft Research), and Nicolas Suzor (QUT) put forth a set of brainstormed principles that they wrestled with and worked through with each other and with the assembled group. At our request, they authored the following as a document of the day’s endeavors in order to capture the substance and the spirit of the session. We anticipate that this conversation has only just begun, and, in that light, are pleased to share the fruits of it with you now.
—
Guiding Principles for the Future of Content Moderation
With increasing attention to the labor, criteria, and implications of content moderation come opportunities for real change in the ways that platforms are governed. After high-profile exposés like The Guardian’s “Facebook Files,” it is becoming more difficult for platforms to regulate in secret. Governments around the world are increasingly seeking to influence moderation practices, and platforms now face substantial pressure from users, civil society, and industry groups to do more on specific issues like terrorism, hatred, ‘revenge porn,’ and ‘fake news.’
In light of this pressure and the opportunities it implies, we hosted a roundtable discussion at All Things in Moderation to consider possibilities for the future of content moderation and to ask not just how the moderation apparatus should change, but what principles should guide those changes.
In a conversation among four researchers and advocates, Tarleton Gillespie, Nicolas Suzor, Jillian C. York, and Sarah Myers West brought together perspectives from media and information studies, law, and civil society to explore a variety of approaches to envisioning a set of high-level principles that could guide the interventions of different actors in future content moderation processes.
Due Process
Jillian C. York
The origins of due process are generally traced to chapter 39 of the Magna Carta, which declares that “No free man shall be arrested, or detained in prison, or deprived of his freehold, or outlawed, or banished, or in any way molested; and we will not set forth against him, nor send against him, unless by the lawful judgment of his peers and [or] by the law of this land.” The concept entered English common law and trickled down to the U.S. Constitution: the Fifth Amendment provides that “No person shall…be deprived of life, liberty, or property, without due process of law,” and the Fourteenth Amendment applies the concept to the states.
On platforms, no such due process is required to exist, and in most cases it doesn’t. In the early days of Facebook and other platforms, decisions handed down to users were often final; users were told they had no opportunity to appeal. As time went on and advocates caught wind of the issue, companies began to change their tune, but even today most companies offer only a limited set of options for appealing takedowns. Facebook users, for example, can appeal the removal of an account or a page, but not of individual pieces of content. They also cannot appeal during a temporary suspension, even if that suspension was made in error.
On the day of All Things in Moderation, Jillian was contacted by a well-known and verified Instagram user, whose account had been hijacked (or hacked). The user had tried reaching out to the company to fix the situation, but to no avail. It was only after Jillian tweeted about it, and was contacted by a Facebook employee who knew the right person at Instagram, that the problem was resolved.
We know that companies make a lot of errors, and that most users—unless they’re famous or have proximity to a company’s employees—cannot access the help they need. It’s incredible that customer service has all but disappeared in the age of social media. Due process is a key principle that should be available to all users in all instances, regardless of how or whether they have violated community standards.
Transparency
Sarah Myers West
A common – and still unaddressed – critique of content moderation systems is that they lack transparency. Though many social media companies produce transparency reports, almost none of them include any information about content taken down as a result of Terms of Service or content policies. Journalists have played an important role in shedding light on content moderation processes through investigative projects, some of which relied on leaks of confidential material – such as the operational guidelines provided to moderators – from within companies.
More transparency in this space is critical if platforms are to be held accountable for how they moderate content. However, this may be an important moment to consider transparency as a research question to be investigated rather than only as a policy goal. What does meaningful transparency look like in the context of content moderation? It may require more than another data point in a transparency report.
For example, often when there is a crisis over how a particular instance of content moderation is handled, the response from the company will be that the takedown was a mistake, the result of moderator error. Understanding what proportion of terms of service takedowns occur in error would be an important step toward accountability. However, this data would not tell us much about other, just as critical, elements of content moderation systems: who are the moderators of our online content? What guidelines are they provided for interpretation? What conditions do they work under, and how does this shape moderation outcomes?
Transparency applies not only to the content that is moderated (though this is important), but also to the process and to the broader content moderation system. Making content moderation processes transparent may mean changing the systems themselves. Given the high level of complexity – both social and technical – of content moderation, mandating more transparency may mean reframing the question that guides this principle. Instead, we might ask, “How can content moderation systems be made legible, both to us as users and to the companies that are running them?”
Custodianship
Tarleton Gillespie
Platforms sometimes describe their content moderation as a kind of ‘custodianship’ – relatively hidden janitorial work, sweeping out the ‘bad stuff’ as it is flagged. This crystallized in a recent case, when Facebook repeatedly deleted Nick Ut’s Associated Press photo of a nude Vietnamese girl running from a napalm attack, commonly described as the ‘napalm girl’ photo.
The press presented the deletions as a failure on Facebook’s part. And it was mishandled. But this photo has always been immensely hard to deal with, and it is without a doubt upsetting and shocking. Editors at the Associated Press debated whether to distribute the photo at all, and provided a blurred version to newspapers. After it was published in its original form by the New York Times and elsewhere, many newspapers received letters of complaint from readers calling it obscene. The photo is obscene; the question is whether it should be published. These challenging edge cases have no clean answer. But they point to the limits of the ‘custodian’ model as the platforms understand it.
We are at an inflection point at which social media platforms are increasingly understood as responsible for consequences at the scale of the public and its essential functions. The harms caused by these systems are no longer limited to a single user; they extend to the public itself, including those who do not directly participate on the platform. Fraudulent news circulating on social media, for example, affects the entire democratic process, reaching even those who never use these platforms.
What would it mean for social media platforms to take on responsibility for their role in curating content and profiting from it? These platforms have become a cultural forum where conflicting values must encounter one another, and where malicious actors seek to take advantage of their proximity to their intended targets.
We should consider whether these companies can be custodians—not in the janitorial sense, but in the sense of guardianship where they are responsible for facilitating processes for working out these unresolved tensions, publicly. One possibility would be to hand back – with care – the agency to users to address these questions ourselves, rather than platforms reserving the right to handle it for us. What would have happened if some of the innovation that has gone into designing the existing systems of moderation had instead gone into developing tools to support users’ efforts to collectively figure that out for themselves?
Human Rights
Nicolas Suzor
The Magna Carta isn’t a bad starting point for thinking about how we want our social spaces to be governed. Currently, however, we’re still trapped in thinking about governance as a binary of self-regulation versus full government control, and we have good reasons to be distrustful of government regulation: it is often not particularly thoughtful about how technology works.
Perhaps a better role for law when it comes to technology is a more protective one: laws that protect what we love about the internet. Law is, inherently, a system for developing rules on the fly: it has the flexibility to address new circumstances; it can be developed quickly through parliamentary or congressional processes; and the process is largely transparent. It is still expensive, however.
One of the big challenges we face when dealing with platforms is that they’re often governed outside of the law. Section 230 of the Communications Decency Act (CDA 230) allows companies to impose rules as they see fit, without being subject to any real oversight. Fundamentally, this is not what we would recognize as legitimate governance in any legal sense. With this arrangement in place, is it possible to imagine a better future in which we see more public intervention in internal processes?
Another major challenge is that whenever debates about how technology companies should regulate speech or data come up in governance fora, the core concern of tech companies is whether regulation will “break the internet,” so to speak.
It is possible, however, to create regulations that don’t violate the core principle of CDA 230; that is, protection from liability for companies when they make mistakes. We can regulate standards of transparency or due process that might allow for escalation of disputes and that would allow NGOs or other groups to monitor how these systems are working—which is how regulation works best.
This can be depressing work in the United States, because human rights don’t really enter the discourse. Globally, however, we have a set of consensus-based instruments (such as the UDHR and the ICCPR) that set out what we think of as the fundamental rights of human beings, a wide range of international organizations designed to monitor how well various entities are living up to them, and, of course, enforcement mechanisms. These are flawed, but they can apply to the digital realm just as they have to everything that came before it.
As Kate Klonick has said, moderation is a new system of governance, but these platforms govern the same kind of space that has traditionally been public. So what do we do when private actors are doing public things? We’re at a “Magna Carta moment,” and it’s increasingly clear that public values are at stake. It’s myopic to see CDA 230 as the only way to govern speech. The DMCA, for example, sets out processes for regulating copyrighted material and tries to provide some protection for disputes and due process. It’s not perfect, but it shows how we might construct a system that works at scale, with escalating levels of due process to handle problems when things go wrong.
So what can we do? We can look at other structures of governance, ones that protect competition and the freedom of platforms to innovate but that also set out certain responsibilities to govern in a legitimate way. We can look to other types of institutions and administrative bodies that do adjudicative work to find new mechanisms of due process that work at scale. These may help us imagine different systems that help people find redress for their grievances without sacrificing efficiency or innovation. The language of human rights helps here: it makes clear that businesses have an obligation to respect the rights of the people they affect. The international human rights framework, which has developed to build consensual norms and to set out the respective responsibilities of state and non-state actors, can help us move past the limits of our current thinking about legitimacy in our social spaces.
Toward a more principled future
After these interventions, a lively discussion took place among participants from various backgrounds; details can be found in the live notetaking document for the session. While the discussion covered a variety of issues, most participants agreed on the importance of the above principles and on the need to reconsider the role that these spaces inhabit in our societies. More specifically, participants expressed a desire for a more holistic approach to advocacy, one that better considers the needs of marginalized or targeted communities and that ensures human rights principles are considered throughout every aspect of how these companies operate.