Comment: Content Moderation by Online Platforms: ‘don’t be an *******’

 

 

* Published as part of the 2023 CoI conference 'Towards just institutional approaches to conflict prevention and resolution' *

 

 

Under the influence of the Digital Services Act, community guidelines may increasingly shift from principle-based guidance (for example, “You should behave well”) to more rule-based prohibitions (for example, “You cannot post hateful conduct targeting specific groups”). How will this development impact conflict resolution by online platforms in light of freedom of expression rights?

 

“Sir, someone filed a report, and I need to ask you to come with me to the police station.” Rob stood perplexed and stuttered: “Wh… what did I do?” The police officer responded: “Someone filed a report that you behaved, I quote, ‘like an *******.’ We have sufficient evidence that this is indeed the case.” Rob – now more on the angry side – responded: “What does this even mean?” In a criminal law context, arresting someone for violating such a broad principle is unthinkable, but given how online platforms enforce(d) their community guidelines, this example is not unrealistic.


Examples of community guidelines 


What does it mean when someone tells you to “[r]emember the human” when posting online, as Reddit sets out in its content policy? Or what does “Respect everyone on Instagram” mean for its moderators? While both Reddit and Instagram elaborate further on their community guidelines, the LinkedIn Professional Community Policies include a prohibition that reads, “Do not share content that directly contradicts guidance from leading global health organizations and public health authorities”. This last prohibition is not explained in more detail anywhere.


Comparing community guidelines with criminal law


In the context of criminal law, state legislation has to be sufficiently clear in advance before someone can be held liable. Confronting Rob with arrest on the basis of such a vague criterion is unthinkable in criminal law. In the context of online platforms regulating user-provided information (text, images, video, etc.), however, such ad hoc moderation can be based on loosely defined guidelines.


Where content moderation policies used to express (and sometimes still do) a couple of loosely defined principles, modern content moderation policies increasingly take the form of contracts. However, a difference remains between clearly defined legislation and the more discursive form moderation policies take. Criminal law definitions have absolute wording: if x does y, he will be punished with a penalty of z. Community guidelines often take the form of a declaratory statement about what conduct is not desired (the principle), accompanied by a few examples of what is always in violation of the rules, but also a few examples of (in principle undesirable) conduct that may be allowed under certain circumstances.


Content moderation is, therefore, more principle- and example-based than rule-based. Basing content moderation decisions on broad principles and examples gives online platform providers some flexibility but offers little clarity and legal protection to users of these services. Of course, content moderation by online platforms does not result in jail time, high fines, or a community service penalty. However, for some users, losing access to a social media account or seeing their content sanctioned nonetheless has serious consequences. An influencer may lose their only source of income. A journalist or scholar may suffer reputational damage when their content is labelled or removed as disinformation.


Regulating community guidelines into rules


The EU Digital Services Act (DSA), which seeks to regulate online platforms, may give an impulse to this development. Article 14 of the DSA sets out that any restrictions on the content that is allowed on the service should be set out in the terms and conditions, providing information on the “policies, procedures, measures and tools used for the purpose of content moderation, including algorithmic decision-making and human review, as well as the rules of procedure of their internal complaint handling system.” The terms of service must be drafted in “clear, plain, intelligible, user-friendly and unambiguous language, and shall be publicly available in an easily accessible and machine-readable format.” The DSA thus regulates not only what the terms of service should set out about content moderation but also the form in which this is communicated.


When the DSA comes into effect in February 2024, online platforms will no longer be able to fulfil their obligations by stating that users should behave well (the ‘don’t be an ******* rule’). While more detailed rules may contribute to a ‘juridification’ of the relationship between users and platforms, which may frame disagreements between the user and the platform in legal terminology, the potential productive value of ‘constitutionalising’ community guidelines must not be underestimated. I will first elaborate some more on the downside (juridification) and then turn to the upside for freedom of expression rights.


Juridification of content moderation?


According to The New Oxford Companion to Law, juridification means that “social and economic activities come increasingly to be governed by legal rules at the expense of the values and principles that develop within those social and economic spheres themselves”. Clarifying and explaining how users should behave on online platforms by putting down legally binding rules would indeed reduce the discussion to ‘did user x violate the rules?’ instead of ‘is such conduct desirable in this community?’ Discussions around content moderation may shift from normative discussions about good behaviour to legal discussions. In the worst case, users no longer feel free to call out other users for their undesirable conduct when the rules do not sanction such conduct. As long as it is legal, it is allowed, right?


The downside of juridification is, at the same time, clearly the upside of elaborate community guidelines in the form of legal rules. Because the rules are set in advance, users can orient their conduct towards them. When users can adjust their conduct to clear rules, they are less likely to be confronted with unwritten or unclear rules of the community. In the long run (when combined with proper enforcement), users are less likely to be confronted with (what they at least perceive as) arbitrary decisions.


The perks of ‘constitutionalising’ community guidelines


While a generic ‘don’t be an *******’ rule may capture a wide range of inappropriate or even outright harmful behaviour by users, it also introduces uncertainty and a lack of clarity. As noted, the upside of ‘constitutionalising’ community guidelines from a user perspective is clearly that users can orient their conduct towards these rules. Because the sanctions for violating the community rules range from a warning to permanent account suspension, such clarity forms a necessary safeguard for ensuring that users feel free to exercise their freedom of expression rights.


In a world that increasingly relies on online platforms for (mass) communication, losing the privilege of posting to an online platform is a potential tragedy with unforeseen consequences for the personal and work life of the user. Moderation based on principles can easily lead to unforeseen overmoderation that may even hurt the democratic participation of (groups of) users. A lack of clarity and certainty about (the application of) the rules brings the risk of overregulation on the part of the platform and (unnecessary) self-censorship on the part of the user.


Community guidelines fit for freedom of expression


In sum, clarifying community guidelines into legal rules may foster an environment where users are able to express themselves more freely. The importance of online platforms, combined with the potential severity of the sanctions, warrants clearer rules, especially since such rules can contribute to resolving conflicts between users and online platforms.


Two downsides have been mentioned. While rule-guided content moderation may be helpful in preventing and resolving conflicts between users, it places the user ‘hurt’ by content moderation in direct legal confrontation with the platform. Especially now that online platforms are required by incoming EU legislation to clarify their terms of service, such conflicts may increasingly arise. A second downside is that principle-based moderation gives platforms more leeway in combating new occurrences of harmful conduct, whereas rule-based moderation requires alteration of the rules before they can be enforced.


However, as this short contribution hopes to show, a ‘don’t be an ******* rule’ is no real alternative when the freedom of expression rights of users and conflict resolution between users and online platforms are taken seriously. The productive value of rule-based content moderation over principle-based content moderation lies in safeguarding the weaker (user) party against arbitrary moderation by the stronger (provider) party. Setting out clearer rules alone, however, does not cut it: clear rules must be accompanied by proper public oversight and enforcement. Whether the DSA is sufficient to provide for such enforcement is a question that still lingers. Telling service providers ‘don’t be an *******’ is simply not enough.

 

Dr. Michael Klos LLM is an assistant professor at the Department for Jurisprudence (Leiden University). He will present at the 2023 CoI Conference.