Business Ethics & Corporate Crime Research Universidade de São Paulo

Facebook’s Tolerance Code: A Commentary

Image retrieved from G1 Globo

Author: Carolina Christofoletti

Link in original: Click here

 

In an article released yesterday in Facebook's newsroom, the platform announced new features aimed at improving group safety. Although it is good to know that Facebook is constantly trying to improve its safety policies, how well those policies work depends on how good the public feedback is at finding their gaps.

To understand the claim of this article, we must first note that, according to Facebook, the platform's content moderation practices rely mainly on Artificial Intelligence (AI) tools able to detect policy violations in both private and public groups. We will not question, at this point, the validity of those tools, so that we can proceed with the argument.

When we talk about group moderation and platform safety, criminal content is the first pair of words policy writers might want to keep in mind. To evaluate the consistency of a proposed policy and the adequacy of its risk solutions, policy writers need to think (as cybersecurity teams do) like criminals for a while. The reason is that, without an earlier mapping of platform-specific risks, written policies risk being dangerously insufficient or even, in the worst of scenarios, completely inadequate.

Research on how social media platforms are exploited by criminals is a rich domain that platforms themselves could make far better use of. Rather than merely compiling well-known research in the field, they should start extracting, compiling, cataloguing and properly analyzing the data gathered on their own platforms. Platforms themselves may be surprised by the result.

The purpose of this article is to confront the intersection between Facebook's groups problem and another well-known problematic activity taking place on the platform: Child Sexual Exploitation (CSE), and more specifically the sharing of Child Sexual Abuse Material (CSAM).

  • Missing Data

If we think about it for a minute, we will notice that even though Facebook's Child Sexual Abuse Material (CSAM) and child nudity material keep being reported in (very) high numbers, we have no idea how those numbers are distributed. Insufficient data makes it harder for outsiders to find where the possible gaps are, something that could be highly beneficial for the platform itself.

Even without the data, the rationale behind the policies is still fresh and informative material for analysis. The inadequate nature of some of them shows us how much research is still needed and, especially, how much communication between social media platforms and other content moderation stakeholders (e.g. law enforcement and academia) is still missing.

For example, while a certain degree of tolerance may work as an escalation mechanism within some groups, and with regard to some very specific, still legal policy violations, it may be a catastrophe if we consider that, maybe, CSAM groups are not centralizing their activities as much as platforms would ideally want (at least, not on the open web).

Also, even though groups violating Facebook's policies can be made less visible under the new features, we must not forget that invitations can still be sent and that running a search for such groups will still return positive results.

For criminals, that is more than great news, for, together with the new toleration model (explained further below), it helps keep their secretive groups from being accidentally exposed by the platform's algorithms, which only makes the already existing 'show your credentials' model even stronger.

In brief, there is an urgent need to research what is going on behind Facebook's numbers. It is empirical data, rather than theoretical speculation, that should be informing their policies.

  • Indiscriminate inverse proportion rule

Facebook's strategy is to reduce group and member privileges (meaning how visible the group is and how frequently members can post) in proportion to the policy violations that are found.

The problem with Facebook's inverse proportion rule is that it applies the same metric to all variables, no matter what kind of content we are talking about. Concisely, it turns out that, algorithmically, it does not matter whether we are talking about Child Sexual Abuse Material or an intellectual property violation: the toleration rules are the same, even though the severity of the content is not. The escalation control policy is thereby perverted.
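To make the objection concrete, the sketch below contrasts a flat demotion rule, which counts every violation equally, with a severity-weighted one. It is purely illustrative: the violation labels, weights and thresholds are assumptions of mine, not Facebook's actual metric.

```python
# Illustrative sketch only: contrasts a flat violation count with a
# severity-weighted score. Labels, weights and thresholds are assumed,
# not taken from Facebook's actual enforcement logic.

SEVERITY = {
    "csam": 100,          # severe harm: should exhaust tolerance immediately
    "illicit_goods": 20,
    "ip_violation": 1,    # low-severity example
}

def flat_demotion_level(violations: list[str]) -> int:
    """'Inverse proportion' reading: only the number of violations counts."""
    count = len(violations)
    if count >= 10:
        return 3   # heaviest privilege reduction
    if count >= 5:
        return 2
    if count >= 1:
        return 1
    return 0

def weighted_demotion_level(violations: list[str]) -> int:
    """Severity-aware alternative: one CSAM hit outweighs many minor hits."""
    score = sum(SEVERITY.get(v, 1) for v in violations)
    if score >= 100:
        return 3
    if score >= 20:
        return 2
    if score >= 1:
        return 1
    return 0

group_a = ["csam"]              # one severe violation
group_b = ["ip_violation"] * 6  # many minor violations

print(flat_demotion_level(group_a), flat_demotion_level(group_b))          # 1 2
print(weighted_demotion_level(group_a), weighted_demotion_level(group_b))  # 3 1

# Under the flat rule, the copyright-infringing group is sanctioned more
# heavily than the CSAM group; the weighted rule inverts that, as the
# article argues it should.
```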

While immediate removal is not the rule, it can still happen "if necessary" and in cases of "severe harm". But figuring out what necessity is and what a severe violation means is a far more challenging task than it should be, even though it is the very heart of the new functionality.

If you click the "severe harm" link shown by Facebook, you end up on a page named "Understanding the Community Standards Enforcement Report" which, instead of telling us what severe violations are, explains how their transparency reports are written. The decision rule for severe harm and necessity remains, in the end, a mystery.

  • Indiscriminate “Policy Violation Alerts”

Knowing that one is about to join a group where Child Sexual Abuse Material is being shared may have a quite different psychological effect than knowing that the group was "censored" for posting music without paying copyright fees. Legally, agreeing to enter a group knowing that CSAM has already been hosted there also has vastly different consequences than joining, for example, a fake-news group.

And if the purpose here is deterrence, violations need to be discriminated clearly and in plain words. That means, for example, that merely pointing out policy violations by number (e.g. "See Terms of Community II.14") is equally insufficient. Facebook users have the right to know what is going on in the (most of the time private) groups they are about to enter.

  • Temporary moderation through administrators

Requiring someone to approve posts before they go up is a great deterrence policy against criminal postings. Whoever approved the post is (legally) attached to it, being in this case an active and necessary condition for its existence in the group.

The unwelcome news is that, according to the new model, mandatory moderation is only temporary, and it is only triggered when Facebook's 'intelligence tools' signal either that the group has a substantial number of members who have already broken Facebook's rules, or that it has members who were part of other groups shut down for violating platform policies.
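As a rough reading of that trigger (my reconstruction, not Facebook's published logic), mandatory pre-approval of posts only switches on once a member-count threshold is crossed, regardless of what the flagged members actually did:

```python
# Rough reconstruction of the threshold trigger described above; the
# threshold value and the flag structure are assumptions, not Facebook's.

def requires_temporary_moderation(members: list[dict], threshold: int = 50) -> bool:
    """Turn on mandatory post approval only when enough members are flagged."""
    flagged = [
        m for m in members
        if m.get("prior_violations") or m.get("was_in_shut_down_group")
    ]
    return len(flagged) >= threshold

# A group with a handful of members previously flagged for severe offences
# stays below the threshold and keeps posting without prior approval:
members = [{"prior_violations": ["csam"]}] * 5 + [{}] * 995
print(requires_temporary_moderation(members))  # False
```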

Also, in this case, there is no discrimination as to which article of Facebook's code was violated. If we are lucky enough to have criminals reusing the same account, proving criminal groups' resilience on the same platform, we might also be interested in finding out, from a qualitative point of view, what those broken rules are.

Waiting until a substantial number of rule-breakers is reached may activate Facebook's controls only when it is already too late. By that point, the group may not even be using Facebook anymore. If the users are still active and their violations are already known, it is worth thinking about more timely solutions.

For example, Facebook's algorithms could prevent people who have already shared CSAM, traded illicit goods through the platform, and so on, from further assembling with accounts already flagged for the very same violations. Discrimination, as we see here, also matters.
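A minimal sketch of the kind of timelier, violation-aware check suggested here: before letting a flagged account join, look at whether the group already contains accounts flagged for the same severe violation. The account records, violation labels and the set of severe categories are hypothetical.

```python
# Minimal sketch of the violation-aware join check proposed above.
# Violation labels and the SEVERE set are hypothetical examples.

SEVERE = {"csam", "illicit_goods_trade"}

def may_join(candidate_flags: set[str], group_member_flags: list[set[str]]) -> bool:
    """Block a flagged account from assembling with accounts flagged for
    the same severe violation; allow everything else to proceed."""
    candidate_severe = candidate_flags & SEVERE
    if not candidate_severe:
        return True
    for member_flags in group_member_flags:
        if member_flags & candidate_severe:
            return False  # same severe violation already present in the group
    return True

# An account previously flagged for CSAM tries to join a group that already
# contains another CSAM-flagged account: the join is refused immediately,
# without waiting for any member-count threshold.
print(may_join({"csam"}, [{"ip_violation"}, {"csam"}]))  # False
print(may_join({"ip_violation"}, [{"csam"}]))            # True
```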

Conclusions:

  • Empirical research in specific, high-risk domains (such as child exploitation) should be set as a platform's primary goal.
  • Permanent and mandatory group moderation policies should be implemented for all groups.
  • Discriminating between policy violations may help to better implement Facebook's proposed intelligence tools, with a significant impact on their expected results.