On whether media platforms should delegate to their users the metaphysical task of classifying the world in detail: CSAM reporting
Author: Carolina Christofoletti
- Premise 1: Badly categorized reports will mislead algorithms and miss content that would otherwise have been removed.
- Premise 2: If reporting procedures are too complex, cyber-governance will fail.
- Conclusion: Broader categories may be an interesting solution.
There is no question that CSAM report buttons are essential for rational cyber-governance. There is no question that, if child sexual abuse does not come to light until the moment of reporting, reporting procedures should remind reporters that the reason a piece of content disturbs whoever is confronted with it may be that it is CSAM. Though I recognize that easy and visible report buttons must exist, I would like to address a discussion I find very intriguing: with what degree of specificity should those reports be made?
From a legal point of view, the power of corporations to address the CSAM problem explains why people are so interested in them. It is as simple as that: while the legal definition of CSAM varies across jurisdictions, CSAM as defined in corporations' Community Rules, for the purpose of removal, remains the same.
By the mere fact that we are now dealing with a private universe in which CSAM appears, the chaos of conflict of laws is dead. For the purpose of removal, Child Sexual Abuse Material is what companies say it is. It doesn't matter whether criminal law says something else, or whether this is a mere problem of obscenity or anything else.
Recognizing that, we must also recognize that this kind of corporate decision is, and must remain, an ethical one: it is about cleaning platforms of material that may pose a risk to third parties [and we don't even know how platforms are being exploited to host this kind of material]. If there is one place in this world where vague terms are welcome, it is in the corporate fight against CSAM.
The horror lawyers face in having to classify the world does not and cannot belong to platform users. If the corporations' task is an ethical one, the ideal world is one in which every kind of suspicious material can be reported as fast as it emerges. Corporations are then gatekeepers for the purpose of CSAM removal and, as such, they must make the reporting task as friendly and as easy for their users as possible. If misclassification can trick the algorithm, it is because the classification scheme is so specific that it has become insufficient. The problem here is not the algorithm itself, but the fact that it should have accounted for the possibility of being fed wrong data, wrong data produced precisely by the instructions it gave its operators.
If terms such as children, underage or sexual activity could be too broad to inform criminal legislation, they may be the optimal solution for Internet regulation, especially when we are dealing with artificial intelligence red-flagging and removal. The danger of overly specific categories is that Internet users may simply not know the difference between a nudity, a sexual exploitation, an abuse and a non-consensual image scenario.
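To make the "broader categories" idea concrete, here is a minimal, purely hypothetical sketch (the bucket name, the Report structure and the triage queue are my own assumptions, not any platform's real pipeline) of a reporting flow in which whatever label a user picks, or no label at all, everything lands in the same child-safety triage bucket, so that a wrong choice can never hide material from review:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

# Hypothetical sketch only: the bucket name, fields and queue below are
# illustrative assumptions, not any real platform's reporting schema.

BROAD_BUCKET = "suspected_child_sexual_exploitation"


@dataclass
class Report:
    content_id: str
    reporter_note: Optional[str] = None      # free text, if the user offered any
    user_chosen_label: Optional[str] = None  # "nudity", "abuse", ... or nothing at all
    bucket: str = BROAD_BUCKET               # every report lands in the same broad bucket
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def route_report(report: Report, triage_queue: List[Report]) -> None:
    """Send every report to triage, regardless of the label the user chose.

    The user's label is kept only as a hint for reviewers; it never decides
    whether the content gets examined. A wrong (or missing) choice therefore
    cannot keep material out of the review queue.
    """
    triage_queue.append(report)


if __name__ == "__main__":
    queue: List[Report] = []
    route_report(Report("img_001", user_chosen_label="nudity"), queue)
    route_report(Report("img_002", reporter_note="this just feels wrong"), queue)
    for r in queue:
        print(r.content_id, "->", r.bucket, f"(user hint: {r.user_chosen_label})")
```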
Since phenomenological evaluation can change depending on where in the world we are (for example, what counts as sexual activity in country A may not in country B), vague (though still clear) terms avoid the problem of having to choose a category before a report can move forward. The dream of cyber-governance will be accomplished the day a user in country A reports material from country B that would not otherwise have been reported. We are talking about Child Sexual Abuse Material, after all.
We don’t want Internet users to think as jurists. We don’t want them to reflect on whether, according to the applicable law, material A is or is not criminal. We want them, instead, to make an ethical decision: to report whatever content seems like, indicates, is or could somehow be related to the sexual exploitation of a child.
More than that: if we put into Internet users’ heads that they are making a criminal report, and if they don’t trust the platform enough not to fear criminalization themselves, the chance that the community will refrain from reporting is intensified to a non-negligible degree (remember that mere knowing access is criminal conduct in some jurisdictions).
Does it need to be nudity? No. Sexual activity? No. Those are legal terms, and the kind of question that creates a category of mental judgment which, if one thinks it through, will hold reporting back. Not knowing the right answer may hold back action!
We want users to report things; we don’t want them to categorize the world. Sure, platforms may suggest what kind of content they are looking for… but setting the Community Terms up as a fixed, hermetically closed, "do not report unless" structure (as if nothing beyond those categories were allowed in reports) may not be the best idea. Nor is providing reporters with a list of categories to check whether their reports fit!
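As a purely illustrative contrast (the field names and options below are my assumptions, not any platform's actual report form), the difference between a legislation-like checklist and the broader instruction argued for here could look something like this:

```python
# Hypothetical contrast between two report-form configurations.
# Field names and options are illustrative assumptions, not a real platform's UI schema.

# A "legislation-like" form: the user must first pick the right legal box.
CATEGORY_CHECKLIST_FORM = {
    "prompt": "Which rule does this content violate?",
    "options": ["nudity", "sexual_activity", "minor_depicted", "non_consensual_image"],
    "allow_submission_without_category": False,  # doubt or a wrong guess blocks the report
}

# The broader alternative: one ethical question, no legal homework.
REPORT_IN_ANY_CASE_FORM = {
    "prompt": "Does this content seem in any way related to the sexual exploitation of a child?",
    "options": ["report it"],
    "allow_submission_without_category": True,   # doubt never blocks the report
}
```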
Think about it… Shouldn’t reporting channels provide reporters with a broader, "report in any case" instruction, rather than showing them legislation-like Community Standards?