Business Ethics & Corporate Crime Research Universidade de São Paulo

Do not remove it, prevent it from being posted: How old hashes could help in the fight against new CSAM material

Retrieved from: Android Dados Recuperação

 

Author: Carolina Christofoletti

Link to the original: Click here

 

According to YouTube's Community Guidelines Enforcement report, 37.6% of the videos removed by YouTube between October 2020 and December 2020 had zero views, and 37.5% of them had between 1 and 10 views. The curious fact is that, while gathering those data, my selection was "automated flagging only".

As one might guess, the reason why this interests me is that there is a specific category of files which, considering the high volume of material we are talking about and the frequency with which it reappears, tends to be governed mainly by automated flagging: Child Sexual Abuse Material (CSAM).

When I look at those statistics, my first question is: if the signature of the file had already pointed it out as something illegal, why do platforms nevertheless let these very same files be uploaded to their channels in the first place? Especially because those files are, in fact, entering the platforms' channels, my second question would be: how long, rather than how many views, does it take for those files to actually be removed?

I must admit that, considering the harmful nature of the kind of files (CSAM) I am talking about here, this sounds, to say the least, odd to me. To explain it, I have two hypotheses:

1)   The first is a technical one, meaning that the system that recognizes illegal files (hash matching) is embedded only inside the platform and, for that reason, does not operate on things that have not yet been posted ("attempted postings").

If that is the case, the illegal file would first need to be inserted into the platform's channels, and only then could it be recognized and, only then, removed (a minimal sketch of this flow follows the two hypotheses below).

The problem with that, apart from the presence (even for a minimal period of time) of an illegal CSAM file inside the platform's channels, is that removals, especially where platforms explain why, send an immediate alert to criminals (who would probably start erasing every other similar file they have on their computers, which is probably not the computer through which the social media account was created, and probably also not the one through which it was managed). Removals must, in that case and for that reason, be done silently.

2)   The second is a "legal" one, meaning that, with platforms operating in the four corners of the world and having to cooperate with law enforcement on a transnational basis, letting the criminal offence that provides the legal basis for platforms to act be adjudicated as an attempted one (attempted CSAM sharing) would be far too risky a business.

Remember that, to be able to legally address attempts, platforms would then need to deal with the whole criminal-law dogmatics of every single country in which they provide their services. As such, it is legally easier to let the file be posted and, after that, (hopefully) remove it immediately.
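To make the first, technical hypothesis concrete, here is a minimal sketch in Python of what a "match only after publication" flow might look like. Everything in it is my own illustrative assumption: real systems use perceptual hashes (such as PhotoDNA) and vetted industry hash lists rather than the plain SHA-256 digests and toy data structures shown here.

```python
import hashlib

# Illustrative stand-in for a vetted list of signatures of already-known files.
KNOWN_ILLEGAL_SIGNATURES = {hashlib.sha256(b"already-known-file").hexdigest()}

published_files = {}  # file_id -> bytes, i.e. content that is already live


def publish(file_id, data):
    """Hypothesis 1: the upload is accepted and published without any check."""
    published_files[file_id] = data


def scan_and_remove():
    """Only a later, internal scan recognizes known files and removes them."""
    removed = []
    for file_id, data in list(published_files.items()):
        if hashlib.sha256(data).hexdigest() in KNOWN_ILLEGAL_SIGNATURES:
            del published_files[file_id]   # removal happens after publication
            removed.append(file_id)        # and, ideally, silently
    return removed
```

Note that nothing in the upload path itself ever consults the hash list: the file is live on the channel until the background scan catches up with it.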

This second hypothesis seems more convincing to me. Even so, the point I would like to make is a different one.

When criminals decide to upload CSAM files to a platform whose compliance policies for fighting CSAM are well known (proactive search is usually mentioned in the Terms of Service), they are usually playing (riskily) in the dark.

Criminals simply upload the file, never knowing whether a) it will be automatically flagged, because the file was already known to law enforcement authorities, or b) their criminal business will succeed, because the file was not previously known to the platform, and to discover it content moderators would need to find it or someone would need to report it ethically (which tends to be even harder when criminals are acting through a private account).

A further question that may be relevant to platforms' "CSAM research teams", for the reasons discussed below, is the actual 'status' of new files when they are found through reporting or through content moderators' proactive search.

Are "new" CSAM files occasionally found on pages where automatic flagging gave the first tip through a "hash match" on another file hosted in the same "virtual place" (an "old" CSAM file)? Are we also talking about mixed galleries when we talk about open media platforms? And if so, is the file simply removed, or do platforms also care about the "pages" and "accounts" where it was found?

Also, are the accounts hosting 'new material' somehow connected to other accounts that were previously removed for sharing this very same kind of material, material that was perhaps, at that time, "automatically flagged"? Remember that criminals communicate with each other and that, perhaps, what is meaningful for them is that "they could post it".

If so, what I am about to say could have prevented that very same platform from being abused for the criminal sharing of that "new" illegal content.

From my own point of view, the platforms' best chance of preventing illegal material of this kind from being uploaded to their channels is to block, at first sight and already at the "upload and post" step, the posting of previously known CSAM.

Rather than an "exemption from liability" policy (let it be posted and remove it in a timely manner), the focus needs to shift immediately to "prevention of abuse" policies (do not allow the posting at all), meaning that the overall controls in place should reflect best efforts to prevent, especially, new material from being uploaded.
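Under the same illustrative assumptions as the earlier sketch (toy hash list, plain SHA-256 standing in for perceptual hashing), moving that very same check into the "upload and post" step could look like this: the known file is never stored or published, and the uploader only receives a deliberately generic refusal.

```python
import hashlib

# Same illustrative hash list as in the earlier sketch.
KNOWN_ILLEGAL_SIGNATURES = {hashlib.sha256(b"already-known-file").hexdigest()}

published_files = {}  # file_id -> bytes


def handle_upload(file_id, data):
    """Check the signature before anything is stored or published."""
    if hashlib.sha256(data).hexdigest() in KNOWN_ILLEGAL_SIGNATURES:
        # Nothing reaches the channel, and the message deliberately reveals
        # nothing about why the post was refused.
        return {"status": "error", "message": "Sorry, this upload is not possible."}
    published_files[file_id] = data
    return {"status": "ok"}
```

The design choice that matters here is the generic message: as argued below, the uploader cannot tell whether a hash match, a pornography filter or a mere technical error stopped the post.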

Criminals do not know what the detection mechanism is. They only know that it exists.

And if you go out on the street and ask people how platforms currently control the posting of illegal content, they will probably answer that "there is some kind of pornography filter" there. And yes, that is exactly what platforms should want criminals to believe: whether the file is old or new, platforms do have a 'nudity and pornography' filter, so do not even think of uploading that kind of content there.

If things are as criminology says and CSAM offenders really do start with someone else's images before posting their own, AND if postings are made, at first, as a way of gaining reputation (exactly where "old" and "rare" images could reappear) and of networking with other like-minded people, then we have a clear reason to believe that 'hashed images' have possibly appeared somewhere on the platform before. And if platforms had prevented those files from being posted, displaying a silent message such as "this file could not be uploaded", "error" or "sorry, this upload is not possible", criminals would come to believe that other files of the same nature also have no place there.

This would be a possible preventive, meaningful and state-of-the-art policy… to think about.