Business Ethics & Corporate Crime Research, Universidade de São Paulo

Dancing in the Abyss: Does self-generated CSAM material follow a Guttman-like progression?

Caspar David Friedrich, Wanderer above the Sea of Fog, 1818.

Author: Carolina Christofoletti

Link in original: click here

 

You might be wondering right now what a Guttman progression is. My intention here is, rather than simply throwing concepts at you, to walk with you along an argumentative path. And my first word to you all is that the Guttman progression is not, in fact, a concept, but a methodology. Guttman is, despite the unfamiliar name, a scale well known to everybody, one aimed at social science measurement. If you have ever faced something like an end-of-course questionnaire and quickly realized that, if you answer yes to one question, you have a high probability of answering yes to everything else being asked, you are already familiar with Guttman.
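To make the idea concrete, here is a minimal sketch, in Python, of what a Guttman (cumulative) scale test looks like in practice. The data, the item ordering and the `reproducibility` helper are hypothetical illustrations of the method, not part of any study discussed here.

```python
# Minimal sketch of a Guttman (cumulative) scale check.
# Items are ordered from "easiest" (most endorsed) to "hardest":
# in a perfect Guttman pattern, a "yes" to a harder item implies
# a "yes" to every easier item before it.

from typing import List

def reproducibility(responses: List[List[int]]) -> float:
    """Goodenough-Edwards coefficient of reproducibility.

    `responses` is a respondents x items matrix of 0/1 answers,
    with items already sorted from easiest to hardest.
    """
    errors = 0
    total = 0
    for row in responses:
        score = sum(row)                                 # how many items this respondent endorsed
        ideal = [1] * score + [0] * (len(row) - score)   # the perfect cumulative pattern for that score
        errors += sum(a != b for a, b in zip(row, ideal))
        total += len(row)
    return 1 - errors / total

# Toy data: four respondents, three items (e.g. "mild" -> "extreme").
data = [
    [1, 0, 0],   # endorses only the easiest item
    [1, 1, 0],   # cumulative, as Guttman predicts
    [1, 1, 1],   # endorses everything
    [0, 1, 0],   # violates the cumulative pattern (counts as errors)
]
print(f"Coefficient of reproducibility: {reproducibility(data):.2f}")
```

In survey practice, a coefficient of reproducibility of roughly 0.90 or above is usually taken as evidence that the items form a cumulative, Guttman-like scale.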

The reason I mention Guttman here is that, for anyone studying sexual crimes, pornography and child sexual abuse material, there is no way to start arguing those cases, philosophically, without some idea of what a Guttman-based argument is. The best-known case argued on the basis of a Guttman-like progression is the pornography viewer, an argument later extended to the CSAM viewer.

Basically, the Guttman argument there says that there is, in practice, something like an escalation process, in which people who begin by viewing some kind of softcore material end up at a hardcore version, or even at a very different category from the one they began with. And while the CSAM viewer and the pornography viewer have already been well explored through Guttman arguments, the CSAM creator has not, especially when we are talking about self-generated CSAM material.

Personally, I have some reservations about accepting the Guttman progression in the viewer case. For me, we have here an argument that we are treating as universal truth in a place where things like algorithmic suggestion (a methodological bias that the Guttman scale itself creates) and easy access have, above all, been treated as trivial, and they are not. Philosophically, I have difficulty understanding why the escalation chain must always end in extreme, very bad material. I truly believe there is a point we are missing here.

As CSAM researchers, we know that this progression metric is, overall, still missing in the CSAM domain, although there are, in fact, some great studies pointing to similar results regarding the progression of CSAM collections in CSAM forums. For the pornography domain, we have a great dataset that is visible (maybe the great difference here from CSAM forums), though still very weakly explored: how search queries progress on adult websites, calibrated methodologically by the variance in how clicks progress through video suggestions.

But forget Guttman conclusions for a while, and let us move Guttman from the conclusion to the premise itself. In a year in which the Internet Watch Foundation found a 77% increase in self-generated CSAM material, researchers are pushed to ask what is going on. And the OnlyFans case is what interests me now, precisely because it gives me the opportunity to investigate Guttman-like progressions not from the viewer's side, but from the creator's.

True, we have, in OnlyFans, a situation where whoever creates photos, videos or any other kind of material (be it sexual or not) on the platform is paid for it, since viewers can only access it through payment.

For OnlyFans users, the fact that a platform with this very specific business model controls children's access through a photo of a document is remarkable, and remarkable for the legal risk it creates. How easy is it to manipulate a document, if the only thing that needs to be approved for access is a photo of the document (make a collage!), and not the document itself? After all, is OnlyFans consulting every public register to check this information? Never mind.

Just keep in mind that viewing CSAM through payment is one of the hypotheses under which knowing access to CSAM can be adjudicated under European law. As PornHub realized, platforms have a hard time dealing with CSAM litigation if it ever reaches criminal courts. That is, maybe, the reason why PornHub's Terms of Service determine that, if you are ever prosecuted over CSAM files found on the platform, PornHub has the right to control your criminal defense. A clause that is, from a legal point of view, void.

What happened to OnlyFans can happen to any platform that deals with viewer numbers, like counts or audience statistics of any kind. Recently, Facebook and Instagram announced that the number of likes can be hidden for whoever wants it to be. A great idea might have been, maybe, to set this by design for their 13-to-18-year-old accounts as well.

OnlyFans is not a pornography platform. Neither is Twitter, Facebook, Instagram or Twitch. But somehow, OnlyFans has become recognized for its 'naked photos and videos' service, just as Facebook, Twitter, Instagram and others have become recognized for their high CSAM numbers on the open web.

The payment feature of OnlyFans, like the gift-card practices on social media platforms, is a possible explanation for why self-generated Child Sexual Abuse Material, from Category A to Category C, appears in those places. But I am afraid that, if we take the payment feature out, the problem remains, as does the hidden element: how likes contribute to a Guttman progression in which a photo of feet ends up as sexually explicit material. How come?

We can talk about direct grooming, where the production of CSAM is solicited directly, and that is a plausible hypothesis. But if we start the analysis from there, we are already missing the starting point. And I want to bring you back to Instagram's child modelling problem. My initial question is: at which point do criminals get "comfortable" enough to start writing those kinds of solicitations publicly, such as "put your genitalia there and I will subscribe"?

My hypothesis is that things begin some time before, when comments like "beautiful", "gorgeous" and "sexy" start to appear, and platforms refuse to moderate them as not being against their Terms of Service. Twitch's "Hot Tub, Pool and Beaches" policy (https://blog.twitch.tv/pt-br/2021/05/21/lets-talk-about-hot-tub-streams/) calls my attention to the inability of those Trust & Safety teams to see escalation.

If you have ever been on Instagram, another image-based platform, try a social experiment: look for a public personality and, if likes are still visible to you, take note of how many times a photo of them walking their dog is liked, and compare it with the other kind of photo you always find: beach ones, pool ones, hot tub ones, to use Twitch's example. Do they differ?

Is Instagram also a dating platform? But being one is also a matter of perception. If my Instagram page is perceived, including by me, as a place where the more "explicit" the photo I post, the more likes I get, then the more likes I want, the more explicit I will be, and the higher the probability that someone comes to comment, in private, on how sexy and beautiful I am (watch out for the groomers). Choose an account where this "sexy" profile is present and scroll down through the account's history. If the old posts are still there, your dataset is intact, waiting for you to come and analyse it.

What happens to fitness models, and to models-to-be of any kind, when suddenly their usual gymnastics livestreams turn into a me-the-hot-one show? What happens to the me-the-hot-one show, as people keep liking and supporting it, in this wormhole where more and more videos like it start to appear? YouTube has proven it in a very bad way, and with gymnastics videos too.

How do we solve the OnlyFans problem? With an improved moderation system. Could you please mention that the contract, when signed with someone under 18 years old intending to sell sexual images, is void by law (an illegal contract)? Could you please be more imperative than "do not post", mentioning that whoever subscribes to an under-18's account, or deliberately accesses such material in an 18+ account without reporting it (I argue that, to make the wheel workable, this legal defense must exist and, for me, we can derive it from general criminal dogmatics as a constructive defense), will be reported to law enforcement? Yes, platforms, this last one is, maybe, your biggest criminal defense.

Just as a matter of law, if you, my reader, visit OnlyFans specifically, keep in mind that you very probably fall under the legal qualification hypothesis that I have called double prosecution. Even though you clicked in Brazil, the server was in the U.K., and the action could be deemed to have happened in both places, meaning you are under dual jurisdiction.

Very personally, I understand Terms of Service as something that must be read together with criminal intention in order to be understood. When you enter a platform through a Terms of Service agreement, you are made aware of the applicable (civil) law. Yes, it may be that the criminal offence belongs to a jurisdiction other than the one where the contract breach occurs. But if a national jurisdiction decides that its servers are the meeting point, located under its jurisdiction, and that remote access does not matter, you have no excuse to argue "I didn't know the applicable law" or "I thought I was under my country's laws". So read the applicable law carefully if you are even thinking about doing something illegal there. My Miranda warning is: the Terms of Service can and will be read, everywhere, against you.

Know that OnlyFans is under U.K. law, which, to platforms' dislike, tends to be very strict. As a matter of comparative law, what European law reads as knowing access to CSAM, the U.K. reads as CSAM creation.

It is not a new piece of legislation; it is a 1978 statute, the Protection of Children Act. Section 1(1) of the above-mentioned Act provides that it is a criminal offence (a) to take, or permit to be taken, or to make, any indecent photograph or pseudo-photograph of a child, but also (d) to publish or cause to be published any advertisement likely to be understood as conveying that the advertiser distributes or shows such indecent photographs or pseudo-photographs, or intends to do so. Wow!

Keep in mind that, in the U.K., "to make" is "to cause to exist, to produce by action, to bring about" (R v Bowden [2000] 1 Cr. App. R. 438) and, as such:

"To make" has been widely interpreted by the courts and can include the following:

  • Opening an attachment to an email containing an image (R v Smith [2003] 1 Cr. App. R. 13)
  • Downloading an image from a website onto a computer screen (R v Jayson [2002] 1 Cr. App. R. 13)
  • Storing an image in a directory on a computer (although depending on where that image is stored, this could also be a possession charge under s. 160 CJA 1988) (Atkins v DPP; Goodland v DPP [2000] 2 Cr. App. R. 248)
  • Accessing a pornographic website in which indecent images appeared by way of automatic “pop-up” mechanism (R v Harrison [2008] 1 Cr. App. R. 29)

For users, watch out for this. For platforms, the Guttman progression among CSAM producers, especially where the content involves children, whether or not it is managed by an adult, is your problem. With this business, you created this risk and, legally, closing your eyes is to be read as omission, which leads to legal liability. Call the compliance officers: this is now the huge legal, economic and reputational risk you carry. So be clear with users about their duty to report.