Business Ethics & Corporate Crime Research Universidade de São Paulo

Keep the networks in sight: Reducing the legal complexity of adjudicating “knowing access to CSAM” (Art. 5(3), Directive 2011/92/EU) on Instagram

Image retrieved from: HPCwire

Author: Carolina Christofoletti


When criminal dogmatics was born, the Internet did not yet exist. At that time, the legal nature of things used to sparkle right before our legal eyes. Before the emergence of virtual worlds, Child Sexual Abuse Material (CSAM) was found almost exclusively in very specific underground sex shops, tucked into a dark corner somewhere in the city.

At that time, Child Sexual Abuse Material was sold and traded as such. For that reason, adjudicating intent in CSAM crimes gave Criminal Law no cause for legal uneasiness of any kind. After all, when direct contact with the material was still a necessary step for possessing or sharing it, everyone who decided to carry out the criminal verb knew, or could have known, the illegal nature of that imagery.

Criminals were, outside any online environment, given the opportunity to check and duly classify the grammatical complement of their acts (the objective element of a criminal offence). Unfortunately, things are no longer that simple.

The Internet has abruptly changed this scenario.

With a single wrong click, trolls can now poison a victim's computer with criminal material, whose records will remain on the device forever. In a blackmail campaign, Child Sexual Abuse Material can be delivered to an e-mail account that never asked for it. Disguised as GIFs portrayed outside their original sexual context, criminal material can be shared virally not only on the Darknet, but also on popular social media platforms on the Open Web.

While criteria such as criminal intention and objective imputation have traditionally been sufficient to deal with the external features of crimes committed in the material world, they have revealed themselves, in the current state of the art, as too precarious to deal with Internet crimes.

But no, criminal dogmatics does not need to be reinvented. The temporal approach to it does.

After all, the black-box feature of the Internet only exists where actors have no previous ‘inside’ information (from CSAM clubs) about what a very specific black box (a link) hides. Distinguishing those cases is, not by chance, exactly the challenge that law enforcement and judges face.

The fact that some judicial search warrants, through which a CSAM club could emerge into daylight, may never be issued because no criminal intention (a conditio sine qua non for a criminal offence to exist) is sufficiently tangible at the ‘crime scene’ clarifies very well what distinctive role Open Source Intelligence coming directly from the criminals' tearoom side could potentially play in this scenario.

Once CSAM accesses are mapped, a cross-data analysis could potentially indicate, at least with reference to the platform's crime scene, which cases are worth a look.
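As a purely illustrative sketch of what such a cross-data analysis could mean in practice (all identifiers, field names and thresholds below are hypothetical, not any platform's real schema), the idea is simply to count how many distinct, already-confirmed CSAM items each account has accessed and surface only the accounts above a review threshold:

```python
from collections import Counter

# Hypothetical inputs: identifiers of content already confirmed/removed as CSAM,
# and a log of (account_id, content_id) access events held by the platform.
known_csam_ids = {"post_871", "post_904", "post_912"}          # hypothetical IDs
access_log = [("acct_a", "post_871"), ("acct_a", "post_904"),
              ("acct_b", "post_904"), ("acct_a", "post_912")]  # hypothetical events

def rank_accounts(access_log, known_csam_ids, min_hits=2):
    """Count how many distinct known-CSAM items each account accessed and
    keep only the accounts above a human-review threshold (min_hits)."""
    hits = Counter()
    seen = set()
    for account, content in access_log:
        if content in known_csam_ids and (account, content) not in seen:
            seen.add((account, content))
            hits[account] += 1
    # Repeated, distinct accesses are a (rebuttable) indicator worth a closer look.
    return [(acct, n) for acct, n in hits.most_common() if n >= min_hits]

print(rank_accounts(access_log, known_csam_ids))  # [('acct_a', 3)]
```

Nothing in such an output proves intent by itself; it only orders the cases so that human reviewers, and eventually judges, look first at the accesses least compatible with accident.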

Elements of crime

Child Sexual Abuse Material crimes are, like the majority of other criminal offences, intentional crimes. That means not only that the defendant must have known that what was in front of him or her was, in fact, Child Sexual Abuse Material (neutral intention): he or she must also have foreseen finding the criminal material prior to the click (causal foreseeability). The same applies when we are talking about knowing access to CSAM.

As Directive 2011/92 of the European Parliament and of the Council clearly states, for the knowing-access provision to apply, “the person should both intend to enter a site where child pornography is available and know that such images can be found there” (Recital 18).

And there is only one place where all these conditions combine: CSAM forums with high access-control requirements. In those places, directories may be explicitly labelled and, as CSAM is the prevalent activity there, the criminal nature of every link clicked from there is objectively foreseeable. Unfortunately, the Internet is not made only of hidden, explicit CSAM forums.

The problem with the Internet's black box is that, in order to check what is inside a link, one must have already opened it. Criminal law was not counting on this temporal inversion. To obey the strictness of criminal dogmatics, a first-click exception would have to be guaranteed. This would have been an optimal solution, were it not for the fact that one-way, ‘ready to burn’ clicks and links are, in fact, an intrinsic part of CSAM ‘club’ mechanics.

While explicit labels on the Open Web might be interpreted as obvious scams or traps by any savvy criminal, they might be seen as a protective feature by CSAM clubs operating on the Dark Web, discouraging ‘accidental’ clicks, especially by Internet vigilantes operating without any legal authorization. It is hard to say, then, in which cases prior link descriptions should be seen as criminally relevant.

Geographically isolated and password-protected criminal tearooms may well describe CSAM clubs hosted on the Darknet, but that picture probably does not reflect what happens on the Open Web.

We might expect CSAM clubs to be born not on the Dark Web, where some degree of computational skill is needed, but rather on the Open Web, where networking platforms are the main candidates for CSAM assemblies. The high volume of CSAM found annually on platforms such as Facebook, Instagram and others validates this thesis.

There is, after all, nothing especially suspicious in being part of an Instagram group or visiting an Instagram page where CSAM is hosted, particularly if the content and the page are sufficiently disguised to place any access under the ‘reasonable doubt’ expectation.

That is why, to beat the growing volume of criminal content hosted on their channels, platforms should start looking for the backbone of the criminal networks they host.

Cache’s Daughter 

The genealogy of knowing-access provisions becomes clear once we start to analyse what kind of legal problems the CSAM provisions had to face in court. Law enforcement is, after all, not the only party that cares about the theoretical provisions of criminal law: criminals do too.

The moment CSAM clubs left their underground sex shops behind to host their business on an online server somewhere on the Internet, they gave birth to a legal complexity that did not exist before.

Instead of selling Child Sexual Abuse Material for someone to physically possess, criminals started collecting their criminal ‘cards’ and displaying them in huge, diversified and collectively grown galleries, ready to be accessed by anyone willing to pay or contribute. Because the criminal offence was possessing CSAM, virtual galleries were exploiting the most obvious legal gap: only human acts can be criminal or not.

Clicked images were, in fact, being stored in the browser cache (a functionality that speeds up the display of a clicked page on future visits) for later viewing. But because that storage was automatic, and although it created a lasting record of the criminal accesses, no intention to possess could be adjudicated.

The click-without-downloading version of CSAM clubs may further explain why very popular clubs such as the Wonderland Club resulted in so few arrests. To close this gap, the Internet version of CSAM clubs needed to be addressed differently: intentional access was therefore criminalized.

With the birth of knowing-access provisions, the cache moved from the material side of the evidence scene to the intentional one. Rather than a single cached click, it is Internet browsing habits that courts now analyse.

Knowing that, criminals are not only no longer storing their caches but are also working with powerful anti-tracking tools, starting, for example, with broken, coded CSAM weblinks that still protect CSAM clubs' doors. That is how, from an outside perspective, knowing access to CSAM may look exactly the same as its unintentional counterpart.

And What is Next?  

The expectation around image-based platforms such as (but not only) Instagram is that, in terms of knowing access to CSAM, clarity of intention could be extracted from observable patterns of accessed images rather than from textual mentions.

Because it is the platforms themselves that collect those data, the missing “intention data” not only actually exist, but are also probably intact.

According, for example, to Instagram’s Data Policy, Instagram

“[collects] information about how you use our Products, such as the types of content you view or engage with; the features you use; the actions you take; the people or accounts you interact with; and the time, frequency and duration of your activities. For example, [Instagram] logs when you’re using and have last used our Products, and what posts, videos and other content you view on our Products.”

According to the same Policy, those data can be used by the platform to

“verify accounts and activity, combat harmful conduct, detect and prevent spam and other bad experiences, maintain the integrity of our Products, and promote safety and security on and off of Facebook Products. For example, we use data we have to investigate suspicious activity or violations of our terms or policies, or to detect when someone needs help.”

That is how Safety Pop-Up technologies, such as those activated during searches for CSAM-specific keywords or after recurrent attempts to connect with children's accounts, become actionable.
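To make the point concrete, here is a minimal, hypothetical sketch of how the kind of logged engagement data the Policy describes (what was viewed, how often, for how long) could be turned into a pattern-based review signal; the event schema, thresholds and function names are illustrative assumptions, not Instagram's actual systems:

```python
from dataclasses import dataclass

# Hypothetical event record; nothing here reflects Instagram's real internal schema.
@dataclass
class ViewEvent:
    account_id: str
    content_id: str
    flagged_as_csam: bool    # content later flagged/removed by moderation
    duration_seconds: float  # how long the content stayed on screen

def needs_safety_review(events, min_flagged_views=3, min_total_seconds=30.0):
    """A crude, illustrative pattern test: several deliberate (non-instant) views
    of later-flagged content by the same account describes a browsing habit,
    not a single stray click, and would warrant a human Trust & Safety review."""
    deliberate = [e for e in events if e.flagged_as_csam and e.duration_seconds > 2.0]
    total_time = sum(e.duration_seconds for e in deliberate)
    return len(deliberate) >= min_flagged_views and total_time >= min_total_seconds

# Example: four unhurried views of later-removed content trips the review signal.
history = [ViewEvent("acct_x", f"post_{i}", True, 12.0) for i in range(4)]
print(needs_safety_review(history))  # True
```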

We must bear in mind that, since accessing CSAM (intentionally or not) does not constitute a violation of platforms' Community Guidelines, and as such cannot lead to any account ban of any kind, most of those access-only CSAM clubs running on Instagram are still active.

The intelligence Eden of social media platforms remains asleep while criminals play peek-a-boo with Instagram's algorithms and treat its Trust & Safety policies like a bop bag. That may explain why, despite constant review of safety policies, the numbers continue to grow.

If this intelligence assessment existed, the still-collaborative side of the Internet could very well function as a front-line filter, providing judges with more accurate information with which to decide in which ‘CSAM access’ cases a search warrant should be issued.

Search warrants would then need to be assessed not only through abstractions such as the ‘no single click’ rule, but rather on concrete circumstances.

In a nutshell, if Instagram is being used as a way of gathering intentional access to Child Sexual Abuse Material, the platform may, for the above-mentioned reasons, be aware of it.

Because CSAM criminals tend to navigate through a series of similar contents whose virtual locations and paths tend to be indicated beforehand somewhere else (a forum characteristic that Instagram chat encryption could also simplify), it would be rational to expect that Instagram's flagged, reported and removed CSAM would also show a common trail of common accesses… if the Platform is ever willing to put the pieces of this puzzle together. A sketch of what that trail analysis could look like follows below.
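As a hedged illustration of that “common trail” (account names, post identifiers and the overlap threshold are invented for the example), the analysis amounts to checking which accounts share an unusually large set of accesses to the same removed CSAM items, since shared, pre-indicated paths are hard to explain as coincidence:

```python
from itertools import combinations

# Hypothetical data: which accounts accessed which later-removed CSAM posts.
accesses = {
    "acct_a": {"post_1", "post_2", "post_3"},
    "acct_b": {"post_1", "post_2", "post_3"},
    "acct_c": {"post_9"},
}

def common_trails(accesses, min_shared=2):
    """Find account pairs whose sets of removed-CSAM accesses overlap heavily.
    A dense overlap is the 'common trail' hinting at a shared, pre-indicated
    path (e.g. links circulated off-platform) rather than accidental clicks."""
    trails = {}
    for a, b in combinations(sorted(accesses), 2):
        shared = accesses[a] & accesses[b]
        if len(shared) >= min_shared:
            trails[(a, b)] = sorted(shared)
    return trails

print(common_trails(accesses))
# {('acct_a', 'acct_b'): ['post_1', 'post_2', 'post_3']}
```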

And, where access charts become clear, criminal intention can sparkle again.