Business Ethics & Corporate Crime Research Universidade de São Paulo

A legal point of view: How EU proposed AI regulation and the CSAM technology debate come together

Retrieved from: Threatpost

Author: Carolina Christofoletti


Disclaimer: All legal provisions mentioned in the following paragraphs refer to the Proposal for a Regulation of The European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) And Amending Certain Union Legislative Acts

A reading of the Proposal for a Regulation of The European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) And Amending Certain Union Legislative Acts, published on 21st April 2021, taken not according to any specific paragraph but according to its general meaning, will point out that the issue to be addressed here is, mainly, protection against certain "database" practices.

The objective of this article is to read the current Proposal together with previous legal and European discussions on Child Sexual Abuse Material (CSAM) technologies. In order to do that, one must read this article with the Proposal for a Regulation of The European Parliament And Of The Council on a temporary derogation from certain provisions of Directive 2002/58/EC of the European Parliament and of the Council, of 20th September 2020, in mind. This is important because one of the discussions that appeared there, namely whether the proposed derogation hampered the development of new technologies for CSAM detection, can also be brought into play here.

The first clarification that emerges here is the following: considering that CSAM technologies (especially machine-learning-powered ones, which are particularly important for recognizing new CSAM material, that is, material not present in any law-enforcement-related database) are powered with videos and images which, in the last instance, can allow or confirm the unique identification of a natural person (Art. 3, (33), quoted below), AI regulation communicates directly with the CSAM case.

Even though CSAM detection technologies are not able (for a series of legal considerations) to match CSAM images in real time against any victim-identification "database" (such victims' databases are, according to Art. 5, 1, (d), restricted to law-enforcement personnel; read it together with Art. 3, (41)), the use of detection technologies still seems to remain allowed for CSAM detection purposes.

At this point, it is important to distinguish investigation- and prosecution-related technology from detection technology, which relates to criminal offences' prevention activities and is, in essence, outside the scope of law enforcement as defined in the Proposal. After all, the text of the Proposal points to the fact that detection is only to be understood under the scope of law enforcement activity if the activities are, in the given case, carried out by law enforcement authorities (Art. 3, (41)).

In terms of victim identification, the provisions of the above-mentioned Proposal explicitly exclude from the prohibition the use of even 'real-time' systems for the purpose of identifying potential victims of crime (which includes, as a matter of legal integration, CSAM crimes as well, already criminalized within the scope of European Law, e.g. EU Directive 2011/92). The victims' database under the control of law enforcement personnel was, as such, protected against future legal battles around it.

Considering the definition of artificial intelligence systems in Art. 3, (1), one could quickly conclude that current industry practices aimed at developing and enhancing CSAM detection technologies fall within the scope of the Proposal. If one continues the reading, one will see that the same Proposal provides an incentive regarding innovative AI systems dedicated to the prevention, investigation, detection or prosecution of criminal offences. In the Proposal, not only the development is safeguarded, but also the testing, a still very important point when we talk about CSAM detection technologies.

The reason why the present Proposal is relevant is that it creates, in the middle of the legal chaos and from a legal point of view, a favourable environment for the fight against CSAM. Yes, the E-Privacy Derogation for CSAM detection purposes is still in force, with its very problematic provisions regarding the development of new technology. But fortunately, when we are talking about CSAM detection technology we are, in fact, also talking about AI-powered systems, and there is no way out. Provided that Union Law, and any law, is to be interpreted in a way that resolves contradictions with the help of specific hermeneutic techniques, and that:

1)   AI systems are a more specific and adequate description of the CSAM detection technology phenomenon than the 'technology' term of the E-Privacy Derogation Proposal (speciality principle), and

2)   The present Proposal is a new law and, as such, as a matter of legal integration, tacitly derogates from any contradictory provisions existing in previous law (meaning all legal documents, jurisprudence and other sources that rule European Law as a whole),

we may expect, then, that the present European AI Proposal will be the strong and consistent legal basis on which the CSAM technology debate will be further anchored.

  • Critical Commentary:

One cannot leave without comment the fact that the AI Regulation proposal excluded online environments from the concept of publicly accessible spaces, leaving it unclear whether this exclusion was meant to drive away the specific provisions applied to publicly accessible spaces in the case of online environments or, rather, to constitute them as private ones.

The exclusion is explicitly mentioned in Consideration (9), which defines publicly accessible spaces and says that "online spaces are not covered either, as they are not physical spaces".

The main reason why that matters is, especially, that the publicly-accessible-space provisions rule some law-enforcement-related paragraphs. The depth of this discussion, and the fact that it could, in fact, impact some database processing related to online available data, would however require another specific article to be written.

Paragraphs of interest:

  • What an artificial intelligence system is

Art. 3. (1) ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;

  • About what kind of data are we talking about:

Art. 3, (33): ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data;

  • Publicly accessible space

Art. 3, (39): ‘publicly accessible space’ means any physical place accessible to the public, regardless of whether certain conditions for access may apply;

  • Real-time systems:

Consideration (8): ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay.

  • AI for searches for potential victims of crimes:

Art. 5, 1, (d): the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives:

(i) the targeted search for specific potential victims of crime, including missing children;

  •  Law enforcement definition

Art. 3, (41): ‘law enforcement’ means activities carried out by law enforcement authorities for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security;

  • Specific Databases:

Consideration (20): The reference database of persons should be appropriate for each use case in each of the three situations mentioned above.

  • Innovative AI systems:

Art. 54, 1: In the AI regulatory sandbox personal data lawfully collected for other purposes shall be processed for the purposes of developing and testing certain innovative AI systems in the sandbox under the following conditions:

(a) the innovative AI systems shall be developed for safeguarding substantial public interest in one or more of the following areas:

(i) the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security, under the control and responsibility of the competent authorities. The processing shall be based on Member State or Union law.