Business Ethics & Corporate Crime Research Universidade de São Paulo

Where things have gone wrong, set your Trust & Safety team to analyse the data collected under your Privacy Policy

Image Retrieved from: ENISA

Author: Carolina Christofoletti

Link in original: Click here

The policies of any Social Platform should not, under any circumstances, be read in isolation. By an isolated reading, I mean a reading detached not only from the other policies in force on the same platform, but also from the general action guidelines of the other stakeholders who are equally involved in the problem that the platform in question wants, through these policies, to solve.

Regarding policies against child sexual abuse material, my attention is drawn, first of all, to the fact that many platforms perceive as sufficient the mere use of automated detection systems (artificial intelligence and machine learning), together with the existence, more or less formalized, of a protocol for communications between the platform and the channels responsible for receiving these reports.

Consequently, I am struck by the silence of these same platforms, which claim to be committed (often, as PornHub put it, under a ‘zero tolerance’ policy) to preventing the use of their channels to disseminate child sexual abuse images, when it comes to analysing, in an internal investigation procedure, a database that, although built for a commercial purpose, could leverage the performance of their Trust & Safety teams.

One case that particularly interests me is that of adult pornography platforms. The challenge for the Trust & Safety teams operating on these platforms is to filter out, amidst an overwhelming amount of sexually explicit content and titles often labelled on the verge of illegality (with ambiguous keywords such as “teens”), the content that is potentially classifiable as underage content.

Image detection technology has, so far, had little chance with underage material reputed to be “new”, i.e. not yet integrated into any database feeding the detection systems. And with a fairly high volume of brand-new content posted every second, moderator teams have, at present, little chance of identifying, in a 20-minute video, for example, where the 4 or 5 seconds are in which, sometimes, a word, a specific object in the scene, an angle or some other indicator points to the blatant illegality of the content.
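To make that limitation concrete, here is a minimal sketch of database-driven matching. Everything in it is an assumption for illustration: production systems rely on shared industry databases of perceptual hashes (such as PhotoDNA or PDQ), which tolerate re-encoding, whereas a plain SHA-256 is used here only for simplicity.

```python
import hashlib
from pathlib import Path

# Hypothetical list of hashes of already-catalogued abuse material.
# In practice this would be loaded from an industry hash-sharing programme.
KNOWN_HASHES: set[str] = set()

def is_known_material(video_file: Path) -> bool:
    """True only if this exact file has been hashed and catalogued before."""
    digest = hashlib.sha256(video_file.read_bytes()).hexdigest()
    return digest in KNOWN_HASHES

# A brand-new upload has, by definition, no entry in any database yet,
# so this check stays silent exactly where moderators need help most.
```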

Often, the moderators do not even have 4 or 5 seconds to review the video, whatever it is. Much less do they have a crystal ball to know, in 4 or 5 seconds of one long film, where the problem lies or why the video was flagged. After all, when a video is flagged as child sexual abuse material on an adult pornography site, do the moderators watch it all the way through? To control the platform, Trust & Safety teams have to think like criminals: paying attention to the patterns (e.g. specific time cuts) in which illicit material is found (if ever) and to videos flagged repeatedly.
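As a hedged illustration of that triage logic, the sketch below (with invented flag records) ranks videos by how often they were flagged and clusters the reported timestamps, so a reviewer can jump to the few seconds that drew attention rather than watching a long film end to end:

```python
from collections import Counter

# Hypothetical flag records: (video_id, flagged timestamp in seconds).
flags = [
    ("vid_001", 312), ("vid_001", 315), ("vid_001", 1140),
    ("vid_002", 45),
    ("vid_001", 310),
]

# Videos flagged repeatedly go to the top of the review queue.
flag_counts = Counter(video_id for video_id, _ in flags)
review_queue = [vid for vid, _ in flag_counts.most_common()]

def cluster_timestamps(times: list[int], gap: int = 30) -> list[tuple[int, int]]:
    """Group flagged timestamps that fall within `gap` seconds of each other."""
    clusters: list[tuple[int, int]] = []
    for t in sorted(times):
        if clusters and t - clusters[-1][1] <= gap:
            clusters[-1] = (clusters[-1][0], t)  # extend the current cluster
        else:
            clusters.append((t, t))              # start a new cluster
    return clusters

for vid in review_queue:
    times = [t for v, t in flags if v == vid]
    print(vid, flag_counts[vid], cluster_timestamps(times))
```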

Briefly, the problem remains, in the end, where to look – and this is information that would ultimately interest not only the human team of moderators who review the materials but also the human team that hires, programs and manages the platform’s control tools.

But Trust & Safety teams do not necessarily have to operate in the dark, especially if there is a highly interesting database waiting, untouched, under the Platform’s Privacy Policy.

I will now copy one of the clauses I found in the Privacy Policy section of an adult pornography site, a data collection clause that, according to the same policy, applied not only to registered users but to anyone who visited the page. The platform was collecting, from whoever accessed its services, data such as the following:

“Website activity data: We collect information about how you use our Websites, products and services and interact with our content and advertisements, including the pages you visit in our Websites, search history, and the referring web page from which you arrived at our Websites. We collect browser and operating system information, devices you use to access the Websites and your time zone setting.

We also collect online identifiers. Specifically, we collect internet protocol (IP) address information and we set cookies as explained below in the section on Cookies and Automatic Data Collection Technologies”.

When content involving minors is found on an adult pornography platform that collects, through an opt-in clause, data such as the path of access to that material and the search history (including outside the platform itself), this database should be delivered, as a matter of priority, to the platform’s Trust & Safety team for analysis and policy improvement.

Considering that child sexual abuse materials are typically accessed in large numbers (collector behaviour), this database, hidden in the platform’s commercial terms, could indicate the following (a minimal analysis sketch follows the list):

  • 1) Other outside locations where links of the same nature could (still) be in active distribution (via the access path). The circumstance of direct accesses (broken referrer links) would also be relevant (possible forum mechanics).

These access paths (especially the immediately preceding one) should be reported to the same competent channels to which the material will be reported, so that it can also be investigated whether other materials of this category sit in the same location.

  • 2) The search path across the open internet, which would be a strong indicator especially for search engines, which could use the information to improve their own safety policies. The sites in question are, after all, holders of privileged information.
  • 3) Other places, within the same platform, where other content of the same category is hosted: valuable information for Trust & Safety teams, which should then proceed with further removals where that is the case.
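As a minimal sketch of what such an analysis could look like, assuming only access-log rows (video ID plus referrer URL) for videos already confirmed as illegal, the snippet below counts referring hosts, surfacing candidate external distribution points, and counts direct, referrer-less accesses, the possible forum mechanics mentioned in item 1. The log format and URLs are invented for illustration:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical access-log rows for confirmed illegal videos:
# (video_id, referrer_url or None for a direct access).
access_log = [
    ("vid_001", "https://some-forum.example/thread/123"),
    ("vid_001", "https://some-forum.example/thread/123"),
    ("vid_001", None),  # direct access: possibly a link shared off-platform
    ("vid_001", "https://search.example/?q=ambiguous+keyword"),
]

# Most frequent external referrers are candidate distribution points to hand
# over, together with the report itself, to the competent channels.
referrer_hosts = Counter(urlparse(ref).netloc for _, ref in access_log if ref)
direct_hits = sum(1 for _, ref in access_log if ref is None)

print(referrer_hosts.most_common(), "direct accesses:", direct_hits)
```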

It is precisely in the Privacy Policies that the communication axis between the stakeholders should have been opened. There is still time.