Business Ethics & Corporate Crime Research Universidade de São Paulo

Internal link-tree analysis: Why social media platforms should care about it

Image Retrieved from: Greenbelt Consulting

Author: Carolina Christofoletti


The “wormhole” quandary of every social media platform monetized through a click model is one of mixed datasets. The reason why we can expect a brand-new page displaying cute animals to be found by its fans just as easily as political content is found by potential voters is a single one: the click-ranking algorithm does not discriminate between them.

At least, this is what the generalized silence in some platforms’ statements about their recommendation algorithms (check, for example, Instagram’s transparency statements on that) indicates.

In both cases, it is a matter of similar clicks… and the same logic applies to the wide variety of content recommended daily by social media platforms through their by-design personalized content suggestion schemes.

In order to address the climbing number of posts violating Community Standards and the law found daily by social media platforms, the intersection between legal, innocent content and illegal, harmful content hosted on the same platform may be worth exploring in depth.

And for that purpose, the causal chains need to be properly identified. How was flagged, removed content being accessed on social media platforms? In terms of algorithm design, access patterns might be far more interesting than the static, numerical observations currently in place in the Research Labs.

Looking retrospectively at the causal chain of accesses gives us the advantage of auditing the platforms’ problem through a ‘big picture’ approach. Some until-then missing links might, at that point, come to light.

For example, the common path between an at-first-glance unsuspicious healthy-food page and pro-anorexia content displayed on the same platform could emerge from the platform’s black box into the light. And there is an immediate application for that finding.

Since the healthy-food page does not violate any known Community Standard, there is no reason to remove it. But, in order to contain escalation, the recommendation feature of this specific page, and of other healthy-food-related pages, might be worth shuffling.

First and foremost, because constant healthy-food-related suggestions may additionally lead, in the long term, to voluntary searches that take users directly to the Pro-Ana wormhole hidden on the platform. That is, if and when the platform’s subculture is already strong enough to move users from healthy-food to more specific, Pro-Ana-related keywords and pages.
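A minimal sketch of what such “shuffling” could look like, assuming a hypothetical recommender that scores candidate pages: pages known to sit on escalation paths toward previously removed content get their scores dampened and the top of the list re-shuffled, so that otherwise innocent pages stop acting as a stable on-ramp. All names here (recommend, escalation_risk, user_scores) are illustrative assumptions, not any platform’s real API.

```python
import random

def recommend(user_scores: dict, escalation_risk: dict,
              k: int = 10, damping: float = 0.5, seed: int = None) -> list:
    """Return k page suggestions, dampening and shuffling pages that sit
    on known escalation paths toward previously removed content.

    user_scores     -- hypothetical relevance scores from the click-ranking model
    escalation_risk -- 0..1 share of past access paths through this page that
                       ended in flagged/removed content (from an access-tree audit)
    """
    rng = random.Random(seed)
    adjusted = {
        page: score * (1.0 - damping * escalation_risk.get(page, 0.0))
        for page, score in user_scores.items()
    }
    # Rank by the dampened score, then shuffle within the top slice so the same
    # risky pages are not pinned to the same positions for the same audience.
    top = sorted(adjusted, key=adjusted.get, reverse=True)[: k * 2]
    rng.shuffle(top)
    return top[:k]
```

The design choice, under these assumptions, is that nothing is removed: the innocent page keeps existing and keeps being recommendable, it simply loses its privileged, repeated position in the suggestion chain.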

Although the rapid removal of harmful posts from platforms’ channels can be a great solution, it does not prevent similar, still-unflagged content from popping up for the same previous audiences. After all, the suggestion feature keeps being powered by the very same algorithm.

The unwelcome news here is that we still have no idea how much illegal or policy-violating content keeps popping up for those who had previously accessed its removed counterparts. Seen that way, the Open Web’s power to amplify illegal content is far more dangerous than the Darknet’s.

Where pages are constantly related to one another, the content is gone but the ‘suggestion algorithm’ remains. That means, simply, that the resilience of criminal groups on platforms powered indiscriminately by algorithms can easily be propelled by the ghost suggestion paths that criminals have themselves already built.

And the second piece of unwelcome news is that, until now, we have no idea what role platforms’ algorithms play in the wormhole of illegal content that platforms are themselves finding through their automated tools. We do not know where the system breaks.

And in fact, if algorithm scandals were not found by humans and reported with outrage, cases such as YouTube’s gymnastics videos might never have come to light.

Blind to the points where innocent causal chains evolve into more harmful ones, platforms may now be unable to control the algorithmic Frankenstein that lives inside their labyrinthine channels. Platforms are thus urged to identify the point where their wonderful recommendation machine breaks.

But the odds of social media Frankensteins can be neutralized if Research Teams start to go after the access trees. Rather than a progressive observation of the millions of posts uploaded every day to social media channels, a regressive analysis might simplify the research task considerably. It is from flagged, removed content that the archaeological dig is expected to start.
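As a concrete illustration of such a regressive, access-tree analysis, the sketch below walks a hypothetical referral log backwards from removed items, counting how often each still-legitimate page appears upstream of them. The referral-log format and field names are assumptions made for illustration, not any platform’s real data model.

```python
from collections import Counter, defaultdict, deque

def upstream_exposure(referral_log, removed_items, max_depth=3):
    """Walk referral edges backwards from removed items.

    referral_log  -- iterable of (source_id, target_id) pairs, meaning an
                     access to target was referred/recommended from source
    removed_items -- ids of content already flagged and removed
    Returns a Counter: how often each upstream node appears on a path
    (up to max_depth hops) that ends in removed content.
    """
    parents = defaultdict(set)          # target -> sources that led to it
    for source, target in referral_log:
        parents[target].add(source)

    exposure = Counter()
    for removed in removed_items:
        queue = deque([(removed, 0)])
        seen = {removed}
        while queue:
            node, depth = queue.popleft()
            if depth == max_depth:
                continue
            for parent in parents[node]:
                if parent not in seen:
                    seen.add(parent)
                    exposure[parent] += 1   # page sits upstream of removed content
                    queue.append((parent, depth + 1))
    return exposure

# Hypothetical usage: the most frequent upstream pages are candidates for the
# recommendation shuffling discussed above, not for removal.
log = [("healthy_food", "pro_ana_tips"), ("gym_page", "healthy_food"),
       ("cute_cats", "gym_page")]
print(upstream_exposure(log, removed_items={"pro_ana_tips"}).most_common(3))
```

Starting from the small set of removed items and walking backwards is what keeps the task tractable: the dig only touches the paths that actually ended in a violation, not the millions of daily uploads.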

And regulators should be interested in this result more than anyone else. You must agree with me that, in terms of legal liability, there is a huge difference between the numerical and legal picture of a platform with plenty of direct access to illegal content (meaning that the access instructions were found somewhere else) and that of a platform where content paths originate on the platform itself, with the indirect help of the platform’s own algorithms.

Despite being a turning point for regulators, the unsafe role played by platforms’ own algorithms still remains hidden data.

And it is urgent to know what this data is.

After all, if things such as AI-flagged terrorist propaganda and child sexual abuse material are simply being removed without any further analysis, a great opportunity to properly protect platforms against repeated misuse is being discarded, together with this very unique dataset.

The problem here is one of where to look. That is why the good advice for Social Media Research Labs is this: start theorizing about what lies, more specifically, in the platform’s rubbish bins.