Business Ethics & Corporate Crime Research – Universidade de São Paulo

Relying on Children to Protect Themselves: A Commentary

Image retrieved from: Techmundo

Author: Carolina Christofoletti

Link in original: Click here

 

In an announcement released today, Instagram said it is implementing new features to protect child users on its platform. Among the new features, the main point seems to be a Safety Notice that appears when the adult account involved in a communication with a child account itself presents abnormal (suspicious) patterns.

If end-to-end encryption no longer allows the chats themselves to be monitored (a case in which Safety Notices could otherwise be displayed whenever abnormal texting, or sexting, patterns were identified), mapping account-level patterns is presented as the alternative.

This is a useful control for cases where adults are inappropriately using platform features to contact children in general, but a weak one when the target is not any child whatsoever but a specific child (e.g. someone they have met at a club or within their own family). In those cases, the prior behavioural pattern may simply not exist at the time the offence is being committed.

After this preliminary introduction, let’s explore what the new features are:

⌘ Age verification

“To address this challenge, we’re developing new artificial intelligence and machine learning technology to help us keep teens safer and apply new age-appropriate features, like those described below.”

⌘ Unwanted contact

“To protect teens from unwanted contact from adults, we’re introducing a new feature that prevents adults from sending messages to people under 18 who don’t follow them. For example, when an adult tries to message a teen who doesn’t follow them, they receive a notification that DM’ing them isn’t an option.”
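To make the quoted rule concrete, here is a minimal sketch of the check it implies, assuming hypothetical account fields and a simplified age model; this is an illustration of the logic only, not Instagram's implementation.

```python
# Illustrative sketch only: a simplified version of the rule quoted above.
# Account fields and the dm_allowed() helper are hypothetical, not Instagram's API.
from dataclasses import dataclass

@dataclass
class Account:
    user_id: str
    age: int
    following: set  # user_ids this account follows

def dm_allowed(sender: Account, recipient: Account) -> bool:
    """Block DMs from adults to under-18 users who do not follow them."""
    if sender.age >= 18 and recipient.age < 18:
        # The teen must already follow the adult for the DM to go through.
        return sender.user_id in recipient.following
    return True

# Example: an adult messaging a teen who does not follow them is blocked.
adult = Account("adult_1", 35, following={"teen_1"})
teen = Account("teen_1", 14, following=set())
print(dm_allowed(adult, teen))  # False -> "DM'ing them isn't an option"
```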

But:

✱ Since sending a follow request to an under-18 account remains an available function, this safety measure is still a weak one (especially if adults hold private accounts, so that children need to follow them in order to discover who they actually are)

✱ Account geolocation data may be an interesting signal for estimating the probability that a contact is suspicious (e.g. someone from another city, from the other side of the country, or even from another country); a rough sketch of such a signal follows below
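A hedged sketch of how such a geolocation signal could work, with invented coordinates, thresholds and score values (nothing here is an existing platform feature):

```python
# Illustrative sketch: using the distance between two accounts' self-declared
# locations as one (weak) signal that a contact may be unusual. Coordinates,
# thresholds and score values are invented for the example, not platform values.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def distance_signal(adult_loc, teen_loc):
    """Map distance onto a coarse 0-1 'unusual contact' signal (assumed thresholds)."""
    d = haversine_km(*adult_loc, *teen_loc)
    if d < 50:      # same city or nearby
        return 0.1
    if d < 1000:    # other side of the country
        return 0.5
    return 0.9      # likely another country

# Example: a teen in São Paulo contacted by an account located in Lisbon.
print(distance_signal((38.72, -9.14), (-23.55, -46.63)))  # 0.9
```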

⌘ Escalation concerns

“Safety notices in DMs will notify young people when an adult who has been exhibiting potentially suspicious behaviour is interacting with them in DMs. For example, if an adult is sending a large amount of friend or message requests to people under 18, we’ll use this tool to alert the recipients within their DMs and give them an option to end the conversation, or block, report, or restrict the adult.”
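The trigger quoted above is, at its core, a volume heuristic. A minimal sketch of that idea follows, with an invented threshold, field names and notice text; none of this reflects Instagram's real detection logic.

```python
# Illustrative sketch of a volume heuristic like the one quoted above: an adult
# who sends many follow/message requests to under-18 accounts is flagged, and
# the teens they contact see a Safety Notice. The threshold, field names and
# notice text are assumptions made for the example.
from typing import Optional

UNDER_18_REQUEST_THRESHOLD = 20  # hypothetical cut-off per time window

def is_potentially_suspicious(sender_age: int, requests_to_minors: int) -> bool:
    """Flag adults sending a large number of requests to under-18 accounts."""
    return sender_age >= 18 and requests_to_minors >= UNDER_18_REQUEST_THRESHOLD

def safety_notice(flagged: bool) -> Optional[str]:
    """Text shown to the teen recipient when the sender has been flagged."""
    if flagged:
        return ("This account has been sending many requests to people under 18. "
                "You can end the conversation, or block, report or restrict them.")
    return None

print(safety_notice(is_potentially_suspicious(sender_age=34, requests_to_minors=57)))
```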

Nice things that start to appear:

  • A discreet piece of "Don't change platforms" advice
  • A clear notice that there is no "alert" coming with the report function
  • A discreet piece of "Don't send any photos" advice, hidden behind the word 'trust'

But:

✱ Preventing suspicious profiles from continuing their suspicious activities may only help up to a certain point, and only if the safety notifications themselves work. Stopping them from sending requests to accounts with a special characteristic (under-18 accounts) may be a better solution once the pattern has been mapped.

✱ Maybe putting the "Followers and Following" control in the hands of under-18 users (so that they are the ones who need to send the contact request first) would be a better option.

✱ In scenarios such as grooming, where predators exploit vulnerable characteristics of their victims (e.g. low self-esteem), it is still worth analysing whether, and if so to what extent, the reporting functions work when we are talking about under-18 accounts.

✱ What is the difference between blocking, reporting and restricting, and in which cases should each be used?

✱ The red colour of the "Report", "Block" and "Restrict" buttons may have the unwanted side effect of making users less likely to click them.

⌘ Hiding child users

“In the coming weeks, we’ll start exploring ways to make it more difficult for adults who have been exhibiting potentially suspicious behaviour to interact with teens. This may include things like restricting these adults from seeing teen accounts in ‘Suggested Users’, preventing them from discovering teen content in Reels or Explore, and automatically hiding their comments on public posts by teens.”
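As a rough sketch of the "hiding" idea, assuming invented data shapes: the suggestions shown to a flagged adult simply filter out under-18 accounts. This is an illustration of the described behaviour, not the platform's code.

```python
# Illustrative sketch: removing under-18 accounts from the suggestions shown to
# an adult who has already been flagged as potentially suspicious.
# The data shapes below are assumptions made for the example.

def filter_suggestions(viewer_is_flagged_adult: bool, candidates: list) -> list:
    """Drop teen accounts from 'Suggested Users' for flagged adult viewers."""
    if not viewer_is_flagged_adult:
        return candidates
    return [acc for acc in candidates if acc["age"] >= 18]

candidates = [{"name": "teen_account", "age": 15}, {"name": "adult_account", "age": 42}]
print(filter_suggestions(True, candidates))  # only the adult account remains
```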

✱ If children still use their real names in their accounts, they are still pretty easy to find (e.g. by local contacts).

✱ What does "potentially suspicious behaviour" mean, after all?

⌘ Private accounts

“We’ve recently added a new step when someone under 18 signs up for an Instagram account that gives them the option to choose between a public or private account.”

✱ Under-18 accounts should be private by design, not merely by choice.

Conclusion:

Even though social media platforms tend to be pretty comfortable with report buttons, relying on them to protect child users from grooming and other kinds of inappropriate behaviour in chats that the platforms themselves cannot see (See no Evil, hear no Evil) may be more complex than it initially seems.

Especially if the "Report", "Block" and "Restrict" buttons are displayed with no further indication of what is expected to be reported in the first place, and if a click (even one coming from a Safety Notice pop-up) presents whoever pressed the button with a further, highly complex "table of classification" drawn from the platform's Terms of Service. Digital literacy is key, especially when we are talking about child safety on social media platforms.