The debate about harmful content circulating on social media has been going on for some time. What is needed is more oversight with the help of cutting-edge AI.

The US Supreme Court is hearing two crucial cases on the question of whether Google’s YouTube or Twitter – and by extension, any other intermediary – should be liable for algorithmically promoting terrorism-related content on its platform.

These cases were brought after families of people killed in terrorist attacks claimed that the platforms are partly responsible for the spread of such content because their algorithms recommend it. The legal issues raised could change the way social media operates, since the industry’s entire business model is built on algorithmic recommendations.

Breaking down the legal and ethical issues

Section 230 of the US Communications Decency Act of 1996 provides immunity to platforms for content posted on their portals by users. This is often called the ‘safe harbour’ provision, and it exists in different forms in many jurisdictions. The idea is that users and community members – not the platform – are responsible for the content they post or the products they sell.

Think of an online portal having immunity when an electronics brand sells you a faulty hairdryer through it, or YouTube having immunity when a content creator uses someone else’s music without a license (which would violate the musician’s copyright).

This immunity is not a blanket protection; platforms must comply with certain conditions that vary by jurisdiction. Under most laws around the world, platforms must undertake their own due diligence to avail themselves of this immunity, and they take extensive measures to ensure the trust and safety of their services.

For example, they deploy AI tools and human reviewers to moderate content of all types proactively. They also provide reporting tools to identify harmful content reactively – such as the ‘report abuse’ feature on posts.
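To make the two moderation paths concrete, here is a minimal sketch of how a proactive automated check and a reactive report queue might sit side by side. Everything in it – the `Post` class, `FLAGGED_TERMS`, `proactive_check`, `report_abuse` – is hypothetical and for illustration only; real platforms rely on far more sophisticated machine-learning models and human review rather than a keyword heuristic.

```python
# Illustrative sketch (not any platform's actual system): a proactive
# automated screen at posting time, plus a reactive "report abuse" path.
from dataclasses import dataclass, field
from typing import List

# Hypothetical terms that trigger an automated flag (stand-in for an ML model).
FLAGGED_TERMS = {"terror_recruitment", "graphic_violence"}

@dataclass
class Post:
    post_id: str
    text: str
    reports: List[str] = field(default_factory=list)  # IDs of users who reported it

def proactive_check(post: Post) -> bool:
    """Proactive path: return True if the automated screen flags the post for review."""
    tokens = set(post.text.lower().split())
    return bool(tokens & FLAGGED_TERMS)

def report_abuse(post: Post, reporter_id: str, review_queue: List[Post]) -> None:
    """Reactive path: a user report pushes the post into the human-review queue."""
    post.reports.append(reporter_id)
    if post not in review_queue:
        review_queue.append(post)

# Usage: a post can reach the review queue either proactively or via a report.
review_queue: List[Post] = []
post = Post("p1", "ordinary holiday video")
if proactive_check(post):
    review_queue.append(post)               # flagged automatically
report_abuse(post, "user42", review_queue)  # flagged by a user report
print(len(review_queue))                    # the post now awaits human review
```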

Legally speaking, then, the US cases are about whether safe harbour protection extends to algorithmic recommendations. But for average users, the cases are about whether these companies are doing enough to ensure the trust and safety of their platforms.

The logical question is: should platforms bear any liability for content that is produced by content providers but recommended algorithmically by the platforms? The answer is both yes and no.

Making platforms liable for such content would make them the arbiters of true and false, right and wrong. That is the job of the courts – or of society – not of private companies.

Making platforms liable would also be like shooting the messenger. By way of analogy, think of these platforms as telecom companies. Just as telecom service providers are not sued for what people say over their networks, websites that host content providers cannot be sued for what those content providers say or do.

AI offers a way forward

We believe that harmful content showing up in a user’s feed is not really a failure of the law; it is a failure of AI. The need of the hour is for intermediaries to proactively take steps to ensure the trust and safety of their platforms. What we need right now is cutting-edge AI that can identify and weed out such content, and robust government policies that set best practices for screening content.

As part of their due diligence, platforms are already building strong content moderation policies that prohibit such content. Some have also developed automated systems that help detect content that may violate those policies. For example, they use hash-sharing databases – shared registries of digital fingerprints (hashes) of known violative content – to block re-uploads before they become available to the public. Such best practices can be codified as industry standards and improved in collaboration with regulators.
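The sketch below shows the control flow of hash matching at upload time. Real deployments, such as the industry hash-sharing database operated by the Global Internet Forum to Counter Terrorism, typically use perceptual hashes (for example PhotoDNA or PDQ) that survive re-encoding and cropping; the cryptographic SHA-256 used here only catches byte-identical re-uploads and is chosen purely for simplicity. The `KNOWN_VIOLATIVE_HASHES` set and `screen_upload` function are hypothetical names for illustration.

```python
# Illustrative sketch of hash matching against a shared database of known
# violative content. Production systems use perceptual hashes; SHA-256 here
# demonstrates only the overall flow of blocking re-uploads before publication.
import hashlib

# Assume this set is periodically synced from an industry hash-sharing database.
KNOWN_VIOLATIVE_HASHES = {
    "9f2c5e...",  # placeholder digests of previously removed content
}

def content_hash(file_bytes: bytes) -> str:
    """Fingerprint an upload; real deployments would use a perceptual hash."""
    return hashlib.sha256(file_bytes).hexdigest()

def screen_upload(file_bytes: bytes) -> bool:
    """Return True if the upload may go live, False if it matches known content."""
    return content_hash(file_bytes) not in KNOWN_VIOLATIVE_HASHES

# Usage: block the re-upload before it ever becomes publicly available.
upload = b"...video bytes..."
if screen_upload(upload):
    print("publish")
else:
    print("blocked: matches known violative content")
```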

It is also important to understand that it is in an intermediary’s long-term interest to run a safe and secure platform, since that is what attracts users and advertisers. If platforms work with the authorities to jointly develop algorithmic tools and principle-driven policies aimed at fighting off ‘bad actors’, everyone benefits.

As users, we hope that these platforms remain safe for all of us.