Governments throughout the world have turned their attention in recent months to combating misinformation online. Last year, France passed a law empowering judges to order the removal of false content during election campaigns.
Germany introduced legislation that imposes penalties and fines for failure to remove hate speech. Recently, Singapore passed a law requiring social media companies to issue corrections — within a matter of hours — when users post items that are deemed false.
This has sharpened the focus on these companies, bringing into contention the responsibility they bear for the business models they have built. Having become large repositories of data, these companies have thrived by selling such data to advertisers.
One consequence is an incentive to promote outrage (the so-called “click-bait” model), and the entire ecosystem then becomes permeated with false information that often wreaks havoc on social as well as economic lives. While critics have argued that this legislation stifles innovation, that argument often misses the point.
The issue of false information, for example, is far better regulated in the press. There are far fewer instances of newspapers being accused of spreading misinformation, because the business model puts the burden of responsibility on the journalists and the organisation itself.
This creates a strong set of disincentives at the micro level against the proliferation of misinformation through the economy. No such disincentives currently exist for social media companies. In an environment where anyone can post anything without accountability, outcomes in elections and referendums can be manipulated systematically by deliberately targeting subsets of people vulnerable to influence.
For example, there is evidence that even the Brexit vote was influenced by a targeted campaign of misinformation in key districts. And because there is no independent record of which people saw which ads — no one other than the company itself has access to such data — it is virtually impossible to ascertain what exactly happened.
Western regulators have held hearings into this phenomenon, and there have been calls as radical as breaking up some of the companies that exert undue influence over such information. In my opinion, this does not address the root of the problem.
Perhaps an alternative would be to compel a change in the business model itself. If social media companies were to switch to a subscription-based model (à la Netflix) where users would have to pay a monthly fee, then two problems would instantly disappear.
The first is anonymity: users would have to disclose personal data about themselves, enabling regulators to identify who posted what. This would compel users to “own” what they post.
The second is an economic disincentive: the number of posts would decrease, improving the quality of the posts that remain.
Whatever the solution, it is clear that these are hard, but very real and consequential, problems that societies must confront head on. The laws adopted so far by various governments balance the onus of responsibility between the companies themselves, which must remove objectionable speech, and individual users.
However, the structural problem remains rooted in the business model itself, and perhaps the time has come to address that root cause — moving social media companies closer to print publications so that the information posted hews closer to the truth.
In the final analysis, debates should rest on a foundation of truth, with a filter that keeps out the lies that so often go viral. This may be hard to enact, but that should not deter societies from attempting a resolution.
— Nasser Malalla Ghanem is Senior Partner at the law firm of NM Associates, which has a joint venture with GCP.