A man demonstrates how he enters his Facebook page as he works on his computer at a restaurant in Brasilia, Brazil, Thursday, Jan. 4, 2018. Facebook is announcing initiatives in Brazil to counter false news reports that are expected to proliferate as the country heads toward October elections. (AP Photo/Eraldo Peres)

The indictment of 13 Russians filed by Robert Mueller, the special counsel investigating Russian efforts to influence the 2016 presidential election in the United States, details the secret workings of the Internet Research Agency, an organisation in St Petersburg, Russia, that disseminates false information online. According to American intelligence officials, the Kremlin oversaw this shadowy operation, which made extensive use of social media accounts to foster conflict in the US and erode public faith in its democracy.

But the Kremlin’s operation relied on more than just its own secrecy. It also benefited from the secrecy of social media platforms like Facebook and Twitter. Their algorithms for systematically targeting users to receive certain content are off limits to the public, and the output of these algorithms is almost impossible to monitor. The algorithms make millions of what amount to editorial decisions, pumping out content without anyone fully understanding what is happening.

The editorial decisions of a newspaper or television news programme are immediately apparent (articles published, segments aired) and so can be readily analysed for bias and effect. By contrast, the editorial decisions of social media algorithms are opaque and slow to come to light, even for those who run the platforms. It can take days or weeks before anyone finds out what has been disseminated by social media software.

The Mueller investigation is shining a welcome light on the Kremlin’s covert activity, but there is no similar effort to shine a light on the social media algorithms that helped the Russians spread their messages. There needs to be. This effort should begin by “opening up” the results of the algorithms.

In computer-speak, this “opening up” would involve something called an open application programming interface. This is a common software technique that allows different programs to work with one another. For instance, Uber uses the open application programming interface of Google Maps to get information about a rider’s pickup point and destination. It is not Uber’s own mapping algorithm, but rather Google’s open application programming interface, that makes it possible for Uber to build its own algorithms for its distinctive functions.
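The division of labour described above can be sketched in a few lines of code. This is a purely illustrative toy, not Google's or Uber's actual interface: a hypothetical "maps service" exposes its results through a documented function while keeping its routing internals private, and a hypothetical "ride app" builds its own feature (a fare quote) on top of that public interface.

```python
import json

# Hypothetical "maps service": exposes route results through a documented
# interface without revealing its internal routing algorithm.
def maps_api_route(pickup, destination):
    # How the distance is computed is a black box to callers;
    # only this JSON result is public. The value is illustrative.
    return json.dumps({
        "pickup": pickup,
        "destination": destination,
        "distance_km": 7.4,
    })

# Hypothetical "ride app": builds its own distinctive function (a fare
# quote) on top of the maps service's open interface.
def ride_app_fare(pickup, destination, rate_per_km=2.0, base_fare=3.0):
    route = json.loads(maps_api_route(pickup, destination))
    return base_fare + rate_per_km * route["distance_km"]

print(ride_app_fare("Main St", "Airport"))  # 3.0 + 2.0 * 7.4 = 17.8
```

The point of the sketch is the boundary: the ride app never sees the mapping algorithm, only its published output, which is exactly the distinction the proposal draws between opening an algorithm and opening its results.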

The government should require social media platforms like Facebook and Twitter to use a similar open application programming interface. This would make it possible for third parties to build software to monitor and report on the effects of social media algorithms.

To be clear, the proposal is not to force companies to open up their algorithms — just the results of the algorithms. The goal is to make it possible to understand what content is fed into the algorithms and how the algorithms distribute that content. Who created the information or advertisement? And to what groups of users was it directed? An open application programming interface would therefore threaten neither a social media platform’s intellectual property nor the privacy of its individual users.

Media watchdog groups have long been able to assess the results of the editorial decisions of newspapers and television. Whether those stories express the Left, Right or Centre of the political spectrum, they are openly available to independent organisations that want to understand what is being communicated.

Extending this practice to social media would mean that a watchdog group could create software to analyse and make public whatever information from the platforms it might consider important: The demographics of the readership of a certain article, for instance, or whether a fake story continued to be widely disseminated even after being debunked.
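A watchdog tool of the kind described above could be sketched as follows, assuming the platform's open interface returned records like these. Every field name and value here is an illustrative assumption, not any platform's actual data format; the two questions the code answers are the two the column raises: which groups a piece of content reached, and whether debunked stories kept spreading.

```python
from collections import Counter

# Hypothetical records of the kind an open application programming
# interface might expose: what was distributed, to which user groups,
# and whether it was later debunked. All fields are illustrative.
posts = [
    {"id": "a1", "audience": "18-24", "debunked": True,  "shares_after_debunk": 1200},
    {"id": "a2", "audience": "55+",   "debunked": False, "shares_after_debunk": 0},
    {"id": "a3", "audience": "55+",   "debunked": True,  "shares_after_debunk": 800},
]

def audience_breakdown(records):
    """Count how many distributed items reached each demographic group."""
    return Counter(r["audience"] for r in records)

def still_spreading(records, threshold=0):
    """Flag debunked items that kept circulating after the debunking."""
    return [r["id"] for r in records
            if r["debunked"] and r["shares_after_debunk"] > threshold]

print(audience_breakdown(posts))  # Counter({'55+': 2, '18-24': 1})
print(still_spreading(posts))     # ['a1', 'a3']
```

Note that nothing in the sketch requires access to the ranking algorithm itself or to any individual user's identity, only aggregate distribution results.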

After the Mueller indictment, Twitter issued a statement noting that technology companies “cannot defeat this novel, shared threat alone” — referring to efforts like the Russian disinformation campaign. “The best approach,” the statement continued, “is to share information and ideas to increase our collective knowledge, with the full weight of government and law enforcement leading the charge against threats to our democracy.”

This is true. And one effective form of information sharing would be legally mandated open application programming interfaces for social media platforms. They would help the public identify what is being delivered by social media algorithms, and thus help protect American democracy.

— New York Times News Service

Tom Wheeler is a visiting fellow in Governance Studies. He is an author and was chairman of the Federal Communications Commission in the US.