
Fakery has long infected public debate in democracies, especially during election campaigns, with effects hard to gauge.

In the UK in 1924, the fake “Zinoviev letter” fuelled a Russia scare different from today’s concerns. It is thought to have greatly harmed the then Labour government’s chances of re-election when the letter was reported as authentic in the Daily Mail just before polling day.

Technology has made the threat of fakery greater today. Widespread disclosures are no longer made only by mass media, whose claims are necessarily public and open to challenge. A sort of darkness covers deceptions that are now possible through a mix of data-driven targeting and the social media news feeds of individuals, who share among their personal networks. For those journalists who seek truth and take seriously their role in facilitating democracy, it is imperative to expose fake news and masquerading voices. Nothing is more potentially destructive of freedom under law than a community losing trust in the information with which public debate is nourished, public choices are made and public accountability is extracted.

Knowledge is gradually building about how Russian trolls have attempted to influence or disrupt public debate beyond Russia, using social media in particular. An important step in controlling the infection was Twitter’s recent disclosure to the US Congress of 2,752 accounts that Twitter has “tied to Russian actors”, as the House permanent select committee on intelligence put it.

Availability of the list of Twitter accounts allows other democratic institutions, especially traditional journalism organisations, to investigate the effects of those accounts, especially any impact on their own published journalism and on the debates they host on their own platforms.

What, if anything, can be learned about the objectives and tactics of those who create and deploy the fakery? Recently, the Guardian began to answer. Using the list, it reported that the Russian “troll army” accounts had been cited more than 80 times across UK media. Two of the accounts have appeared in two different Guardian articles. A report last June about the LGBTI community’s fightback against online abuse necessarily drew heavily on social media. That report has since been footnoted to disclose that one of the tweets quoted in the piece was drawn, unknowingly, from a fake account.

The second fake account mentioned in a Guardian article, @TEN_GOP, was widely taken to represent the opinions of Tennessee Republicans. It had been promoted by members of President Trump’s inner circle, though I am not aware of any evidence that they knew it was a Russian troll account when they cited it. Last June the account was quoted in a Guardian live blog in a selection of conservatives’ responses to the US withdrawal from the Paris climate agreement.

I asked the Guardian’s technical experts to use the list to examine what had been happening in the more than 40 million comments published online below articles since the beginning of 2016. The analysis is continuing, so I might have more to add. But early results, which show that relatively few of the accounts appeared, invite cautious analysis.
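For readers who want a sense of the mechanics, the matching itself is conceptually simple: check each handle on the congressional list against the text of archived comments. The sketch below is purely illustrative and assumes nothing about the Guardian’s actual tools; the file names, column names and matching pattern are my own inventions.

```python
# Purely illustrative sketch: cross-reference a published list of troll
# handles against an archive of reader comments. The file names and the
# column names ("comment_id", "body") are assumptions for illustration,
# not a description of the Guardian's actual systems.
import csv
import re

def load_handles(path):
    """Read troll handles (one per line, e.g. 'TEN_GOP') into a lowercase set."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lstrip("@").lower() for line in f if line.strip()}

def find_matches(comments_csv, handles):
    """Yield (comment_id, handle) for every comment that mentions a listed handle."""
    pattern = re.compile(r"@(\w{1,15})|twitter\.com/(\w{1,15})", re.IGNORECASE)
    with open(comments_csv, encoding="utf-8") as f:
        for row in csv.DictReader(f):
            for match in pattern.finditer(row["body"]):
                handle = (match.group(1) or match.group(2)).lower()
                if handle in handles:
                    yield row["comment_id"], handle

if __name__ == "__main__":
    troll_handles = load_handles("troll_accounts.txt")
    for comment_id, handle in find_matches("comments_archive.csv", troll_handles):
        print(comment_id, handle)
```

A real exercise would also have to cope with renamed or deleted accounts, and with embedded tweets rather than bare @-mentions.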

The more such work is publicised — prudently, for no one wants to assist trolls — the more readers are put on guard. I trust that other journalism organisations will conduct similar exercises and share their techniques and findings. I hope the social media giants, Facebook and Twitter, will promptly and routinely make public all fake accounts, advertisements and “news items” they find as they inquire into how their services have been manipulated for malign purposes. Excessive secrecy retards the democratic fightback. It is important to note that those who embedded one of the troll accounts in their online comments did not necessarily know the account was fake. They may simply have agreed with its sentiments.

At the Guardian, six of the troll Twitter accounts have been found so far in below-the-line comments beneath eight articles published between June 2016 and 1 November 2017, the date the congressional committee released Twitter’s list. The articles fall into two broad categories: the US presidential election campaign in 2016 and Donald Trump’s first six months in office; and the conflict in Syria, where Russia is fighting on the side of the regime of Bashar al-Assad.

A comment beneath an article about the Women’s March on Washington, which coincided with Trump’s inauguration, claimed that “the organiser” of the march was campaigning for Sharia law across the US, and cited as its lone source a Russian troll account. Below an article referring to Trump and Vladimir Putin meeting at the G20 summit in Hamburg last July, where violence embarrassed the German hosts, was a comment disparaging “extreme left anti-fascists” and embedding a trolling Twitter handle.

Other comments that embed fake accounts seem aimed at fomenting disdain, or worse, towards particular groups, such as those who called out Trump for his behaviour towards women.

As others who have analysed Russian trolling have noted, the comments are sometimes unsteadily expressed, as if the writer were not a native English speaker. If the words and messaging associated with proven troll accounts can be aggregated in large amounts, perhaps patterns will emerge and help with prevention, not just reaction. In this fight, artificial intelligence can be harnessed by both sides. Exposing destructive use of social media, including its appearances in older media forums, can begin in earnest. Opportunities exist for people of goodwill to collaborate to make the work faster and more effective.
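The aggregation could start modestly, well short of artificial intelligence: pool the text associated with proven troll accounts and count the phrases that recur. The sketch below is a hypothetical illustration of that first step; the sample input and function names are mine, not drawn from any published analysis.

```python
# Hypothetical first pass at the aggregation suggested above: pool the text
# associated with proven troll accounts and count recurring phrases. The
# sample input is invented for illustration.
from collections import Counter
import re

def ngrams(text, n=2):
    """Split text into lowercase word n-grams."""
    words = re.findall(r"[a-z']+", text.lower())
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def common_phrases(troll_texts, n=2, top=20):
    """Return the most frequent n-grams across all troll-linked texts."""
    counts = Counter()
    for text in troll_texts:
        counts.update(ngrams(text, n))
    return counts.most_common(top)

if __name__ == "__main__":
    sample = [
        "the organiser is campaigning for sharia law across the us",
        "extreme left anti-fascists embarrassed the hosts",
    ]
    for phrase, count in common_phrases(sample):
        print(count, phrase)
```

Patterns surfaced this cheaply could then inform more sophisticated, machine-assisted detection.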

— Guardian News & Media Ltd

Paul Chadwick is the Guardian’s fourth readers’ editor.