
Online disinformation and the spread of deceptive political messages are pernicious, but they aren’t necessarily the worst abuse of social networks by governments and political actors.

Rational people are resistant to propaganda, and irrational ones consume only messages that stoke their confirmation biases. No one, however, is impervious to personal attacks on a mass scale.

A report by the human rights lawyer Carly Nyst and Oxford University researcher Nick Monaco is an early attempt to study the phenomenon of state-sponsored trolling, or the digital harassment of critics.

Case studies

The case studies come from a diverse set of countries: Azerbaijan, Ecuador, the Philippines, Turkey, the US and Venezuela.

They complement what is already known about the practice in Russia, whose achievements in the field of digital abuse have generated the most interest to date.

The stories in the report, commissioned by the Palo Alto, California-based Institute for the Future, are all similar in some respects.

Thousands of social network accounts, both operated by humans and by bots used to amplify the attack, gang up on a person who dares to criticise a regime or a political figure.

Ways of trolls

Invariably, the person is accused of being a foreign agent and a traitor.

Memes and cartoons are used to insult the target.

The language of the comments, posts and tweets is often abusive; female targets, such as the Turkish journalist Ceyda Karan and her Filipina colleague Maria Ressa, are routinely threatened with rape.

The general idea behind the campaigns is to give the target the impression of an organic groundswell of public indignation about his or her work and views, and also to drown out the target's voice with the howling of thousands of digital voices.

In more authoritarian countries, the campaigns are often conducted by pro-government organisations.

That was the case in Russia in the early years of this decade.

According to the Institute for the Future report, it’s the case in Azerbaijan today, where a group called Ireli (“Forward”) openly hunts the regime’s opponents on the web.

Professionalisation of trolling

The tendency, though, is toward the professionalisation of trolling.

Russia’s Internet Research Agency, named in an indictment by Special Counsel Robert Mueller, is just one example of how trolling operations can be run by a corporation-like entity.

In Ecuador, a firm called Ribeney Sociedad Anonima won a government contract for trolling services.

In the less authoritarian states, where voting is still meaningful, trolling operations often grow out of election campaigns.

In Ecuador, Rafael Correa created a troll army for the 2012 election and kept using it after he won.

In the Philippines, Rodrigo Duterte hired trolls to work for his 2016 presidential campaign and has since put some of the most prominent ones in government jobs.

In a democracy like India, Prime Minister Narendra Modi’s Bharatiya Janata Party maintains an “information technology cell”, with thousands of members who receive daily instructions on what topics to promote and whom to gang up on.

State-sponsored

The Institute for the Future report also takes aim at the pro-Donald Trump trolls in the United States who proliferated during the 2016 campaign and remain active now that he is president.

In America’s case, the report defines state-sponsored trolling “as the involvement of hyper-partisan news outlets and sources close to the president” that have evolved “from an electioneering trolling machine to an incumbent government’s apparatus”.

Certain statements from high officials, the report says, are “tantamount to a coded condoning of vitriolic harassment online”.

As an example, it cites the campaign of abuse against Rosa Brooks, a Georgetown University professor, after she suggested in a column that military officers might disobey Trump’s orders.

But Trump fans’ targeted attacks aren’t state-sponsored in the same sense as the Russian, Azerbaijani or Philippine trolling efforts.

They’d take place even if Trump had lost, just as the similarly abusive behaviour by trolls from the opposite camp, the anti-Trump “Resistance”, persists despite Hillary Clinton’s election defeat.

Insults, threats

These operations are instigated, if not necessarily run, by political machines rather than the US government.

One could argue, though, that such political machines can be as powerful as the state when it comes to hounding and silencing critics.

The insults and threats can be unsettling on their own, and they can make it hard for the targeted person to get a coherent message to followers.

And, sometimes, attacks have real-world consequences, as when trolls get hold of the target’s personal information.

Abuse online, offline

That is what happened to the Finnish journalist Jessikka Aro, who tried to investigate Russian troll factories and was subjected to online and then offline abuse.

It’s difficult to understand why social media platforms do little, if anything, to stop the trolling campaigns.

Twitter and Facebook will remove posts and comments containing death and rape threats, but not insults, treason accusations or suggestions that a journalist is on a hostile spy agency’s payroll.

They also don’t make it easy to report an entire trolling campaign rather than individual comments and messages, which a target cannot realistically flag one by one: Ressa, the Filipina journalist, received up to 90 hate messages an hour at the height of the campaign against her.

The Institute for the Future makes some suggestions on how social networks can help, but they aren’t particularly useful.

For example, it says a network could ask users who create bot accounts to identify them as such — which troll farms would be understandably reluctant to do.

It also suggests that the social media companies should somehow detect and identify state-linked accounts, a game of whack-a-mole that is as hard to play as it is pointless.

Empowering targets

The easier and more useful thing would be to empower the targets of abuse campaigns.

For example, flagging a dozen similar abusive comments should result in special attention from the network.

Users should also be able to turn off comments on specific posts and temporarily disable tagging; otherwise it’s too easy for trolls to take over a feed.

Detecting bots

And if bots are to be marked, it should be up to the networks to detect them: the technology is there; it’s just not being applied consistently enough.

The best answer would be for the networks to talk to the trolls’ targets and find out what tools they would have needed to fight back.

The Institute for the Future’s report would be a good starting point: The authors have interviewed some of the targeted journalists and activists.

Together, these people and the social networks could figure out ways to curb politicised online harassment without curbing freedom of speech.

— Bloomberg

Leonid Bershidsky is a Bloomberg Opinion columnist covering European politics and business.