
After last year’s presidential election in the United States, Facebook came in for a drubbing for its role in propagating misinformation — or “fake news”, as we called it back then, before the term became a catchall designation for any news you don’t like. The criticism was well placed: Facebook is the world’s most popular social network, and millions of people look to it daily for news.

But the focus on Facebook let another social network off the hook. I speak of my daily addiction, Twitter.

Though the 140-character network favoured by US President Donald Trump is far smaller than Facebook, it is used heavily by people in the media and thus exerts perhaps an even greater sway on the news business.

That’s an issue because Twitter is making the news dumber. The service is insular and clubby. It exacerbates groupthink. It prizes pundit-ready quips over substantive debate, and it tends to elevate the silly over the serious — for several sleepless hours this week it was captivated by “covfefe”, which was essentially a brouhaha over a typo.

But the biggest problem with Twitter’s place in the news is its role in the production and dissemination of propaganda and misinformation. It keeps pushing conspiracy theories — and because lots of people in the media, not to mention many news consumers, don’t quite understand how it works, the precise mechanism is worth digging into.

We recently saw the mechanism in action when another baseless conspiracy theory rose to the top of the news: The idea that the murder last year of Seth Rich, a staff member at the Democratic National Committee, was linked, somehow, to the leaking of Hillary Clinton campaign emails. The Fox News host Sean Hannity pushed the theory the loudest, but it was groups on Twitter — or, more specifically, bots on Twitter — that were first to the story and helped make it huge.

One way to think of today’s disinformation ecosystem is to picture it as a kind of gastrointestinal tract.

At the top end — the mouth, let’s call it — enter the raw materials of propaganda: The memes cooked up by anyone who wants to manipulate what the media covers, whether political campaigns, terrorist groups, state-sponsored trolls or the home-grown provocateurs who hang out at extremist online communities.

Then, way down at what we will politely call the “other end”, emerge the packaged narratives primed for widespread dissemination to you and everyone you know. These are the hot takes that dominate talk radio and prime-time cable news, as well as the viral Facebook posts warning you about this or that latest outrage committed by Clinton.

How do the raw materials become the culturewide narratives and conspiracy theories? The path is variegated and flexible and often stretches across multiple media platforms. Yet, in many of the biggest misinformation campaigns of the past year, Twitter played a key role.

Specifically, Twitter often acts as the small bowel of digital news. It’s where political messaging and disinformation get digested, packaged and widely picked up for mass distribution to cable, Facebook and the rest of the world. This role for Twitter has seemed to grow more intense during (and since) the 2016 presidential election campaign in the US. Twitter now functions as a clubhouse for much of the news. It’s where journalists pick up stories, meet sources, promote their work, criticise competitors’ work and workshop takes. In a more subtle way, Twitter has become a place where many journalists unconsciously build and gut-check a worldview — where they develop a sense of what’s important and merits coverage, and what doesn’t. This makes Twitter a prime target for manipulators: If you can get something big on Twitter, you’re almost guaranteed coverage everywhere.

For determined media manipulators, getting something big on Twitter isn’t all that difficult. Unlike Facebook, which requires people to use their real names, Twitter offers users essentially full anonymity, and it makes many of its functions accessible to outside programmers, allowing people to automate their actions on the service. As a result, numerous cheap and easy-to-use online tools let people quickly create thousands of Twitter bots — accounts that look real, but that are controlled by a puppet master.

Twitter’s design also promotes a slavish devotion to metrics: Every tweet comes with a counter of Likes and Retweets, and users come to internalise these metrics as proxies for real-world popularity.

Yet, these metrics can be gamed. Because a single Twitter user can create lots of accounts and run them all in a coordinated way, Twitter lets relatively small groups masquerade as far larger ones. If Facebook’s primary danger is its dissemination of fake stories, then Twitter’s is a ginning up of fake people.

“Bots allow groups to speak much more loudly than they would be able to on any other social media platforms — it lets them use Twitter as a megaphone,” said Samuel Woolley, the director for research at Oxford University’s Computational Propaganda Project. “It’s doing something that I call ‘manufacturing consensus,’ or building the illusion of popularity for a candidate or a particular idea.”

How this works for conspiracy theories is relatively straightforward. Outside of Twitter — in message boards or Facebook groups — a group will decide on a particular message to push. Then the deluge begins. Bots flood the network, tweeting and retweeting thousands or hundreds of thousands of messages in support of the story, often accompanied by a branding hashtag — #pizzagate, or #sethrich.

The initial aim isn’t to convince or persuade, but simply to overwhelm — to so completely saturate the network that it seems as if people are talking about a particular story. The biggest prize is to get on Twitter’s Trending Topics list, which is often used as an assignment sheet for the rest of the internet.

I witnessed this in mid-May, just after the Fox affiliate in Washington reported that a private investigator for Rich’s family had bombshell evidence in the case. The story later fell apart, but that night, Twitter bots went with it. Hundreds of accounts with few or no followers began tweeting links to the story. By the next morning, #SethRich was trending nationally in the US on Twitter — and the conspiracy theory was getting wide coverage across the right, including, in time, Hannity.

A Twitter spokesman said the company took bots seriously; it has a dedicated spam-detection team that looks out for bot-based manipulation, and it is constantly improving its tools to spot and shut down bots.

What’s more, because the media is large and chaotic, it is often unclear what role, exactly, bots play in ginning up interest in a story. Conspiracy theories went big long before Twitter was around. If you removed Twitter from the equation, wouldn’t Hannity have picked up the Seth Rich rumour anyway?

Yet, the more I spoke to experts, the more convinced I became that propaganda bots on Twitter might be a growing and terrifying scourge on democracy. Research suggests that bots are ubiquitous on Twitter. Emilio Ferrara and Alessandro Bessi, researchers at the University of Southern California, found that about a fifth of the election-related conversation on Twitter last year in the US was generated by bots. Most users were blind to them; they treated the bots the same way they treated other users.

Finally, in a more pernicious way, bots give us an easy way to doubt everything we see online. In the same way that the rise of “fake news” gives the US president cover to label everything “fake news”, the rise of bots might soon allow us to dismiss any online enthusiasm as driven by automation. Anyone you don’t like could be a bot; any highly retweeted post could be puffed up by bots.

And if that’s the case, why believe anything?

— New York Times News Service

Farhad Manjoo is a technology columnist with the New York Times.