
Tech Media

Social media can be polarizing. A new type of algorithm aims to change that.

'Bridging systems' are designed as an antidote to today's toxic Twitter, Facebook feeds



Social media that reward pure engagement have an inherent 'bias toward division.'
Image Credit: Shutterstock


Researcher Aviv Ovadya views social media algorithms as fueling extremism around the world, and he has an idea for fixing that: not by getting rid of the algorithms, but by changing how they work.

"Social media doesn't have to divide us," said Ovadya, an affiliate at Harvard's Berkman Klein Center for Internet & Society. "It doesn't have to incentivize outrage and hate."

In a new working paper published Wednesday, Ovadya and co-author Luke Thorburn of King's College London make the case for what they call "bridging systems" - algorithms designed to elevate posts that resonate with diverse audiences. They see the approach as an antidote to today's toxic Twitter and Facebook feeds, which tend to highlight the most attention-grabbing content, even if it's polarizing.

Social media that reward pure engagement - likes, shares, angry comments - have an inherent "bias toward division," Ovadya argues. The headlines, pictures and videos that thrive tend to be those that immediately appeal to a specific audience, whether that's vaccine skeptics on Instagram, conspiracy theorists on YouTube or teens with eating disorders on TikTok. And if that content also repels or alienates another group, that negative engagement can amplify it further.


The problem of what's sometimes called "algorithmic amplification" was a main focus of Facebook whistleblower Frances Haugen, whose testimony spurred a bevy of regulatory proposals in the United States and abroad. Those included bills aimed at holding tech companies liable for speech that their recommendation systems promote, or even requiring them to give users an option to "turn off" their algorithms.

Ovadya argues those fixes are misguided. On social media, he says, "there's always an algorithm" deciding what people see, and it's never value-neutral. Even a strictly chronological feed prioritizes recency and directs our attention toward the users who post most frequently, at the expense of those who take more time to craft their thoughts.

Rather than fighting algorithms, Ovadya proposes putting them to work toward a more productive goal. Specifically, he proposes that social media platforms find ways to boost posts that "bridge" different audiences, whether that's left and right, young and old, or cat lovers and dog lovers.

For example, a recommendation system could be programmed to look not only at how many likes or dislikes a given post receives, but whether it's getting likes from people of different political leanings. The "bridging" doesn't necessarily have to be along political lines, Ovadya adds. You could imagine a transportation policy forum that prioritizes content that appeals to both cyclists and motorists, for instance.
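To make that contrast concrete, here is a minimal sketch in Python of how such a scorer might differ from a pure-engagement one. The Post structure, the group labels and the min-based diversity rule are illustrative assumptions for this article, not anything Ovadya and Thorburn prescribe or any platform actually runs.

```python
# Hypothetical sketch of the ranking idea described above: score a post by
# cross-group approval rather than raw engagement. The Post structure, the
# group labels, and the min-based diversity weighting are illustrative
# assumptions, not any platform's actual ranking code.
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class Post:
    # Likes broken down by audience group, e.g. {"left": 40, "right": 35}.
    likes_by_group: Counter = field(default_factory=Counter)


def engagement_score(post: Post) -> int:
    """Pure-engagement ranking: total likes, regardless of who they come from."""
    return sum(post.likes_by_group.values())


def bridging_score(post: Post) -> float:
    """Reward posts whose approval is spread across groups.

    Uses the smallest per-group like count as a crude signal of diverse
    approval: a post liked by only one side scores zero, however popular it is.
    """
    if len(post.likes_by_group) < 2:
        return 0.0
    return float(min(post.likes_by_group.values()))


# A post loved by one side and ignored by the other wins on pure engagement,
# but falls below a post with moderate approval from both sides under the
# bridging score.
partisan_hit = Post(Counter({"left": 500}))
broad_appeal = Post(Counter({"left": 60, "right": 45}))
assert engagement_score(partisan_hit) > engagement_score(broad_appeal)
assert bridging_score(broad_appeal) > bridging_score(partisan_hit)
```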

Ovadya and Thorburn aren't the only social media theorists to suggest this. Chris Bail, director of Duke's Polarization Lab, advanced similar ideas in his 2021 book "Breaking the Social Media Prism: How to Make Our Platforms Less Polarizing."


Bringing together Democrats and Republicans, or Palestinians and Israelis, via bridging algorithms might sound "pie in the sky," Bail says via email. But such algorithms needn't solve all of our thorniest political problems to be useful.

"A lot of what bridging algorithms promote is more mundane - think Charles E. Schumer posting about sports, or Marco Rubio posting something about pets," Bail said. "Though this type of content may seem frivolous to high-minded political junkies, new research suggests these types of mundane connections may be extremely important," especially in a social media world that seems geared to "dehumanize and humiliate people."

A bridging algorithm is at the heart of Twitter's crowdsourced fact-checking tool, called Community Notes (formerly Birdwatch). It ensures that user-written fact-checks get published on Twitter only when they've been rated "helpful" by a diverse set of reviewers.

"It's easy to see how this approach could apply to other things on social media as well," Twitter Vice President Keith Coleman told reporters in September.

Bridging algorithms could also come with blind spots and biases of their own. Evan Greer, deputy director of the tech advocacy group Fight for the Future, notes that just because a given idea holds bipartisan appeal doesn't mean it's worthy.


"Some of the worst ideas and actions in human history, such as enslavement and colonization, enjoyed overwhelming popular support," Greer said. By the same token, "Many good ideas are often controversial at first," such as LGBTQ rights.

Ovadya agrees bridging algorithms would need to be implemented with care to avoid suppressing marginalized viewpoints. Ultimately, he said, the goal should be not to sweep disagreements under the rug, but to tip the scales of online discourse toward "productive conflict" built on respect for people's differences.
