The awkward truth about trolls: any of us could become one

Today, online harassment and trolling are prevalent on social media, and especially in online discussions, whether on Facebook, news sites, or Reddit. A recent Pew Research Center survey estimated that four out of 10 users have been harassed in some way, from name-calling to sexual harassment. Last year, the Guardian ran a feature showing that more than a million comments posted to its website had been deleted for being abusive or off-topic. Several websites have resorted to removing their comment sections entirely. But why has trolling become so widespread?

One common misconception is that trolls are in some way sociopathic, a small but vocal minority of the web’s users. This notion has been reinforced by past research that focused on interviewing trolls or that found trolls to have distinctive personality traits.

But there’s another possibility: Trolls might be made rather than born, ordinary people like you and me. In fact, our new research shows that trolling is situational rather than an innate characteristic. In other words, under the right (or wrong) circumstances, anyone can become a troll. Through a combination of experiments, data analysis, and machine learning, we identified two key factors that make the average person more likely to troll.

The first is a person’s emotional state, or mood: Trolling ebbs and flows with people’s emotions, varying with the time of day and even the day of the week. People are most likely to troll late at night and at the beginning of the work week. There is even some evidence of this mood spilling over from one discussion into others: A person who simply participates in the same discussion as a troll is more likely to troll in a later, unrelated discussion.

The second factor is the context of the discussion a person is having. Seeing prior “troll comments” in a discussion doubles a person’s likelihood of trolling in turn. Trolling also has a domino effect: The more troll comments there are at the start of a discussion, the more likely subsequent participants are to troll as well.
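
To make the arithmetic of this domino effect concrete, here is a minimal simulation sketch in Python. The baseline troll rate and the doubling multiplier are illustrative assumptions standing in for the measured values, not figures from the study itself.

```python
import random

BASELINE_RATE = 0.04   # assumed baseline probability that any one comment is a troll comment
CONTEXT_BOOST = 2.0    # assumed multiplier once the thread already contains trolling

def simulate_discussion(n_comments, seed=None):
    """Simulate one discussion thread and return how many troll comments appear."""
    rng = random.Random(seed)
    troll_count = 0
    for _ in range(n_comments):
        # A commenter's probability of trolling doubles if trolling is already visible.
        p = BASELINE_RATE * CONTEXT_BOOST if troll_count > 0 else BASELINE_RATE
        if rng.random() < p:
            troll_count += 1
    return troll_count

# Averaged over many simulated threads, even a modest boost compounds,
# because each troll comment raises the odds for everyone who follows.
threads = [simulate_discussion(50, seed=i) for i in range(10_000)]
print(sum(threads) / len(threads))
```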

What these findings suggest is that any person who happens to wake up on the wrong side of the bed can be the spark that results in a cascade of bad behaviour. And if troll comments beget more troll comments, you can imagine a discussion, or even a whole community, being overrun by these kinds of posts if left unchecked.

We might hypothesise that “upvotes” and “downvotes” can solve the problem, as a community can use these signals to tell bad commenters that what they write isn’t appropriate or appreciated. However, voting can make matters worse. While voting might have been designed as a simple indicator of a comment’s quality, downvotes have been shown to propagate negativity through a community. That is, people who have their comments downvoted not only write a greater number of negative, troll comments in the future, but they also retaliate by downvoting others. (And if you were hoping that being upvoted leads to better comments or more upvotes, this isn’t the case.)

These sobering findings suggest that the mechanisms we currently rely on in discussions may not be working as we hoped. Nonetheless, there are other things communities can implement to minimise trolling. For instance, when we see something we disagree with, many of us are tempted to post an angry comment in the heat of the moment. Briefly delaying when comments go live would create a window in which to amend or retract comments that our later selves might come to regret.
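
As a sketch of how such a delay could work, the following hypothetical Python snippet holds each comment in a queue for a short cooling-off window, during which its author can still retract it. The window length, class, and method names are assumptions for illustration, not a description of any real platform’s system.

```python
import time
import heapq

COOLING_OFF_SECONDS = 60  # assumed delay before a comment goes live

class DelayedCommentQueue:
    """Hold comments briefly so authors can retract them before they publish."""

    def __init__(self):
        self._pending = []    # min-heap of (publish_time, comment_id)
        self._comments = {}   # comment_id -> text; retracted comments are removed
        self._next_id = 0

    def submit(self, text):
        """Accept a comment and schedule it to publish after the cooling-off window."""
        comment_id = self._next_id
        self._next_id += 1
        self._comments[comment_id] = text
        heapq.heappush(self._pending, (time.time() + COOLING_OFF_SECONDS, comment_id))
        return comment_id

    def retract(self, comment_id):
        """Withdraw a comment that has not yet been published."""
        return self._comments.pop(comment_id, None) is not None

    def publish_due(self):
        """Release every comment whose cooling-off window has elapsed."""
        published = []
        now = time.time()
        while self._pending and self._pending[0][0] <= now:
            _, comment_id = heapq.heappop(self._pending)
            text = self._comments.pop(comment_id, None)
            if text is not None:   # skip comments retracted during the window
                published.append(text)
        return published

# Usage: a platform would call publish_due() on a timer; the author sees their
# own comment immediately but keeps a retract option until the window closes.
```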

We can use these findings to develop predictive models that identify potentially problematic discussions and help community moderators defuse aggressive situations more quickly. We can also develop social techniques that nudge people towards more civil behaviour. One recent experiment on Reddit showed that pinning a post about a community’s rules to the top of discussion pages made newcomers more likely to follow those rules when commenting.
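
A minimal sketch of such a predictive model follows, assuming the two factors identified above are available as per-discussion features. The feature names and the tiny training set are hypothetical placeholders; a real moderation system would train on far richer signals and real labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per discussion.
#   prior_troll_comments : troll comments already present in the thread
#   late_night           : 1 if the discussion is most active late at night
X = np.array([
    [0, 0], [0, 1], [1, 0], [2, 1], [3, 1], [0, 0], [1, 1], [4, 0],
])
# Illustrative labels: 1 = the discussion later descended into trolling.
y = np.array([0, 0, 0, 1, 1, 0, 1, 1])

model = LogisticRegression().fit(X, y)

# Flag discussions whose predicted risk exceeds a moderation threshold,
# so moderators can step in before the cascade takes hold.
risk = model.predict_proba([[2, 1]])[0, 1]
print(f"trolling risk: {risk:.0%}")
```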

Still, there’s a lot that we don’t know about trolling. How common is organised trolling? What role do sock puppets (people who use multiple online identities to promote their own views) play? How about bots? Have strategies for trolling evolved over time? With a growing community of organisations and researchers interested in tackling the problem, I am optimistic that online discussions will get better as these findings inspire improvements.

With the increased polarisation that we see in the world both on and offline, it’s easy to dismiss trolls as people who are not “one of us”. The reality is that many trolls are just people like ourselves having a bad day. Recognising that we are responsible for both the inspiring and the despair-inducing conversations is key to having better, more productive online debates in future.

— Guardian News and Media Limited

Justin Cheng is a researcher at Stanford University.