Twitter is testing a new feature that will let you know, before posting, if your tweet reply is offensive. In a bid to clean up conversations on the social media platform, Twitter Inc will test sending users a prompt when they reply to a tweet using “offensive or hurtful language,” the company said in a tweet on Tuesday.
On Tuesday, @TwitterSupport announced: “When things get heated, you may say things you don't mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.”
When users hit “send” on their reply, they will be told if the words in their tweet are similar to those in posts that have been reported, and asked whether they would like to revise it.
Twitter has long been under pressure to clean up hateful and abusive content on its platform, which is policed both by users flagging rule-breaking tweets and by technology.
According to a Reuters article, Sunita Saligram, Twitter’s global head of site policy for trust and safety, said in an interview: “We’re trying to encourage people to rethink their behavior and rethink their language before posting because they often are in the heat of the moment and they might say something they regret.”
Twitter’s policies do not allow users to target individuals with slurs, racist or sexist tropes, or degrading content. The company took action against almost 396,000 accounts under its abuse policies and more than 584,000 accounts under its hateful conduct policies between January and June of last year, according to its transparency report.
Asked whether the experiment might instead give users a playbook for finding loopholes in Twitter’s rules on offensive language, Saligram said it was targeted at the majority of rule-breakers, who are not repeat offenders.
Twitter said the experiment, the first of its kind for the company, will start on Tuesday and last at least a few weeks. It will run globally but only for English-language tweets.
Many tweeps replied to the tweet announcing the change. While some said it was a good step, others felt it was not okay to censor emotional outbursts and asked who would decide what is offensive and what is not.
Tweep @soniagupta504 asked: “Who decides what is "harmful" language? The same white techbros who decide what constitutes abuse and harassment? Because that's not working out so well.”
Meanwhile, many continued to petition for an ‘Edit’ button.