When society is divided and tensions run high, those divisions play out on social media.
Platforms like Facebook hold up a mirror to society – with more than 3 billion people using Facebook’s apps every month, everything that is good, bad and ugly in our societies will find expression on our platform. That puts a big responsibility on Facebook and other social media companies to decide where to draw the line over what content is acceptable.
Facebook has come in for much criticism in recent weeks following its decision to allow controversial posts by President Trump to stay up, and misgivings on the part of many people, including companies that advertise on our platform, about our approach to tackling hate speech. I want to be unambiguous: Facebook does not profit from hate.
Billions of people use Facebook and Instagram because they have good experiences – they don’t want to see hateful content, our advertisers don’t want to see it, and we don’t want to see it. There is no incentive for us to do anything but remove it.
A better way to expose
More than 100 billion messages are sent on our services every day. That’s all of us, talking to each other, sharing our lives, our opinions, our hopes and our experiences. Of all those billions of interactions, only a tiny fraction are hateful.
When we find hateful posts on Facebook and Instagram, we take a zero-tolerance approach and remove them. When content falls short of being classified as hate speech – or of violating our other policies aimed at preventing harm or voter suppression – we err on the side of free expression because, ultimately, the best way to counter hurtful, divisive, offensive speech is more speech.
Exposing it to sunlight is better than hiding it in the shadows.
But it won’t be easy
Unfortunately, zero tolerance doesn’t mean zero incidents. With so much content posted every day, rooting out the hate is like looking for a needle in a haystack. We invest billions of dollars each year in people and technology to keep our platform safe. We have tripled – to more than 35,000 – the number of people working on safety and security. And we are a pioneer in artificial intelligence technology that removes hateful content at scale.
And we’re making real progress. A recent European Commission report found that Facebook assessed 95.7 per cent of hate speech reports in less than 24 hours, faster than YouTube and Twitter. Last month, we reported that we find nearly 90 per cent of the hate speech we remove before someone reports it – up from 24 per cent just two years ago.
We took action against 9.6 million pieces of content in the first quarter of 2020 – up from 5.7 million in the previous quarter. And 99 per cent of the ISIS and al-Qaeda content we remove is taken down before anyone reports it to us.
Not relying on auto-correct
We are getting better – but we’re not complacent. That’s why we recently announced new policies and products to make sure everyone can stay safe, stay informed, and ultimately use their voice where it matters most – voting.
These include the largest voter information campaign in US history, with a goal of registering four million voters, and updates to policies designed to crack down on voter suppression and fight hate speech. Many of these changes are a direct result of feedback from the civil rights community – we’ll keep working with them and other experts as we adjust our policies to address new risks as they emerge.
Of course, focusing on hate speech and other types of harmful content on social media is necessary and understandable, but it is worth remembering that the vast majority of those billions of conversations are positive.
A way to connect
Look at what happened when the coronavirus pandemic took hold. Billions of people used Facebook to stay connected when they were physically apart. Grandparents and grandchildren, brothers and sisters, friends and neighbours. And more than that, people came together to help each other.
Thousands and thousands of local groups formed – millions of people came together – to organize help for the most vulnerable in their communities. Others formed to celebrate and support our healthcare workers. And when businesses had to close their doors to the public, for many Facebook was their lifeline.
More than 160 million businesses use Facebook’s free tools to reach customers, and many relied on those tools to keep their businesses afloat while their doors were closed – saving people’s jobs and livelihoods.
Importantly, Facebook helped people to get accurate, authoritative health information. We directed more than two billion people on Facebook and Instagram to information from the World Health Organisation and other public health authorities, with more than 350 million people clicking through.
And it is worth remembering that when the darkest things are happening in our society, social media gives people a means to shine a light. To show the world what is happening; to organize against hate and come together; and for millions of people around the world to show their solidarity. We’ve seen that all over the world on countless occasions – and we are seeing it right now with the Black Lives Matter movement.
We may never be able to prevent hate from appearing on Facebook entirely, but we are getting better at stopping it all the time.