Brussels: The European Union on Saturday finalised new legislation to require Big Tech to remove harmful content, the bloc's latest move to rein in the world's online giants.
The Digital Services Act (DSA) - the second part of a massive project to regulate tech companies - aims to ensure tougher consequences for platforms and websites that host a long list of banned content ranging from hate speech to disinformation and child sexual abuse images.
EU officials and parliamentarians finally reached agreement at talks in Brussels early Saturday on the legislation, which has been in the works since 2020. "Yes, we have a deal!" European Commissioner for the Internal Market Thierry Breton tweeted.
The DSA, one half of an overhaul for the 27-nation bloc’s digital rulebook, helps cement Europe’s reputation as the global leader in efforts to rein in the power of social media companies and other digital platforms.
The rules also target the dark side of e-commerce: online marketplaces filled with counterfeit or defective products.
"Today's agreement on DSA is historic," European Commission chief Ursula von der Leyen tweeted. "Our new rules will protect users online, ensure freedom of expression and opportunities for businesses. What is illegal offline will effectively be illegal online in the EU."
The EU's provisional agreement reached Saturday remains subject to formal approval by the 27 member states and the European Parliament.
"With the DSA, the time of big online platforms behaving like they are 'too big to care' is coming to an end," said EU Internal Market Commissioner Thierry Breton.
EU Commission Vice-President Margrethe Vestager added that "with today's agreement we ensure that platforms are held accountable for the risks their services can pose to society and citizens."
The act is the EU’s third significant law targeting the tech industry, a notable contrast with the US, where lobbyists representing Silicon Valley’s interests have largely succeeded in keeping federal lawmakers at bay.
While the Justice Department and Federal Trade Commission have filed major antitrust actions against Google and Facebook, Congress remains politically divided on efforts to address competition, online privacy, disinformation and more.
The act also bans deceptive techniques companies use to nudge people into doing things they didn't intend to, such as services that are easy to sign up for but hard to cancel.
Until now, regulators have had no access to the inner workings of Google, Facebook and other popular services. But under the new law, the companies will have to be more transparent and provide information to regulators and independent researchers on content-moderation efforts.
To enforce the new rules, the EU’s executive Commission is expected to hire more than 200 new staffers. To pay for it, tech companies will be charged a “supervisory fee.”
The EU’s new rules should make tech companies more accountable for content created by users and amplified by their platforms’ algorithms.
The biggest online platforms and search engines, defined as those with more than 45 million users in the EU, will face extra scrutiny.
Experts said the new rules will likely spark copycat regulatory efforts by governments in other countries, while tech companies will also face pressure to roll out the rules beyond the EU’s borders.
Breton said the EU will have plenty of stick to back up the law, including "effective and dissuasive" fines of up to 6% of a company's annual global revenue, which for big tech companies would amount to billions of dollars. Repeat offenders could be banned from the EU, he said.
The tentative agreement was reached between the EU parliament and the bloc’s member states. It still needs to be officially rubber-stamped by those institutions, which is expected after summer but should pose no political problem. The rules then won’t start applying until 15 months after that approval, or January 1, 2024, whichever is later.
"The DSA is nothing short of a paradigm shift in tech regulation. It's the first major attempt to set rules and standards for algorithmic systems in digital media markets," said Ben Scott, a former tech policy advisor to Hillary Clinton who's now executive director of advocacy group Reset.
The need to regulate Big Tech more effectively came into sharper focus after the 2016 US presidential election, when Russia used social media platforms to try to influence voters. Tech companies like Facebook and Twitter promised to crack down on disinformation, but the problems have only worsened. During the pandemic, health misinformation blossomed, and again the companies were slow to act, cracking down only after years of allowing anti-vaccine falsehoods to thrive on their platforms.
Under the EU law, governments would be able to ask companies to take down a wide range of content deemed illegal, including material that promotes terrorism, child sexual abuse, hate speech and commercial scams. Social media platforms like Facebook and Twitter would have to give users tools to flag such content in an "easy and effective way" so that it can be swiftly removed. Online marketplaces like Amazon would have to do the same for dodgy products, such as counterfeit sneakers or unsafe toys.
These systems will be standardized to work the same way on any online platform.
Twitter said it would review the rules "in detail" and that it supports "smart, forward thinking regulation that balances the need to tackle online harm with protecting the Open Internet."
TikTok said it awaits the act’s full details but “we support its aim to harmonize the approach to online content issues and welcome the DSA’s focus on transparency as a means to show accountability.”
Google said it looks forward to "working with policymakers to get the remaining technical details right to ensure the law works for everyone." Amazon referred to a blog post from last year that said it welcomed measures that enhance trust in online services. Facebook didn't respond to a request for comment.