
Fake, sexually explicit images of pop star Taylor Swift caused ripples around the world this past week, renewing concerns about the dangers of AI-generated images.

The social media platform X took the drastic step of temporarily blocking search results for the singer as it tried to stop the spread of the photos. Many say it was a welcome action, but one that came too late.

The fake images had already been seen many millions of times before they were pulled. The controversy has prompted a bipartisan group of US senators to introduce a new bill that would criminalize the spread of non-consensual, sexualized images generated by artificial intelligence.

Essentially, victims would be able to sue those who produce or distribute sexually explicit images of them made without their consent, or even anyone who receives the material knowing it was not made consensually.


Clear and explicit rules

The White House also weighed in, calling the spread of the fake Swift photos “alarming”. White House Press Secretary Karine Jean-Pierre told reporters, “We know that lax enforcement disproportionately impacts women and they also impact girls, sadly, who are the overwhelming targets.” She also said that while there need to be laws to tackle the misuse of AI, social media platforms must also ensure this content is banned on their sites.

This is a stand that needs support across political and geographical boundaries. While AI presents huge opportunities for innovation, its misuse has serious consequences.

That is why the government of India has also warned social media companies that they will be held accountable for deepfakes posted on their platforms and must comply with “clear and explicit rules”.

The warning comes just a couple of months before India’s general election, where there is growing concern about the misuse of AI to spread fake news. India’s minister for IT and electronics, Rajeev Chandrasekhar, told ‘The Financial Times’ that India had “woken up earlier” to this threat.


“We are the world’s largest democracy [and] we are obviously deeply concerned about the impact of cross-border actors using disinformation, using misinformation, using deepfakes to cause problems in our democracy,” Chandrasekhar said.

India has warned platforms to “identify and remove misinformation which is patently false, untrue or misleading in nature and impersonates another person, including those created using deepfakes”. This has led to a separate debate about whether India is over-policing the internet. That is certainly a concern.

But it is not just the Indian election that could be affected. The US election later this year also has a lot at stake, and fake videos and even audio clips of Joe Biden and Donald Trump have already been called out.

Dealing with misinformation

A fake audio clip of Biden telling voters to skip the New Hampshire primary alarmed many experts. Social media platforms have taken some steps to deal with misinformation, but clearly they need to do much more.

For example, on X, a video that has been manipulated is labelled as such. Meta and Google have also just announced that political campaigns will have to disclose if their ads have been digitally altered.

The BBC reports that, according to a 2023 study, there has been a 550% rise in the creation of doctored images since 2019, fuelled by the emergence of AI. Quoting independent analyst Genevieve, the BBC says that, due to AI, the number of new pornographic deepfake videos has surged more than ninefold since 2020.

Search engines like Google also have a responsibility to act. Victims of a deepfake have to fill out a form on Google, but the process is known to be tough and tedious. By the time anyone acts, the video has already gone viral and the damage has been done.

Taylor Swift is famous, and her legions of fans led the outcry against the fake images of her. But millions of less well-known women and girls are vulnerable to the same harassment.

People need to be educated about the dangers of AI-generated fake content, laws need to be strengthened, and platforms must be held accountable.