
The recent video was unmistakably her. For centuries she’d sat still, smiling that gnomic smile, but now here she was in full motion: the Mona Lisa, still yellowed by age, chatting animatedly with an unseen interviewer as if she were Diana, Princess of Wales, talking about her charity work.

This remarkable piece of film was only the latest advance in the science of “deepfakes”: computer-generated videos depicting real people doing and saying things that never happened. Deepfakes, which are cheap and simple to build using widely available online tools, have been used to make Barack Obama and Donald Trump give imaginary speeches, and to insert the faces of numerous female celebrities into pornographic videos. The Mona Lisa team’s special innovation was to create AI that could produce deepfakes from just a single source image.

Three major social networks - Facebook, YouTube and Twitter - were plunged into controversy this year by a very different use of faked video. Real footage of Nancy Pelosi, the speaker of the US House of Representatives, had been slowed down and its pitch edited in order to make it look as if she was drunkenly slurring her words. YouTube took the video down, but Facebook and Twitter refused to, prompting a fiery debate about how far social networks should go in censoring disinformation.

The Pelosi video, which was tweeted by Donald Trump, was emphatically not a deepfake. Rather than a product of sophisticated AI, it was a simple edit that anyone with an iPhone could have performed. Nevertheless, as America prepares for its next presidential contest in 2020, and as fake news emerges as a major issue in elections across the world, the Affaire de Pelosi looks like a dress rehearsal for the storm that could unfold if deepfakes start being used as a serious tool of dirty campaigning and election interference.
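To see just how simple, here is a minimal sketch of that kind of edit using the widely available ffmpeg tool, driven from Python. The filenames, the exact speed and the pitch factor are illustrative assumptions, not the values used on the actual clip.

```python
import subprocess

# A minimal sketch (hypothetical filenames) of a slow-down-and-repitch
# edit, using the real ffmpeg filters setpts, asetrate and atempo.
SPEED = 0.75   # play at 75 per cent of the original speed (assumed value)
PITCH = 1.10   # raise the pitch about 10 per cent to mask the slowdown

cmd = [
    "ffmpeg", "-i", "input.mp4",
    "-filter_complex",
    # setpts stretches the video timestamps; asetrate shifts the audio's
    # pitch (and tempo), and atempo then corrects the tempo to the target
    # speed. Assumes 48 kHz source audio.
    (f"[0:v]setpts={1 / SPEED}*PTS[v];"
     f"[0:a]asetrate=48000*{PITCH},aresample=48000,"
     f"atempo={SPEED / PITCH}[a]"),
    "-map", "[v]", "-map", "[a]",
    "output.mp4",
]
subprocess.run(cmd, check=True)
```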


Lack of policy

“We don’t have a policy that stipulates that the information you post on Facebook must be true,” Facebook said. Understandably, given the immense pressure it receives from both sides of the American political spectrum, it almost always refuses to identify or delete misleading content. Instead it relies on outside fact checkers to flag news items as false or misleading; once that has happened, it applies a number of sanctions designed to slow the spread of that content.

It’s possible that deepfakes, which are wholesale impersonations of real human beings, will be put in a different category. Thomas Kadri, a PhD student at Yale Law School, argues that it should be possible for social networks to draw a line between deepfakes and misleadingly edited videos, probably by taking a leaf from defamation law. “Principles of defamation rely on the idea that you are harming someone’s reputation by making people think that they are saying or doing things that they did not say or do. Some of these digital falsifications fall squarely into that logic as well.”

‘This video is fake’

But given social networks’ current philosophy, there is no guarantee they will actually do this. Indeed, removing political deepfakes could prove controversial because they can be used for parody and art.

A more palatable approach for Big Tech would be to clearly label deepfakes as such. The advantage is that parody videos and genuine attempts to mislead could be treated in the same way: the former would hardly suffer from a big red label reading “THIS VIDEO IS FAKE”, while the latter would be undermined.

“Facebook, YouTube and Twitter should get comfortable with much more aggressive labelling,” says Alex Stamos, Facebook’s former chief security officer. “Deepfakes have unmistakable technical indicators that can be picked up with [AI]; auto-labelling anything above a reasonable confidence level would help prevent viral spread while giving time for more measured responses.”
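As a rough illustration of what that auto-labelling rule might look like, here is a minimal sketch in Python. The thresholds, the action names and the idea of a single confidence score are all assumptions for illustration; no platform has published such values.

```python
# A minimal sketch of confidence-based triage: act on a deepfake
# detector's score. Both thresholds are assumed, illustrative values.
LABEL_THRESHOLD = 0.85    # above this, auto-label as manipulated media
REVIEW_THRESHOLD = 0.50   # above this, queue for human review instead

def triage(fake_confidence: float) -> str:
    """Map a deepfake-detector score in [0, 1] to a moderation action."""
    if fake_confidence >= LABEL_THRESHOLD:
        return "label"    # e.g. overlay "THIS VIDEO IS FAKE" immediately
    if fake_confidence >= REVIEW_THRESHOLD:
        return "review"   # slow distribution while moderators examine it
    return "allow"

# Example: a detector 92 per cent sure a clip is synthetic gets labelled.
print(triage(0.92))  # -> "label"
```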

Another possibility along these lines would be simply to ban deepfakes entirely. But none of the platforms has ever been willing to take this kind of sweeping approach in the past. Facebook did not shut down live streaming even after the Christchurch attacks, and has not simply banned political ads. YouTube does not simply hide all vaccine-related content from its search results, as the online mood board Pinterest chose to do.

Beyond merely removing deepfakes or slapping labels on them, social networks could also make more profound changes to how content goes viral on their services.

Brakes on fake media

They could add more friction to the act of sharing videos in general, for example by adding confirmation steps before a user can retweet, or by providing contextual information. For an impartial user, much fake media is not especially hard to detect; it simply takes a little more time to think and examine. That extra time would act as an effective brake on the viral propagation of fake media.
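A toy sketch of that kind of friction, with hypothetical prompts and an arbitrary pause standing in for whatever a real platform would design:

```python
import time
from typing import Optional

# A toy sketch of sharing "friction": context plus a confirmation step
# before a reshare goes through. The prompts, pause length and function
# name are illustrative, not any platform's real interface.
def confirm_reshare(headline: str, fact_check_note: Optional[str]) -> bool:
    """Show context, impose a brief pause, then ask the user to confirm."""
    print(f"You are about to share: {headline!r}")
    if fact_check_note:
        print(f"Independent fact-checkers say: {fact_check_note}")
    time.sleep(3)  # a deliberate pause: a little more time to think
    return input("Share anyway? [y/N] ").strip().lower() == "y"
```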

Alternatively, they could make further reforms to their algorithms. The technologist Aviv Ovadya imagines a policy in which, after a piece of content achieves a certain level of virality, an automatic freeze is applied that reduces its ranking until a fact-checker within the social network can verify its truth.
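A minimal sketch of that circuit-breaker idea, with an assumed share threshold and ranking penalty standing in for whatever numbers a real platform would tune:

```python
from dataclasses import dataclass

VIRALITY_THRESHOLD = 100_000   # shares before the freeze kicks in (assumed)
FROZEN_PENALTY = 0.1           # ranking multiplier while frozen (assumed)

@dataclass
class Post:
    ranking: float
    shares: int = 0
    frozen: bool = False
    verified: bool = False     # set once an in-house fact-checker signs off

def effective_ranking(post: Post) -> float:
    """Ranking actually used by the feed: reduced while the post is frozen."""
    if post.frozen and not post.verified:
        return post.ranking * FROZEN_PENALTY
    return post.ranking

def record_share(post: Post) -> None:
    """Count a share and trip the freeze once the post goes viral."""
    post.shares += 1
    if post.shares >= VIRALITY_THRESHOLD and not post.verified:
        post.frozen = True   # stays frozen until a fact-checker verifies it
```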

Phone makers, social networks and journalistic institutions could work together to establish a “chain of custody” for video footage, by which cameras would add a unique digital fingerprint to anything they shoot. Later, if a deepfake is produced using that footage, the owner of the original could file a takedown notice using their unique information, verifying that the deepfake is a distorted copy.

It would have drawbacks, especially if it were applied to all footage shot by anyone with a smartphone; it could make anonymous recording impossible and put people recording police brutality or state violence in danger. Limiting such a scheme to media organisations might be more feasible.
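To make the idea concrete, here is a minimal sketch of such a fingerprint using Python's standard library. It makes big simplifying assumptions: a production scheme would use public-key signatures and tamper-resistant hardware, whereas here a symmetric HMAC with a hypothetical per-device secret stands in for the camera's signing step.

```python
import hashlib, hmac, json, time

DEVICE_KEY = b"secret-baked-into-the-camera"   # hypothetical device secret

def fingerprint(footage: bytes) -> dict:
    """Produce a signed record tying this footage to the capturing device."""
    record = {"sha256": hashlib.sha256(footage).hexdigest(),
              "captured_at": time.time()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, "sha256").hexdigest()
    return record

def verify(footage: bytes, record: dict) -> bool:
    """Check that footage matches its record and the signature is intact."""
    if hashlib.sha256(footage).hexdigest() != record["sha256"]:
        return False   # footage was altered after capture
    payload = json.dumps(
        {k: record[k] for k in ("sha256", "captured_at")}, sort_keys=True
    ).encode()
    expected = hmac.new(DEVICE_KEY, payload, "sha256").hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

Footage altered after capture, such as a deepfake derived from the original, would no longer match its signed hash, giving the owner of the original evidence for a takedown notice.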

Twitter’s policy

Will any of this actually happen? Twitter does have a policy banning its users from “manipulating or interfering in elections”. Specifically, it forbids users from falsely claiming to represent a particular politician, campaign or government agency. Twitter also forbids users from impersonating public figures, and parody accounts must be clearly labelled with words such as “parody” or “fake”. But a spokesman refused to say whether political deepfakes would break any of these policies.

YouTube gave the most detailed response. It cited its policy against “deceptive practices”, such as misleading video descriptions that “trick users into believing the content is something it is not”. It also said it was tweaking its recommendation engine to reduce “borderline” content such as anti-vaccination conspiracy theories or “blatantly false” claims about historic events such as 9/11.

The company said it was aware of deepfakes and had teams working on the problem, suggesting it might deal with them in a similar way to copyright infringements and spam. But it refused to say where deepfakes would fall under its policies, and even refused to say which of its rules the slowed-down Pelosi video had actually violated.

These policies will happen. The question is whether they happen before or after some very bad things do.

Laurence Dodds is a columnist specialising in the new legislators of humankind.