If the 2016 United States presidential election was an election of “fake news”, then 2020 has the potential to be the election of “deepfakes”, the new phenomenon of bogus videos created with the help of artificial intelligence. It’s becoming easier and cheaper to create such videos. Soon, even those with only rudimentary technical knowledge will be able to fabricate videos so true to life that it becomes difficult, if not impossible, to tell whether they are real.
In the era of conspiracy theories, disinformation and absurd denials by politicians staring down seemingly indisputable facts, it is only a matter of time before deepfakes are weaponised in ways that poison the foundational principle of democracy: Informed consent of the governed. After all, how can voters make appropriate decisions if they aren’t sure what is fact and what is fiction? Unfortunately, we are careening towards that moment faster than we think.
Deepfakes are created by something called a “generative adversarial network”, or GAN. GANs are technically complex, but operate on a simple principle. There are two automated rivals in the system: A forger and a detective. The forger tries to create fake content while the detective tries to figure out what is authentic and what is forged. With each iteration, the forger learns from its mistakes. Eventually, the forger gets so good that it becomes difficult to tell its fakes from real content. And when that happens, the resulting deepfakes are likely to fool humans, too.
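For readers curious to see the forger-and-detective loop in miniature, here is a deliberately toy sketch in Python. It is not a neural network and nothing like real deepfake software: the “forger” is a single number, the “detective” a crude distance test, and every name and threshold below is invented for illustration. What it does capture is the feedback loop at the heart of a GAN — the forger improves using only the detective’s verdicts.

```python
# Toy illustration of the adversarial loop behind a GAN.
# All names and numbers are illustrative inventions, not real deepfake code.
import random

random.seed(42)

REAL_MEAN = 4.0  # the "real content": samples drawn from a Gaussian around 4


def detective(sample, threshold=1.0):
    """Crude detector: call a sample 'real' if it lies close to the real mean."""
    return abs(sample - REAL_MEAN) < threshold


def train_forger(iterations=5000, lr=0.05):
    """The forger's entire 'model' is one number: the mean of its Gaussian."""
    mu = 0.0  # starts out producing obvious fakes
    for _ in range(iterations):
        fake = random.gauss(mu, 1.0)
        if not detective(fake):
            # Rejected: nudge the parameter in the direction that would have
            # made the sample more plausible. In a real GAN, the gradient of
            # the detective's score supplies this direction automatically.
            mu += lr if fake < REAL_MEAN else -lr
    return mu


mu = train_forger()
print(f"forger's mean after training: {mu:.2f}")  # settles near the real mean
```

After a few thousand rounds of rejection and adjustment, the forger’s output distribution sits on top of the real one, and the detective’s simple test can no longer reliably separate them — the same equilibrium that, with neural networks in both roles, produces video fakes good enough to fool people.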
Of course, fakes and forgeries are not new. Whether it was Soviet censors airbrushing “undesirables” out of photographs or Hollywood studios conjuring special effects, convincing imitations of reality have been around for a while. But in both cases, only a few masters of the trade could pull off a convincing fake. Deepfakes, on the other hand, require little technical expertise, meaning that virtually anyone with the right software will be able to make a convincing fake video of just about any person seemingly saying whatever the creator wants.
That democratisation of forgery is just around the corner. “I would say within another 18 to 24 months, that technology is going to get to a point where the human brain may not be able to decipher it,” Hany Farid, a professor of computer science at Dartmouth College [New Hampshire, United States], recently told me. Soon, the forger will consistently fool us.
Given how poorly our democracies performed with easily debunked fake-news articles about, say, the Pope endorsing US President Donald Trump, the prospect of having to question videos that we can see with our own eyes is even more harrowing.
“The things that keep me up at night these days are the ability to create a fake video and audio of a world leader saying I’ve launched nuclear weapons,” Farid told me. “The technology to create that exists today.”
Once those videos go online, millions of people will fall for them. But the really scary question is this: Will any nuclear-armed governments fall for them, too, and launch a counterattack based on a lie?
Deepfakes could also imperil the democratic process itself. It’s not difficult to imagine a scandalous but fake video being posted online right before polls open, or to imagine conspiracy theorists, or Trump himself, sharing a doctored video aimed at destroying a political opponent. That’s not so far-fetched given that it’s already happened, albeit with old-school editing rather than deepfake technology.
How does reality fight back?
But, as Farid worries, perhaps the larger threat comes from the destruction of democratic accountability. “Because if it is, in fact, the case that almost anything can be faked well, then nothing is real.” Once deepfakes exist, politicians can pretend that any disqualifying behaviour has actually been created by a neural network. As we’ve seen in the Trump era, with a highly polarised electorate, millions will believe what they are told by a politician they support, even when there is overwhelming evidence to the contrary.
So how does reality fight back? Unfortunately, detecting fake content after it’s uploaded isn’t really an option. Given the scale of the internet, even an algorithm that is 99 per cent effective will still let a huge volume of fake content slip through the cracks. Instead, technology companies and news networks will need to consider adapting to this technological frontier with “secure imaging pipelines”, which verify and authenticate content at the source when it is created. Just as Twitter users have verified check marks, so too could videos posted online, marking them as authentic and unedited.
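The arithmetic behind that “99 per cent effective” claim is worth spelling out. The volumes below are hypothetical round numbers chosen for illustration, not measured platform statistics, but the scale of the problem survives any reasonable substitution:

```python
# Back-of-the-envelope arithmetic for the "99 per cent effective" claim.
# All volumes here are hypothetical round numbers, not platform statistics.
uploads_per_day = 1_000_000  # suppose one platform receives a million videos a day
fake_fraction = 0.01         # and one in a hundred of those is a deepfake
error_rate = 0.01            # a "99 per cent effective" detector errs 1% of the time

fakes = uploads_per_day * fake_fraction               # 10,000 fakes a day
missed = fakes * error_rate                           # fakes that slip through
false_flags = (uploads_per_day - fakes) * error_rate  # real videos wrongly flagged

print(int(missed))       # 100 fakes still get through every day
print(int(false_flags))  # 9,900 genuine videos wrongly flagged every day
```

Assuming the error rate cuts both ways, the false positives are arguably the worse problem: a system that wrongly brands thousands of genuine videos a day as fakes would quickly squander the very trust it was built to protect — which is why authenticating content at the source is the more promising approach.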
But ultimately, the solution lies with us. Deepfakes are a threat to our democracy because of underlying political deficiencies that make us an easy target. A significant chunk of the US electorate dabbles in conspiracy theories, encouraged by a president who promotes them himself. Millions of Americans consume “news” from outlets that pump out lie after lie. And the groups most likely to be fooled are those who have low levels of media literacy and are unable to discern questionable sources from reliable ones. If better forgers are coming, we, as citizens, need to ensure that voters are educated to become better detectives.
— Washington Post
Brian Klaas is an assistant professor of global politics at University College London, where he focuses on democracy, authoritarianism, and American politics and foreign policy. He is the co-author of How to Rig an Election and the author of The Despot’s Apprentice and The Despot’s Accomplice.