Washington: Two lawmakers are warning that the country is woefully unprepared for the rise of deepfakes, alarmingly realistic videos that appear to show people doing things they didn’t do.

Sens. Mark R. Warner, D-Va., and Marco Rubio, R-Fla., are exploring ways to curb doctored videos before they become widespread, warning that they could wreak havoc if used in disinformation campaigns such as the one conducted by the Russian government in 2016. In a wide-ranging technology policy paper Monday, Warner floated the idea of holding social media platforms liable for failing to take down deepfakes. And Rubio, in a recent speech, called on government and political leaders to treat them as a national security threat.

The attention from lawmakers means deepfakes are no longer a fringe issue but a more serious front in the fight against fake news, and tech companies may soon feel pressure to get ahead of them. But any policy solution would have to balance the harm to potential victims against free-speech rights for people who use deepfakes for creative or satirical purposes.

Warner said the easily accessible technology used to make the videos could “usher in an unprecedented wave of false and defamatory content.” In his policy paper, he wrote, “Just as we’re trying to sort through the disinformation playbook used in the 2016 election and as we prepare for additional attacks in 2018, a new set of tools is being developed that are poised to exacerbate these problems.”

Software to create deepfakes is available for free online, and it doesn’t require advanced production skills to use. It works by feeding hundreds of pictures of a person’s face into a machine learning algorithm that then maps them onto video of another person’s body. Anything the person in the video does or says can be made to look like it’s coming from the victim. The results are sometimes so seamless that it’s difficult to tell with the naked eye that the videos are fraudulent.
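The technique popularised by open-source face-swap tools is, at its core, a pair of autoencoders: a single shared encoder learns a person-agnostic description of a face (pose, expression, lighting), and a separate decoder is trained to reconstruct each person's likeness from that description. The sketch below, in Python using the PyTorch library, is purely illustrative; the network sizes, random stand-in data and single training step are simplifying assumptions, not the code of any actual deepfake application.

    # Illustrative sketch of the shared-encoder, two-decoder autoencoder idea
    # behind early face-swap tools. All sizes and data here are placeholders.
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        """Compresses a 64x64 RGB face crop into a person-agnostic latent code."""
        def __init__(self, latent_dim=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 4, stride=2, padding=1),    # 64x64 -> 32x32
                nn.LeakyReLU(0.1),
                nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 32x32 -> 16x16
                nn.LeakyReLU(0.1),
                nn.Conv2d(128, 256, 4, stride=2, padding=1), # 16x16 -> 8x8
                nn.LeakyReLU(0.1),
                nn.Flatten(),
                nn.Linear(256 * 8 * 8, latent_dim),
            )

        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        """Reconstructs one specific person's face from the shared latent code."""
        def __init__(self, latent_dim=256):
            super().__init__()
            self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
            self.net = nn.Sequential(
                nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),  # 8 -> 16
                nn.LeakyReLU(0.1),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),   # 16 -> 32
                nn.LeakyReLU(0.1),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),     # 32 -> 64
                nn.Sigmoid(),
            )

        def forward(self, z):
            return self.net(self.fc(z).view(-1, 256, 8, 8))

    encoder = Encoder()
    decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per person
    loss_fn = nn.L1Loss()
    params = (list(encoder.parameters()) + list(decoder_a.parameters())
              + list(decoder_b.parameters()))
    opt = torch.optim.Adam(params, lr=5e-5)

    # Random tensors stand in for the "hundreds of pictures" of each face.
    faces_a = torch.rand(8, 3, 64, 64)
    faces_b = torch.rand(8, 3, 64, 64)

    for step in range(1):  # a real run takes many thousands of steps
        loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
                + loss_fn(decoder_b(encoder(faces_b)), faces_b))
        opt.zero_grad()
        loss.backward()
        opt.step()

    # The swap: encode frames of person B, decode with person A's decoder,
    # yielding A's face wearing B's pose and expression.
    with torch.no_grad():
        fake_a = decoder_a(encoder(faces_b))

Because the encoder must describe both faces with the same code, it learns pose and expression rather than identity; swapping decoders at inference time is what transplants one person's likeness onto another's performance, with each rendered frame then blended back into the source footage.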

Lawmakers caution that it’s a tool that could send the fake news crisis into overdrive. Think about it: Realistic-looking videos appearing to show politicians taking bribes or uttering inflammatory statements could be used to try to sway an election. Or doctored footage purporting to show officials announcing military action could trigger a national security crisis.

“This all sounds fantastic, it all sounds exaggerated, it all sounds hyperbolic. But the capability to do all of this is real and exists now, the willingness exists now, all that’s missing is the execution. And we are not ready for it,” Rubio said in a speech earlier this month at the right-leaning Heritage Foundation. “I know for a fact that the Russian Federation at the command of Vladimir Putin tried to sow instability and chaos in American politics in 2016,” he said. “They did that through Twitter bots and they did that through a couple of other measures that will increasingly come to light. But they didn’t use this. Imagine using this. Imagine injecting this in an election.”

To chip away at the problem, Warner has proposed amending the Communications Decency Act to hold social media platforms liable under state law if they don’t take down deepfakes and other manipulated content shown in court to be defamatory. Right now, the law provides immunity for platforms in such cases.

“Currently the onus is on victims to exhaustively search for, and report, this content to platforms — who frequently take months to respond and who are under no obligation thereafter to proactively prevent the same content from being re-uploaded in the future,” Warner wrote in his policy proposal. The platforms, he said, were “in the best place to identify and prevent this kind of content from being propagated.”

Legislation to do this would almost certainly run into opposition from civil liberties groups. This year, organisations such as the Electronic Frontier Foundation lobbied unsuccessfully against a similar carve-out in the Communications Decency Act that sought to hold online platforms liable for facilitating sex trafficking. The groups said the move, while well-intended, was so broadly written that it criminalised protected speech.

“Any effort on this front would need to address the challenge of distinguishing true deepfakes aimed at spreading disinformation from satire or other legitimate forms of entertainment or parody,” Warner wrote. “Attempting to distinguish between true disinformation and legitimate satire could prove difficult,” he said, but “courts already must make distinction between satire and defamation/libel”.

Deepfakes started cropping up last year on Reddit after a user superimposed the faces of Gal Gadot, Taylor Swift and other celebrities onto the bodies of actors in pornographic videos. They’ve also been used to lampoon President Donald Trump by pasting his face over those of Russian President Vladimir Putin and German Chancellor Angela Merkel. And the comedian Jordan Peele used the technology to graft former president Barack Obama’s face over his own in a widely circulated public service announcement warning of the dangers of deepfakes.

“It’s only a matter of time until ‘deepfake’ videos become a household term,” Rubio said in an email.

Rubio hasn’t offered any concrete policy proposals yet. For now, he told me, he’s simply trying to sound the alarm in hopes of bringing new ideas to the table.

“I’m working to raise awareness,” he said, “and find ways to address this threat from foreign actors and criminals and defend our elections this fall and in the future.”