Microsoft Corp. is releasing new technology to fight "deepfakes" that can be used to spread false information ahead of the U.S. election.

"Microsoft Video Authenticator" analyzes videos and photos and provides a score indicating the chance that they're manipulated, the company said.

Deepfakes use artificial intelligence to alter videos or audio to make someone appear to do or say something they didn't. Microsoft's tool aims to identify videos that have been altered using AI, according to a Tuesday blog post by the company.

The tool will benefit campaigns and newsrooms, according to Tom Burt, a company vice president. "This will be a long-term effort, but we hope to have an impact in the lead-up to November," he said in a statement.

The tool works by detecting artifacts that are characteristic of deepfakes "which might not be detectable to the human eye," such as subtle fading and the way the boundaries between fake and real material blend together in altered footage. It will initially be available to political and media organizations "involved in the democratic process," according to the company.
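
Microsoft has not published the detector's internals, but the class of cue it describes (pixel statistics that differ between pasted and original material) can be illustrated with a toy heuristic. The NumPy sketch below compares high-frequency noise energy inside a face region against the rest of the frame; the function names, the externally supplied face box, and the ratio test are all assumptions for illustration, not Video Authenticator's algorithm.

```python
import numpy as np

def laplacian(gray: np.ndarray) -> np.ndarray:
    """4-neighbour Laplacian high-pass response (border left at zero)."""
    out = np.zeros_like(gray, dtype=np.float64)
    out[1:-1, 1:-1] = (
        -4.0 * gray[1:-1, 1:-1]
        + gray[:-2, 1:-1] + gray[2:, 1:-1]
        + gray[1:-1, :-2] + gray[1:-1, 2:]
    )
    return out

def manipulation_cue(gray: np.ndarray, face_box: tuple) -> float:
    """Compare high-frequency noise energy inside a face region against
    the rest of the frame. Pasted or synthesized regions often carry
    noise statistics that don't match the surrounding footage, so a
    ratio far from 1.0 is one weak, purely illustrative cue. The face
    box (x0, y0, x1, y1) is assumed to come from a separate detector."""
    x0, y0, x1, y1 = face_box
    hf = np.abs(laplacian(gray.astype(np.float64)))
    inside = np.zeros(gray.shape, dtype=bool)
    inside[y0:y1, x0:x1] = True
    return float(hf[inside].mean() / (hf[~inside].mean() + 1e-9))
```

A production detector would train a machine-learning model on many such signals across frames rather than rely on a single hand-built ratio.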

A second Microsoft tool, also announced Tuesday, will let video creators certify that their content is authentic and signal to online viewers, with what the post described as "a high degree of accuracy," that deepfake technology hasn't been used. Viewers can check this certification through a browser extension.
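
The post doesn't specify the certificate format, but certification schemes of this kind generally follow a hash-and-sign pattern. The sketch below illustrates that pattern using Ed25519 signatures from Python's `cryptography` package; the function names and the choice of SHA-256 and Ed25519 are assumptions for illustration, not Microsoft's actual design.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def certify(video_bytes: bytes,
            producer_key: ed25519.Ed25519PrivateKey) -> bytes:
    """Producer side: sign a SHA-256 digest of the exact video bytes.
    The signature would travel with the video as authenticity metadata."""
    return producer_key.sign(hashlib.sha256(video_bytes).digest())

def verify(video_bytes: bytes, signature: bytes,
           producer_pub: ed25519.Ed25519PublicKey) -> bool:
    """Viewer side (e.g. a browser extension): recompute the digest and
    check the signature. Any post-certification edit, deepfake or
    otherwise, changes the digest and fails the check."""
    try:
        producer_pub.verify(signature, hashlib.sha256(video_bytes).digest())
        return True
    except InvalidSignature:
        return False


key = ed25519.Ed25519PrivateKey.generate()
video = b"...raw video bytes..."
sig = certify(video, key)
assert verify(video, sig, key.public_key())             # untouched: passes
assert not verify(video + b"x", sig, key.public_key())  # altered: fails
```

The piece this sketch omits is key distribution: viewers need a trusted way to obtain the producer's public key, which is the role a certificate plays.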

The AI technology used to generate fake videos can continuously learn and improve, making it "inevitable that they will beat conventional detection technology," according to the blog. "However, in the short run, such as the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes."