Microsoft has launched a new video authenticator tool to combat "deepfakes" and curb misinformation on social media ahead of the upcoming U.S. election. Microsoft Video Authenticator analyzes photos and videos and automatically produces a "confidence score" indicating the likelihood that they have been artificially manipulated.
"This technology has two components. The first is a tool built into Microsoft Azure that enables a content producer to add digital hashes and certificates to a piece of content. The hashes and certificates then live with the content as metadata wherever it travels online. The second is a reader – which can exist as a browser extension or in other forms – that checks the certificates and matches the hashes, letting people know with a high degree of accuracy that the content is authentic and that it hasn’t been changed, as well as providing details about who produced it", the company explained.
Deepfakes use artificial intelligence to manipulate audio, video, or image files to make people appear to say or do something they didn't. Deepfakes are now used extensively worldwide (targets and users include politicians, meme-makers, and news agencies), so it's important for organizations to step up and build technology to address the issue. With Microsoft Video Authenticator, Microsoft aims to help news outlets and political campaigns involved in the democratic process.
Microsoft noted that this tool is part of its Defending Democracy Program, the company's recent initiative to fight disinformation, protect voting, secure campaigns, and more. "It’s also part of a broader focus on protecting and promoting journalism as Brad Smith and Carol Ann Browne discussed in their Top Ten Tech Policy Issues for the 2020s", Microsoft said yesterday.