Microsoft’s deepfake detection tool – have you heard about it? If not, the quick discussion that follows will bring you up to speed.
With the US presidential election scheduled for the first week of November, Microsoft has announced two new technologies to combat disinformation. These technologies form an integral part of Microsoft’s Defending Democracy Program and are aimed at spotting synthetic media.
Microsoft claims that these tools will help protect voting through ElectionGuard. It also states that the first tool, Video Authenticator, provides a ‘percentage chance’ or ‘confidence score’ that a piece of media has been artificially manipulated, helping to detect manipulation if any. This should also help secure campaigns and others involved in the democratic process through AccountGuard, Microsoft 365 for Campaigns, and Election Security Advisors.
Undoubtedly, disinformation is widespread in today’s digitized world. A major part of the problem is deepfakes, or synthetic media: photos, videos, or audio files manipulated by artificial intelligence (AI) in ways that are hard to identify. They can appear to make people say things they never said, or place them in situations they were never in.
Because deepfakes are created by AI, and AI keeps improving, they can eventually learn to beat the detection technology of the day. Countering them therefore demands ever more advanced tools. So, at least for the short run, and especially for the upcoming U.S. election, Microsoft’s Video Authenticator, backed by a second tool (a reader), should help users identify deepfakes.
What is the advantage of the Microsoft Deepfake detection tool?
If you encounter a piece of online content that seems real but ‘smells’ fishy, there is a good chance it has been manipulated using AI to misinform the crowd. The Microsoft deepfake detection tool rates videos with a confidence score that indicates to users how likely it is that the media is authentic.
The tool’s algorithm was created through the combined efforts of Microsoft’s AI team and the Microsoft AI, Ethics, and Effects in Engineering and Research (AETHER) Committee. The tool is powered by Microsoft’s Azure cloud infrastructure and is designed to identify manipulated elements that would otherwise be very difficult to trace.
The team put the tool through rigorous training using publicly available datasets such as FaceForensics++, and then tested it on the DeepFake Detection Challenge Dataset. The companion technology encourages creators to certify their work with a signature and attach digital hashes to the content. Wherever the content travels, the attached hashes and certificates travel with it as metadata.
The second tool is a reader that scans these certificates and hashes to verify that the content is authentic. The technology is an output of research by Microsoft Research and Microsoft Azure, in partnership with Microsoft’s Defending Democracy Program. It is said to power Project Origin, which was recently announced by the BBC.
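Microsoft has not published the implementation details of this certification scheme, but the general idea of attaching a content hash plus a signature as metadata, and having a reader re-check both, can be sketched as follows. This is a minimal illustration only: the function names are hypothetical, and it uses a shared-key HMAC from Python’s standard library as a stand-in for the public-key certificates a real provenance system would use.

```python
import hashlib
import hmac

def make_manifest(content: bytes, signing_key: bytes) -> dict:
    """Creator side (hypothetical sketch): hash the content and sign the
    hash, producing metadata that travels with the content."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(signing_key, digest.encode(), hashlib.sha256).hexdigest()
    return {"hash": digest, "signature": signature}

def verify_manifest(content: bytes, manifest: dict, signing_key: bytes) -> bool:
    """Reader side: recompute the hash, then check both the hash and the
    signature. Any edit to the content breaks the hash match."""
    digest = hashlib.sha256(content).hexdigest()
    if digest != manifest["hash"]:
        return False  # content was altered after certification
    expected = hmac.new(signing_key, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

# Example: a tampered copy of the clip fails verification.
key = b"publisher-signing-key"   # assumed shared key for this sketch
clip = b"...video bytes..."
manifest = make_manifest(clip, key)
print(verify_manifest(clip, manifest, key))          # authentic copy
print(verify_manifest(clip + b"x", manifest, key))   # tampered copy
```

The design point this illustrates is why the metadata must include a signature and not just a hash: a forger could recompute a plain hash for altered content, but cannot produce a valid signature without the creator’s key.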
Whether Microsoft plans to make either of these tools public remains to be seen. As of now, Microsoft is said to be in talks with the AI Foundation’s Reality Defender 2020, a US-based dual commercial and nonprofit enterprise. Video Authenticator would soon be made available to electoral organizations.
Microsoft further adds that no organization can single-handedly combat disinformation and trace harmful deepfakes. Acknowledging the evolving complexity and growing sophistication of synthetic media generation, Microsoft announced that it will partner with multiple organizations and do whatever it can to validate content. Microsoft has partnered with the AI Foundation and, on Project Origin, with a consortium of media partners including the BBC, CBC/Radio-Canada, and the New York Times. An array of publishers and social media companies in the Trusted News Initiative also appear to be engaged with Microsoft’s latest developments in the technology.
Microsoft is also running campaigns and quizzes to help people learn about synthetic media, build the critical media literacy skills needed to spot it, and understand its impact on democracy.
Soon we may be able to spot the synthetic media that is spreading in cheap and easy ways, and gradually knock it down in favor of authentic content.