Microsoft Launches Deepfake Detection Tool Ahead of US Elections

Have you heard about Microsoft’s deepfake detection tool? If not, the quick discussion that follows will bring you up to speed.


With the US presidential election scheduled for the first week of November, Microsoft has announced two new technologies to combat disinformation. They form an integral part of Microsoft’s Defending Democracy Program and are aimed at spotting synthetic media.

Microsoft says these tools complement its work to protect voting through ElectionGuard. The first tool, Video Authenticator, analyzes a piece of media and provides a ‘percentage chance’, or confidence score, that it has been artificially manipulated. This is intended to help secure campaigns and others involved in the democratic process, alongside AccountGuard, Microsoft 365 for Campaigns, and Election Security Advisors.

Undoubtedly, disinformation is widespread in today’s digitized world, and a major source is deepfakes, or synthetic media: photos, videos, or audio files manipulated by artificial intelligence (AI) in ways that are hard to detect. Deepfakes can make people appear to say things they never actually said, or place them in situations they were never in.


Because deepfakes are themselves created by AI, generation methods keep improving and can outpace detection technology. Increasingly sophisticated generation demands equally sophisticated countermeasures. So, at least in the short run, essentially through the upcoming U.S. election, Microsoft’s Video Authenticator, backed by a second tool, a reader, should give users a fighting chance of identifying deepfakes.

What is the advantage of the Microsoft Deepfake detection tool?

If you encounter a piece of online content that seems real but ‘smells’ fishy, there is a good chance it was manipulated using AI to misinform the public. The Microsoft deepfake detection tool rates videos with a confidence score that tells users how likely the content is to be authentic.

The tool’s algorithm was created through the combined efforts of the Microsoft AI team and the Microsoft AI, Ethics, and Effects in Engineering and Research (AETHER) Committee. It is powered by Microsoft’s Azure cloud infrastructure and is designed to identify manipulated elements that would otherwise be very difficult to trace.
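Microsoft has not published how the model works internally, but the general idea of turning per-frame analysis into a single ‘percentage chance’ for a video can be sketched as follows. Everything here is a hypothetical stand-in: `score_frame` is a placeholder, not the real classifier.

```python
# Hypothetical sketch: aggregate per-frame manipulation scores into
# one confidence score for a video. The real Video Authenticator
# model is not public; score_frame stands in for any classifier
# returning a probability that a frame was synthetically altered.

def score_frame(frame_pixels):
    # Placeholder classifier: in practice this would be a trained
    # deep network. Here we merely flag frames whose mean intensity
    # is implausibly extreme, purely for illustration.
    mean = sum(frame_pixels) / len(frame_pixels)
    return 1.0 if mean < 10 or mean > 245 else 0.1

def video_confidence(frames):
    """Return the percentage chance the video is manipulated, taken
    as the maximum per-frame score: a single fake frame is enough
    to distrust the whole clip."""
    if not frames:
        raise ValueError("empty video")
    return max(score_frame(f) for f in frames) * 100

# Ordinary frames score low; one near-black frame raises the score.
print(video_confidence([[128] * 16, [130] * 16]))  # → 10.0
print(video_confidence([[128] * 16, [0] * 16]))    # → 100.0
```

Taking the maximum rather than the average is one reasonable design choice here: averaging would let many clean frames dilute a short manipulated segment.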

The team trained the tool on publicly available datasets such as FaceForensics++ and then tested it on the Deepfake Detection Challenge Dataset. Microsoft also encourages creators to certify their work with a digital signature and attach hashes to the content, so that wherever the content travels, the hashes and certificates travel with it as metadata.
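The article does not specify the hashing or signing scheme, but the pattern of attaching a digest plus a signature as metadata can be sketched with Python’s standard library. The HMAC key below is a hypothetical stand-in for a creator’s real signing certificate.

```python
import hashlib
import hmac
import json

# Hypothetical creator key; a production system would use real
# certificate-based signatures rather than a shared HMAC secret.
CREATOR_KEY = b"creator-signing-key"

def certify(content: bytes) -> str:
    """Return JSON metadata (hash + signature) meant to travel
    with the content wherever it goes."""
    return json.dumps({
        "sha256": hashlib.sha256(content).hexdigest(),
        "signature": hmac.new(CREATOR_KEY, content,
                              hashlib.sha256).hexdigest(),
    })

clip = b"raw video bytes"
metadata = certify(clip)
print(metadata)
```

The hash lets anyone detect accidental corruption; the signature additionally ties the content to the key holder, which is the part that makes the certification meaningful.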


The second tool is a reader that scans the certificates and hashes to verify that the content is authentic. The technology is the product of research by Microsoft Research and Microsoft Azure, in partnership with Microsoft’s Defending Democracy Program, and is said to power Project Origin, which was recently announced by the BBC.
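The reader’s actual protocol is not public either, but its job, recomputing the hash and signature and comparing them to the metadata that travelled with the content, can be sketched as follows. The key and field names are hypothetical, matching no real Microsoft API.

```python
import hashlib
import hmac

# Hypothetical key; a real reader would verify a public-key
# signature chained to a certificate instead.
KEY = b"creator-signing-key"

def read_and_verify(content: bytes, metadata: dict) -> bool:
    """Hypothetical reader: recompute hash and signature, then
    compare both against the metadata attached to the content."""
    hash_ok = hashlib.sha256(content).hexdigest() == metadata["sha256"]
    sig_ok = hmac.compare_digest(
        hmac.new(KEY, content, hashlib.sha256).hexdigest(),
        metadata["signature"],
    )
    return hash_ok and sig_ok

# Metadata produced at certification time for this exact byte string:
clip = b"raw video bytes"
meta = {
    "sha256": hashlib.sha256(clip).hexdigest(),
    "signature": hmac.new(KEY, clip, hashlib.sha256).hexdigest(),
}
print(read_and_verify(clip, meta))         # True: untouched
print(read_and_verify(clip + b"!", meta))  # False: content altered
```

`hmac.compare_digest` is used for the signature comparison because it runs in constant time, avoiding timing side channels.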

Whether Microsoft plans to make either of these tools public remains to be seen. For now, Microsoft is said to be in talks with the AI Foundation’s Reality Defender 2020, a US-based dual commercial and nonprofit enterprise, and Video Authenticator will soon be made available to electoral organizations.

Scan & Detect Deepfakes With A Simple Tool

Microsoft further adds that no single organization can combat disinformation and trace harmful deepfakes alone. Acknowledging the evolving complexity and growing sophistication of synthetic media generation, Microsoft announced that it will partner with multiple organizations on validating content. It has partnered with the AI Foundation and a consortium of media partners, including the BBC, CBC/Radio-Canada, and the New York Times, on Project Origin. An array of publishers and social media companies in the Trusted News Initiative also appear to be engaged with Microsoft’s latest developments.


Microsoft is also running campaigns and quizzes to help people learn about synthetic media, build the critical media literacy skills needed to spot it, and understand its impact on democracy.

Soon we may be able to spot synthetic media, which is spreading through ever cheaper tools, and gradually knock it down in favor of authentic content.

A Rising Concern Over Fake News

Sadly, the spread of fake news seems uncontrollable. Fortunately, Microsoft has found a way to reduce deepfake-driven news. But how does fake news arise?

Here are some of the ways fake news arises:

  • Exaggerated news: Some news outlets exaggerate the importance of certain events in the hope that people will believe them. For instance, they may deliberately publish stories about terrorist attacks and civil conflicts around the world in order to make the public believe these things are getting out of hand.
  • A coverage gap: Even when outlets make an effort to cover everything, there is a gap between true and fake news. People tend to believe everything they read because the news is delivered in an extremely simplified manner. That is why political campaign text messaging is still preferred for forwarding politicians’ messages directly to their target voters, rather than risking their reputation on social media.
  • Limited-point-of-view reports: A limited point of view means people look at the news based on what they want to see rather than what is actually happening in the world. As a result, it becomes harder for people to understand what is going on.
  • Fake media passed off as real news: Fake news also emerges when media outlets manipulate coverage. If they can convince the public that their version of events is what is really happening, people will not be able to see the truth behind their stories.
Archana Udayanan
Archana usually enjoys reading a book, with an affinity for mystery. She is an active writer at the Creative11 space who likes blogging and writing articles that make for a definite read. Other than writing, she loves nurturing her small garden and painting. Archana makes her home in Kerala with her family.
