Tech giant Microsoft announced on Tuesday the rollout of two new technologies to combat disinformation.
The company hopes they will help educate the public about disinformation, particularly the spread of deepfake content.
Deepfakes came to prominence in early 2018 after a developer adapted cutting-edge artificial intelligence techniques to create software that swapped one person’s face for another.
The process worked by feeding a computer many still images of one person and video footage of another. The software then generated a new video featuring the former’s face in place of the latter’s, with matching expressions, lip sync, and other movements.
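For readers who want a concrete picture, the sketch below shows the shared-encoder, two-decoder autoencoder idea commonly associated with those early face-swap tools. It is an illustrative assumption rather than any real product’s code: the network sizes, the 64x64 face crops, and the random tensors standing in for training data are all placeholders.

```python
# Minimal sketch of the shared-encoder / per-person-decoder idea behind early
# face-swap ("deepfake") tools. All names and shapes are illustrative
# assumptions; real systems add face alignment, adversarial losses, and far
# larger networks.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # shared latent code for both faces
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a = Decoder()  # learns to reconstruct person A's face
decoder_b = Decoder()  # learns to reconstruct person B's face

# Training step (sketched): reconstruct each person's faces through the SAME
# encoder, so the latent code captures pose and expression for both.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for aligned still images of person A
faces_b = torch.rand(8, 3, 64, 64)  # stand-in for person B's video frames
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) \
     + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)
loss.backward()

# The swap: encode B's frame, decode with A's decoder, so A's face appears
# with B's pose, expression, and mouth movements.
swapped = decoder_a(encoder(faces_b[:1]))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```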
Since then, the process has been simplified – opening it up to more users – and now requires fewer photos to work.
Some apps now require only a single selfie to substitute the user’s face for a film star’s in clips from Hollywood movies.
According to Microsoft, research it supported, conducted by Professor Jacob Shapiro at Princeton and updated this month, cataloged 96 separate foreign influence campaigns targeting 30 countries between 2013 and 2019.
These campaigns, carried out on social media, sought to defame notable people, persuade the public or polarize debates. While 26% of these campaigns targeted the U.S., other countries targeted include Armenia, Australia, Brazil, Canada, France, Germany, the Netherlands, Poland, Saudi Arabia, South Africa, Taiwan, Ukraine, the United Kingdom, and Yemen.
Some 93% of these campaigns included the creation of original content, 86% amplified pre-existing content, and 74% distorted objectively verifiable facts. Recent reports also show that disinformation about the COVID-19 pandemic has been widely distributed, leading to the deaths and hospitalizations of people who sought supposed cures that are actually dangerous.
These technologies are part of Microsoft’s Defending Democracy Program, which, in addition to fighting disinformation, helps protect voting through ElectionGuard and helps secure campaigns and others involved in the democratic process through AccountGuard, Microsoft 365 for Campaigns, and Election Security Advisors.
New technologies to fight disinformation
According to Microsoft’s latest blog post, disinformation comes in many forms, and no single technology will solve the challenge of helping people decipher what is true and accurate.
To address this, Microsoft is working on two separate technologies that target different aspects of the problem.
One major issue is deepfakes, or synthetic media, which are photos, videos or audio files manipulated by artificial intelligence (AI) in hard-to-detect ways.
They could make people appear to say things they didn’t or to be in places they weren’t, and the fact that they’re generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology.
However, in the short run, including the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.
One of these technologies is Microsoft Video Authenticator, which can analyze a still photo or video and provide a percentage chance, or confidence score, that the media has been artificially manipulated.
In the case of a video, it can provide this percentage in real time on each frame as the video plays. It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.
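Microsoft has not published Video Authenticator’s internals or a public API, so the snippet below is only a conceptual sketch of what per-frame confidence scoring looks like in practice: it walks a clip frame by frame, with `score_frame` as a hypothetical stand-in for a trained detector.

```python
# Conceptual sketch only: Microsoft has not released Video Authenticator or
# its API. `score_frame` is a hypothetical stand-in for a detector that
# returns a 0-1 manipulation confidence for a single frame.
import cv2  # pip install opencv-python

def score_frame(frame) -> float:
    """Hypothetical detector. A real one would look for blending-boundary
    artifacts and subtle fading/greyscale elements; here we return a dummy
    value so the loop runs end to end."""
    return 0.0

cap = cv2.VideoCapture("clip.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video
    confidence = score_frame(frame)
    # Report a running, per-frame confidence as the video plays.
    print(f"frame {frame_idx}: {confidence:.1%} chance of manipulation")
    frame_idx += 1
cap.release()
```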
This technology was originally developed by Microsoft Research in coordination with Microsoft’s Responsible AI team and the Microsoft AI, Ethics and Effects in Engineering and Research (AETHER) Committee, an advisory board that helps ensure new technology is developed and fielded responsibly.
Video Authenticator was created using the public FaceForensics++ dataset and was tested on the DeepFake Detection Challenge Dataset, both leading datasets for training and testing deepfake detection technologies. (JM Agreda)