Deepfakes have spread rapidly, including through apps built specifically for creating them. In the meantime, there is a pressing need for ways to identify which videos are not legitimate. That is what Facebook intends to do by funding and rewarding research in the area.
The company will invest US$10 million to stimulate the creation of methods for detecting whether a video has been manipulated. The Deepfake Detection Challenge, as it is called, will also involve Microsoft, the Partnership on AI consortium, and researchers from several universities.
“The goal of the challenge is to produce technology that everyone can use to better detect when artificial intelligence is used to deceive people,” explains Facebook CTO Mike Schroepfer.
To help challenge participants, Facebook will create a dataset containing numerous deepfakes. “It is important to have data freely available for use by the community, obtained with clear consent from participants and with few usage restrictions,” says Schroepfer.
Facebook created the material from real videos featuring people who were paid to appear and who authorized the use of the footage in the research. The company said it would not use images of its users in the challenge.
The material will be tested during a technical working session at the International Conference on Computer Vision (ICCV), held in October in South Korea. The Deepfake Detection Challenge itself is due to launch in December.
Deepfake of Mark Zuckerberg
One of the deepfake experiments involved the Facebook founder himself. A video posted on Instagram in June showed Mark Zuckerberg appearing to say that “whoever controls the data controls the future.”
The video was created by the artists Bill Posters and Daniel Howe in partnership with the advertising agency Canny. Its description made clear that it was not a legitimate video, and Instagram decided not to remove it.
Source: Facebook.