MENLO PARK — Facebook AI, in partnership with leaders from academia and industry, has launched the Deepfake Detection Challenge (DFDC), an open initiative to accelerate the development of technologies for detecting deepfakes and manipulated media. The challenge was introduced at the NeurIPS conference, and participants receive access to a new dataset of more than 100,000 videos created specifically for the challenge. The goal is to spur researchers to build and improve models that can reliably detect manipulated content.
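To make "detecting manipulated content" concrete: a common approach samples frames from each video, scores every frame with an image classifier, and averages those scores into a single fake-probability per clip. The sketch below is a hypothetical outline of that pattern, not an actual DFDC entry; the untrained placeholder model and the file name are illustrative only.

```python
import cv2
import torch
import torchvision.transforms as T
from torchvision.models import resnet18

# Placeholder per-frame classifier: a ResNet-18 with one fake/real logit.
# A real entry would train this on the DFDC training videos.
model = resnet18(weights=None, num_classes=1).eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
])

def video_fake_probability(path, num_frames=16):
    """Sample frames evenly, score each, and average into one probability."""
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    scores = []
    for idx in range(0, max(total, 1), max(total // num_frames, 1)):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            logit = model(preprocess(rgb).unsqueeze(0))
        scores.append(torch.sigmoid(logit).item())
    cap.release()
    # Fall back to an uncommitted 0.5 if no frames could be read.
    return sum(scores) / len(scores) if scores else 0.5

print(video_fake_probability("example_clip.mp4"))  # hypothetical file
```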
The challenge, hosted on Kaggle, the data science competition platform, runs through March 2020. Participants compete by submitting AI models that detect deepfakes, with each submission scored on how well it distinguishes manipulated videos from authentic ones. Facebook has committed more than $10 million in prizes and grants to incentivize innovation, and AWS is contributing up to $1 million in credits to support the hosting of participants' models.
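Kaggle competitions of this kind are typically scored with binary log loss over the predicted probability that each video is a deepfake. A minimal sketch of that metric, assuming log-loss scoring (the function and variable names here are illustrative, not Kaggle's actual evaluation code):

```python
import numpy as np

def binary_log_loss(y_true, y_pred, eps=1e-15):
    """Mean binary cross-entropy: lower is better.

    y_true -- 1 for a deepfake video, 0 for an authentic one
    y_pred -- the model's predicted probability that the video is fake
    """
    p = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    y = np.asarray(y_true, dtype=float)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

# A confidently wrong answer is punished far more than an honest 0.5:
print(binary_log_loss([1, 0], [0.9, 0.1]))    # ~0.105
print(binary_log_loss([1, 0], [0.01, 0.99]))  # ~4.61
```

One consequence of this metric is that calibration matters: a model that hedges uncertain predictions toward 0.5 can outscore a more accurate but overconfident one.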
Cristian Canton Ferrer, the Facebook AI research manager overseeing the project, emphasized the dataset's importance in driving deepfake detection research. The videos were shot with paid actors in a variety of settings, then altered with AI-driven manipulations to simulate real-world deepfakes. The result is a critical resource for building models that can handle the many forms manipulated media can take.
Participants are encouraged to open-source their models to promote wider research. The organizers have taken extensive precautions to ensure the dataset is responsibly sourced: it features only actors who consented to the use of their likeness, and it includes no Facebook user data.
The DFDC, developed in collaboration with key partners including Microsoft, AWS, and academic institutions, is part of a broader industry effort to address the growing problem of AI-manipulated media. Facebook AI's aim is to develop tools that help prevent the misuse of deepfakes and ensure that AI is used responsibly and transparently.