MENLO PARK — A coalition of leading tech companies and academic institutions, including Facebook, Microsoft, and universities such as MIT, Cornell Tech, and the University of Oxford, has launched the Deepfake Detection Challenge (DFDC) to address the growing threat posed by AI-generated “deepfake” videos. The challenge aims to spur innovation in detecting manipulated media by providing a large dataset and inviting global participation from the AI and research communities.
Deepfake technology, which can create realistic yet entirely fabricated videos of individuals, has raised significant concerns about misinformation and the integrity of online content. The DFDC seeks to develop advanced tools that can detect these altered videos and prevent their misuse.
To support this effort, a new dataset is being commissioned, built with paid actors so that no user data is involved and ethical data-use standards are upheld. The challenge, funded with over $10 million, includes a leaderboard, grants, and awards to encourage participation. The DFDC will be overseen by the Partnership on AI’s Steering Committee on AI and Media Integrity, a diverse coalition of organizations from the tech, media, and academic sectors.
The challenge will kick off with a technical working session at the International Conference on Computer Vision (ICCV) in October, followed by the full dataset release and official launch at the Conference on Neural Information Processing Systems (NeurIPS) in December. The winning teams will be recognized at NeurIPS, where their methods will be shared with the broader community.
Academic experts involved in the challenge emphasize the importance of collaboration between industry and academia to combat the dangers of manipulated media. They believe that by pooling resources and knowledge, the global community can develop effective tools to preserve the integrity of information in the digital age.