MENLO PARK — The conversation surrounding the governance of AI took a significant step forward as Facebook shared its views on shaping the future of artificial intelligence, both within the European Union and globally. As societies become more attuned to the vast opportunities and inherent risks of an interconnected, fast-paced digital world, Facebook has advocated for new regulations to safeguard key aspects of the internet—ensuring it remains open, accessible, and accountable.
During the past year, Facebook CEO Mark Zuckerberg outlined four essential areas where regulation could have a transformative effect: harmful content, election integrity, privacy, and data portability. AI, as a pivotal technology for both the present and future, was highlighted as a key component in these discussions. As part of its continued efforts to engage with policymakers, Facebook welcomed the chance to provide feedback on the European Commission’s recent “White Paper on Artificial Intelligence – A European approach to excellence and trust.”
Much like the internet, AI represents a groundbreaking force, promising to fuel innovation and economic growth while tackling long-standing global challenges. It is central to Facebook’s operations, from curating News Feed content to combating misinformation. AI has also been instrumental in pandemic response efforts, such as facilitating the creation of disease maps to track COVID-19’s spread.
However, as with any transformative technology, AI presents unique legal and ethical dilemmas. How can societies ensure that AI systems are transparent, accountable, and respectful of privacy? Facebook has recognized the EU’s leadership in technology regulation, especially with the success of the General Data Protection Regulation (GDPR), and believes the EU is well-positioned to guide the development of AI governance.
Facebook supports the European Union’s vision of establishing an “ecosystem of excellence” in AI—an environment where research, industrial capacity, and competitiveness can thrive. This is especially relevant as Europe looks to rebuild its economy in the aftermath of the COVID-19 pandemic.
The tech giant also agrees with the European Commission’s aim of fostering an “ecosystem of trust” for AI in Europe, grounded in core democratic values, human rights, and the rule of law. By leading in this space, the EU can set a global standard for AI governance, providing a counterpoint to countries that pursue AI development without the same commitment to these principles.
Striking the right balance between regulation and innovation is not without its challenges, but Facebook is committed to collaborating with the EU to navigate this complex terrain. Facebook has been working towards both excellence and trust in AI, demonstrated through its global AI research initiatives, partnerships with academic institutions, and AI for Good projects.
One example of this commitment is Facebook’s ongoing support for the development of ethical AI, such as its partnership with the Technical University of Munich to create an Institute for Ethics in AI. The company has also engaged in collaborative efforts across regions including India, Latin America, and the Asia-Pacific to explore how AI can contribute to global objectives such as the United Nations’ Sustainable Development Goals.
Internally, Facebook has focused on developing AI systems that are transparent and fair. This is exemplified by its “Why Am I Seeing This?” feature, which explains to users how AI-driven content decisions are made. In addition, the company has helped shape global AI policy through its participation in initiatives such as the OECD’s AI Principles and Policy Observatory.
In its comments on the EU’s White Paper, Facebook made two key recommendations: the need for clear definitions of high-risk AI applications and the importance of aligning AI regulations with the GDPR. These suggestions aim to prevent burdensome regulations that could stifle innovation while ensuring that AI developments remain safe, fair, and effective. Facebook emphasized that regulators should focus on high-risk AI uses and cautioned against vague criteria that could lead to over-regulation.
Additionally, Facebook advocates for an AI governance model that follows the self-assessment approach embedded in the GDPR, rather than imposing pre-approval mechanisms that could hinder technological progress. The company also raised several practical concerns regarding the Commission’s proposal, such as potential conflicts with data protection laws and intellectual property rights, along with technical challenges in implementing certain requirements.
As this debate evolves, Facebook is eager to continue collaborating with the European Commission and other stakeholders to navigate the complexities of AI regulation. To further these discussions, Facebook will be hosting an online conference on June 18 to mark the fifth anniversary of its AI research lab in Paris, where experts will explore the future of AI governance in Europe and beyond.
For those interested in exploring Facebook’s detailed responses to the European Commission, the full document can be accessed here.