
The European Union’s push to codify rules for artificial intelligence is gathering momentum, although not all technology companies are on board. Microsoft has signaled it is open to signing the EU’s new code of practice for AI, while Meta has rejected it outright as regulatory overreach that creates legal risk.
These divergent responses illustrate the growing tension between regulators and the tech companies that build general-purpose AI models. The code of practice is a voluntary framework that sits alongside the EU’s AI Act, intended to ensure compliance with transparency and copyright standards while still encouraging innovation. Companies, however, are sharply divided over how it will affect their operations and the AI ecosystem in Europe.
What Is the EU AI Code of Practice?
The EU AI code of practice was drafted by a panel of 13 independent experts. It offers a voluntary framework to guide companies in complying with the European Union’s AI Act. The AI Act, which became law in June 2024, sets transparency standards for general-purpose AI models and mandates compliance with European copyright laws.
Signatories are expected to publish summaries of the content used to train their models and to adopt copyright policies aligned with EU law. The code aims to provide legal clarity before full enforcement of the AI Act begins in 2026.
This code targets major AI developers, including Microsoft, Meta, Alphabet, OpenAI, Anthropic, and Mistral. Thousands of other tech companies operating in the region are also expected to engage with the framework.
Microsoft Supports Voluntary Alignment
Microsoft President Brad Smith said the company is likely to sign the EU AI code of practice. While Microsoft has yet to make a formal commitment, Smith indicated that the company views the code as an opportunity.
“I think it’s likely we will sign. We need to read the documents,” Smith told Reuters. He welcomed the opportunity for direct engagement between the tech industry and the EU’s AI Office, which is tasked with implementing the rules.
Microsoft’s openness follows its sustained involvement in European AI regulation, where the company has advocated a cooperative approach that combines public and private modes of governance. It framed the code as a chance to help shape responsible development and to avoid friction once compliance becomes mandatory.
Meta Warns of Legal Confusion and Regulatory Overreach
In contrast, Meta strongly criticized the voluntary framework. Joel Kaplan, Meta’s Chief Global Affairs Officer, stated that the company will not sign the code, citing compliance challenges under the AI Act and ambiguity in the legal obligations the code would create.
“Meta won’t be signing it. This code introduces a number of legal uncertainties for model developers,” Kaplan posted on LinkedIn. He argued that the framework goes well beyond what the AI Act requires, creating compliance issues that could chill innovation.
Meta echoed the concerns of 45 European companies that argue the code, in its current form, could hurt competitiveness. In their view, the added requirements would hold back smaller businesses that build on frontier AI models and constrain European AI innovation.
Innovation vs Regulation
At the heart of the debate lies a deep divide over how much control governments should exercise over AI development. Microsoft leans toward proactive collaboration with regulators, seeing AI Act compliance as a manageable and necessary responsibility.
Meta, however, views the code as a dangerous precedent that exceeds the intended scope of AI regulation in Europe. The company believes voluntary rules risk becoming de facto standards that bind companies before any legal obligation takes effect.
This philosophical gap could affect how global AI companies operate in the European Union. It may even shape future tech policies beyond Europe’s borders, especially as governments across the world look to regulate generative AI.
What Lies Ahead for EU’s AI Governance?
The European Union wants to lead global AI regulation, but these contrasting corporate responses highlight the challenge of building consensus. With major players like Microsoft leaning in and Meta stepping away, the success of the EU AI code of practice may depend on how flexible and clear the framework becomes in its final form.
Meanwhile, the EU’s AI Office will continue pushing for wider adoption before enforcement of the AI Act begins. Whether more companies join or resist the code will show how prepared the tech world is to engage with structured AI oversight.