
Google has made it official: it supports the AI code of practice proposed by the European Union. The tech giant's announcement comes as the EU prepares to enforce the EU AI Act, the world's most extensive set of rules governing artificial intelligence.
For European policymakers, this represents a significant win as they continue to assert their right to regulate digital technology independently. It also highlights a growing divergence among the big tech companies over the future of AI regulation in Europe.
What Is the EU AI Code of Practice?
The AI code of practice is a voluntary but influential guiding document that specifies how general-purpose AI systems should operate under the forthcoming EU AI Act. It offers extensive guidance on safety, transparency, accountability, data governance, and intellectual property rights.
Kent Walker, Google's president of global affairs, expressed the company's support for the code, saying it "might give European citizens and businesses access to even more secure and reliable AI tools."
Walker acknowledged that the code aligns well with the EU's broader regulatory goals, but expressed concern that other parts of the framework could limit innovation, warning that copyright rules and requirements that would expose trade secrets could inhibit AI development across Europe.
Google Backs Rules but Warns of Innovation Risks
In its statement, Google expressed both optimism and concern. The company reiterated its desire to find a constructive way to help shape responsible AI development. However, it also made it clear that overregulation could damage Europe’s global competitiveness.
Walker stated that "departures from EU copyright law, actions that delay approvals, or impositions that expose trade secrets" could deter investment and progress. His unease reflects wider industry anxiety about bureaucracy and overlapping requirements in EU and UK AI regulation.
Still, Google's commitment puts it on the same side as other supporters of the code, such as OpenAI and the French AI company Mistral, and strengthens the EU's push to set standards before the EU AI Act's rules take effect next month.
Meta Refuses to Sign Amid Legal Concerns
While Google steps up, not every tech giant is on board. Meta, the owner of Facebook and a leading AI model developer, has refused to sign the AI code of practice.
Joel Kaplan, Meta’s chief legal officer, claimed the code “introduces a number of legal uncertainties for model developers.” He argued that it extends beyond the intended scope of the EU AI Act, warning that such overreach could “throttle the development and deployment of frontier AI models in Europe.”
Meta’s rejection of the framework adds fuel to an already heated debate. Industry leaders, including Airbus and BNP Paribas, have called for a two-year delay in enforcing AI rules, citing confusion over overlapping regulatory requirements.
US Pushes Back While EU Defends Its Digital Sovereignty
As the EU pushes ahead with its AI agenda, tensions with the United States continue to simmer. After a meeting between European Commission president Ursula von der Leyen and US President Donald Trump, both sides pledged to address “unjustified digital trade barriers.”
However, the European Commission made it clear that its digital regulations are not part of that discussion. A spokesperson stated firmly, “We are not moving on our right to regulate autonomously in the digital space.”
This statement signals that the EU has no intention of softening its stance, despite lobbying from the US and hesitation from tech giants like Meta.
Microsoft Likely to Join, Strengthening Europe’s AI Position
Microsoft may soon follow Google's lead. Company president Brad Smith told Reuters that it was "likely" Microsoft would also sign the AI code of practice.
If Microsoft joins, it would mark a major alignment between the EU’s regulatory vision and leading global AI developers. This could pave the way for a new global benchmark in responsible AI governance, centered in Europe.
While challenges remain, the EU's firm regulatory position has started a global conversation about how to balance innovation with ethical responsibility. Google's decision to sign the code, even while voicing legitimate concerns, shows that working toward a safe AI future requires collaboration rather than refusal.