OpenAI Vs. Meta: What The EU AI Code Reveals About Their Strategy
A divergence that may ultimately shape global AI governance
This article originally appeared on Forbes on July 22, 2025.
Note: since the original publication, Anthropic and Google have committed to signing on.
Two of the most powerful tech companies have taken strikingly different paths on artificial intelligence regulation in Europe. OpenAI has embraced the European Union’s voluntary Code of Practice for General-Purpose AI, signing on and expressing support for collaborative governance. Meta, by contrast, has opted out, publicly rebuking the EU’s approach. These divergent responses reflect the two companies’ business models and interests, their risk tolerance, and their long-term bets on AI regulation, both in Europe and at home in the United States.
Europe’s Voluntary Experiment In AI Oversight
The EU’s code of practice, announced on July 10, is a voluntary framework to guide the development and deployment of general-purpose AI models. Co-created by regulators and industry, it encourages transparency, information sharing, and best practices around risk management and model evaluation. It is an attempt to set a regulatory benchmark for AI companies, requiring signatories to publish summaries of training data, avoid unauthorized use of copyrighted material, and implement internal risk-monitoring frameworks. In return, companies that sign on are promised reduced administrative burdens and greater legal clarity. The subtext is clear: by joining, firms are effectively placed on the “good guys” list and are less likely to face the heightened scrutiny that non-signatories risk attracting. Refusing to sign the code does not mean that a company is out of compliance. So far, only a handful of companies, including Mistral (the French AI national champion), OpenAI, and Microsoft, have committed to it.
EU AI regulation has come under pressure from European industry leaders who, in an open letter, urged the EU Commission to pause key AI Act obligations for two years. A report by Tech Policy Press revealed how U.S. firms, including Google and Meta, have pushed to soften or delay implementation, arguing that compliance would harm innovation. Joel Kaplan, Meta’s chief global affairs officer, openly criticized the EU’s trajectory, arguing that “this overreach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them.”
OpenAI: Compliance As Part Of Expansion
OpenAI’s support for the code is consistent with a broader strategy that positions the company as a responsible, collaborative actor in global AI governance. Ben Rosen, head of AI policy at OpenAI, called this a “big moment for both the EU and industry.” The decision comes as OpenAI accelerates its international expansion and advances its enterprise offerings across Europe. With EU member states among its largest markets, aligning with the code strengthens OpenAI’s credibility and supports its push into the continent. Beyond regulatory goodwill, it is a strategic move to embed itself into Europe’s economic and digital infrastructure. The company plans to expand in Europe by investing in data centers, supporting AI education, partnering with governments, seeding national startup funds, and boosting adoption across sectors.
OpenAI’s decision to sign the EU code of practice aligns with its intended market positioning as a leader in responsible AI development. Historically, the company has taken a proactive stance on transparency and safety: it has published system cards, released evaluation data, and invited external experts to stress-test its models. It was among the first to adopt global AI safety protocols, including the Bletchley Declaration and the Seoul Framework. These moves aren’t just about AI ethics; they’re strategic, helping OpenAI shape the regulatory landscape while demonstrating a commitment to align with emerging norms.
By aligning with EU regulators, OpenAI is cultivating legitimacy, an asset that may be critical as concerns over model risks and frontier capabilities grow. The company’s charter commits it to avoiding uses of AI that would “harm humanity or unduly concentrate power.” The code of practice gives OpenAI a forum to demonstrate this commitment publicly.
Meta: Resistance While Shifting AI Strategy
Meta’s refusal to sign the EU code is consistent with its skepticism toward what it sees as heavy-handed, innovation-stifling regulation. It comes amid the company’s ongoing EU litigation on several fronts, including data privacy and the enforcement of the EU’s Digital Services Act and Digital Markets Act. More recently, these regulatory tensions have taken on a geopolitical dimension, as the U.S. administration began leveraging them in broader trade negotiations, suggesting that AI regulation is now part of the bargaining over tariffs. Meta has pleaded for help from the government, asking the President to “defend American AI companies and innovators from overseas extortion and punitive fines, penalties, investigation, and enforcement.”
Meta’s refusal to sign the code comes as it shifts its AI strategy, trailing in the model race after a lackluster reception to its Llama 4 family. The company is investing aggressively to catch up, most notably with a $14.3 billion investment in Scale AI that brings its founder, Alexandr Wang, into Meta to head a newly formed superintelligence research lab focused on developing AI systems that could eventually surpass human intelligence. Alongside the investment, CEO Mark Zuckerberg said he plans to pour “hundreds of billions of dollars” into computing infrastructure to build superintelligence.
Two Models, Two Messages To AI Regulators
OpenAI and Meta are sending fundamentally different messages to regulators on both sides of the Atlantic, each reflecting a distinct bet on how to shape the future of AI governance. OpenAI is wagering that early cooperation and a measure of transparency will secure it a seat at the rulemaking table and stave off harsher regulation down the line. Meta, by contrast, is banking on vocal resistance to limit what regulators can reasonably demand, especially from open-source developers. This raises a bigger question: is it better to sign the code and accept the additional rules and compliance burden, or to stay flexible and risk greater scrutiny?
These stances are deeply tied to business models. OpenAI, now a capped-profit entity with close ties to Microsoft, depends on a perception of responsibility to attract enterprise clients and mitigate scrutiny around frontier model development. Meta, whose business still depends heavily on advertising and consumer-scale platforms, is positioning itself as the defender of open innovation, aligning with developers and researchers who fear a regulated, centralized AI future.
This divergence may ultimately shape global AI governance. Voluntary frameworks like the EU code of practice are testing grounds for model transparency and for how industry players influence the rules that will eventually govern them. In that context, the positions of OpenAI and Meta are rehearsals for the regulatory battles to come.