
Meta Rejects EU AI Code Overreach



Meta has refused to sign the European Union’s Code of Practice for its AI Act, weeks before the bloc’s rules for general-purpose AI take effect. This bold move places Meta at the centre of regulatory tensions in Europe, and the refusal reflects deep resistance to what the company considers over-regulation.

“Europe is heading down the wrong path on AI,” wrote Joel Kaplan, Meta’s chief global affairs officer, on LinkedIn. He explained that Meta had carefully reviewed the Commission’s Code of Practice for general-purpose AI and would not sign it. According to him, the Code introduces significant legal uncertainties for model developers. The decision reinforces Meta’s broader message that the bloc’s approach is stifling.

Meta Rejects EU AI Code, Raises Concerns About the Code of Practice

The voluntary framework published by the EU earlier this month aims to help AI companies comply with upcoming legislation. It requires companies to maintain documentation, avoid using pirated data, and comply with content owners’ removal requests. Meta views many of these conditions as vague, overreaching, or unworkable, and its refusal signals concerns about practicality and legal clarity.

Kaplan argued that the EU’s legal approach would damage innovation in the region. “This will throttle the development and deployment of frontier AI models in Europe,” he warned. He added that the rules would stunt European companies hoping to build on advanced AI platforms. These comments form part of Meta’s broader justification for rejecting the code and questioning its long-term economic consequences.

The AI Act itself is a risk-based regulation that bans specific uses outright, including applications such as social scoring or behavioural manipulation. It also flags “high-risk” areas such as biometric identification, education, and employment systems. Developers must register their AI systems and meet strict quality, safety, and transparency standards. Meta’s rejection stems partly from its view that these requirements are excessively burdensome for general-purpose AI.

Industry-Wide Pushback and the EU’s Firm Position

Other tech giants share similar reservations. Companies like Alphabet, Microsoft, and Mistral AI have all pushed back against the rules, and many urged the Commission to delay the rollout or modify the guidelines to reduce compliance risks. Despite this pressure, the Commission remains firm, insisting the timeline won’t shift and that enforcement is non-negotiable. In this context, Meta’s refusal is part of a broader industry backlash.


On the same day, the EU also released implementation guidelines for AI model providers, offering more direction ahead of the August 2 effective date. The guidelines target providers of “general-purpose AI models with systemic risk,” which includes firms like OpenAI, Anthropic, Google, and Meta. Companies with models released before August 2 must comply fully by 2027. Still, Meta is declining the code and its related obligations, signalling a potential regulatory clash ahead.

By refusing to engage with the EU’s voluntary Code, Meta is drawing a line. The decision makes clear that the rejection stems not from defiance but from a strategic disagreement. Whether this position proves viable in the long term remains to be seen.
