Europe is setting the pace for AI regulation, and the spotlight is on a new voluntary code of practice for general-purpose AI.
This code is designed to help companies align with the upcoming EU AI Act, a sweeping piece of legislation that aims to keep artificial intelligence safe, ethical, and fair.
And here’s the big news: Google just said yes to it.
The tech giant has agreed to sign the code, just days before stricter rules kick in on August 2, 2025.
Meta Said No. Google Said Yes. What’s the Difference?
Earlier this month, Meta made headlines by refusing to sign the EU’s AI code.
Their reason?
They think the EU is overreaching. In fact, Meta claimed Europe is “heading down the wrong path on AI.”
On the flip side, Google is playing ball – sort of.
In a blog post, Kent Walker, Google’s president of global affairs, said the final version of the code was “better” than earlier drafts.
But he also expressed concern that overregulation could slow Europe down in the global AI race.
Here’s what he meant:
- New rules might conflict with EU copyright law.
- The approval process could become slower and more bureaucratic.
- Companies might be forced to share trade secrets, putting innovation at risk.
Google’s move signals cooperation, but with some hesitation.
Who Will These Rules Affect?
Starting August 2, the new regulations will focus on general-purpose AI models with systemic risk: think models like ChatGPT or Google’s Gemini.
Some of the companies in the hot seat:
- Anthropic
- Meta
- OpenAI
These companies will have a two-year window to fully comply with the AI Act.
What Does Signing the Code of Practice Actually Mean?
Good question.
By signing this voluntary code, AI companies commit to:
- Sharing clear documentation on their AI tools and systems.
- Not using pirated content to train their models.
- Respecting copyright owners who opt out of AI training datasets.
Basically, it’s about building AI responsibly and transparently.
Let’s Talk About the AI Act for a Second
The EU AI Act is one of the most ambitious AI laws in the world.
Here’s how it works:
| Risk Level | Example Use Cases | Rule Type |
|---|---|---|
| Unacceptable Risk | Social scoring, behavior manipulation | Completely banned |
| High Risk | Biometrics, facial recognition, hiring tools | Must meet strict requirements |
| Limited/Minimal Risk | Chatbots, recommendation engines | Light transparency obligations |
If your AI tool falls in the high-risk category, you’ll need to register it, perform risk assessments, and follow strict quality control measures.
Why This Matters
This isn’t just a Europe thing. When a market as big and influential as the EU moves, the whole world watches.
And tech giants can’t afford to ignore the implications.
- For companies: The rules could raise costs and slow down development.
- For users: It means better protection from biased or manipulative AI.
- For the AI world at large: Europe could become the gold standard for AI ethics—if it balances innovation and regulation well.
Final Thoughts
This moment highlights a growing divide in how tech companies view regulation.
Some, like Google, are trying to play nice, possibly to stay ahead of stricter laws.
Others, like Meta, are pushing back, arguing that too many rules will kill creativity and progress.
The real question? Can Europe protect users without stifling innovation?
Time will tell.