The European Parliament has blocked lawmakers from using built-in AI tools on their official work devices.
The decision follows mounting concerns over cybersecurity, privacy, and foreign access to sensitive data.
According to an internal email obtained by Politico, the Parliament’s IT department said it cannot guarantee the security of information uploaded to AI systems.
Officials also confirmed that they are still assessing how much data is shared with AI providers.
As a result, the department concluded that keeping these tools disabled is the safest option.
Cloud-Based AI
Lawmakers routinely handle sensitive material, including private emails, draft legislation, and internal policy discussions.
When users interact with AI tools, their data often travels to cloud servers owned by third-party companies. In many cases, those servers are located outside the European Union.
This creates uncertainty. The Parliament’s IT department warned that once data leaves official systems, it may fall outside EU control.
Even with safeguards in place, officials said the risks remain difficult to measure. Therefore, they chose to disable AI features by default.
U.S. Surveillance Laws
Popular AI chatbots are operated by U.S.-based companies, including Anthropic, Microsoft, and OpenAI.
Because these firms fall under U.S. law, American authorities can legally demand access to user data.
This means that if European lawmakers upload confidential material to these platforms, U.S. agencies could require companies to turn over that information.
For the European Parliament, this risk is unacceptable.
AI Training Practices
Many AI systems rely on user inputs to improve their models. While companies say they limit how data is stored and shared, officials argue that transparency remains incomplete.
The Parliament’s IT department acknowledged that it is still evaluating how AI providers process uploaded information.
Until that review is complete, officials decided to err on the side of caution.
Europe’s Data Protection
Europe has some of the strongest data protection laws in the world, anchored by the General Data Protection Regulation (GDPR). These rules aim to limit how companies collect, store, and use personal information.
However, tensions are growing. Last year, the European Commission proposed legislative changes that could relax certain data protection requirements.
The goal was to help large technology companies train AI systems using European data.
Critics strongly opposed the proposal, arguing that it weakened privacy safeguards and favored U.S. tech giants.
The Parliament’s move highlights that this debate remains unresolved.
Political Tensions
Several EU member states are reassessing their dependence on U.S. technology companies, which remain subject to U.S. law and changing political demands.
In recent weeks, the U.S. Department of Homeland Security issued hundreds of subpoenas to technology and social media companies.
The requests sought information about people who publicly criticized the Trump administration’s policies.
Reports show that companies including Google, Meta, and Reddit complied in several cases.
Notably, these subpoenas were not issued by a judge, nor were they enforced by a court.

