
Eric Schmidt, Former Google CEO, Calls for Global AI Regulation

Published: February 13, 2025

Reading Time: 3 minutes

Eric Schmidt, former CEO of Google, has raised alarms about AI being used in a “Bin Laden scenario” or by “rogue states” to harm innocent people. His concerns center on AI’s potential misuse by nations such as North Korea, Iran, or Russia. The use of AI to develop biological weapons is a particular concern.

“The real fears that I have are not the ones that most people talk about AI. I talk about extreme risk,” Schmidt said. He emphasized that AI is developing quickly and is accessible to bad actors who could use it for harm.

AI Innovation and Oversight

Schmidt is advocating for a balance between government oversight and innovation. Although he supports government regulations to prevent AI misuse, he warns against excessive restrictions that could slow technological progress. “The truth is that AI and the future is largely going to be built by private companies,” he said.

“It’s really important that governments understand what we’re doing and keep their eye on us.”

He also endorsed US export controls on the advanced microchips used in AI development. These controls, first imposed under former President Joe Biden, were designed to keep adversarial nations from accelerating their AI research. However, with changing political leadership, the future of these restrictions remains uncertain.


Europe’s Trust-Centric Approach to AI

Schmidt’s concerns align with the European Union’s (EU) approach to AI regulation, which prioritizes trust and excellence. The EU’s strategy aims to make AI human-centric and ensure that safety and fundamental rights are upheld.

In April 2021, the European Commission introduced an AI regulatory framework, focusing on:

  • Fostering a European approach to AI
  • Reviewing the Coordinated Plan on AI with EU member states
  • Implementing legal frameworks to assess AI risks

By 2024, the EU launched the AI Innovation Package, which supports startups and small businesses in developing trustworthy AI. A key initiative, GenAI4EU, promotes collaboration between AI startups and major industries.

Global AI Leadership and Investment

The EU is striving to compete on a global scale by investing in AI development and governance. It has set an annual AI investment target of €20 billion, supported by the Horizon Europe and Digital Europe programmes, and hopes to lead in cutting-edge AI research and applications.

The Recovery and Resilience Facility has allocated €134 billion toward digital transformation, reinforcing Europe’s ambition to become a global leader.

AI and National Security Concerns

Schmidt’s warning about AI misuse is only part of a bigger issue. AI, if left unchecked, could be weaponized by hostile nations or individuals. This is why the EU has introduced a layered approach to AI regulation, categorizing applications into four risk levels: unacceptable, high, limited (transparency), and minimal risk.

This system ensures that AI applications with the potential for significant harm are properly monitored and controlled. Additionally, the EU’s legal framework addresses civil liability and AI safety regulations. These policies provide clarity for developers, deployers, and users of AI technologies, setting a global benchmark for responsible AI development.

The Debate Over AI Regulation

While the EU prioritizes trust and security, not all nations share the same stance. At the recent AI Action Summit in Paris, the US and UK refused to sign an agreement on AI regulation. US Vice President JD Vance warned that strict regulations could “kill a transformative industry just as it’s taking off.”

Schmidt also voiced concerns that over-regulation in Europe could stifle innovation. “The AI revolution, which is the most important revolution in my opinion since electricity, is not going to be invented in Europe,” he said, pointing to the need for a regulatory balance that supports both safety and progress.

Smartphone Use Among Children: Schmidt’s Changing Perspective

Beyond AI, Schmidt also weighed in on the increasing concerns surrounding smartphone use among children. As the former head of Google during its acquisition of Android, Schmidt now supports initiatives to restrict phone use in schools. 

“I’m one of the people who did not understand, and I’ll take responsibility that the world does not work perfectly the way us tech people think it is,” he admitted. Studies suggest excessive screen time negatively affects children’s mental health and academic performance. 

Schmidt believes smartphones can be safe with proper moderation. Even so, he supports proposals to ban social media for children under 16.

“Why would we run such a large, uncontrolled experiment on the most important people in the world, our next generation?” he questioned.

Regulation vs. Innovation

Schmidt’s warnings highlight real threats, but overregulation could hinder advancements that benefit society. The EU’s strategy of trust and excellence provides one model, while the US and UK advocate for a less restrictive approach.

Lolade

Contributor & AI Expert