Microsoft’s chief scientist, Dr. Eric Horvitz, believes that President Donald Trump’s plan to block state-level AI regulation could severely limit U.S. progress in the field.
Speaking at a meeting of the Association for the Advancement of Artificial Intelligence, Horvitz stressed the need for thoughtful oversight. “Bans on regulation will hold us back,” he said.
In his view, guidelines and controls are not barriers. Instead, they ensure AI systems remain safe, effective, and trustworthy.
His comments come at a time when the U.S. faces growing concerns over AI misuse. These include misinformation, data privacy, and potential biosecurity risks.
Trump’s Plan
Trump’s latest proposal, part of his new budget package, would prevent all U.S. states from passing AI-related laws for a decade.
Supporters argue the plan promotes national unity and prevents regulatory confusion.
Prominent figures backing the plan include tech investor Marc Andreessen and Vice President JD Vance.
They believe China’s rapid AI development poses a major threat. According to them, if the U.S. hesitates, it risks falling behind.
They claim that national uniformity will make it easier to innovate and compete. However, critics say this approach could ignore local needs and concerns.
A Divide
Despite Horvitz’s public statements, reports suggest Microsoft may support the proposed moratorium.
According to The Financial Times, Microsoft is part of a broader lobbying effort. Other participants reportedly include Google, Meta, and Amazon.
This creates a notable contradiction: a top Microsoft executive publicly supports regulation, while the company may be privately pushing to limit it.
When asked for clarification, Microsoft declined to comment.
This gap between public messaging and private lobbying raises questions. It highlights the growing tension between safety and speed in AI development.
A Call for Stronger Oversight
In addition to Horvitz, UC Berkeley professor Stuart Russell voiced deep concern. He criticized the industry’s high tolerance for risk.
Russell asked a pointed question: why would the public accept a technology with a 10–30% chance of causing human extinction?
His point was simple: if an airline offered a flight with that level of risk, no one would board. So why accept it from AI?
Both experts argue that the stakes are too high for a hands-off approach. They believe careful regulation can prevent harm without halting progress.
Implications for U.S. States
If passed, the proposed moratorium would block all state-level AI legislation for ten years, stripping states of the power to set their own rules even when local concerns arise.
Currently, states such as California and New York are drafting laws to govern how AI is used in hiring, healthcare, and policing. Under the ban, such efforts would halt.
Instead, all authority would shift to the federal level. Critics argue this could silence important regional perspectives and delay essential protections.