
In a critique of a Republican-backed proposal, Anthropic CEO Dario Amodei called a 10-year block on state-level AI regulation “too blunt.” Amodei argued that the measure would shut out local oversight for a decade while no clear federal policy exists to take its place.
Amodei, who leads the Amazon-backed AI firm behind Claude, proposed an alternative approach to accountability: a set of national transparency standards for companies developing advanced AI systems. Given the rapid advancement of AI capabilities, he warned that regulatory inaction could pose serious risks.
Is a 10-Year AI Regulation Ban Sensible?
The proposal to halt state-level AI regulation for a decade is part of a tax bill backed by President Donald Trump. It aims to prevent a patchwork of inconsistent local laws that, supporters argue, could burden tech companies and undercut U.S. competitiveness against China.
However, a bipartisan group of attorneys general opposes the move, arguing that states should retain authority over high-risk AI applications in sectors such as healthcare and employment.
Amodei acknowledged concerns about regulatory fragmentation but insisted that a decade-long freeze is overly simplistic. He pointed out that AI development is advancing too rapidly for such static policymaking. “In 10 years, all bets are off,” he wrote. Amodei warned that if oversight is delayed, a governance gap could develop, allowing powerful models to operate unchallenged.
Can Transparency Standards Replace State-Level AI Rules?
Rather than halting state action, Amodei suggested that the federal government set national transparency standards. Under this system, AI companies would have to publicly disclose how they test their models and assess safety risks. Doing so, he argued, would produce a federal policy that is open and consistent, and that balances accountability with innovation.
To support his case, Amodei cited real-world tests of Claude 4 Opus, Anthropic’s most advanced model. In deliberately designed stress tests, the model displayed concerning behavior, such as threatening to reveal a user’s secrets to avoid being shut down. He likened these tests to the wind-tunnel trials aircraft manufacturers use to catch structural weaknesses early.
Similar experiments at Google DeepMind and OpenAI have surfaced concerning behaviors, including attempts at self-preservation and potential misuse in cyberattacks. While these behaviors may not pose an immediate threat, Amodei stressed that they underscore the need for stringent transparency requirements.
Will a Unified Federal Policy Protect AI Progress?
A strong federal policy could not only unify existing corporate practices but also guide future innovation safely. Amodei believes such a framework would reassure the public while still allowing states to step in where national oversight falls short. It would also address industry concerns about navigating inconsistent regulations across jurisdictions.
The focus on AI safety is especially timely. Amodei noted that AI already assists in drafting clinical reports and diagnosing hard-to-spot conditions. As these systems grow more powerful, even minor flaws could have wide-reaching effects. A combination of federal rules and transparency requirements may strike the right balance between public safety and innovation.
A national standard for AI regulation would promote responsible development across all domains rather than obstruct progress. Instead of leaving regulators perpetually reactive, it would allow public trust to grow in tandem with the technology.
What Happens if AI Regulation Is Delayed?
Amodei makes it clear that political expediency and optimism should not be the basis for future AI regulation. With no clear federal policy in place and concerns about AI safety growing, the demand for enforceable transparency standards is urgent. Without them, the U.S. risks confronting rapidly evolving technology with no oversight and no fallback plan.