
A proposed AI regulation ban is facing mounting bipartisan resistance from 40 state attorneys general, who oppose a 10-year federal prohibition on state-level oversight of artificial intelligence (AI). The measure has raised serious concerns about consumer protection and state authority.
The attorneys general argue that the proposed ban would dangerously centralise AI regulation at the federal level, depriving states of their constitutional authority to safeguard residents from the emerging risks of rapidly developing AI technologies. They warn that blocking state-level legislation would create a regulatory vacuum, leaving consumers defenceless and slowing the development of responsive safeguards.
State Leaders Oppose AI Regulation Ban Over Consumer Risk
Reuters reports that a growing bipartisan coalition of state attorneys general opposes a federal proposal restricting state AI laws for ten years. This provision, quietly included in the House Energy and Commerce Committee’s markup, aims to override new AI regulations passed by states.
On Friday, 40 attorneys general, including Republicans from Ohio, Tennessee, Utah, Virginia, and Arkansas, sent a joint letter to Congress urging lawmakers to reject the measure. The pushback comes as the House Budget Committee prepares to consider the bill on Sunday.
Though brief, the AI provision carries major implications, and its sudden appearance during the markup session alarmed legal experts and state officials, who fear it favours tech giants lobbying for minimal national oversight standards. If enacted, the clause would give the AI industry ten years free of state regulation, benefiting companies seeking weaker, centralised federal rules.
Moreover, the AI regulation ban would prohibit state and municipal governments from regulating AI models, systems, or automated decision-making technologies. If passed, it could invalidate rules governing a wide range of applications, including commercial products such as ChatGPT and AI used in employment screening, housing eligibility, education, public infrastructure, and government operations.
States Already Leading on AI Protections Face Legal Limbo
Several states have already enacted AI-specific legislation. California, for example, home to major AI companies such as OpenAI, Alphabet, Meta, and Anthropic, has established several laws targeting high-risk AI applications. These include criminal penalties for the unauthorised creation of sexually explicit deepfakes, limits on fraudulent AI-generated political content, and mandated disclosure when patients interact with AI in healthcare contexts.
Furthermore, health maintenance organisations in California are not permitted to use artificial intelligence instead of clinicians when establishing medical necessity. Illinois also mandates transparency and consent for AI use in video interviews. If the AI Regulation Ban passes, these laws could be invalidated or rendered unenforceable.
California’s Attorney General Rob Bonta, whose state has become a regulatory leader in AI oversight, sharply criticised the ban. He said in a statement:
Efforts to prevent states from establishing and enforcing sensible AI safeguards undermine our ability to protect the public. States must retain the authority to respond to new and evolving technologies.
Conclusion
The AI regulation ban also faces substantial obstacles in the Senate, particularly under the Byrd Rule, which bars non-budgetary provisions from reconciliation bills. Despite broad bipartisan interest in regulating artificial intelligence, many proposals remain stalled in legislative gridlock. The Trump administration’s push to centralise AI governance is consistent with its broader deregulation strategy for emerging technologies.
This, however, contrasts with growing momentum at the state level: nearly half of US states have passed legislation prohibiting the use of AI-generated deepfakes in political campaigns. As federal efforts lag, states continue to lead the way in addressing the ethical and societal issues posed by rapidly evolving AI technologies.