
A provision of a large U.S. tax and spending package, backed by Republicans, aims to prevent states from setting their own AI policies. The measure, tied to federal broadband funding, would impose a temporary moratorium barring states from enforcing their own AI regulations. Although the Senate parliamentarian cleared the provision procedurally, opposition to it is bipartisan, and it could still be stripped by a simple majority vote. Tech companies advocate for the provision because it would give them a unified national framework rather than a patchwork of state laws. Critics counter that such a shift would strip away democratic safeguards at the state level.
Big Tech’s Lobbying Muscle Reshapes AI Policy Landscape
The fight over AI regulation reveals the growing power of tech lobbyists in Washington. Lobbying activity on AI tripled in just one year, from 158 organizations in 2022 to 451 in 2023, with major players like Meta, Amazon, and Google spending over $10 million each to influence early regulatory outcomes. Trade groups like Incompas have spearheaded coordinated efforts to preempt state regulation, arguing for federal consistency to avoid a “compliance maze” across states.
This mirrors earlier trends where dominant industry players shape rules during major tech shifts. Publicly, tech companies embrace “responsible AI,” but critics argue this posture masks a push for minimal oversight. Consumer protection groups, by contrast, operate with a fraction of the lobbying budgets and struggle to counterbalance corporate influence.
The timing of regulation is critical. Delayed oversight often leads to public harm, as seen in previous tech booms, and advocates for AI guardrails argue that history shouldn't repeat itself. They warn that without early intervention, AI tools could entrench systemic bias, enable fraud, or escalate disinformation — risks exacerbated by unchecked industry control. The current moratorium effort underscores how early AI policy may be shaped more by capital than by public consensus.
Global Pressure Fuels Call for U.S. Regulatory Uniformity
Tech firms are also using global regulatory fragmentation to push for a unified U.S. framework. The EU’s AI Act enforces strict, risk-based rules across member states, setting a global precedent. In contrast, the U.S. has no national standard, and states like California have taken the lead with AI accountability laws. This divergence is prompting tech leaders to argue that state-by-state laws weaken America’s competitive edge against rivals like China. Incompas CEO Chip Pickering warned that without national consistency, “U.S. leadership in AI is at risk.” Japan and Singapore, meanwhile, are promoting innovation with responsible AI rules, adding to the international compliance burden for developers.
Industry leaders warn of an “innovation tax” caused by juggling conflicting laws across jurisdictions. This argument has appeal in Congress, especially as lawmakers weigh the geopolitical race to dominate AI. But critics say that economic competitiveness shouldn’t override civil rights and consumer protections. The international pressure exposes a deeper reality: AI governance is as much a geopolitical question as a technological one. For the U.S., whether the rules are ultimately set by federal regulators, states, or corporations will shape not only domestic innovation but also the country’s influence and credibility on artificial intelligence worldwide.
A Familiar Pattern in Tech Regulation History
The AI policy debate follows an old pattern: rapid innovation first, regulation later. As in earlier industrial revolutions, AI tools are transforming society faster than governance can keep pace. Big Tech is attempting to write its own rules before the technology’s harmful impacts are fully understood, just as railroad, oil, and telecommunications corporations did before it. Proponents of oversight argue that this time must be different, with guardrails erected in advance. The difficulty lies in balancing innovation and accountability. If history repeats itself, regulation will arrive only after scandal or public harm forces the issue.