
New York has passed a bill aimed at preventing disasters caused by frontier AI models developed by tech giants like OpenAI, Google, and Anthropic. The bill would require accountability, transparency, and reporting from big AI labs; it is currently awaiting Governor Kathy Hochul’s decision. The legislation is being hailed as a milestone for AI safety and has the backing of AI pioneers Yoshua Bengio and Geoffrey Hinton. If signed, it would create the first legally binding safety regulations for advanced AI systems in the US.
New York’s Bold Move to Regulate AI
New York’s response to growing concerns about advanced AI systems is the Responsible AI in Safe Environments (RAISE) Act. The law specifically targets frontier AI models that are available to New York users and were trained using enormous amounts of processing power. It focuses on high-risk scenarios, including those that could injure more than 100 people or cause more than $1 billion in economic damage.
Unlike California’s SB 1047, which was vetoed, the RAISE Act does not include more stringent mandates that could constrain innovation. Senator Andrew Gounardes, a co-sponsor of the bill, spoke of the need to act before these technologies advance beyond human control. “Those who are most aware of the threats are warning us,” he said. “That’s not something to ignore.”
Big Tech Divided Over New York’s RAISE Act
The RAISE Act would compel big AI companies to share logs of incidents, release detailed safety reports, and uphold security compliance. Violations would carry civil penalties of up to $30 million, a strong deterrent for companies doing business in New York. Venture capital firms like Andreessen Horowitz and Y Combinator have strongly criticized the bill, labeling it harmful to the US tech ecosystem.
One post on X called it “a stupid, stupid state-level AI bill” that weakens American leadership as rivals surge ahead. Meanwhile, Anthropic, while advocating for federal transparency laws, stated that the RAISE Act may unintentionally harm small businesses.
New York Assemblymember Alex Bores countered critics by clarifying that the bill only applies to firms whose models are trained using over $100 million in computing resources. This threshold excludes small startups and academic projects. “We’ve designed this to be fair and focused,” said Bores in a recent interview.
Frontier AI Models Face New State Oversight
The bill clearly defines its scope, limiting its application to businesses creating frontier AI models for New York users. This ensures regional accountability even for globally based tech giants like OpenAI or DeepSeek. The law enforces strict transparency requirements without going so far as to mandate a “kill switch.”
Advocates of this kind of tech regulation contend that it is necessary for both public safety and long-term innovation, despite industry resistance. Assemblymember Bores added that, since New York has the third-largest state economy, companies would be unlikely to relocate.
Final Words
The RAISE Act may soon become a model for other jurisdictions trying to balance innovation with AI safety. Some in the industry view it as a positive change, while others see it as a threat to growth. By focusing squarely on large-scale systems, the bill shows that thoughtful, well-considered tech regulation is both necessary and feasible.