New York Poised to Make History with Landmark AI Regulation

New York Governor Kathy Hochul is facing a pivotal decision that could establish the state as a trailblazer in regulating advanced artificial intelligence. The highly anticipated signing of the RAISE Act (S6953B/A6453B), passed by both legislative chambers with near-unanimous support, would make New York the first U.S. state to adopt a comprehensive legal framework for so-called “frontier models,” with direct implications for tech giants like OpenAI, Google, and Anthropic.

Introduced in March 2025 by Senator Andrew Gounardes and Assemblymember Alex Bores, the bill received final approval on June 12 with a 58-to-1 vote in the Senate. This broad bipartisan backing reflects a growing consensus around the need to regulate AI systems that cross specific thresholds of complexity and capacity—namely, those trained with more than 10^26 computational operations or with development costs exceeding $100 million.

If signed by Hochul before mid-July, the law would take effect 90 days later, in October 2025. This gives affected companies limited time to comply with new requirements, including the publication of security protocols, mandatory annual audits, and a strict 72-hour deadline to report critical security incidents.

The RAISE Act was designed to avoid the pitfalls of similar efforts, such as California’s failed SB 1047. It removed controversial provisions like “emergency off-switches” and narrowed liability for derivative models. It also exempts small developers and academic research, focusing regulation on the most powerful and potentially risky systems.

Developers of frontier models would be required to implement protective measures against unauthorized access, assess threats such as the use of AI for bioweapons or automated criminal activity, and conduct annual reviews of model capabilities. Independent audits would play a central role: their findings would have to be submitted to regulators, and the requirement is expected to create demand for a new market of AI safety experts.

The law also introduces significant penalties: a first violation could cost up to 5% of total computing expenses (capped at $10 million), while repeat offenses could trigger fines of up to 15% (capped at $30 million). For a typical $100 million model, this translates to penalties ranging from $5 million to $15 million, meaningful figures even for the largest tech firms.
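
As a rough illustration of that arithmetic (a sketch based only on the percentages and caps reported above, not on the bill's text; the function name is hypothetical), the penalty range cited for a $100 million model follows directly:

```python
def estimated_penalty(compute_cost_usd: float, repeat_offense: bool = False) -> float:
    """Illustrative reading of the penalty figures cited in the article (not legal text).

    First violations: up to 5% of total compute spend, capped at $10 million.
    Repeat violations: up to 15% of total compute spend, capped at $30 million.
    """
    rate, cap = (0.15, 30_000_000) if repeat_offense else (0.05, 10_000_000)
    return min(rate * compute_cost_usd, cap)

# A model with $100 million in compute costs, as in the article's example:
print(estimated_penalty(100_000_000))                       # 5000000.0  -> $5 million
print(estimated_penalty(100_000_000, repeat_offense=True))  # 15000000.0 -> $15 million
```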

The bill has split the AI ecosystem. Nearly 40 leading researchers, including Geoffrey Hinton and Yoshua Bengio, sent a letter to Governor Hochul endorsing the bill as “measured and sensible.” Hinton, often dubbed the “godfather of AI,” has repeatedly warned of AI’s existential risks, while Bengio has consistently advocated for regulation.

On the other hand, opposition from the tech industry has been fierce. Venture capital firm Andreessen Horowitz has criticized the bill, with general partner Anjney Midha dismissing it as “another stupid state law on AI that will only harm the U.S. as our adversaries advance.” The American Innovation Network has launched a six-figure lobbying and ad campaign aimed at blocking the law.

Critics argue the RAISE Act could turn technical challenges into bureaucratic burdens. The American Enterprise Institute warned that it prioritizes box-checking over real safety and estimated that compliance costs could easily surpass initial projections, reaching as much as $15 million annually for some firms.

Beyond New York, the RAISE Act enters a growing global conversation about AI governance. While inspired in part by the European Union’s AI Act, its focus is narrower, targeting only the most powerful systems with a risk-based approach. Yet enforcement may prove difficult: questions remain about whether a single state can regulate global corporations and how to interpret ambiguous standards like “unreasonable risk of critical harm.”

Another major hurdle is the lack of qualified AI auditors. Few organizations currently have the technical expertise to evaluate frontier models, potentially creating a bottleneck in enforcement.

Industry estimates suggest annual compliance could cost affected companies between $5 million and $15 million, though final figures will depend on how the law is implemented.

Hochul’s decision will have consequences far beyond state lines. Enacting the RAISE Act would establish the first comprehensive state-level regulation of frontier AI in the U.S., potentially inspiring similar efforts across the country. Over 40 states considered AI legislation in 2024, highlighting the growing interest in state action amid federal inaction.

At the national level, several members of Congress have proposed a 10-year moratorium on state AI regulation, legislation that would effectively nullify the RAISE Act. However, more than 260 state lawmakers from all 50 states have voiced opposition to what they describe as federal overreach, defending the right of states to legislate on transformative technologies.

Governor Hochul now stands at a crossroads. Her decision will determine whether New York leads the next phase of AI governance in the U.S.—or whether the limits of state power will allow the technology to continue evolving largely unchecked.
