Updated Dec 20
New York Governor Under Fire as Major AI Bill Returns with Weakened Provisions

Governor Kathy Hochul Adjusts AI Oversight Bill

Governor Kathy Hochul has sent back the RAISE Act — New York's prominent AI bill — with substantial revisions. Critics are up in arms over the weakened developer obligations and penalties, arguing it caters to Big Tech, while proponents appreciate the alignment with federal and California policies.

Background of the RAISE Act and Initial Legislative Intentions

The RAISE Act, formally known as the Responsible AI Safety and Education Act, originated as an effort by the New York State Legislature to establish stringent oversight and regulatory frameworks for the development of high‑risk artificial intelligence technologies. The initiative was driven by growing concerns about the risks associated with advanced AI systems, including misinformation, bias, and safety incidents. The Act took a comprehensive approach, requiring AI developers to adhere to strict safety protocols, conduct thorough risk assessments, and promptly report any safety incidents. The intention was to create a robust state‑level regulatory structure that could serve as a model for AI governance, ensuring that AI technologies were developed and deployed in a manner that prioritized public safety and transparency.

Governor Kathy Hochul's decision to send the RAISE Act back to the legislature with substantial revisions marked a significant shift in New York's approach to AI regulation. Her revisions aimed to align state policy more closely with federal guidelines and those of other states, particularly California, which had enacted similar laws. The revised bill significantly scaled back the developer obligations and penalties that had been central to the original proposal. The move was perceived as an attempt to streamline regulatory requirements so as not to stifle innovation or create a fragmented regulatory landscape for tech companies operating across jurisdictions, reflecting a broader trend of states adjusting their legislation in anticipation of a uniform federal framework.

Governor Hochul's Substantial Revisions and Justifications

Governor Kathy Hochul's decision to return the RAISE Act with revisions that align New York's approach more closely with federal guidelines and California's policies has sparked considerable debate among lawmakers, advocacy groups, and industry leaders. According to The New York Times, these revisions dramatically reduce the obligations and penalties for AI developers, provoking criticism for allegedly ceding oversight responsibilities to the industry. The adjustments are seen as a move to harmonize state regulations with broader national standards, ostensibly to foster innovation and prevent a fragmented regulatory landscape.

Main Changes in Obligation, Timelines, and Penalties

The recent revisions to the New York RAISE Act, as reported by The New York Times, highlight significant changes in the obligations imposed on AI developers, as well as adjustments to timelines and penalties. Originally, the RAISE Act required AI developers to adhere to strict safety and security policies, conduct extensive testing for potential "critical harms," and report safety incidents within very short time frames. Failure to comply could have resulted in financial penalties of up to $10 million for initial violations and $30 million for repeated offenses. Governor Kathy Hochul's revisions, which shift the state toward alignment with federal and California policies, significantly lessen these obligations. For instance, the time frame for reporting safety incidents has been extended from days to two weeks, penalties have been reduced to $1 million per violation, and several previously mandatory safety planning and testing provisions have been diluted or removed entirely. These changes have prompted mixed reactions, underscoring the tension between regulatory oversight and industry flexibility.

The governor's decision to adjust the RAISE Act appears motivated by a desire to harmonize New York's AI regulations with federal expectations, aiming to avoid a fragmented set of rules across states that could hinder technological innovation. Analysts have pointed out that while the federal executive branch seeks to create a unified national AI policy by preempting conflicting state regulations, New York's revised approach reduces the regulatory burden on AI developers by aligning with these broader federal goals. This reflects concerns from industry leaders who argue that disparate state laws complicate innovation and increase compliance costs.

Critics, however, worry that these softened measures may weaken the effectiveness of AI regulation in safeguarding against the risks posed by advanced AI technologies. Advocates argue that the initial bill's stringent requirements were critical to ensuring robust safety protocols and rapid incident response, both essential to mitigating harms such as misinformation and bias. Reduced penalties and longer reporting timelines could weaken the incentives for AI developers to rigorously test and monitor their systems, increasing the likelihood that harmful AI behavior goes unchecked.

Moreover, the broader legal and political context surrounding the governor's revisions includes the prospect of federal preemption. The White House has indicated intentions to review state AI laws for conflicts and to prioritize consistent federal standards, which may render state‑specific regulations less relevant. New York's changes are interpreted as a step toward meeting these federal aspirations, potentially simplifying compliance for businesses operating in multiple states. Nonetheless, this alignment has been criticized by some lawmakers and safety advocates who fear it may undermine state‑level regulatory power and diminish the protections needed to proactively manage AI risks.

Diverse Reactions and Criticisms from Lawmakers and Advocates

The return of the RAISE Act by Governor Hochul with significant amendments has stirred varied reactions among lawmakers and advocates. Critics have expressed deep concern over what they perceive as a retreat on AI safety, accusing the governor of succumbing to pressure from Big Tech. According to The Prospect, the modifications significantly reduce developer obligations, weaken penalties, and extend reporting timelines. This, critics argue, undermines the bill's original intent of enforcing stringent safety regulations for high‑risk AI models.

In contrast, some lawmakers and industry advocates have lauded the governor's revisions as a pragmatic way to align New York's AI regulations more closely with federal standards and reduce the potential for a patchwork of conflicting state laws. As highlighted in a Greenberg Traurig LLP report, this alignment could foster innovation by providing a more predictable regulatory environment. Proponents of the revised bill believe it strikes a balance by maintaining necessary safeguards without imposing onerous burdens on developers. This perspective underscores an ongoing debate about the equilibrium between regulation and innovation in the rapidly evolving AI sector.

The varied responses to the amendments also reflect broader national and industry‑wide tensions over AI regulation. The White House's directive to harmonize state laws with federal AI policies suggests an overarching push toward national standards, potentially overriding stricter state mandates to avoid hindering technological advancement. Some see this as necessary to prevent legal fragmentation that could impede the deployment and growth of AI technologies, while others fear it might curtail individual states' ability to protect their residents from emerging AI risks.

Amid these discussions, advocacy groups remain vocal in their opposition to the governor's amendments, calling for the reinstatement of stricter provisions to protect against misinformation, bias, and safety failures associated with advanced AI systems. As noted by Design It For Us, these groups emphasize the necessity of robust regulatory frameworks as AI technologies permeate everyday life, arguing that the integrity of such frameworks is paramount to safeguarding the public interest and ensuring accountability in AI development and deployment.

Comparisons with California and Federal AI Laws

New York's RAISE Act, initially designed to impose stringent safety regulations on AI developers, underwent significant changes under Governor Kathy Hochul that brought its provisions closer to the legal frameworks established by California and the federal government. The original RAISE Act was notable for its rigorous safety requirements and high penalties, including rapid incident reporting and comprehensive risk assessments. Hochul's revisions have softened these mandates, potentially altering the landscape of AI regulation in the state, according to The New York Times. Her adjustments align more closely with federal guidelines, emphasizing national uniformity and addressing the industry's concerns over a complex web of state laws.

The revised RAISE Act that emerged from Governor Hochul's office appears to mitigate some of the complexities developers might have faced under the original iteration by aligning with standards similar to California's SB 53. The move can be read as a pragmatic effort to ensure that New York does not become an outlier with overbearing requirements that stifle innovation and place undue burdens on developers. The adjustment has pleased many in the tech industry who argue that a patchwork of state‑level regulations could create barriers to growth, as noted in The Prospect.

Despite the softened stance, lawmakers and advocacy groups have criticized the revisions for significantly diluting the protections the RAISE Act originally intended. These critics point out that while the new provisions reduce financial penalties and extend reporting timelines, they may leave New Yorkers exposed to greater risks from frontier AI models. The amendments have sparked broad debate on social media and in legislative discussions about the balance between fostering innovation and ensuring comprehensive oversight, according to Greenberg Traurig's insights.

The federal government's posture on AI regulation has heavily influenced New York's decision to adjust the RAISE Act. With the administration advocating federal standards to prevent a fragmented regulatory environment, states like New York are reassessing their positions. The weakening of New York's AI law, driven by an intent to harmonize with national policies, illustrates a broader trend of states balancing rigorous local standards against federal expectations, a move that could see other states follow suit as federal preemption becomes more imminent, as highlighted in federal executive actions.

Implications for AI Developers and Public Safety in New York

The implications for AI developers following the revisions to the RAISE Act are substantial. The revised bill, as described by The New York Times, significantly reduces the obligations developers must meet: they face fewer financial penalties and have longer timelines to comply with safety and reporting requirements. While this reduction might lower the immediate cost of compliance for AI companies, it also raises concerns about whether the remaining safety and accountability measures are sufficient to protect the public from potential harms caused by advanced AI systems.

Federal Preemption Possibilities and Legislative Dynamics

The evolving possibilities for federal preemption, and the legislative dynamics surrounding AI regulation in the United States, represent a complex interplay between state initiatives and national policy aspirations. In the wake of Governor Kathy Hochul's significant revisions to the RAISE Act, questions about federal preemption and the harmonization of AI laws have come to the forefront. The governor's modifications reportedly bring the state's approach more in line with federal and California law, reducing stringent obligations on developers and aligning with national calls for consistency, as noted in The New York Times.

The case for federal preemption is rooted in a desire to avoid a fragmented regulatory environment, which could complicate compliance for AI developers operating across multiple jurisdictions. The federal executive branch has shown interest in creating a cohesive framework, signaling potential preemption of state laws that conflict with national standards. This federal push aims to address concerns from industry leaders about the economic risks and inefficiencies of patchwork regulation, as outlined by the White House.

These legislative dynamics are not without controversy. Critics argue that by diluting state‑level mandates such as those originally proposed in New York's RAISE Act, governors and state legislators may be ceding too much control to federal authorities and to the very industries they seek to regulate. Advocates of stringent AI regulation assert that robust state laws are necessary to manage the unique risks posed by advanced AI technologies, including privacy breaches and misinformation, as covered by The Prospect. Meanwhile, proponents of federal preemption argue that national consistency could foster innovation by providing clear and uniform guidelines and reducing the compliance burden on tech companies, according to DLA Piper's analysis.

Role of Advocacy and Public Campaigns in Shaping the Bill

Advocacy and public campaigns have played a fundamental role in shaping legislative outcomes, a dynamic highlighted by New York's RAISE Act. These campaigns, driven by advocacy groups and civic organizations, significantly shaped public discourse, spotlighting concerns about safety regulations and their potential dilution under industry pressure. According to The Prospect, critics argued that Governor Hochul's revised bill was a capitulation to Big Tech and urged stronger protections against AI's potential risks.

Public campaigns often harness social media to amplify their message, mobilizing grassroots efforts to influence legislative processes. In the case of the RAISE Act, platforms like X and Mastodon became arenas for advocacy groups to generate momentum and public awareness. As Design It For Us highlighted, advocacy campaigns sent thousands of emails to Governor Hochul urging her to sign strong protective measures into the bill, exemplifying how coordinated public campaigns can stir significant public and governmental response.

The role of advocacy is not limited to public campaigns; it also includes the strategic legal and policy analyses that inform decision‑makers. Policy organizations often provide legislative summaries and impact analyses that guide legislators and the public on the implications of particular provisions. These actors, as noted by Greenberg Traurig, play a crucial role in dissecting legal frameworks and arguing for harmonization with broader policy intentions.

Through persistent advocacy and public awareness efforts, changes in legislative drafts can be resisted or accelerated, depending on the momentum these campaigns generate. The criticism surrounding Hochul's adjustments to the RAISE Act illustrates the tension between regulatory intent and industry influence. As New York Senate documents show, the advocacy community's response was pivotal in keeping the spotlight on the potential dilution of safety and reporting requirements, driving a narrative focused on accountability and protection against AI risks.

Advocacy also acts as a catalyst for political debate, bringing diverse viewpoints into the legislative arena. By rallying lawmakers and public interest groups, these campaigns can significantly alter a bill's trajectory. The debates around New York's AI regulations showcase advocacy's influence in framing legislative discussions essential to balancing innovation with public safety, as reflected in statements from Governor Hochul's office shaped by these ongoing advocacy efforts.

Future Implications and Policy Directions in AI Regulation

The recent developments surrounding New York's RAISE Act reflect a significant shift in the landscape of AI regulation, with implications not only for developers and residents but also for broader policy directions. According to The New York Times, the revisions proposed by Governor Kathy Hochul have stirred considerable debate. The revisions, which align more closely with federal and California policies, raise concerns about reduced oversight and accountability, as they appear to cede more control to industry players.

Economically, the revised RAISE Act stands to affect both compliance costs and business operations within New York. By lessening immediate regulatory burdens, it may lower compliance costs for AI companies, since developers face fewer immediate obligations and lighter penalties. However, the reduction in stringent requirements could inadvertently slow progress on safety protocols and transparency, increasing risk over time and creating an environment in which voluntary industry standards must fill the gaps left by reduced state mandates.

Socially, the adjusted provisions could mean slower implementation of the transparency and safety measures needed to protect the public from AI system failures and other risks. This raises concerns among advocates, who argue that the changes could leave communities without sufficient preventive guardrails. As critics note, reduced penalties and extended reporting timelines weaken deterrence and delay the interventions needed to address emerging AI incidents, which could disproportionately affect vulnerable populations.

Politically, the revisions have heightened the national conversation around the balance between state and federal approaches to AI regulation. Alignment with broader federal signals may energize advocacy for a uniform national policy that preempts state‑level initiatives. As referenced in federal directives, there is increasing pressure on Congress and federal bodies to establish consistent AI governance frameworks to prevent a fragmented regulatory environment, suggesting a likely acceleration of federal legislative and regulatory responses.

In summary, New York's amended AI bill points to a complex interplay of reduced state‑level obligations and potentially increased federal oversight. Whether these changes will spur innovation while maintaining public safety through voluntary or federalized standards remains a critical question. The legal and regulatory landscape will continue to evolve, with implications for stakeholders at every level, from local communities to multinational corporations; observers will need to monitor federal actions, industry responses, and subsequent state maneuvers to assess the true impact of these legislative developments on AI governance.

Resources for Tracking Legislative and Legal Developments

Staying informed about legislative and legal developments, especially in rapidly evolving areas like AI regulation, is crucial for both industry participants and the public. Given the substantial changes to New York's AI legislation under Governor Hochul, individuals and organizations need reliable resources to track these developments. The New York Times offers comprehensive coverage and analysis of legal changes like the RAISE Act, enabling readers to understand shifts in policy and their implications.

Several specialized platforms offer detailed insight into legislative processes. Outlets like Law360 and JD Supra provide legal professionals and interested parties with updates on new bills, amendments, and regulatory impacts. These resources often include expert analyses of emerging laws, such as the weakened provisions of the RAISE Act, helping stakeholders navigate the legal landscape.

Government websites can also be invaluable. The New York State Senate's website offers direct access to bill texts and amendments, allowing readers to track legislative changes in real time. For those following the RAISE Act, such sites provide firsthand information on the legislative process and the final text.

In addition to regional sources, federal resources play a crucial role. The Governor's Office publishes official statements and updates on legislation with significant impact on industry and public policy, while federal‑level material, such as executive orders on AI regulation, highlights broader policy shifts and potential preemption of state laws, essential context for situating state‑level developments within a national framework.

News and policy analysis outlets like The Prospect offer critical evaluations of, and public reactions to, significant legal developments. These resources help decipher complex legal narratives, providing balanced perspectives on controversial moves such as the adjustments to New York's RAISE Act, and a holistic view strengthens the ability to anticipate and respond to legislative changes affecting AI regulation.
