California Governor Gavin Newsom has vetoed an AI safety bill, effectively siding with Silicon Valley against sweeping requirements for powerful AI models. The Governor’s decision comes after weeks of contentious debate among policymakers, tech companies, developers, civil society, and academics over how to balance safety and competitiveness.
Newsom ultimately took the side of major tech companies and some leading academics, such as Stanford’s Dr. Fei-Fei Li, arguing that while “well-intentioned,” SB 1047 would have been “onerous for the state’s leading AI companies.” State Senator Scott Wiener, who authored the bill, has already pushed back on the decision, calling it a massive setback that leaves technology companies facing “no binding restrictions from U.S. policymakers.”
California’s debate offers a glimpse of what could happen federally if and when Congress decides to engage on AI legislation, though such an outcome seems unlikely given the current polarized Congress and its reluctance to regulate the digital domain. However, the Supreme Court’s recent stance limiting federal agency discretion, together with Newsom’s veto of this expansive state legislative effort, suggests a growing need for Congress to step up. With the presidential race heating up, now is the time to examine where U.S. AI policy might head next.
How We Got Here
Obama Administration
The U.S. government’s focus on AI began in 2016 when the Obama administration’s National Science and Technology Council released a report that emphasized AI’s vast potential to open new markets and improve health, education, energy, and the environment. Its recommendations—such as increasing AI talent and ensuring fairness and efficacy in AI systems—remain relevant today. The report also focused on autonomous systems, though it underestimated AI’s progress, stating that “it is very unlikely that machines will exhibit broadly-applicable intelligence comparable to or exceeding that of humans in the next 20 years.” Just eight years later, the claim has proven overly cautious.
Trump Administration
The Trump administration prioritized AI as critical to U.S. economic and national security. Through the 2019 Executive Order on AI and the National AI Initiative Act of 2020, Trump’s policies aimed to double AI research investment, establish research institutes, and foster international alliances. These efforts reflected a belief that AI should bolster American innovation, industry, and values. Though the Trump administration focused on AI as a tool for government efficiency and global tech leadership, its approach laid significant groundwork for future administrations.
Biden Administration
The Biden administration’s AI policy began with the Blueprint for an AI Bill of Rights (2022), which set out five principles to guide the ethical use of AI, including protecting data privacy and preventing algorithmic discrimination. Soon after, the National Institute of Standards and Technology (NIST) released its AI Risk Management Framework (AI RMF), a voluntary toolkit to help organizations manage AI-related risks. These frameworks helped establish a shared language for AI risks but lacked the regulatory power to compel action.
In October 2023, President Biden issued Executive Order 14110, outlining the most comprehensive U.S. AI policy to date. The order tasked nearly every U.S. department and agency with preparing reports, guidelines, and consultations on AI, emphasizing privacy, equity, civil rights, and competition. Although expansive, the order faced criticism for its lack of enforcement power, leaving its true impact uncertain.
Recent Developments and What’s Next
Since the 2023 Executive Order, the National Telecommunications and Information Administration (NTIA) has released key resources aimed at shaping AI accountability and transparency, outlining pathways for improving information sharing, liability rules, and cross-sectoral capacity. In a separate report on open-weight foundation models, the NTIA found the current evidence insufficient to justify restrictions on their availability.
In Congress, more than 100 AI-related bills have been introduced since the rise of generative AI, focusing on algorithmic accountability, child protection, deepfakes, and consumer protection. Bipartisan proposals from Senators Schumer, Blumenthal, and Hawley have all attempted to tackle AI regulation, but none have gained clear legislative traction.
Internationally, the Biden administration continues to collaborate with global partners such as the EU and the UK on AI safety through forums like the G7, the OECD, and the AI Safety Summit. However, the U.S. still favors a risk-management model over Europe’s more rigid regulatory approach.
The Role of the Courts and Future Challenges
The U.S. legal system is also poised to play a more significant role in AI regulation. In June 2024, the Supreme Court’s decision in Loper Bright Enterprises v. Raimondo overturned Chevron deference, the doctrine under which courts deferred to federal agencies’ reasonable interpretations of ambiguous laws. Now courts, rather than agencies, will resolve those interpretive questions, a change that could weaken federal agencies’ ability to regulate AI without clear statutory authorization from Congress.
States like California are already stepping into the regulatory void, creating a patchwork of AI-related laws that is likely to spur legal battles over jurisdiction and authority. Without federal oversight, this fragmented approach risks stifling innovation while leaving major ethical and safety concerns unresolved.
Moving From Frameworks to Practice
Despite the extensive research and frameworks developed by the U.S. government, academia, and the private sector, there is now an urgent need to translate these guidelines into actionable, enforceable policy. As we look toward a new presidential administration and Congress, the next steps for U.S. AI policy will require grappling with foundational questions: How can we realign industry incentives toward safe, trustworthy AI? Should regulation focus on the technology stack or its uses? And how will the legal system address the rapid pace of AI development?
The answers to these questions will shape not only U.S. leadership in AI but also the global future of artificial intelligence.
Authors
Slavina Ancheva - Master in Public Policy candidate at the Harvard Kennedy School, Fulbright Bulgaria Scholar, and Belfer Young Leader, with experience as a Team Leader and Policy Adviser in the European Parliament. Specializes in AI, platform regulation, and transatlantic tech policy.
Paulo Carvão - Senior Fellow at Harvard's Mossavar-Rahmani Center for Business and Government, focusing on AI Policy. A former IBM executive, he advises tech startups and explores technology's impact on democracy and the role of entrepreneurship as a vehicle for social mobility.