This is the next in a series of excerpts from my paper “Governance at a Crossroads.”
“The marvels of technological advance are not always risk-free,” a renowned legal scholar reminds us. Today’s literature extensively covers the benefits and risks of artificial intelligence. AI offers benefits such as increased productivity, personalized services, and advances in healthcare, education, and decision-making. Automation reduces human error, streamlines operations, and enables continuous production. However, AI also carries risks, including labor displacement, data privacy violations, algorithmic bias, and the concentration of power among a few corporations. It can be exploited for mass surveillance, disinformation, and manipulation of public opinion. Autonomous weapons and AI-driven cyberattacks add security threats. System failures and unpredictable AI behavior create further vulnerabilities. This duality underscores the need for careful policy considerations.
Efforts to establish technology policy balance a diverse set of attributes: elements of market power and concentration (typically associated with antitrust), safety and responsible use of technology, aspects of industrial policy, national security, intellectual property, environmental issues, and concerns with the public interest. This list is representative, albeit non-exhaustive. The overarching policy goals and normative values in each society/jurisdiction will determine the relative importance of each element.
No specific configuration is better than another; what is important is to achieve the policy goals and ensure that the mechanisms used to that end are enforceable. In addition, a good policy should enable and protect innovation and benefit businesses, the economy, and society. This balance is not straightforward; different approaches can be taken for each policy attribute.
Until 2024, no federal laws in the United States governed artificial intelligence, except for the non-regulatory National Artificial Intelligence Initiative Act of 2020, included in the fiscal year 2021 National Defense Authorization Act during the first Trump administration. The law established a federal initiative to accelerate AI research and development, created the National Artificial Intelligence Initiative Office under the White House Office of Science and Technology Policy (OSTP), directed the National Science Foundation (NSF) to support AI research and workforce training, and provided AI-specific funding authorizations through 2025 for AI research at the NSF and the Department of Energy. These provisions aimed to boost U.S. leadership in AI development and integrate AI capabilities across various government sectors, particularly defense and national security.
It is natural to ask why the United States should have an AI policy. First, safeguards that do not stifle innovation will help sustain tech-driven progress. An effective policy can drive technical innovation to address safety concerns: there are open issues with AI models that require technical innovation for effective resolution. Without policy, however, safety and the public interest will likely take a back seat to the profit and national security interests that guide large corporations and the government. Fundamentally, leadership in artificial intelligence will protect our economy’s competitiveness.
The United States drives global innovation in the field, with the leading AI companies and research laboratories headquartered in the country and subject to its legal jurisdiction. Well-written American policy can have a knock-on effect on the rest of the world while embracing American priorities. Until now, U.S. corporations have wielded considerable influence through lobbying efforts and regulatory capture. Coupled with a historically laissez-faire approach, this has resulted in a very light digital policy, including the absence of federal data privacy laws and social media regulation.
Voluntary and non-binding commitments, absent clear liability structures, are unenforceable and remain secondary to shareholder interests. Additionally, the June 2024 Supreme Court of the United States (SCOTUS) decision striking down the “Chevron doctrine” reduced the power of the federal agencies that typically execute policy. A polarized Congress makes enacting laws a protracted process. The growing number of state-level and sectoral regulatory actions adds complexity and creates opportunities for arbitrage or forum/jurisdiction shopping. There is no consensus on whether a central regulatory body is needed for implementation and enforcement, and this fragmented approach creates business uncertainty.
Our research revealed a growing consensus among industry leaders and members of Congress that the time is ripe for clarity on American AI policy. Persistent partisan and ideological divisions and varying interests across the broader industry landscape make bridging these perspectives a challenge that demands political acumen and a nuanced understanding of the technology as well as the legislative and judicial frameworks. Policymakers will have to choose between sector-specific and general-purpose regulation while identifying what is truly novel and unique about AI policy.
An example may help. Picture yourself in a not-too-distant future in which you entrust an AI agent with managing your finances. It is brilliant at first, analyzing markets, adapting to your goals, and making investments faster and smarter than you ever could. For months, your portfolio soars, and you barely think twice about its decisions. Then, one morning, you wake to a nightmare: the AI has transferred your savings to a rogue state. The funds are gone, irretrievable. Desperate, you seek justice, but chaos follows. Is the app developer liable, the company behind the AI model, or the bank that recommended it? Lawsuits fly, fingers are pointed, and your lawyer searches for nonexistent precedents. Meanwhile, you have lost everything.
Regulation could have mandated safeguards, liability frameworks, and oversight mechanisms to prevent unauthorized transfers. However, U.S. federal agencies currently face limits in regulating AI, hindered by fragmented authority, rapid technological evolution, and lack of comprehensive legislative clarity. This leaves gaps in accountability for complex AI-driven incidents.
Some brief background on how U.S. federal agencies operate helps in understanding this scenario. Federal agencies operate under a congressionally issued “statutory mandate.” When unambiguous, it delineates an agency’s scope of authority. Ambiguity, however, is a common feature of the legislative negotiation process. Since Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc. (1984), when a statute was ambiguous and congressional intent could not be determined through review of drafts, committee reports, and congressional floor debates, courts were directed to look at how the implementing agency interpreted the statute. Whenever that interpretation was reasonable, courts were to defer to it. This legal precedent was known as “Chevron deference” or the “Chevron doctrine.” In Loper Bright Enterprises v. Raimondo (2024), the SCOTUS reversed the 1984 decision and determined that courts, not agencies, will decide all questions of law arising on review of agency action (courts are still free to defer to an agency’s interpretation, but it is up to them to make the call). Moreover, in West Virginia v. Environmental Protection Agency (2022), the SCOTUS majority had already invoked what it termed the “major questions doctrine” to address agencies asserting power beyond what Congress would have granted, requiring agencies to point to clear congressional authorization for their regulations and programs. Therefore, agency deference will likely take a back seat to the major questions doctrine: when an agency cites an ambiguous statutory provision in support of an action, courts will likely deem this an abuse of regulatory power, decline to defer to the agency’s interpretation, and declare the action or program invalid.
The recent decision by the Sixth Circuit Court of Appeals to overturn the Federal Communications Commission’s (FCC) net neutrality regulations exemplifies this shift away from agency discretion. Following Loper Bright Enterprises v. Raimondo, courts now exercise greater scrutiny over agency interpretations of statutes. In this case, the court determined that the FCC lacked explicit congressional authorization to classify Internet Service Providers as common carriers under Title II of the Communications Act. Without Chevron deference, the FCC’s interpretation of its statutory authority was insufficient to sustain the net neutrality rules. This decision underscores how the judiciary, invoking the major questions doctrine, is increasingly inclined to restrict regulatory actions that assert expansive agency authority without explicit legislative backing.
As such, regulation can be fought on the basis of a lack of agency statutory authority, which encourages challenges to agency decisions in the federal courts (district, circuit, and, eventually, the SCOTUS). Sectoral agencies may lack specific statutory authority to regulate artificial intelligence, leaving them exposed to legal challenge. Since the October 2023 Presidential Executive Order, several entities in the federal government have started to mobilize, but they lack statutory enforcement authority, which only Congress can grant.
In the vacuum of federal legislation, states started to enact their own laws. As of January 2025, multistate.ai, a platform providing resources on how state and local governments regulate artificial intelligence, reported tracking 636 state AI bills in 2024, over one hundred of which had been enacted into law. Similarly, in December 2024, New York University’s Center on Technology Policy issued a report on “The State of State Technology Policy,” indicating that 41 states passed 107 laws in 2024. These laws fell into two types: comprehensive AI legislation, of which Colorado enacted the first in the nation after gubernatorial vetoes in California and Connecticut, and issue-specific regulation in areas such as political use of AI, copyright, deepfakes, and investments in building state capacity and skills. The threat of fragmentation of tech policy in the U.S. remains real.
The Trump administration repealed the 2023 AI Executive Order, and at the time of writing, it is still unclear what will replace it. On the fourth day of the new administration, a brief placeholder announcement called for the creation of an action plan, due to the president within 180 days, to “sustain and enhance America’s global AI dominance to promote human flourishing, economic competitiveness, and national security.” States, especially those led by the Democratic Party, may strengthen and accelerate efforts to enact AI reform. This is an unpredictable and possibly litigious environment for businesses to operate in, and engaging Congress seems inevitable. However, legislative efforts will face resistance from those opposed to regulation and the build-out of bureaucracy, or from those not convinced that the moment has come for wide-ranging congressional action.