One week before it was set to take effect, the Commerce Department's AI Diffusion rule is dead.
The rule, announced last January in the waning days of the Biden administration, introduced global licensing requirements for exporting advanced computing integrated circuits—notably GPUs—and AI model weights to manage risks associated with their proliferation. It outlined a framework for controlling AI diffusion with stringent security measures for storing model weights and constructing large computing clusters. It was met with mixed reviews and drew significant criticism, especially from the hardware industry.
Politics and lobbying aside, the rule's structure was complex. It classified countries into three groups: trusted allies (no restrictions), restricted nations such as China and Russia (presumed denial), and all others, which faced a tiered licensing framework with country-specific chip limits, security conditions for expanded access, and limited exceptions for low-risk exports.
Yesterday’s announcement aims to simplify the process and refocus on bilateral agreements. The AI Diffusion framework now seems to be another bargaining chip, pun intended, in the trade negotiations. It also represents a concession to pressure from the industry, notably from NVIDIA, and a lifeline to countries already engaged in deal-making with the current administration.
AI diplomacy and competition with China will play a significant role in shaping technology policy. All eyes are on the forthcoming AI action plan, expected to be published by July. The more than 10,000 public comments submitted in response to the government’s request for input earlier this year highlight the strong interest the topic has generated across both the market and broader society.
All indications are that the upcoming strategy (action plan) will focus on reducing regulatory burdens, promoting AI-capable infrastructure (including energy and the electrical grid), and providing incentives to help the United States maintain global leadership. This will increase the pressure on states to enact their own regulatory policies, risking fragmentation and a heavier compliance burden for AI startups and Big Tech alike. The public comment submitted by the prominent VC firm a16z (Andreessen Horowitz) highlights this concern:
“Given that the AI model market is national in scope and directly implicates interstate commerce, the federal government should take the lead in its promotion and regulation to provide uniformity and certainty for US model developers. In the last two years, the regulatory landscape for AI developers has shifted away from uniformity and toward an increasingly onerous patchwork of state-specific regulations. This patchwork creates compliance burdens that hinder advancement in model development.
Because the AI development market is inherently a national one with potential significant impacts in commerce, national security, and foreign relations, the federal government is best-positioned to regulate this market. The Administration, therefore, should work with Congress to pass legislation that creates a national AI model market and preempts state-specific restrictions on model development and legislation that promotes access to AI infrastructure, data, and talent.”
This position is echoed by the U.S. Chamber of Commerce, which states its concern “that a fragmented policy landscape will lead to a patchwork of potentially conflicting federal and state artificial intelligence laws, adversely affecting entrepreneurs, small businesses, and the broader business community. Such a complicated regulatory environment could place small business and startups at a unique disadvantage if they are required to bear massive compliance costs.”
The industry, as we discussed in a previous article, speaks with multiple voices. OpenAI favored keeping strong export controls in place and “maintaining the AI diffusion rule’s three-tiered framework to differentiate among countries in the global AI market.”
Google, on the other hand, argues that “[e]xport controls can play an important role in supporting national security, but only if they are carefully crafted to support legitimate market access for U.S. businesses while targeting the most pertinent risks. AI export rules imposed under the previous Administration (including the recent Interim Final Rule on AI Diffusion) may undermine economic competitiveness goals the current Administration has set by imposing disproportionate burdens on U.S. cloud service providers. While we support the national security goals at stake, we are concerned that the impacts may be counterproductive.” This position reflects Google's role as a large, global cloud services provider.
Game on for the administration and Congress to sort through this cacophony of voices.