TL;DR: One more Senate committee hearing and still no action on tech policy. Here is my proposal for how to advance this agenda.
This week, tech industry luminaries testified before the Senate Committee on Commerce, Science, and Transportation on “Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation.” The three-and-a-half-hour dialogue with Sam Altman (OpenAI), Lisa Su (Advanced Micro Devices - AMD), Michael Intrator (CoreWeave), and Brad Smith (Microsoft) was chaired by Senator Ted Cruz (R-Texas). Political rhetoric dominated Senator Cruz’s comments, including aphorisms like “in heaven there is no law and the lion will lie down with the lamb. In hell, there is nothing but law, and due process is meticulously observed.” It was an enlightening discussion, demonstrating an increasing degree of maturity in the debate about what should and should not be done to advance technology innovation and diffusion while maintaining safeguards. But the industry still speaks with a variety of voices, and members of the Committee are still climbing on soapboxes to proclaim their legislative proposals and proselytize their ideas. Consensus and coalition building take a back seat to individual interests and political platforms.
The current AI-based technology race is too important, and its economic and social impact too great, for it to be held hostage to political, ideological, or commercial interests. It is time for Congress to act: to enact policy and guidelines that allow the industry to continue to flourish, unleash the power of government investments, and create safeguards to protect society. Little will happen between now and when the administration publishes its AI action plan, likely in July. My prediction is that the plan will focus on facilitating private-sector investment, on actions to address energy generation and distribution shortages, and on rules for geopolitical leadership, including revised export control guidelines that relax limits on diffusion to all but the top adversaries. All of that is good. However, the plan will likely be light on regulatory guidance and may go as far as encouraging Congress to pass legislation preempting state action. We will learn the details soon.
This is not enough. To fill this gap, I have proposed a Dynamic Governance Model with three steps, each implemented in the context of a specific policy goal. Many associate policy goals with restrictive regulation; however, a goal can also enable infrastructure investments or create incentives that direct private innovation.
The first step brings industry and government together to collaborate on standards. This public-private partnership should be facilitated and coordinated at the federal level. The National Institute of Standards and Technology (NIST) is a non-regulatory agency of the U.S. Department of Commerce. NIST needs to be properly funded, and its focus on AI protected, even during the current round of cuts. Within NIST, the know-how developed under the AI Safety Institute (AISI) should be leveraged, and serious consideration should be given to creating a more permanent funding vehicle for the Institute, even if it undergoes renaming, like its U.K. peer organization, now rebranded as a security institute. The federal agency can convene industry and governmental entities to drive the creation and adoption of standards and to prepare them for international use via “AI Diplomacy.” In the absence of federal action, states will move, enacting their own codes and convening their own initiatives. This is a valid approach, with the states serving as the nation’s laboratories of policy innovation, but it can lead to fragmentation, hence the preference for federal-level action.
Second, with the standards in place, we need to test compliance. This can be accomplished in a variety of market-friendly ways. Compliance with detailed, well-written standards can be checked automatically: depending on the area a standard covers and the associated policy goal, one can design a technical solution that automates compliance testing, as the sketch below illustrates. There is an opportunity for a flourishing RegTech startup industry in this space. Where automation is not yet possible, private actors can audit compliance. Most in the industry oppose third-party auditing, perhaps as a reaction to the lack of standards, perhaps out of a preference to maintain independence and control. However, examples abound of audit, certification, and compliance mechanisms driven by private actors, from fire codes and the insurance industry to self-regulatory organizations like the Financial Industry Regulatory Authority (FINRA), which oversees broker-dealer firms and their personnel, protecting investors and ensuring the integrity of the securities markets. Acknowledging that there are many unknown unknowns in fast-evolving areas like artificial intelligence, regulatory sandboxes can be established to provide liability protection for a limited period and under governmental supervision. In these sandboxes, industry players and the government can refine their joint understanding of the standards and adjust policy as they learn. Legislation is required to provide the needed delegations of authority for the regulatory sandboxes. Once again, it would be preferable to see this enacted at the federal level to streamline and simplify the regulatory burden. If Congress does not move, the states will.
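To make the automation idea concrete, here is a minimal sketch of what machine-checkable compliance could look like. The rule identifiers, disclosure fields, and requirements are hypothetical illustrations of my own, not drawn from any published NIST or industry standard; the point is only that once a standard is expressed in machine-readable form, testing adherence becomes a repeatable, auditable computation.

```python
# Minimal sketch of automated compliance checking against a hypothetical,
# machine-readable AI standard. All rule IDs and disclosure fields below
# are illustrative assumptions, not part of any real published standard.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    rule_id: str                    # identifier within the hypothetical standard
    description: str                # human-readable requirement
    check: Callable[[dict], bool]   # predicate over a vendor's disclosure record

# A "disclosure record" stands in for whatever structured artifact a standard
# might require vendors to publish (model card, evaluation report, etc.).
RULES = [
    Rule("DOC-1", "A model card must be published",
         lambda d: bool(d.get("model_card_url"))),
    Rule("EVAL-1", "At least one safety evaluation must be reported",
         lambda d: len(d.get("safety_evals", [])) > 0),
    Rule("INC-1", "An incident-reporting contact must be listed",
         lambda d: bool(d.get("incident_contact"))),
]

def audit(disclosure: dict) -> list[str]:
    """Return the IDs of rules the disclosure fails; empty means compliant."""
    return [r.rule_id for r in RULES if not r.check(disclosure)]

if __name__ == "__main__":
    record = {
        "model_card_url": "https://example.com/model-card",
        "safety_evals": ["redteam-2025-q1"],
        "incident_contact": "",  # missing contact will trip rule INC-1
    }
    failures = audit(record)
    print("compliant" if not failures else f"failed rules: {failures}")
```

The design choice worth noting is the separation of the rules (the standard) from the audit routine: a standards body or private certifier could publish the rules, while RegTech vendors compete on the tooling that evaluates them at scale.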
The third step involves accountability and liability. Let’s face it: regulation is not the only source of liability. The creation of standards, and the testing of adherence to them, can help protect against negligence claims. Today, Big Tech is already under significant antitrust scrutiny in both Europe and the U.S., where the Department of Justice initiated several actions during the first Trump administration. What these two forms of action, litigation and antitrust enforcement, have in common is that they seek remedies after a harm has occurred. Going through the process of setting standards, measuring adherence to them, and adapting (possibly in a regulatory sandbox environment) can help legislators determine what type of ex-ante regulation, the kind applied before a harm occurs, is appropriate. This can be done in a measured, focused, incremental way, as we learn and adapt. The hand-waving that compares this to an intrusive regulatory model like the one allegedly implemented by the EU AI Act is a diversionary tactic aimed at stifling the discussion.
As these three steps are enacted and implemented, we continue iterating and adapting as the technology evolves: industry, government, and society together. There is a path forward. We only need to decide to take it.