This is the next in a series of excerpts from my paper “Governance at a Crossroads.”
In previous articles, we have set the context and introduced conceptual frameworks. We also highlighted the results of our qualitative and quantitative primary research. We are now going to get into the details of our proposal for a new Dynamic Governance Model and its three components:
Public-private partnerships for the creation of evaluation standards.
A market-based solution for audit and compliance.
A system of accountability and liabilities, set by legislatures, existing executive agencies, and the courts.
A path forward
The rapid adoption of artificial intelligence and its role as a transformative general-purpose technology have brought the United States to a pivotal moment. While the country has historically embraced innovation with a light regulatory touch, the unique pace and scale of AI advancement demand a recalibration of this approach. The challenge is clear: How can we maintain the dynamism of technological progress while ensuring AI’s safe, ethical, and equitable deployment? Addressing this challenge requires a new governance model that balances innovation with regulation, fosters collaboration between public and private sectors, and builds public trust in the potential of AI. This section outlines a strategic path forward, focusing on practical mechanisms to align technological development with societal needs and preserve America’s leadership in this critical domain.
The American system, while imperfect, has consistently demonstrated its ability to drive technological and economic growth. This system, rooted in a combination of entrepreneurial spirit, public and private investment, and an open market, has fostered innovation across industries. From the early days of the Industrial Revolution to the rise of the Internet and modern digital technologies, the U.S. has maintained its leadership by balancing economic incentives with strategic policy interventions. Innovation has been a crucial driver of American prosperity, contributing as much as half of the nation’s economic growth in the 20th century. This technological progress has led to new industries, products, and services that have fundamentally transformed the American economy and improved living standards. The information technology (IT) sector has been a powerful driver of economic growth in recent decades, with an outsized contribution to the U.S. economy, creating high-paying jobs, driving exports, and spurring innovation across various sectors.
Like previous general-purpose technologies such as electricity and the Internet, AI could reshape industries, create new opportunities, and enhance the quality of life. A growing consensus among policymakers, industry leaders, and academics is that safeguards are essential to ensure the safe and ethical deployment of AI applications. These safeguards aim to build public trust and minimize harm, yet the specifics of their implementation remain a matter of debate, as we have seen in the previous sections.
Technological innovation and legislation operate on fundamentally different timelines. Innovation often occurs rapidly and unpredictably, driven by breakthroughs, market dynamics, and entrepreneurial experimentation. By contrast, the legislative process is slow and intentionally designed to ensure thorough deliberation, consensus-building, and stability. This inherent disparity creates challenges in regulating fast-moving technologies like AI, where static regulatory frameworks can struggle to keep pace with the speed of development and deployment. The rapid evolution of AI demands a dynamic approach to governance that allows for adaptability and continuous refinement while preserving the measured oversight required to safeguard public interests.
A strictly ex-post approach to regulating AI, relying solely on antitrust actions and litigation, is insufficient to address the unique challenges and catastrophic risks this technology poses. While antitrust enforcement and legal recourse play a vital role in correcting abuses and ensuring accountability, they are inherently reactive. Such mechanisms often come into effect only after harm has occurred, making them ill-suited to prevent high-stakes risks like safety failures, systemic biases, or widespread societal disruption. Moreover, the complexity and pace of AI innovation can outstrip the ability of courts and regulators to respond effectively. At the same time, ex-ante regulation, while proactive, has its limitations. Static regulatory frameworks can struggle to keep up with rapid and unpredictable technological innovation, risking either obsolescence or overreach that stifles progress.
These phenomena underscore the need for a dynamic regulatory approach that combines the foresight of ex-ante measures with the adaptability to evolve alongside technological advancements. Such a system would complement existing ex-post mechanisms, creating a comprehensive framework to promote innovation and accountability.
Continued technological innovation is essential both to maintaining the United States’ economic and geopolitical leadership and to addressing the pressing challenges inherent in current AI models. From the energy-intensive demands of large-scale computation to issues of alignment, bias, and safety, technological advancements are critical to ensuring AI systems are both sustainable and ethical. These challenges highlight the need for targeted investments in research and development, fostering breakthroughs in energy-efficient algorithms, bias mitigation techniques, and enhanced model transparency. A well-crafted tech policy and regulatory framework must act as both an enabler and a safeguard, incentivizing the development of technologies that address existing limitations while ensuring accountability and public trust. By embracing a dynamic and collaborative approach to regulation, incorporating public-private partnerships and ongoing dialogue, policymakers can create an ecosystem where safety and innovation coexist.
AI governance must evolve to leverage the strengths of both industry and government. Industry brings technical expertise, innovation capacity, and real-time insights into technological trends, while the government safeguards public interests through oversight, accountability, and policy direction. By integrating these complementary roles, such a model can adapt to the rapid pace of AI advancements, allowing regulatory frameworks to evolve in tandem with technological developments. This collaborative approach could take several forms, including joint standard-setting initiatives, co-development of best practices, and establishing regulatory sandboxes to pilot new technologies under controlled conditions. It also requires mechanisms for continual feedback and iteration, such as advisory councils or working groups that bring together stakeholders from diverse sectors. Furthermore, embedding transparency and public engagement in this process will be crucial for building trust and ensuring that the framework remains responsive to societal needs.
A new model for collaboration and accountability
To address the challenges and opportunities presented by artificial intelligence, we propose a Dynamic Governance Model centered on structured collaboration between government and industry. At the heart of this model is the creation of an entity tasked with setting clear standards in partnership with the private sector. These public-private partnerships would go beyond mere consultation, fostering active collaboration in developing, implementing, and iterating on policies that align innovation with societal priorities. A key feature of this model is the incorporation of commitments by industry players underpinned by a robust accountability scheme, including clear delineation of liability and mechanisms for independent audits.
This extra-regulatory model is designed to be modular, light, and of general application. Its modular nature allows it to be implemented in phases, starting from different entry points. It is light enough to be built on top of, and complement, existing legislative and judicial sectoral frameworks, reducing the implementation burden on Congress and the industry ecosystem. Its general applicability lets policymakers decide which use cases to prioritize: a risk-based approach, phased sectoral implementation, or a focus on larger companies and gatekeepers are examples of policy and political decisions that can be made at implementation time.
This proposal requires decisive congressional action, including a review and potential expansion of statutory authorities to support the framework. By enacting clear policies, Congress can enable adaptive regulation that steers AI innovation in a safe direction and safeguards the nation’s leadership in this critical enabling technology. This balanced approach ensures that while the transformative potential of AI is fully realized, its deployment remains aligned with ethical standards and public trust.
The NTIA’s efforts, featured in its March 2024 Artificial Intelligence Accountability Policy Report, underscore the federal government’s appetite to foster an AI accountability ecosystem. The NTIA has laid a foundation for ensuring trustworthy AI systems by emphasizing independent evaluations, standard-setting, and transparent disclosures. These initiatives resonate with our proposal for a dynamic and collaborative governance model, as both approaches highlight the importance of public-private partnerships, adaptive regulation, and the integration of technical standards to address AI’s societal impact. Both frameworks advocate for accountability inputs, such as audits, disclosures, and documentation, as pillars of responsible AI governance. Likewise, we agree on the need for tiered, risk-based approaches and the involvement of multiple stakeholders across the AI lifecycle. Our model diverges in extending public-private partnerships to include commitments enforced through a novel accountability scheme that incorporates liability delineation and audit mechanisms. This extension creates a dynamic, self-reinforcing system that evolves alongside technological advancements.