It may come as a surprise to see a bipartisan United States Senate committee recommending a large spending initiative, especially one with a $32 billion price tag. Yet that is what happened yesterday, and the topic is the safe development of Artificial Intelligence. In a departure from the traditional American laissez-faire approach to tech policy, it may sound like a return to the Keynesian recipes of active government intervention. Done the right way, it can be exactly the balance we need.
A group of four U.S. senators, led by Majority Leader Chuck Schumer and including Democrat Martin Heinrich and Republicans Todd Young and Mike Rounds, recommends that Congress allocate $32 billion over three years to advance and regulate AI. Their 33-page report is the result of a year-long review of the topic. It acknowledges rapid AI advancements and global competition, particularly from China, and urges emergency legislation to boost U.S. AI research, development, and testing standards. It also calls for transparency in AI rollouts and for studies of AI's impact on jobs. The next step is to pitch these recommendations to Senate committees for review and deliberation. In parallel, the Senate Rules Committee is voting on Wednesday on three election-related bills that would ban deceptive AI content used to influence elections, require AI disclaimers on political ads, and create voluntary guidelines for state election offices that oversee candidates. While these are proactive approaches to managing AI's growth and implications nationally, there is still a long legislative process ahead.
Until recently, the government stayed on the sidelines while the private sector drove innovation in a mostly unregulated space. The October 2023 White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence laid out the template for action and a framework for legislative progress.
There has been a lot of buzz about whether we should speed up tech development or hit the brakes to ensure we don't end up in some sci-fi dystopia. I do not see this as an either/or situation. We can drive technical innovation while ensuring safety, integrating both to reap AI's benefits responsibly. Technical innovation and invention are required to address the safety risks associated with frontier models and the limitations of today's Large Language Models, like OpenAI's ChatGPT, Google's Gemini, Meta's Llama, Anthropic's Claude, and others. Regulation should be designed to incentivize these investments. Since the White House Executive Order, we have made some headway with this balanced approach, and this blend of innovation and regulation is crucial for nurturing a tech landscape that is both advanced and secure.
I commend the joint efforts of the Senate, Executive, and industry sectors. From the voluntary commitments made in early 2023 to the recent Executive Order and Senate recommendations, seeing such collaboration is uplifting. We must stay vigilant to ensure that public interests are safeguarded and to prevent undue regulatory capture.
The future of AI shouldn't be driven solely by market forces or technological determinism. It requires an effort from governments, corporations, civil society, and the scientific community. This partnership must integrate cutting-edge technology, strong public policy, and thorough social analysis.
Much progress in this direction has been made over the last 18 months. As we move forward, we should heed the warning from President Dwight Eisenhower's 1961 farewell address that "public policy could itself become the captive of a scientific-technological elite."