With less than one week to go until the presidential election, Kamala Harris and Donald Trump are deadlocked, with the latest national polls showing voter sentiment split 48 percent to 48 percent. But while the country has its eyes on the presidential race, a different race is unfolding in Silicon Valley. Sam Altman has predicted that machines capable of “human-level intelligence” are just a few thousand days away. Regardless of when that moment actually arrives, will the next U.S. President prepare us for it?
At a recent high-profile fundraiser in New York City, Kamala Harris optimistically stated: “I will bring together labor, small business founders and innovators and major companies… We will encourage innovative technologies like AI and digital assets, while protecting our consumers and investors.”
Meanwhile, the Republican Party’s presidential platform states plainly: “We will repeal Joe Biden's dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.” Former President Trump has gone so far as to use AI and deepfakes himself for election purposes.
The presidential candidates' courtship of the AI and tech community reflects stakes that extend far beyond electoral politics: the successful integration of AI into economies, societies, and national security could determine global leadership for decades to come.
At its core, our tech policy should address everyday Americans' concerns about artificial intelligence. Recent polls show 52% of Americans feel more concerned than excited about AI's expanding role, while 57% demand greater transparency in AI business practices. These concerns persist even as tech companies race toward human-level AI capabilities, promising breakthroughs in healthcare, scientific research, and quality of life. The challenge lies in preserving these benefits while ensuring safe, trustworthy development—a balance that effective policy must strike.
Yet U.S. AI policy remains lacking. Despite more than 100 AI-related bills introduced in Congress through bipartisan efforts, none has become law during this session. The Executive Branch has attempted to fill this void through Biden's 2023 Executive Order on AI, directing federal agencies to develop guidelines and standards. However, recent Supreme Court decisions limiting agency discretion have hampered these regulatory efforts and will curb agencies’ appetite, and reduce their authority, to enforce behavioral or structural remedies.
States have begun filling the regulatory vacuum, led by California's passage of multiple AI-related bills. Similar initiatives have emerged across the country, from Pennsylvania's ban on AI-generated child abuse material to Utah's AI consumer protection laws. New Hampshire has prohibited AI-powered public surveillance, while other states explore various regulatory frameworks. However, this patchwork approach creates uncertainty for businesses and consumers alike, highlighting the need for unified federal action.
The next president's approach to AI policy may mirror their previous administrative experience. The Trump administration emphasized AI's role in economic and national security, focusing on research investment and international competitiveness. Their policies aimed at doubling AI research investments and establishing research institutes—all reflecting a belief that AI should bolster American innovation, industry, and values.
The Biden-Harris administration pursued comprehensive regulation through executive action and voluntary industry commitments, which Harris personally championed. Their expansive Executive Order tasked nearly every U.S. department and agency with preparing reports, guidelines, and consultations on AI, emphasizing privacy, equity, civil rights, and competition.
Looking ahead, either candidate's success in shaping AI policy will require Congressional cooperation—a significant challenge given historical reluctance to regulate digital technologies. Trump might resist regulation to maintain a competitive edge against China, viewing even minimal oversight as an impediment to America's geopolitical interests. Harris could favor light-touch federal oversight building on her work with tech companies, following the lead of Democratic leaders like Gavin Newsom and Nancy Pelosi who have shown restraint in confronting AI companies.
The changing legal environment at the federal level, combined with fragmentation at the state level, has created an urgent need for a coherent national strategy. Without renewed political vision and Congressional action, the U.S. risks falling behind in the global AI race while leaving crucial ethical and safety concerns unaddressed.
Some may say the next U.S. President will have more pressing issues to address, with the economy, healthcare, and immigration top of mind for American voters. But AI will impact every one of these policy domains. In fact, it already has.
The Mayo Clinic is setting a precedent for early adoption of AI to improve care and research, and U.S. Customs and Border Protection is using AI to detect illicit cross-border traffic, among other publicly registered use cases. As AI takes on a larger role, the next administration must address it more systematically: not only navigating immediate challenges, but also building the political consensus necessary for federal legislation that empowers agencies with the authority they need and establishes a unified framework to replace fragmented state efforts.
The new President must set the tone and direction, while Congress must act. An enforceable AI policy will guide incentives, steering American innovation toward sustainable, safe progress. Elections have consequences, including for AI development.