What we heard from AI industry leaders
Optimism tempered by caution and divergent views on regulation
This is the next in a series of excerpts from my paper “Governance at a Crossroads.”
We examined the U.S. digital tech policy landscape to explore a path forward that bridges the tech industry as the driver of innovation with Congress as representatives of American society. We conducted 23 long-form, qualitative interviews with AI industry leaders. These non-attributable interviews, conducted with signed confidentiality and consent forms, averaged thirty minutes to an hour and generated hundreds of pages of transcripts, allowing the complexity of the subject to emerge. Quotations from these interviews appear in the text in italicized font.
Participants were selected based on seniority and expertise in AI technology. This cohort included 24 industry leaders (one interview included two persons), encompassing investors, engineers, startup founders, board members, and executives from AI companies. They were chosen to represent organizations ranging from Big Tech to startups, consulting firms, and research labs. All will be directly impacted by future U.S. AI policy. Additionally, most have significant global engagement in their professional roles. Each interviewee had multiple years of experience in AI.
The interviews revealed a multifaceted perspective on AI, highlighting optimism tempered by caution, divergent views on regulation, and the unique challenges posed by this transformative technology. Participants expressed broad optimism about AI’s potential, particularly as a co-pilot, to augment human work rather than replace it. However, significant uncertainty remains, driven by global compliance challenges and fragmented state-level regulations in the United States. A recurring concern is the difficulty of measuring AI’s return on investment (ROI). CEOs and investors struggle to assess its tangible value, raising doubts about its overall business impact. This challenge extends to regulatory efforts, as participants highlighted the absence of tools to effectively evaluate AI’s outcomes, which hampers the development of meaningful policies. We have seen this pattern in earlier adoption phases of other disruptive technologies, such as the transition to the cloud and, before that, the Internet and e-commerce. Increased scrutiny of ROI typically marks the start of the transition from a “lab experiment” to production deployment.
The discussions surfaced stark differences in perspectives on regulation. Startups prioritize innovation, assuming regulatory scrutiny will primarily target big tech companies. Meanwhile, large technology firms not only lead the shaping of AI policy but also assume much of the responsibility for it, a role broadly accepted across the industry.
Participants generally supported a use-case-based regulatory approach rather than overarching oversight of algorithms, aside from notable outliers like Anthropic. Some advocated for regulation during development, but most opposed creating new regulatory agencies. Instead, they favored sector-specific frameworks or leveraging existing bodies to minimize complexity. The EU AI Act emerged as a closely monitored model, with expectations that it could influence U.S. regulatory approaches much as the GDPR did. Paradoxically, as we will see when reviewing our congressional interviews, U.S. legislators resist modeling American policy on international experiences. At the same time, the industry has made a strong call for global harmonization of standards to streamline compliance and foster innovation.
In the experience of a former government official and vice president of regulatory affairs in the industry: “it was probably two years ago now where several senators were saying, you know, we have to regulate the algorithms […] Our point of view was no. We need to regulate the use case, not the algorithms. And we got some pushback from senators because of the fear and the hype. I think that’s calmed down in the last couple of years, but you know, that was one area. The other is the very real and significant debate between those more on the side of open-source AI, putting algorithms out for open source and allowing them to be improved and developed by the global community, and what I would call more proprietary AI models like OpenAI, the Microsoft approach, to some extent, what Google is trying to do, what Anthropic is doing. You know, on the other side of the debate, you have Meta, IBM, Hugging Face, and other companies, which are built on open-source models.”
AI is perceived as distinct from other technologies due to its dual potential for misuse and harm and its broad accessibility – although, as discussed earlier, dual use is not new. A vice president of technology advocacy in a large corporation explained the uniqueness of AI in this way: “I think of the launch of ChatGPT and the fact that this came into everybody’s living room, […] you do a Google search, the first thing that pops up now is an AI-generated answer. Those with young children are thinking about their children and what they’re being exposed to […] whether you think you’re dealing with AI or not, you’re dealing with it every moment of every day, in some way, and that makes it different. It’s different from Quantum. Quantum is that big center, you know, the cool-looking chandeliers. There’s a distance there, that’s in the back office, that’s in some data center. This is touching everybody and every job.”
Participants emphasized the societal risks AI poses, underscoring the need for its ethical implementation and thoughtful regulation. There was general agreement that regulation is necessary and urgent in this space. The ethical use of AI remains a central concern. Large technology companies advocate for voluntary commitments and see themselves as key players in shaping ethical standards while acknowledging the importance of serious industry accountability. In contrast, smaller companies tend to deprioritize ethical considerations, focusing instead on growth and scaling. Nonetheless, they often adopt the strategies and standards established by larger tech firms, demonstrating the latter’s influence over the ecosystem. A COO of a small startup and former FinTech CEO discussed accountability: “Existing [sectoral] frameworks are built around holding people accountable to adhere to the regulatory frameworks and the guidelines […] accountable and liable […] There needs to be a change in the framework such that the regulators can drive accountability, but it doesn’t have to be in this very narrow […]. It needs to accommodate the fact that AI is there doing many of the things that they assigned humans to. […] We must make it diffused enough that there is somebody accountable, or a corporate entity that’s accountable, that will lose licenses, get fined, etc. if something goes wrong. But it needs to move beyond this notion of individuals.”
The discussion with a member of the safety team at a leading frontier AI research lab highlights how they are grappling with issues of regulation and liability. “[We should regulate] usage […] this is already nascent and […] you’re going to see a continuation of it: the idea of model developers wanting to be a little bit like a utility. […] We just provide it, and it is not our responsibility what you use it for. And […] some sort of liability, or some sort of clear delineation of responsibility […] if our model is used to do something bad, how much responsibility do we bear before you should bear some? […] Everything sort of flows back from that. You know, if you understand that you are ultimately responsible for, it affects what you’re going to train, it affects what you’re going to test, and it affects how you’re going to sell that model out into the world.”
Large technology firms dominate AI innovation, market standards, and policy discussions. Feeling marginalized in these conversations, smaller players largely align with big tech’s direction without attempting to carve out their own influence. This dynamic perpetuates widespread distrust of government regulators, with participants citing a lack of expertise and industry knowledge among regulatory bodies.
Our findings reveal a lack of trust that discourages industry engagement with policymakers, compounding the industry’s limited grasp of the legislative process, key stakeholders, and regulatory developments. This unfamiliarity, together with uncertainty about what a workable regulatory framework entails, creates a substantial obstacle. Many in the industry feel ill-prepared to navigate or influence these processes, resulting in apathy or withdrawal. As a result, a few well-resourced players dominate meaningful dialogue and collaboration, leaving smaller entities and emerging voices sidelined.