This is the next in a series of excerpts from my paper “Governance at a Crossroads.”
Until this point, we have explored the historical precedents in technology governance and the unique characteristics of artificial intelligence. We delved into the AI triad of algorithms, data, and computing power, and introduced a new AI triad of energy, land, and labor. We also laid out a framework for examining AI policy goals through the lens of their attributes and objectives. Finally, we surveyed examples from around the world and dove into how the United States implements policy via antitrust, sectoral regulation, and litigation.
Now, we will get into the details of our primary research: what we learned from qualitative interviews and from a quantitative analysis of legislation introduced during the 118th United States Congress. These findings will establish the foundation for our governance proposal.
The term “industry” is often invoked in academic and policy discussions, with researchers and policymakers striving to understand its perspective. This work aimed to capture these viewpoints, leading to a key conclusion: no singular, unified tech industry exists. Instead, the ecosystem comprises a diverse array of interests that are at times aligned and at other times in conflict. Navigating this complexity requires a discerning ear and the ability to distinguish meaningful insights from background noise.
Uncertainty, coupled with the rapid evolution of AI technology, obscures the ecosystem’s boundaries and raises two questions: Who makes up the ecosystem, and who has the authority to influence and shape the trajectory of AI policy? We asked industry leaders: Who deserves a seat at the decision-making table?
Some readily assert that all voices should be heard in the process. For instance, a senior executive leading emerging technology at a Big Tech company shared their perspective: “Look, AI has got to touch everybody and everything. You know [...] everybody needs to understand these capabilities. It will touch everything we do in our lives [...], and this is why […] you do need to have forums where you’re bringing together all different types of people from different backgrounds, even if they don’t think they have a stake [...] Government, core industry, model developers, system AI providers, users, academia, and civil society […]. You’ve got to bring all those voices to the table because this will impact us currently and into the future.”
While all interviewees broadly agreed on the policy leadership role played by the large companies driving AI development, there was far less consensus about including other stakeholders. Some voices criticized Big Tech’s outsized influence and lobbying efforts. However, much of the industry regards these companies as possessing the resources, expertise, and long-term vision needed for shaping AI policy. The rest of the ecosystem either aligns with their leadership willingly or feels compelled to follow their lead.
In addition to these dominant companies, several interviewees emphasized the importance of including subject matter experts in policymaking. As the CEO of an AI company explained: “The people who are involved in creating the frontier models must be part of the conversation [...] there are a few specific subject matter experts [...] we [ …] are one of them. I think there are a few nonprofit organizations that are some of them. […] Regulation and other policies are a lot of times like the devil is in the details [...]. And so sometimes it means talking to a tiny nonprofit that kind of feels like a weird fit for this sort of conversation, but they have done the research on this specific type of risk [...] You must have them at the table.”
The role of startups, particularly those in early to mid-stage growth, was another topic of significant discussion. The prevailing argument was that startups often lack the maturity and resources to contribute meaningfully to strategic policy debates—an assertion consistently echoed by the startup CEOs interviewed. While this perspective acknowledges smaller companies’ resource constraints, it raises critical concerns about power concentration in the U.S. AI market. When large companies dominate the policymaking process, they gain the power to shape a future market structure aligned with their interests, potentially stifling the emergence of innovative solutions from smaller, more agile players.
AI integrators—companies that apply AI solutions across various sectors—were highlighted as a critical yet underrepresented voice in shaping AI policy. These integrators play a vital role in the broader AI landscape but often lack visibility and influence in regulatory and industry discussions. One of our interviewees clearly outlined the various aspects of the AI value chain and the distinct roles played by different stakeholders within it:
“So those people need to be at the table talking about models and […] the role for the model developers, versus those who are then taking those models and building on, building off them. […] One is, as I said, you’re developing, furthering the development of the model […] where you’re teaching the model new skills or adding new data or new context, or new industry-specific customizations to the model and RAG [Retrieval-Augmented Generation] can do that. […] The second thing is, of course, AI systems developers. […] An AI system is kind of end-to-end; it’s something that is targeted to do something that includes and embeds AI or leverages a model. […] They’re building a system around leveraging AI capabilities, leveraging what they know […] to create a set of tools that somebody else is going to deploy. […] And then […] the deployers. Yeah, what are you going to use these models for? Use case matters 100%. Are you comfortable with that use case? You know, from an ethics perspective, do you have the controls in place? Do you have the human in the middle? How are you making sure that the uses are appropriate, that you don’t have those hallucinations, and that you’re putting good judgment around them? The deployer has responsibilities. They also are probably going to bring some of their data to the table […] with the prompting that they’re doing and other things. So, everybody has a role to play […], and if you’re going to put all the responsibility on this player or that player, that’s just not […] appropriate. You’re not going to get what you want, which is the oversight, the decision making, the responsibility, […] you need to make sure that the right people in the value chain have the right level of responsibility, obligations, and liability.”
A former tech chief marketing officer, now the executive director of a group of enterprises that use AI in their businesses, brings up the point of view of those deploying the technology. “We’ve debated this as an alliance. I’ll share with you where the companies came out […] they don’t want to regulate the technologies, because you’re going to stifle innovation […] Therefore, a risk-based approach, which seems to be the EU’s approach and Biden’s approach, has been supported. Second, differentiate the responsibilities of developers and deployers because we’re not all the same, right? Third, don’t create any new regulatory bodies; use the existing regulators to interpret AI through the lens of aviation, financial services, or healthcare. […] Please harmonize […] you’ve got California, you’ve got other states, you’ve got the EU, you’ve got the United States, you’ve got Japan, you’ve got China. So, businesses do not like inconsistency. They want consistency. They’re not opposing regulation; they’re almost inviting it because it will help provide their safeguards. It will be the rules of the road. […] We’re going to go through a period now of massive fragmentation, and that’s not good for business.”
Our deep-dive interviews with industry members revealed that “industry” is an oversimplification; several distinct personas emerged[1]. We have grouped them into six segments:
Accelerationists - Emphasize rapid AI development, pushing the boundaries of innovation without heavy regulation and believing in fast-tracking advancements. Typical stakeholders include a few tech giants, some startup founders, and most VCs focused on high returns from AI. Their priorities are speed to market, competitive edge, and high-growth potential, but their challenges include ethical, safety, and societal risks stemming from a lack of oversight.
Responsible AI Advocates - Develop AI ethically, ensuring fair and unbiased systems while prioritizing accountability and transparency. Stakeholders are academics, policymakers, nonprofits, and some corporate entities with dedicated AI ethics teams. Their priorities include inclusivity, bias mitigation, and creating frameworks to govern AI’s ethical use, though challenges arise in balancing innovation with extensive regulatory processes and maintaining consistency in ethics standards.
Open AI Innovators - Commit to open-source models and datasets, promoting transparency and broad access to AI technologies. Typical stakeholders are research institutions, open-source communities, tech enthusiasts, and organizations committed to democratizing AI. Priorities include collaboration, transparency, and broad accessibility to drive collective progress, while challenges involve intellectual property concerns and the misuse of open-source AI models.
Safety Advocates - Prioritize the safe deployment of AI, emphasizing risk mitigation and long-term societal implications. Stakeholders include AI researchers in safety-focused fields, a few policy advisors, and some regulatory bodies. Their priorities are addressing alignment, reducing existential risks, and promoting safe usage and deployment. Still, they face challenges in defining and enforcing safety standards and addressing fears of AI’s unpredictable consequences.
Public Interest AI - Ensure AI development aligns with public welfare and social justice and addresses issues like accessibility, equality, and inclusion. Stakeholders include nonprofits, some government entities, public welfare advocates, and citizen groups. Their priorities are building AI that supports the public good and promoting socially beneficial AI use cases. Public interest advocates face challenges such as funding limitations and overcoming commercial interests that may not prioritize public welfare.
National Security Hawks - Prioritize AI for national security, economic stability, and global competitiveness, viewing AI as a strategic asset. Typical stakeholders include government defense departments, national security agencies, and geopolitical analysts. Their priorities are protecting critical infrastructures, deploying AI in defense systems, and maintaining technological advantage in global conflicts; their challenges include the escalation of an AI arms race, concerns over civil liberties, and balancing national security with responsible AI use.
Our interviews with Congress reflected a similar understanding, with multiple interviewees referring to “industry from different parts of the ecosystem” and the need to “divvy [the industry] up into different buckets.” Staffers cited the “ideological approach and view about open source versus closed source” and the “vicious tactics and resources of certain venture capital firms,” to name just some of the distinguishing features that emerged.
Companies can fit more than one segment, and we propose this segmentation as a starting point for understanding where a given company sits or for mapping the behaviors of specific individuals within it. One can consider these groups as areas of concentration, or as major and minor behaviors. Bear in mind that different camps can coexist inside a single company, as this discussion with a frontier AI research lab shows: “[T]here are definitely actors within companies that are very pro-societal good. There are also actors who aren’t […] One thing that […] exists in a lot of these companies is a real ideological split between people that were there pre-2022 […] pre GPT blowing up [who] tend to be a bit more ideological and really believe that we should be using the technology for good. And the people who came after, […] got incredibly rich doing it, […] and tend to gain enormously if they have unchecked freedom to develop and to innovate. And I think it’s important to hear from voices that don’t have a financial stake in a certain outcome.”
[1] We recognize that the term industry is used loosely throughout the text, with varying meanings. In the context of our qualitative interviews, it primarily refers to members of the tech sector and adjacent fields. When categorizing different voices and later discussing stakeholders in the proposed governance model, industry encompasses investors, auditors, consultants, academia, and civil society - contrasting with government as the other key player in a public-private partnership.