This is the next in a series of excerpts from my paper “Governance at a Crossroads.”
Besides setting policy goals, balancing policy attributes, and navigating the political landscape, one also needs to understand the mechanisms available to implement policy. There are multiple ways to drive policy in the United States, and ultimately several end up being exercised over the lifetime of a technology. The U.S. has historically adopted a laissez-faire, hands-off approach to the tech sector. “The United States has a history of promoting innovation until innovation has proven to be a problem, and then they will put laws in place to limit the impact of innovation […] because it was developed without rules and constraints. […] We react to problems, as opposed to trying to anticipate problems,” says a Senate staffer we interviewed.
This was epitomized by Mark Zuckerberg’s 2009 Facebook motto, “move fast and break things,” which captured the hacker culture of Silicon Valley and the ethos of an era in which disrupting pre-existing industries and business models was celebrated. With the benefit of hindsight, one can now see both the progress and the harm this approach generated. The harms associated with social media include the amplified spread of misinformation, privacy violations, cyberbullying, mental health effects such as depression and anxiety, and a lack of accountability for harmful content, all consequences of prioritizing rapid innovation over user safety and well-being.
Did we allow too much power to go unsupervised, and should we have regulated some uses of the technology? These questions go to the core of the mechanisms we can adopt to drive policy and highlight the dilemma of ex-ante regulation (acting before the event, on the basis of forecasts or predictions rather than observed results) versus ex-post regulation (acting after the event). We propose a simplified framework that groups policy mechanisms into three segments:
Antitrust and competition policy – an ex-post mechanism applied to mitigate harm via behavioral or structural remedies.
Sectoral regulation – an ex-ante mechanism that establishes expectations and obligations of conduct, requiring companies to comply before harm occurs. For simplicity, we also include general statutory actions in this group: for example, laws that drive policy in the national security space by investing in R&D, funding government departments or agencies, or establishing public-private partnerships.
Litigation – an ex-post mechanism in which the plaintiff must prove that the defendant owed them a duty, that this duty was breached, and that the breach caused injury, harm, or loss (damages). Common law establishes liability precedents through judicial decisions that analyze case facts, apply legal principles, and consider policy implications. These precedents evolve to reflect changing societal norms while maintaining legal consistency.

During the last two United States presidential administrations (Trump 45 and Biden 46), the Federal Trade Commission (FTC) and the Department of Justice (DOJ) have pursued antitrust cases against Big Tech, with active cases against Google, Amazon, Apple, and Meta. Similar action has been taken in Europe. Margrethe Vestager, the EU’s antitrust chief from 2014 to 2024, aggressively pursued Big Tech companies and initiated major cases against Google, Apple, Amazon, and Meta, resulting in billions in fines. During her tenure, the European Commission also introduced new regulations such as the Digital Markets Act (DMA), reshaping Europe’s digital regulatory landscape and influencing global tech policies. Her successor as European Commissioner for Competition, Teresa Ribera, who took office in December 2024, was tasked with tackling killer acquisitions, speeding up antitrust enforcement, and implementing a new approach to mergers to strengthen the EU’s competitiveness.
In the U.S., there are no federal-level ex-ante laws governing digital platforms or AI. Europe’s ex-ante digital regulations, the DMA and the Digital Services Act (DSA), aim to foster fair competition and ensure a safer digital environment. The DMA targets “gatekeeper” platforms (large tech companies) to prevent anti-competitive practices, mandating interoperability, transparency, and restrictions on self-preferencing. The DSA addresses online content by holding platforms accountable for illegal content and requiring risk assessments, content moderation, and transparency in algorithms and ads. Together, they establish a robust framework for digital fairness and accountability.
Litigation is increasingly shaping policy in response to Generative AI’s challenges to copyright frameworks. U.S. copyright law, which requires human authorship for protection, excludes purely AI-generated works, though AI-assisted works may qualify if significant human creativity is demonstrated. The legality of training AI on copyrighted materials hinges on the fair use doctrine, which courts interpret in light of AI’s capacity to mimic and compete with original works. Globally, litigation complements policy development, as seen in the EU’s restrictive text and data mining regulations and differing international approaches. These legal disputes underscore the role of courts in balancing innovation with creators’ rights, setting precedents that will influence legislative efforts and the broader trajectory of copyright governance in the age of AI. Amongst others, The New York Times v. OpenAI in the U.S. and Getty Images v. Stability AI in the UK are two cases to follow.
The law’s slow and incremental response to technological change is not new. In 1949, legal historian James Willard Hurst examined the early decades of the automobile and its profound impact on American society. Using it as a case study, he explored the dynamic relationship between technological advancement and legal development, highlighting legal inertia and unintended consequences. Hurst argued that most legal adjustments were reactive and incremental rather than deliberate, a pattern that persists today in debates over the pacing problem, where law struggles to keep up with innovation, and over the interplay between norms, common law, and formal regulation.
One pressing question: are we sufficiently exploring whether existing constitutional frameworks could be expanded to address the unique challenges of artificial intelligence? This debate warrants deeper examination, including an assessment of the specific legal frameworks that might be adapted to AI governance. While many existing laws align with certain risks associated with AI, their application to this rapidly evolving technology remains unclear. Moreover, AI introduces novel challenges that demand a reinterpretation of current laws and a critical analysis of their limitations in addressing latent ethical, social, and economic concerns.
The three mechanisms of antitrust, sectoral regulation, and litigation are not mutually exclusive and typically coexist. Today, the U.S. lacks ex-ante AI legislation. We will propose a new governance model as an alternative to more rigid sectoral regulation.