Emerging AI policy approaches
Comparative analysis of the policy strategies of the U.S., the E.U., and China
This is the next in a series of excerpts from my paper “Governance at a Crossroads.”
UNESCO
In August 2024, the United Nations Educational, Scientific and Cultural Organization (UNESCO) released a consultation paper on global efforts to develop legislative frameworks addressing AI’s impact on democracy, human rights, and the rule of law. The paper surveys emerging regulatory approaches, including principles-based guidance for ethical AI use, standards-based frameworks involving technical oversight, and agile schemes like regulatory sandboxes for innovation. It also examines risk-based and rights-focused approaches that prioritize human rights and manage AI’s potential harms. The report emphasizes the need for transparency, liability measures, and tailored amendments to existing laws to address sector-specific challenges. The document aims to assist legislators in addressing key regulatory questions - why, when, and how to regulate AI - through global examples and stakeholder feedback. It emphasizes international cooperation, adaptability, and the inclusion of human rights and ethical principles in shaping AI policy. In a recent article, I highlighted the U.S., Europe, and China as the three jurisdictions whose approaches are starting to dominate the regulatory landscape. Let’s run these three jurisdictions through the UNESCO model.
European Union (EU)
Europe has taken several approaches to regulating AI, the most notable being the European Union Artificial Intelligence Act, the world’s first comprehensive law on AI. Yet even before the adoption of this landmark law, the EU had led regulatory innovation through other important digital regulations, including the General Data Protection Regulation (GDPR), the Digital Services Act (DSA), and the Digital Markets Act (DMA), to name a few. These laws govern various aspects of online data, platform operations, and user privacy within the EU and often have an extraterritorial effect referred to as the “Brussels Effect,” with other jurisdictions adopting similar laws – when it comes to privacy, for example – or companies choosing to change their global practices because of EU laws.
The EU’s AI regulation emerged well before ChatGPT’s launch, originating with a 2020 white paper that explored regulatory options ranging from a hands-off approach to comprehensive oversight. After extensive impact assessments and stakeholder consultations, the European Commission proposed the EU AI Act in April 2021, a risk-based horizontal framework for regulating AI. The EU AI Act entered into force on August 1, 2024, with its first prohibitions applying as of February 2025.
The law has two objectives: to protect individuals’ fundamental rights and to enable the free movement of AI systems and data within the European Union. It establishes obligations based on different risk levels: unacceptable, high, limited, and minimal. The act represents an ex-ante accountability framework for AI, since it requires proof of compliance before systems can be placed on the market, even though most of these compliance checks are done internally by each company. The EU AI Act promotes a standards-based approach, acknowledging the role of standard-setting organizations in guiding the implementation of mandatory rules, and it encourages broad stakeholder representation in developing those standards. The act mandates that providers of high-risk AI systems test their products against harmonized European standards (hENs) before affixing the European Conformity (CE) mark, which grants free circulation within the European market. While the CE mark and hENs are established tools for ensuring product safety in other sectors, this is the first time they are used to certify compliance with fundamental rights, one of the two key objectives of the law. Under the EU AI Act, complying with harmonized standards creates a “presumption of conformity”: companies adhering to these standards are presumed to meet the regulatory requirements of the act unless evidence proves otherwise. In effect, the burden shifts to demonstrating non-compliance when a company follows the established standards.
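To make the tiering logic concrete, here is a minimal illustrative sketch – my own simplification, not language from the act – of how one might represent the four risk tiers and the broad type of obligation attached to each. The example use cases, tier assignments, and obligation summaries are assumptions for illustration only.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified summaries of the obligations attached to each tier (illustrative only)."""
    UNACCEPTABLE = "prohibited practice (banned outright)"
    HIGH = "ex-ante conformity assessment against harmonized standards before CE marking"
    LIMITED = "transparency obligations (e.g., disclose that users are interacting with AI)"
    MINIMAL = "no new obligations beyond existing law"

# Hypothetical example use cases mapped to tiers, for illustration only.
EXAMPLE_CLASSIFICATION = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring decisions": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Return a simplified obligation summary for a known example use case."""
    tier = EXAMPLE_CLASSIFICATION.get(use_case)
    if tier is None:
        return "unknown use case: requires case-by-case legal assessment"
    return f"{tier.name}: {tier.value}"

if __name__ == "__main__":
    for case in EXAMPLE_CLASSIFICATION:
        print(f"{case} -> {obligations_for(case)}")
```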
The EU’s approach includes agile and experimentalist elements, establishing a framework for AI regulatory sandboxes. These sandboxes allow providers to test AI systems under flexible regulatory conditions with oversight from competent authorities. The act also takes an innovative approach to general-purpose AI (GPAI) models, establishing “Codes of Practice” as an interim measure between the GPAI model obligations taking effect and the adoption of standards. The Codes are being developed through an ongoing multi-stakeholder process involving hundreds of companies, non-governmental organizations (NGOs), academics, and independent experts. As mentioned before, beyond the AI Act, Europe acknowledges that some AI-related activities are already regulated through existing laws like the GDPR, illustrating the adaptation of existing laws to address AI-related challenges.
United States
The U.S. approach to AI regulation, while lacking a comprehensive federal law like the EU’s AI Act, involves a combination of strategies: adapting existing laws, fostering an environment for AI innovation, and exploring principles-based regulation. This is demonstrated through initiatives like the White House Executive Order on AI and state-level legislation focusing on specific areas such as employment decisions and consumer protections. Like Europe’s use of the GDPR for specific AI concerns, the U.S. leverages existing legal frameworks to regulate aspects of AI: rules concerning data protection and privacy (mostly governed at the state level), consumer protection, economic competition, and liability apply to activities and processes throughout the AI system’s lifecycle. The U.S. also fosters an environment conducive to AI development and use, and it is exploring principles-based regulation.
In a previous article, we described the recent history of “ruling by executive orders” over the last three U.S. administrations. Under the Obama Administration, AI policy began with a 2016 report emphasizing AI’s promise for societal benefits and economic growth while urging investments in talent and fairness; the report, however, underestimated AI’s rapid progress. The Trump Administration focused on AI’s role in U.S. economic and national security, enacting policies to boost AI research funding, establish institutes, and foster global alliances. It framed AI as a tool for efficiency and global leadership. The Biden Administration introduced the AI Bill of Rights and the National Institute of Standards and Technology (NIST) Risk Management Framework to address AI ethics and risks, culminating in Executive Order 14110 - the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence - which prioritized equity, privacy, and competition but lacked enforcement mechanisms. A common issue with executive orders is that they may lack enforceability and can be reversed when administrations change; in most cases, they do not carry the force of law. An example of a standards-based approach with technical oversight is the U.S. Artificial Intelligence Safety Institute (AISI), which advances AI safety science, practices, and adoption; develops guidelines, tools, and standards for AI measurement and evaluation; conducts safety assessments of AI models; facilitates collaboration between stakeholders; and contributes to international AI safety efforts. AISI resides under NIST and aims to promote responsible AI innovation while mitigating risks.
People’s Republic of China (PRC)
China’s stated goal is to be a global leader in AI by 2030. Its regulatory activity reflects a balancing act between the desire for innovation and the need to ensure that the development and use of AI is safe, ethical, and “reflects Socialist Core Values.” While Europe and the U.S. are developing comprehensive frameworks like the EU AI Act and exploring principles-based regulation, respectively, China appears to focus on a compliance-driven, standards-based approach, particularly concerning data governance and security.
The Cyberspace Administration of China (CAC) released draft measures for Generative AI in April 2023. These measures emphasized a strict regulatory compliance framework, particularly regarding the data used in AI development. Providers must undergo a CAC-approved security assessment before offering services using Generative AI products, although in practice this has not yet been strictly enforced. The data used to train AI models must comply with China’s Cybersecurity Law, ensuring it is obtained legally and does not infringe on intellectual property rights.
In May 2024, China’s National Information Security Standardization Technical Committee (NISSTC) released draft regulations outlining cybersecurity measures for Generative AI services. These regulations, open for public comment, address the security of training data, AI models, and overall service provision. They define “harmful” data as including content violating socialist values, promoting violence or obscenity, or infringing on legal rights. The draft also details security requirements across the AI model lifecycle, from training to the monitoring of subsequent model updates, and proposes various safety measures for service providers to protect users, particularly minors. Compliance will likely increase costs for providers but should also foster user trust and responsible AI development. The effort is an attempt to create a unified technical standard for security and safety measures.
The CAC and NISSTC are distinct entities with complementary roles in shaping the country’s cybersecurity and information governance landscape. The CAC serves as the primary regulator and policymaker for China’s Internet and data governance, including cybersecurity, data privacy, and AI. It enforces and oversees compliance with the law and manages incidents in the digital and cyberspace domains. The CAC often mandates or references NISSTC-developed standards in its regulatory enforcement. The NISSTC focuses on developing technical standards for information security, including data protection, cybersecurity, and AI governance. It is primarily a standardization body. NISSTC’s standards often serve as the technical foundation for laws, regulations, and enforcement in cybersecurity and data governance.
China’s strategy aligns with several of UNESCO’s regulatory approaches but primarily focuses on two. The first is the standards-based approach with a compliance focus: China has been developing specific standards for AI governance, particularly concerning data security and algorithmic recommendations. The second is the adaptation of existing laws: China leverages its existing legal framework, such as the Cybersecurity Law, to regulate AI, with the Generative AI measures requiring that training data comply with that law.
China’s AI Safety Governance Framework, released in September 2024 to implement the Global AI Governance Initiative, adopts both risk-based and principles-based approaches when outlining the principles for AI safety governance, though it is more prescriptive than purely principles-based schemes. Unlike the EU AI Act, which assigns AI systems to four risk levels with specific regulations, the framework identifies risk areas without evaluating their severity. It emphasizes targeted risk mitigation through technological measures like improving data quality and reliability, and it requires that sensitive or critical data in high-risk fields comply with strict privacy laws. The framework highlights adaptive AI governance through stakeholder collaboration, tiered risk management, traceability, ethical standards, and global alignment, and it prioritizes transparency, safety, education, and cross-border cooperation to address evolving AI challenges. The Global AI Governance Initiative itself is a Chinese proposal to establish a framework for international cooperation on governing AI development. It advocates a people-centered approach, mutual respect, and equality among nations in developing and using AI, with a focus on ensuring AI aligns with human values and benefits all of humanity, and it calls for discussions within the UN framework to establish an international institution to oversee AI governance.
The CAC measures and the AI Safety Governance Framework are interconnected. The measures, which have been active since August 2023, serve as an early, specific application of governance principles that the framework later formalized and expanded, particularly in their shared focus on content safety and risk assessment. While the CAC measures provided detailed guidelines for Generative AI providers to meet content and cybersecurity requirements, the framework broadens these efforts within a tiered, risk-based governance system. This integration ensures that higher-risk applications like Generative AI receive enhanced oversight. At the same time, the framework guides the ongoing revision and enforcement of the CAC measures, maintaining consistency with China’s broader AI governance goals. China’s regulatory approach is characterized by a step-by-step, dynamic strategy. This allows for agility in addressing new and complex risks while aiming to protect users and the government from harm. The country’s centralized approach to AI development and firm stance on regulatory oversight distinguish it from other nations’ strategies.
During our congressional interviews, a recurring theme of American exceptionalism emerged. While interviewees expressed some deference to and awareness of initiatives in other jurisdictions, such as the EU or the UK, their acknowledgment was largely superficial - more a recognition that these efforts exist than a deep understanding of their intricacies. Interviewees frequently depicted China as a counterpoint in geopolitical discourse rather than recognizing its thought leadership. They emphasized that any actions undertaken must be carefully tailored to the unique context of the United States, considering the dynamics of Congress and the challenges of legislating within the U.S. framework.
A similar sentiment was apparent in interviews with tech industry leaders. Business leaders closely monitor the EU AI Act as a template for future regulation, expecting a GDPR-like influence on the U.S. - the “Brussels Effect” that started with data privacy.
A national security example
In January 2025, the Bureau of Industry and Security of the U.S. Department of Commerce issued an interim final rule (IFR) to enhance export controls on advanced computing integrated circuits and impose new controls on AI model weights to protect U.S. national security. An IFR is a regulatory rule issued by a U.S. government agency that takes immediate effect without first going through the typical notice-and-comment period. The goal of this IFR was to prevent the technology from falling into the hands of adversaries, primarily China, and to incentivize the purchase of American products from American companies.
The introduction of the rule followed a month of statements about a significant increase in AI capabilities and the achievement of a new technical milestone, bringing us closer to the mythical Artificial General Intelligence (AGI – machine intelligence that can understand or learn any intellectual task a human being can). The ARC Prize team – a group of scientists and developers who created an AI benchmark to measure progress toward AGI – highlighted that the latest OpenAI model represented “a surprising and important step-function increase in AI capabilities, showing novel task adaptation ability never seen before in the GPT-family models.” Sam Altman, CEO of OpenAI, mused in his year-end reflections: “We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies. […] We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word.” Dario Amodei (CEO of Anthropic) and Matt Pottinger (former Deputy National Security Advisor of the United States during the first Trump administration) stressed the importance of the U.S. leading the world in artificial intelligence to preserve national security. They highlighted that “AI will likely become the most powerful and strategic technology in history. By 2027, AI developed by frontier labs will likely be smarter than Nobel Prize winners across most fields of science and engineering, designing new weapons or curing diseases. […] The nations that are first to build powerful AI systems will gain a strategic advantage over its development. […] Export controls, which ban shipments to China of the high-end chips needed to train advanced AI models, have been a valuable tool in slowing China’s AI development.” These developments set the stage for the policy initiative to preserve America’s advantage in AI over rivals such as China.
The announcement triggered mixed reactions. Industry associations complained about the lack of stakeholder engagement, and NVIDIA expressed concerns about market interference. NVIDIA’s reaction, however, needs to be put in perspective: the company currently operates in a supply-constrained environment, with demand for its products far exceeding its production capacity. If supply catches up with demand, it will need new markets, and any trade restrictions are not in its interest.
What happens next is hard to predict. The new administration can decide to reverse the rule or use it as a negotiating tool, akin to the strategy used with tariffs.
Immediately after the rule’s announcement, the Biden administration issued a companion executive order focused on AI infrastructure. Successful implementation of the Framework for Artificial Intelligence Diffusion will increase the demand for U.S. AI technology, further stressing the country’s energy supply. The order recognizes the substantial energy needs of AI infrastructure and seeks to leverage this opportunity to advance American leadership in clean energy technologies. It addresses topics discussed in “The second AI triad” article, such as land availability, environmental impact, permitting reform, energy transmission infrastructure, consumer prices, and labor implications.