Blog Post

AI/AGI Regulation in the EU and China and the Proposed Framework for Global AI Governance

Author(s): Priyasha Sharma, UCL, The Bartlett, Institute for Global Prosperity

Introduction

The One World Trust has been working on AI governance for some years now, arguing for the need for effective, timely, and inclusive global governance of artificial intelligence. As a master's student at UCL, I took the opportunity to collaborate with the One World Trust through the Community Research Initiative and to pursue this research question as my final dissertation. The organisation was particularly interested in analysing two major examples of AI regulation, seeking to understand how far their thinking converges and to identify clearly where they differ.

The advent of artificial intelligence (AI) and artificial general intelligence (AGI) has catalysed transformative changes across industries, economies, and governance systems, presenting a complex array of opportunities and risks. While AI systems have the potential to revolutionise healthcare, education, and public administration, they also pose significant ethical, legal, and societal challenges, ranging from algorithmic bias to privacy violations and threats to democratic institutions. Against this backdrop, the European Union (EU) and China have emerged as pioneers in AI governance, albeit with fundamentally different approaches rooted in their distinct socio-political and economic priorities. The EU has prioritised a human-centric and ethical approach, embedding principles such as transparency, accountability, and individual privacy into its regulatory frameworks (European Commission, 2024; European Commission, 2016). Conversely, China’s strategy emphasises state-led innovation and national security, reflecting a pragmatic approach that seeks to harness AI as a tool for economic growth and geopolitical influence (State Council of China, 2023; National People’s Congress, 2017).

This comparative analysis delves into the regulatory frameworks of the EU and China, exploring the extent of their convergence and divergence. It aims to highlight the implications of these differences for global AI governance, particularly in fostering innovation while safeguarding human rights and addressing systemic risks. To achieve this, the research employed a rigorous deductive thematic analysis, systematically evaluating 15 regulatory documents—5 from the EU and 10 from China. These documents, listed comprehensively in the Appendix (Table 1: List of Documents Analysed), include foundational texts such as the EU’s Artificial Intelligence Act and GDPR, and China’s Model Artificial Intelligence Law and National AI Development Plan. The methodological process, illustrated in Figure 1: Data Analysis Process, ensured a structured approach to identifying key themes such as transparency, innovation, and accountability. This analysis not only provides a comparative lens but also lays the groundwork for a proposed global governance framework, designed to bridge the strengths and address the shortcomings of both systems in managing the multifaceted impacts of AI.

Figure 1: Data Analysis Process based on Braun & Clarke’s (2006) Guidelines. SOURCE: Author

Comparative Analysis: EU vs. China

Key Similarities

Despite their divergent approaches, the EU and China share several core principles in AI regulation:

  1. Transparency and Explainability: Both frameworks mandate transparency in AI operations. The EU’s AI Act requires providers to ensure AI systems are explainable, particularly high-risk ones, while China’s Model Artificial Intelligence Law (MAIL) incorporates similar requirements for disclosing AI decision-making processes (European Union, 2024; China State Council, 2023). Table 2: List of Codes and Themes Created, found in the Full Research Report, illustrates how transparency emerged as a central theme during the analysis. These measures demonstrate a shared commitment to mitigating opacity in AI systems, though the specific mechanisms vary significantly between the two regions. While the EU mandates comprehensive auditing and public disclosures for high-risk AI systems under its Artificial Intelligence Act, China’s approach focuses on state-driven content moderation and stringent licensing requirements for systems on its Negative List. This distinction underscores how their respective socio-political priorities shape the practical implementation of transparency measures in AI governance.

  2. Accountability Mechanisms: Accountability is a cornerstone in both regions. The EU’s emphasis on human oversight and quality management systems aligns with China’s requirements for full-lifecycle risk management and authorised representatives for compliance (European Union, 2024; China State Council, 2023). Both frameworks underscore the importance of defining roles and responsibilities to ensure that AI systems operate within prescribed ethical and legal boundaries. In the EU framework, roles include compliance officers and accountability measures that enforce GDPR-related requirements, such as data protection impact assessments. Conversely, China emphasises state-appointed representatives and comprehensive oversight mechanisms tied to its licensing requirements, particularly for AI applications listed under its Negative List. This dual focus reflects distinct governance priorities: the EU leans towards decentralised oversight and stakeholder engagement, while China centralises control to align AI development with national objectives. Table 3: Comparison Analysis of EU and China AI Regulations, found in the Full Research Report, provides a comprehensive comparison of these accountability mechanisms.

  3. Support for Innovation: Regulatory sandboxes feature prominently in both systems, enabling controlled environments for testing AI technologies. This shared emphasis highlights a global recognition of the need to balance safety with innovation, particularly in advancing AI technologies that drive societal and economic progress. For instance, the EU’s regulatory sandboxes prioritise support for small and medium-sized enterprises (SMEs) by granting them priority access and reducing administrative burdens, whereas China’s state-led initiatives accelerate deployment in critical sectors such as healthcare and military applications (Floridi, 2018). These approaches demonstrate how both regions navigate the interplay between fostering technological advancement and addressing the risks associated with AI.

Key Differences 

The differences between the two frameworks reflect deeper ideological divides. The EU’s frameworks are rooted in democratic principles prioritising individual rights, transparency, and human dignity, whereas China’s governance aligns with utilitarian state objectives, emphasising national security, social stability, and rapid technological advancement. These ideological foundations manifest in regulatory outcomes, such as the EU’s insistence on robust privacy protections under the GDPR, and China’s prioritisation of licensing mechanisms for AI systems deemed critical to national interests. Table 3: Comparison Analysis of EU and China AI Regulations, which can be found in the Full Research Report, provides a detailed breakdown of these divergences.

  1. Governance Philosophy: The EU prioritises a decentralised, multi-stakeholder governance model that emphasises inclusivity and individual rights. This is supported by the European Commission’s Ethics Guidelines for Trustworthy AI (2019), which stress transparency, fairness, and public deliberation as cornerstones of effective governance. By contrast, China centralises AI governance under state control, ensuring alignment with national security and developmental priorities, as outlined in the Model Artificial Intelligence Law (2024). This divergence stems from their differing political structures—the EU’s democratic orientation promotes checks and balances (Floridi, 2018), whereas China’s single-party framework fosters a top-down governance model prioritising state interests (Jobin, Ienca, & Vayena, 2019).

  2. Implementation of Safety Measures: While both regions enforce safety protocols, the EU’s approach is centred on democratic accountability, emphasising public oversight and the involvement of civil society. The Artificial Intelligence Act (2024) mandates a risk-based classification system and requires human oversight for high-risk applications to ensure safety and ethical alignment. China’s safety measures, detailed in the Model Artificial Intelligence Law (2024), focus heavily on national security through licensing systems and rigorous content moderation to safeguard social stability (State Council of China, 2024).

  3. Public Engagement: Public deliberation in the EU involves consulting various stakeholders, including civil society and industry, to ensure AI systems align with ethical values and societal needs. Stilgoe, Owen, and Macnaghten (2013) highlight the importance of inclusivity in governance, a principle echoed in the EU’s policies. In contrast, China’s engagement strategies are less consultative and more directive, with public inputs largely filtered through state mechanisms (Veale & Binns, 2017). This reflects a broader ideological underpinning where state-led decision-making supersedes grassroots participation.

The EU’s frameworks, rooted in democratic ideals, prioritise individual rights and transparency. In contrast, China’s pragmatic model emphasises national interests and social stability, reflecting distinct ideological and cultural priorities.

Unexpected Findings and Their Implications

Several unexpected findings emerged from the comparative analysis:

  1. Convergence in Safety Protocols: Both frameworks stress robust safety measures, such as risk assessments, internal management systems, and post-market surveillance. While the EU emphasises transparency and democratic accountability, China’s measures focus on protecting social stability and national security (Veale & Binns, 2017). These convergences in safety protocols are depicted below in Figure 2: Map of Thematic Codes Created.

  2. The Role of Licensing: China’s Negative List system, which requires licensing for high-risk applications, provides a unique approach to controlling potentially harmful technologies. This contrasts with the EU’s broader risk classification but offers insights into alternative oversight mechanisms that could be adapted for international governance models (Brundage et al., 2018).

  3. Integration of Public Oversight: China’s emphasis on content moderation—evidenced by regulations such as the Provisions on the Administration of Deep Synthesis Internet Information Services—is unparalleled in its rigour. While the EU also incorporates public deliberation, its focus is more consultative, highlighting the importance of adapting governance to socio-political contexts. These differing approaches provide a broader spectrum of strategies that can inform a global governance framework (Jobin, Ienca, & Vayena, 2019).
Figure 2: Map of Thematic Codes Created and their Inter-Relationships. SOURCE: Author

Proposed Framework for Global AI Governance

The proposed framework (Figure 3: Proposed Global Governance Framework) integrates the strengths of both systems into a cohesive model addressing key challenges in global AI governance, including ensuring ethical accountability, fostering innovation without compromising safety, and balancing national priorities with global collaboration. The framework’s components are outlined below as a narrative that weaves together international, national, and industry-level approaches.

Figure 3: Proposed AI/AGI Global Governance Framework

At its core, the framework synthesises the EU’s prioritisation of ethical accountability, as exemplified by its Artificial Intelligence Act and GDPR, with China’s focus on innovation scalability, demonstrated through its National AI Development Plan and licensing systems for high-risk AI applications. These approaches highlight the EU’s emphasis on safeguarding individual rights and China’s strategic focus on rapid technological development to support national priorities. Global entities such as the United Nations and the OECD would lead international governance efforts, fostering cooperation on cross-border standards for data sharing and ethical AI guidelines. These organisations would also help create an adaptive governance structure, reflecting the evolving needs of technology.

At the national level, the framework allows for flexibility, enabling nations to adopt context-specific models based on their unique socio-political and economic contexts. For instance, the EU’s risk-based classification system provides a clear mechanism for regulating high-risk systems while fostering innovation in low-risk areas. In contrast, China’s licensing approach for AI technologies listed on the “Negative List” demonstrates a stricter, top-down method aimed at managing risks associated with national security and social stability. These approaches, although distinct, share a common goal of mitigating risks without compromising on technological advancement.

The industry-level strategy emphasises practical implementation by mandating increased transparency, explainability, and regular auditing of AI systems. Enforcement of these mandates relies on a combination of regulatory oversight and self-regulatory measures. Compliance audits conducted by independent organisations ensure adherence to standards such as those outlined in the EU’s Artificial Intelligence Act. In China, state-led inspections and penalties for non-compliance provide a robust enforcement mechanism. Monitoring is further supported by real-time data reporting requirements and the use of compliance dashboards. Here, regulatory sandboxes play a critical role in balancing innovation with risk control, while open-source models foster global collaboration.

For further details, readers can refer to the Full Research Report, which elaborates on the nuanced elements of the analysis and their broader implications.

Appendix

Table 1: List of Documents Analysed

References

  1. Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., ... & Amodei, D. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. arXiv preprint arXiv:1802.07228. Retrieved from https://arxiv.org/abs/1802.07228
  2. European Union. (2024). Artificial Intelligence Act. Brussels: European Union Publications. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024RAI001
  3. European Commission. (2019). Ethics Guidelines for Trustworthy AI. Retrieved from https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
  4. State Council of China. (2024). Model Artificial Intelligence Law. Beijing: State Council Publications. Retrieved from https://www.chinalawtranslate.com/en/ai-draft/
  5. Floridi, L. (2018). Soft ethics and the governance of AI. Philosophy & Technology, 31(1), 1-8. Retrieved from https://doi.org/10.1007/s13347-017-0263-2 
  6. Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. Retrieved from https://doi.org/10.1038/s42256-019-0088-2 
  7. Veale, M., & Binns, R. (2017). Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society, 4(2), 1-17. Retrieved from https://doi.org/10.1177/2053951717744925 
  8. National People’s Congress. (2021). The PRC Personal Information Protection Law. Retrieved from https://www.china-briefing.com/news/the-prc-personal-information-protection-law-final-a-full-translation/
  9. Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568-1580. Retrieved from https://doi.org/10.1016/j.respol.2013.05.008 

EU Documents:

  1. European Union. (2024, June). EU Artificial Intelligence Act. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689 
  2. European Union. (2022, October). Digital Services Act. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32022R2065 
  3. European Union. (2022, September). Digital Markets Act. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32022R1925 
  4. European Commission. (2019, April). Ethics Guidelines for Trustworthy AI. Retrieved from https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai 
  5. European Union. (2016, April). General Data Protection Regulation. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679 

China Documents:

  1. National People's Congress. (2024, April). Model Artificial Intelligence Law (MAIL) v.2.0. Retrieved from https://www.chinalawtranslate.com/en/ai-draft/ 
  2. State Council of China. (2024, May). Artificial Intelligence Law of the People's Republic of China. Retrieved from https://www.chinalawtranslate.com/en/china-ai-law-draft/ 
  3. Cyberspace Administration of China. (2024, April). Basic Safety Requirements for Generative Artificial Intelligence Services. Retrieved from https://www.cset.georgetown.edu/publication/china-safety-requirements-for-generative-ai-final/ 
  4. Standardization Administration of China. (2024, January). Guidelines for the Construction of a Comprehensive Standardization System for the National Artificial Intelligence Industry. Retrieved from https://www.cset.georgetown.edu/publication/china-ai-standards-system-guidelines-draft/ 
  5. State Council of China. (2022, November). Provisions on the Administration of Deep Synthesis Internet Information Services. Retrieved from https://www.chinalawtranslate.com/en/deep-synthesis/ 
  6. State Council of China. (2022, March). Opinions on Strengthening the Management of Science and Technology Ethics. Retrieved from https://www.gov.cn/gongbao/content/2022/content_5683838.htm 
  7. Cyberspace Administration of China. (2021, December). Provisions on the Management of Algorithmic Recommendations in Internet Information Services. Retrieved from https://www.chinalawtranslate.com/en/algorithms/ 
  8. Beijing Academy of Artificial Intelligence. (2021, September). Ethical Norms for New Generation Artificial Intelligence. Retrieved from https://cset.georgetown.edu/wp-content/uploads/t0400_AI_ethical_norms_EN.pdf 
  9. Standardization Administration of China. (2021, July). Artificial Intelligence Standardization White Paper (2021 Edition). Retrieved from https://www.cset.georgetown.edu/publication/artificial-intelligence-standardization-white-paper-2021-edition/ 
  10. National People's Congress. (2021, November). The PRC Personal Information Protection Law (PIPL). Retrieved from https://www.china-briefing.com/news/the-prc-personal-information-protection-law-final-a-full-translation/ 
  11. Chinese Expert Group. (2019, June). Governance Principles for a New Generation of Artificial Intelligence: Develop Responsible Artificial Intelligence. Retrieved from https://digichina.stanford.edu/work/translation-chinese-expert-group-offers-governance-principles-for-responsible-ai/ 
  12. Beijing Academy of Artificial Intelligence. (2019). Beijing AI Principles. Retrieved from https://www.baai.ac.cn/blog/beijing-ai-principles