A Comparison Of National Artificial Intelligence Regulations

Margaux Schexnider

Oct 09, 2025

As countries around the world race to become the next artificial intelligence (AI) superpower, world leaders are attempting to regulate technologies that only a minority of people fully comprehend. Several national legal instruments have already emerged, with the EU AI Act the first among them.

Other countries and international organizations are also tackling these issues. National regulation tends to be faster to publish because it is more focused: each country can emphasize the aspects it deems most important, whether that is innovation, transparency, risk tolerance, ethical principles, or something else, allowing regulation to adapt to specific goals and needs. For this research, the main recurring objectives of countries that have released regulations, or are in the process of releasing them, have been broken down into metrics, and the currently published regulations examined for their individual strengths and weaknesses.

This research aims to break down these regulations, explore their objectives, and compare them based on the following metrics:

  1. Scope and coverage

  2. Transparency and accountability

  3. Ethical principles

  4. Enforcement and penalties

  5. Adaptability and flexibility

  6. Support for innovation

  7. Monitoring and reporting


The first regulation to be published was the European Union's Artificial Intelligence Act (EU AI Act), which was formally adopted in March 2024 and entered into force on August 1, 2024. While some EU countries, like France, have recently enacted national AI-specific laws, the EU AI Act is recognized as the first comprehensive, horizontal regulatory framework for AI.

Major national AI regulations now exist in many jurisdictions, each focused on local priorities in ethics, innovation, security, and human rights. With all of these regulations coming out, the question becomes: how do they compare?



Country/Region

Main Regulation

Summary

Strengths

Weaknesses

EU

EU AI Act

Risk-based framework classifying AI by risk level, strict rules for high-risk AI, transparency and conformity requirements, and significant penalties for non-compliance.

Comprehensive, strong protection of rights, clear guidance for business.

Complex, may hinder innovation, slow clarity on specific applications.

USA

Executive orders, state laws

Sector-specific, mostly voluntary guidelines (healthcare, finance, etc.), with ongoing federal and state bills; focus on industry-led standards.

Flexible and innovation-friendly, lets industries set standards.

Patchy coverage, regulatory gaps, inconsistent protections across states.

China

AI Security Law, Data Governance

Centralized state-led approach prioritizing national security, data localization, algorithm regulation, links to social credit system.

Fast rulemaking, strong government support, encourages domestic AI.

Risks of overreach, surveillance, weak individual privacy rights.

Brazil

Bill 2338/2023

Human-rights focus, bans excessive-risk AI, civil liability regime, regulatory authority, and reporting on incidents.

Robust civil and data rights, international cooperation, supports innovation.

Implementation challenges, possible regulatory fragmentation.

India

Principles for Responsible AI, National Strategy.

Ethical roadmap, capacity building, responsible AI design; ongoing development of operational standards.

Focus on ethical deployment, incentives for responsible innovation.

Lacks binding law, slow regulatory progress, mostly guidelines.

Indonesia

Stranas KA, Ministerial Guidelines

National AI priorities (health, education, smart cities), ethical guidelines, pending new regulation.

Tailored for national needs, risk mitigation in key sectors. 

Fragmented, weak enforcement, real regulations pending. 

Nigeria

Data Protection Act, AI Guidelines

AI governed by data, cybersecurity, and consumer protection laws, specific sectoral guidance and ongoing legislation.

Multi-sector coverage, incentives for responsible use, protection against bias.

Overlap and confusion between laws, main AI law still pending. 

Pakistan

Regulation of Artificial Intelligence Act (proposed) 

Established oversight commission, penalizes unethical use, strengthens consumer rights, aligns with international standards.

Strong accountability, focus on social welfare, commitment to transparency.

Regulation not yet enacted, possible bureaucratic delays. 

France

National Institute for Evaluation & Safety of AI (INESIA)

Specialized evaluation authority, supports AI safety, aligns with EU rules.

Focused expertise, bridges national/EU priorities.

Still developing framework, limited scope outside EU AI Act.

Table 1: Breakdown of National AI Legal Instruments


Findings from Table 1:

Comprehensive versus Sectoral:

The EU's AI Act is the most comprehensive, covering all types of AI, with clear risk categories, and is enforced by a dedicated regulatory office. This offers transparency and rights protection but slows innovation and creates uncertainty for businesses.

The US champions a sector-by-sector approach, mainly relying on nonbinding industry standards. While this boosts rapid progress, it leaves regulatory gaps and inconsistent consumer protection.

Centralized vs. Decentralized:

China’s central approach supports rapid deployment but raises privacy and surveillance concerns.

Countries like Nigeria and Pakistan use a patchwork of existing sector laws and new regulatory proposals, generating confusion and slow adoption of uniform standards. 

Innovation vs Ethics: 

Brazil’s new bill and India’s strategy highlight civil rights, ethics, and human oversight, balancing growth and responsible use. 

Indonesia and Nigeria prioritize AI’s positive economic and social impacts, but face fragmentation and enforcement gaps as regulations evolve. 

Enforcement Mechanisms:

EU penalties are severe (up to 7% of annual global turnover or 35 million euros), ensuring compliance but risking deterring business investment. 

Elsewhere, penalties and oversight bodies are still being established, as in France with INESIA, or are sector-specific. 

Global Observations:

Most countries have:

  1. Some form of legislation, policy, or guidance written, proposed, or enforced. 

  2. Real gaps between proposed and enforced laws, particularly for AI ethics, human rights, and consumer protections. 

  3. A trend toward harmonizing with major frameworks (EU, OECD, UN), though differences persist in real-world impact and enforcement.


To compare these regulations further, each was assessed against the seven metrics listed above, which were identified based on their recurrence in discussions of emerging technologies. The results can be seen below in Table 2.

Table 2 compares the following frameworks: EU (AI Act), US (AI Regulatory Approach), China (AI Regulations), Brazil (Bill 2338/2023), India (National Strategy), Indonesia (Stranas KA, PDP Law), Nigeria (Data Protection Act & Guidelines), Pakistan (Proposed AI Law), and France (INESIA, EU-aligned).

Scope and Coverage

  • EU: Comprehensive, risk-based; covers all AI applications, prohibits unacceptable uses, strictly regulates high-risk systems.
  • US: Fragmented, largely sector-based (health, finance, labor); guidance via federal agencies and state laws, variable coverage.
  • China: Broad and expanding; prioritizes algorithmic, generative, and recommendation systems with sector-specific rules.
  • Brazil: Comprehensive, risk-based; covers wide sectors (health, finance, public administration); excludes some social media algorithms.
  • India: Framework and sectoral strategies for health, finance, and law; focuses on responsible AI and privacy, mostly as guidelines.
  • Indonesia: No central AI law; sectoral laws (data, electronic transactions) and a national strategy on AI use and priorities.
  • Nigeria: Multi-sector coverage via general data, consumer, and cybersecurity law; specific AI guidelines in progress.
  • Pakistan: Framework proposed; aims for broad coverage and oversight for ethical/safe AI across sectors; not yet enacted.
  • France: Specialized agency evaluates AI safety; aligns with the EU AI Act for full scope; sectoral details developing.

Transparency and Accountability

  • EU: Mandatory documentation, explainability, and human oversight for high-risk AI; conformity assessments required.
  • US: Varies by sector and state; voluntary standards (NIST), sporadic documentation and oversight requirements, some transparency in state law.
  • China: Algorithm registry filings, disclosure of training and operation, required security and content assessments for public-opinion AI.
  • Brazil: Transparency obligations for high-risk AI, public reporting, impact assessments; rights to explanation and redress.
  • India: Emphasis on explainable, responsible AI; voluntary standards and guiding principles; weak enforcement.
  • Indonesia: Ethical guidelines and ministerial codes promote transparency and accountability; mostly voluntary, not regulatory.
  • Nigeria: Requires reporting/auditing for data handling; sectoral guidelines advise transparency; enforcement gaps.
  • Pakistan: Mandates oversight and consumer rights; establishes a regulator for compliance and incident reporting.
  • France: INESIA-led audits for high-risk AI, transparency/review requirements, but core approach set by the EU AI Act.

Ethical Principles

  • EU: Embedded: fairness, non-discrimination, data protection, human rights, and safety prioritized for all risk levels.
  • US: Principles-driven but unenforceable at the federal level; sectoral agencies may implement fairness, safety, and anti-discrimination.
  • China: Content moderation, ideological alignment, security, and social harmony; ethics codes for compliance, sectoral ethical guidelines.
  • Brazil: Fairness, non-discrimination, safety, privacy, human oversight, and rights protection central to the bill.
  • India: "AI for All" principle; sector equity, privacy, accountability, inclusion, and data ethics adopted.
  • Indonesia: AI ethics codes in finance/tech and national strategy pillars for responsible, beneficial AI.
  • Nigeria: Ethics guidelines for bias/fairness and risk mitigation; data protection driven by existing law.
  • Pakistan: Embeds human rights, transparency, contestability, and fairness; requires ethical conduct by AI developers and users.
  • France: Ethical evaluation embedded; harmonized with the EU's fairness, human rights, and safety values.

Enforcement and Penalties

  • EU: Fines up to €35M or 7% of global revenue, audits, market ban for non-compliance, robust regulatory authority (EU Commission).
  • US: Enforcement mainly through existing laws (consumer, antitrust, discrimination); federal penalties limited, stronger at the state level in some sectors.
  • China: Regulatory agencies (CAC); non-compliance means business bans, service suspensions, criminal liability; strong public oversight.
  • Brazil: Up to BRL 50M penalty, mandatory incident reporting, new authority for AI oversight, strong liability regime.
  • India: No central authority or penalties; enforced via sector law, weak compliance mechanisms.
  • Indonesia: Penalties under the PDP Law for privacy/data breaches; AI-specific enforcement lacking, ethics codes advisory.
  • Nigeria: Fines/audits for violations of data law; weak AI-specific enforcement pending new law.
  • Pakistan: Penalizes unethical/unsafe AI; regulator empowered for suspensions and fines, but not yet in force.
  • France: Fines/audits for non-compliance; strong enforcement via the EU Commission and EU law.

Adaptability and Flexibility

  • EU: Periodic review and amendment; keeps pace with fast-moving technology; risk-based approach allows flexible yet strict adaptation.
  • US: Highly adaptable, market-driven; changes quickly at the state level and is not locked into statute, but lacks clear national standards.
  • China: Rapid regulation cycles; new and draft rules are frequent, allowing targeted updates and experimental approaches in specific sectors.
  • Brazil: Bill allows sector-specific rules, review by an expert group, and ties to international standards; can evolve with technology.
  • India: Flexible, policy-based; adapts to emerging risks and technologies via evolving guidelines.
  • Indonesia: National strategy adaptable but regulatory gaps remain; ongoing legislative development toward more binding regulation.
  • Nigeria: Regulatory overlaps create flexibility but also patchiness; new AI bill to address gaps.
  • Pakistan: Designed for evolution; oversight body to review and update rules regularly.
  • France: National agency supports adaptation to new risks; strong flexibility expected as EU alignment progresses.

Support for Innovation

  • EU: Regulatory sandboxes, exemptions for R&D, risk-proportionate compliance burden for low-risk AI; some concern about heavy high-risk compliance.
  • US: Innovation prioritized, especially following recent executive orders; voluntary frameworks and rapid infrastructure rollout.
  • China: Strong focus on domestic innovation and support for key sectors, but strict controls on information and foreign participation.
  • Brazil: Strong safeguards for innovation, public investment in R&D; risk-based regulation supports start-ups.
  • India: Sandboxes for testing, grants/tax benefits for ethical AI, upskilling programs, collaborative policymaking.
  • Indonesia: National priorities support health, education, and smart cities; innovation promoted via ethical guidelines and sector growth.
  • Nigeria: Responsible innovation incentivized, focus on reducing bias; new AI-focused incentives pending.
  • Pakistan: Encourages responsible AI growth; focus on social welfare and ethical technology without stifling progress.
  • France: R&D support, harmonized with EU innovation incentives; sector-specific programs for safe innovation.

Monitoring and Reporting

  • EU: Mandatory incident reporting, annual conformity assessment reports, real-time monitoring and logs for high-risk systems.
  • US: Varies; risk assessments sometimes required by the NIST framework; inconsistent monitoring/reporting, sporadic at state/sectoral level.
  • China: Security assessments, algorithm/database filing, routine reports, audits; government access to monitoring data for approved systems.
  • Brazil: Required incident reporting, impact assessments, annual reviews for high-risk systems, safety tests.
  • India: Risk/impact assessments in guidelines for critical sectors; voluntary reporting, uneven monitoring.
  • Indonesia: Sectoral law (PDP, EIT) mandates some monitoring; AI-specific reporting not yet standardized but in development.
  • Nigeria: Sectoral monitoring for data and consumer protection; some domain-specific AI guides for audits.
  • Pakistan: Incident reporting required for high-impact AI; regulator empowered to oversee and audit systems.
  • France: Monitoring via INESIA for safety and impact, annual reviews/audits; consistent with EU requirements for high-risk AI.

Table 2: Comparison based on Metrics of AI National Regulations



Findings from Table 2

The EU AI Act excels in several categories: scope and coverage, transparency and accountability, enforcement and penalties, and monitoring and reporting. France's regulation excels in ethical principles, with deep integration of ethical evaluation and a dedicated oversight agency. The strongest for adaptability and flexibility is India, which uses evolving sectoral guidelines and policy frameworks, regularly updating its strategy in response to technological advancements. Brazil champions support for innovation, with a strong emphasis on research and development investment, risk-proportionate compliance, exemptions for start-ups, and incentives for responsible innovation.

Global Trends and Lessons: 

  • Harmonization is still lacking, leading to compliance burdens and barriers for multinational AI deployment and trade.

  • Risk-based regulation, ethical guidelines, and stronger enforcement are emerging as global best practices.

  • Policy-makers face the challenge of balancing innovation, ethics, security, and cross-border consistency, with international organizations urging greater cooperation on global standards.

  • Comprehensive frameworks like those of the EU and, increasingly, Brazil and France serve as templates for other nations, but the world remains far from a single consensus or universal coverage.

In summary, while the EU AI Act leads in most metrics, other countries and regions demonstrate strengths in flexibility, support for innovation, and ethical adaptation; yet global harmonization and full enforcement remain a work in progress.
