
A Curated Global Guide to AI Compliance: Navigating International AI Regulations

Artificial intelligence is rewriting the rules of technology development, from automating mundane tasks to driving breakthrough innovation. But as AI systems grow more sophisticated, so do the challenges of ensuring they operate responsibly. Every line of code, every training dataset, and every decision made by an AI model now comes with an important question: Does it comply with the law?

As AI adoption accelerates, governments worldwide are introducing regulations to keep pace with this powerful technology. From Brazil to South Korea to the United States, these laws aim to manage risks, protect users, and promote transparency. For organizations, this creates a complex regulatory maze where staying compliant requires more than just understanding one law; it demands navigating overlapping and sometimes conflicting frameworks.

This guide offers a curated overview of key AI regulations shaping the global compliance landscape. While it doesn't cover every single regulation out there, it's designed to highlight the most impactful laws and provide actionable insights for businesses.

The Rise of AI Governance: Why It Matters

Artificial intelligence has gone from an experimental technology to a ubiquitous presence, shaping everything from how we interact online to life-changing decisions in healthcare, finance, and beyond. But this rapid adoption hasn't been without its challenges. AI's unchecked potential has raised ethical, legal, and societal concerns that are no longer hypothetical.

High-profile incidents have underscored the urgent need for governance. From algorithms that unintentionally discriminate in hiring to AI-driven surveillance systems that threaten privacy, these examples have sparked global conversations about the responsible use of AI. Trust is the cornerstone of technology adoption, and without accountability, AI's promise can quickly turn into public distrust.

Governments are stepping up to address these challenges. The European Union has taken a leading role with its ambitious AI Act, introducing risk-based classifications and strict rules for high-impact AI systems. South Korea has implemented similar efforts through its Basic Act on AI, emphasizing trust and safety in critical AI applications. Even the United States, often considered hesitant to adopt comprehensive regulatory frameworks, is advancing state and federal laws to bring structure to AI development and deployment.

This wave of AI regulations matters because it sets the boundaries for innovation. Companies must now navigate a landscape where regulatory compliance is not just a checkbox; it's a strategic imperative. Those who proactively adapt to these frameworks can gain a competitive advantage, establishing themselves as trusted players in an increasingly scrutinized market. On the other hand, organizations that overlook these developments risk financial penalties, reputational damage, or even being excluded from markets altogether.

As AI governance matures, it becomes clear that regulatory compliance isn't just about following the law. It's about ensuring ethical, transparent, and responsible AI that benefits both businesses and society.

The Challenges of Compliance Across Borders

For organizations operating across borders, AI compliance can get even more complicated. What may be considered compliant in one region can easily violate regulations in another. This growing patchwork of laws and frameworks places significant burdens on companies wanting to use AI responsibly while remaining competitive globally.

One of the biggest challenges lies in conflicting requirements. For example, the EU's AI Act introduces a comprehensive risk classification system, requiring stringent impact assessments for high-risk applications. Meanwhile, Brazil's Proposed AI Bill (PL 2338/2023) focuses more on transparency and prohibited uses, creating differences in implementation priorities. For companies working across these regions, ensuring compliance means reconciling varied obligations without compromising operational efficiency.

The cost of compliance also continues to rise. Frequent audits, detailed impact assessments, and documentation requirements demand significant investments in both time and resources. Smaller organizations often face a steeper hill to climb, as they may lack the internal expertise or budget to meet regulatory demands. Even larger companies find themselves allocating substantial resources to stay ahead of evolving rules.

Uncertainty compounds the issue. Many regulations, such as the US Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312) or Texas TRAIGA, are still in the proposal stage. Businesses are left guessing how these laws will evolve and how to future-proof their compliance strategies.

Navigating this regulatory maze requires a deep understanding of individual laws and a holistic approach to compliance—exactly what the Modulos AI Governance Platform is built for. Companies must think globally while acting locally, adapting their practices to meet regional requirements while maintaining principles of transparency, fairness, and accountability.

Key AI Regulations Around the World

As AI adoption grows, governments are stepping in to ensure this transformative technology is developed and deployed responsibly. Below is a curated list of some of the most impactful AI regulations shaping compliance today. While not exhaustive, these laws represent the diversity of approaches to governing AI across the globe, from risk classification frameworks to transparency mandates.

The EU AI Act

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence, following a risk-based approach to ensure safety, transparency, and accountability. Like GDPR did for data privacy, the EU AI Act is expected to shape global AI compliance standards.

Key Elements of the EU AI Act

Risk-Based Classification: AI systems are categorized into four levels—Unacceptable Risk (banned), High Risk (strictly regulated), Limited Risk (transparency obligations), and Minimal Risk (no specific requirements). A code sketch of this tiering follows these key elements.

Compliance Requirements: High-risk AI systems must adhere to strict obligations, including risk management, transparency, human oversight, and data governance.

Penalties: Non-compliance can result in fines of up to €35 million or 7% of global turnover, whichever is higher.
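
To make these elements concrete, here is a minimal Python sketch of how an internal AI inventory might record a system's tier and look up the corresponding obligations. The four tiers and the high-risk duties come from the Act itself; the example systems and every name in the code are illustrative assumptions, not legal determinations.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strictly regulated
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific requirements

# Illustrative examples only: real classification requires legal analysis
# of the Act's annexes, not a lookup table.
EXAMPLE_SYSTEMS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> list[str]:
    """Rough summary of obligations per tier, following the Act's structure."""
    return {
        RiskTier.UNACCEPTABLE: ["prohibited: do not deploy"],
        RiskTier.HIGH: ["risk management", "transparency",
                        "human oversight", "data governance"],
        RiskTier.LIMITED: ["disclose AI use to users"],
        RiskTier.MINIMAL: [],
    }[tier]

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.value} -> {obligations(tier)}")
```

A table like this can only record the outcome of a legal assessment; under the Act, the classification itself depends on a system's intended purpose and the annexes, not on a keyword match.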

The Act officially entered into force in August 2024, with phased implementation through 2027. Businesses operating in or interacting with the EU market must assess their AI systems and ensure compliance.

Learn More: Explore the full EU AI Act Guide here

Council of Europe Framework Convention on AI and Human Rights

Signed in September 2024 in Vilnius, this Convention establishes a comprehensive legal framework designed to ensure that all activities within the AI lifecycle adhere to fundamental human rights, democratic principles, and the rule of law. It applies to public authorities and private actors acting on their behalf, mandating that states implement graduated, context-specific measures to manage AI-related risks—ranging from potential discrimination and privacy breaches to threats against democratic processes.

Under the Convention, Parties are required to embed core principles such as transparency, accountability, and effective oversight throughout the design, development, deployment, and decommissioning of AI systems. This includes performing thorough risk and impact assessments, establishing accessible remedies for affected individuals, and ensuring continuous monitoring and international cooperation.

By aligning domestic legal frameworks with internationally recognized human rights standards, the Convention not only sets a global benchmark for responsible AI governance but also complements national regulatory efforts aimed at fostering safe innovation.

Switzerland, for example, plans to adopt the Convention into national law.

Brazil's Proposed AI Bill (PL 2338/2023)

Brazil's Proposed AI Regulation Bill (PL 2338/2023) aims to establish a comprehensive framework for the ethical and responsible development, deployment, and use of AI systems. Approved by the Senate in December 2024, it is currently under review by the Chamber of Deputies and is expected to take effect 12 months after publication, giving organizations a one-year grace period to comply.

The bill is built on four key pillars: risk-based governance, human rights protection, transparency, and innovation. Its objective is to ensure that AI systems align with democratic values, safeguard user rights, and promote responsible technological advancement in Brazil.

Who Does PL 2338/2023 Apply To?

The regulation covers any organization or individual that develops, deploys, or benefits from AI systems within Brazilian territory. This includes AI suppliers (the developers) and operators (those deploying AI in real-world applications), spanning the public, private, and nonprofit sectors. While personal or nonprofessional use of AI is exempt, micro and small businesses may benefit from simplified compliance obligations, which will be detailed in future regulations.

What Does the Law Require?

Risk-Based Classification

PL 2338/2023 introduces a three-tiered risk framework:

  • Excessive-Risk AI: Prohibited outright, these systems include applications such as subliminal manipulation, social scoring by public authorities, and exploitation of vulnerable individuals.
  • High-Risk AI: Systems used in critical domains like healthcare, education, justice, and infrastructure fall under this category. They require strict oversight, including impact assessments and continuous monitoring throughout their lifecycle.
  • Low-Risk AI: While subject to fewer restrictions, these systems must still adhere to principles of transparency, fairness, and accountability.

For high-risk systems, a mandatory AI Impact Assessment is required before deployment. This document evaluates potential harms, discrimination risks, and security vulnerabilities, ensuring proactive risk mitigation.
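
Teams that maintain these assessments as structured records rather than free-form documents find them easier to audit. Below is a minimal, hypothetical sketch of such a record in Python; the field names are our assumptions, since the bill prescribes what must be evaluated, not a file format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIImpactAssessment:
    """Illustrative pre-deployment assessment record for a high-risk
    AI system under PL 2338/2023 (hypothetical structure)."""
    system_name: str
    assessed_on: date
    intended_purpose: str
    potential_harms: list[str] = field(default_factory=list)
    discrimination_risks: list[str] = field(default_factory=list)
    security_vulnerabilities: list[str] = field(default_factory=list)
    mitigations: dict[str, str] = field(default_factory=dict)  # risk -> measure

    def unmitigated_risks(self) -> list[str]:
        """Identified risks that still lack a documented mitigation."""
        all_risks = (self.potential_harms + self.discrimination_risks
                     + self.security_vulnerabilities)
        return [r for r in all_risks if r not in self.mitigations]
```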

Transparency and Accountability

Transparency is a cornerstone of the bill. Organizations must disclose when users are interacting with AI and provide clear explanations of how decisions are made. This includes detailing the logic, data categories, and methodologies used in AI outputs when requested.

Human Oversight

To maintain accountability, the bill mandates human oversight for high-risk AI applications. Operators must be able to intervene in or override AI decisions, particularly when those decisions significantly affect individual rights.

Data Privacy and Security

The legislation aligns closely with Brazil's General Data Protection Law (LGPD), reinforcing protections around sensitive data and emphasizing principles like data minimization and purpose limitation.

Enforcement and Penalties

To oversee compliance, a federal regulatory authority will be established. Penalties can reach up to R$50 million per infraction or 2% of an organization's annual revenue in Brazil.

South Korea's Basic Act on AI Advancement and Trust

South Korea's Basic Act on AI Advancement and Trust, passed by the National Assembly in December 2024, establishes a regulatory framework to foster responsible AI development while safeguarding public trust. Scheduled to take effect in January 2026, it introduces obligations for both domestic and foreign entities offering AI products and services within South Korea.

Core Requirements of the Basic Act

1. Risk Assessments for High-Impact AI

High-impact AI systems, defined as those that significantly affect human safety, rights, or critical infrastructure, are subject to stringent risk management requirements. Organizations must identify risks throughout the AI lifecycle, document risk assessments, and submit these assessments to the Ministry of Science and ICT if computational thresholds are exceeded.
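
For teams tracking these duties programmatically, the trigger logic might be sketched as follows. Note that the numeric threshold here is a pure placeholder: the Act defers the actual computational limit to subordinate rules.

```python
# Hypothetical placeholder; the real threshold is set by regulation.
COMPUTE_THRESHOLD_FLOPS = 1e26

def assessment_duties(training_compute_flops: float) -> list[str]:
    """Duties for a high-impact system under this simplified sketch."""
    duties = [
        "identify risks across the AI lifecycle",
        "document the risk assessment",
    ]
    if training_compute_flops >= COMPUTE_THRESHOLD_FLOPS:
        duties.append("submit the assessment to the Ministry of Science and ICT")
    return duties

print(assessment_duties(5e26))  # above threshold -> submission duty applies
```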

2. Transparency Obligations

Organizations must notify users when they are interacting with AI, especially in critical areas like credit scoring or medical triage. They must also label generative AI outputs, such as synthetic images, text, or videos, so that users are aware of AI-generated content.
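
As a rough illustration of the labeling duty, generated content can be wrapped with disclosure metadata before it reaches the user. The structure below is our assumption; the Act mandates the disclosure, not a particular format.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LabeledOutput:
    """A generated artifact plus the disclosure metadata shown to users."""
    content: str
    is_ai_generated: bool
    model_id: str
    generated_at: str  # ISO 8601 timestamp

def label_output(content: str, model_id: str) -> LabeledOutput:
    return LabeledOutput(
        content=content,
        is_ai_generated=True,  # surfaced to the end user by the UI layer
        model_id=model_id,
        generated_at=datetime.now(timezone.utc).isoformat(),
    )

out = label_output("Draft summary of the credit decision...", "demo-model-1")
print(f"[AI-generated content: {out.model_id}] {out.content}")
```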

3. Human Oversight

The law mandates human management and supervision for high-impact AI applications. Operators must be able to override or halt AI outputs if they pose risks to human rights or safety.

4. Data Privacy and Security

The Basic Act references compliance with existing Korean laws such as the Personal Information Protection Act (PIPA).

Enforcement and Penalties

The Ministry of Science and ICT is the primary enforcement body. Penalties include fines up to KRW 30 million (approximately USD 25,000) for failing to meet labeling or transparency obligations, and up to three years imprisonment for leaking confidential information.

California's Generative AI Training Data Transparency Act (AB 2013)

California's Generative AI Training Data Transparency Act (AB 2013) sets a precedent as the first law in the United States to mandate disclosure of training data for generative AI systems. Signed into law on September 28, 2024, it will take effect on January 1, 2026, applying retroactively to AI systems made available to Californians since January 1, 2022.

What Does AB 2013 Require?

Training Data Disclosure

Developers must publicly disclose detailed documentation about the datasets used to train their generative AI systems, including the following (see the sketch after this list):

  • High-level dataset summaries, including sources, owners, size, and data types
  • Copyright and ownership status
  • Personal information content and any data-cleaning or anonymization processes
  • Dates of collection and first use of datasets
  • Information on whether synthetic data was used
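
A disclosure along these lines is naturally represented as a structured record. The sketch below mirrors the statute's bullet points; the field names and example values are hypothetical, as AB 2013 prescribes the content of the disclosure, not its format.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingDataDisclosure:
    """Illustrative schema mirroring AB 2013's disclosure items."""
    dataset_name: str
    sources: list[str]
    owners: list[str]
    size_description: str        # e.g. "1.5 TB, ~600M documents"
    data_types: list[str]        # e.g. ["text", "images"]
    copyright_status: str
    contains_personal_info: bool
    cleaning_processes: list[str] = field(default_factory=list)
    collection_period: str = ""  # dates of collection
    first_used: str = ""         # date of first use in training
    includes_synthetic_data: bool = False

# Hypothetical example entry.
disclosure = TrainingDataDisclosure(
    dataset_name="example-web-corpus",
    sources=["public web crawl"],
    owners=["Example Corp"],
    size_description="1.5 TB of text",
    data_types=["text"],
    copyright_status="mixed; documented per source",
    contains_personal_info=True,
    cleaning_processes=["PII redaction", "deduplication"],
    collection_period="2022-01 to 2023-06",
    first_used="2023-08",
    includes_synthetic_data=False,
)
```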

Transparency to End Users

Developers must provide accessible explanations of training data sources and methodologies, and inform users of the system's generative nature.

Enforcement and Penalties

AB 2013 is codified in California's Civil Code, allowing for enforcement by the California Attorney General. Violations may lead to injunctive relief or lawsuits under consumer protection statutes.

Colorado Senate Bill 24-205: Consumer Protections for AI

Colorado Senate Bill 24-205 is a landmark regulation aimed at protecting residents from algorithmic discrimination in high-risk AI systems. Set to take effect on February 1, 2026, the bill requires developers and deployers of AI systems to prioritize transparency, risk management, and consumer rights.

What Defines a High-Risk AI System?

A high-risk AI system is any machine-based system that significantly influences decisions in areas such as employment, education, finance, healthcare, and housing.

Main Requirements for Compliance

1. Risk Assessment and Governance

Developers must exercise "reasonable care" to identify and mitigate foreseeable risks. Deployers must adopt formal risk management policies aligned with recognized frameworks such as the NIST AI RMF or ISO/IEC 42001.

2. Documentation and Reporting

Developers must provide detailed documentation on the AI system's purpose, data sources, known risks, and mitigation strategies. Deployers must complete an AI Impact Assessment before deploying high-risk AI systems.

3. Human Oversight

Deployers must provide consumers with an appeals process or human review for decisions affecting their rights.

4. Transparency to End Users

Consumers must be informed when a high-risk AI system has been used to evaluate them. For adverse decisions, deployers must provide the principal reasons for the outcome.

Enforcement and Penalties

Enforcement is managed exclusively by the Colorado Attorney General. Violations are classified as unfair or deceptive trade practices.

US Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312)

The Artificial Intelligence Research, Innovation, and Accountability Act of 2024 (S. 3312) seeks to establish a federal regulatory framework for AI systems in the United States. Although not yet enacted, it signals a growing push for structured AI governance across industries.

Risk Classification and Governance

The bill introduces two key risk categories:

  • High-Impact AI Systems: Systems influencing decisions in sensitive areas such as housing, education, healthcare, and credit. Annual transparency reports are required.
  • Critical-Impact AI Systems: Includes applications in biometric surveillance, critical infrastructure, and criminal justice. Compliance with Testing, Evaluation, Validation, and Verification (TEVV) standards is mandatory.

Transparency Obligations

Generative AI platforms must label AI-generated content. Developers of high-impact and critical-impact systems may also need to disclose training data sources, methodologies, and known system limitations.

Enforcement and Penalties

The bill designates the Secretary of Commerce as the primary enforcement authority, with civil fines of up to $300,000 or twice the value of the AI system involved in the noncompliance.

NIST AI Risk Management Framework (NIST AI RMF)

The NIST AI Risk Management Framework (AI RMF 1.0), developed by the National Institute of Standards and Technology (NIST), provides organizations with a structured approach to identifying, assessing, managing, and monitoring AI risks.

Why NIST AI RMF Matters

  • Voluntary but Influential: While not a law, NIST AI RMF is widely adopted by businesses and governments to ensure trustworthy, responsible AI development.
  • Risk-Centered Approach: It helps organizations manage AI risks throughout the entire AI lifecycle, from design to deployment.
  • Global Impact: Although a US-based framework, NIST AI RMF is recognized worldwide and aligns with many AI regulations, including the EU AI Act and ISO 42001.

Key Components of NIST AI RMF

  1. Govern: Establish AI governance policies and ensure compliance.
  2. Map: Identify and document AI risks.
  3. Measure: Develop metrics to assess AI risks.
  4. Manage: Implement strategies to mitigate and monitor risks.
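
As a rough illustration, the four functions above can be operationalized as a living checklist that tracks which governance tasks remain open. The function names come from the AI RMF; the concrete tasks below are our own illustrative examples, not items from the framework.

```python
# Functions are from the AI RMF; the tasks are illustrative assumptions.
RMF_CHECKLIST: dict[str, list[str]] = {
    "Govern": ["assign accountable owners", "publish an AI policy"],
    "Map": ["inventory AI systems", "document context and impacts"],
    "Measure": ["define risk metrics", "run bias and robustness tests"],
    "Manage": ["prioritize risks", "track mitigations to closure"],
}

def open_items(completed: set[str]) -> dict[str, list[str]]:
    """Tasks in each function that are still outstanding."""
    return {fn: [t for t in tasks if t not in completed]
            for fn, tasks in RMF_CHECKLIST.items()}

print(open_items({"inventory AI systems", "publish an AI policy"}))
```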

Learn More: Explore the full NIST AI RMF Guide here

Texas Responsible AI Governance Act (TRAIGA)

The Texas Responsible AI Governance Act (TRAIGA) is a forward-looking regulatory framework designed to govern the development, deployment, and use of artificial intelligence systems in Texas. With an effective date of September 1, 2025, this legislation introduces strict requirements for high-risk AI systems.

Who Does TRAIGA Apply To?

TRAIGA applies to mid-sized and large organizations conducting business in Texas, specifically those developing, distributing, or deploying AI systems that influence consequential decisions.

What Does the Law Require?

Risk Management and Assessments

Organizations working with high-risk AI systems must implement robust risk management processes, including semiannual Risk Impact Assessments.
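
For a recurring obligation like this, the cadence itself is easy to encode. Here is a minimal sketch, assuming "semiannual" means roughly every 182 days (the bill states the cadence, not a day count):

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=182)  # ~semiannual; simplifying assumption

def next_assessment_due(last_assessment: date) -> date:
    """Date the next Risk Impact Assessment falls due."""
    return last_assessment + REVIEW_INTERVAL

def is_overdue(last_assessment: date, today: date) -> bool:
    return today > next_assessment_due(last_assessment)

print(next_assessment_due(date(2025, 9, 1)))  # first review after effective date
```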

Transparency and Accountability

Users must be notified when interacting with AI systems. Organizations must clearly explain the purpose of the AI system and the factors influencing decisions.

Human Oversight

Deployers must assign qualified personnel to oversee critical decisions made by AI. Operators must be able to intervene or override AI outputs.

Data Privacy and Security

Organizations are prohibited from capturing or using biometric data without explicit consent.

Enforcement and Penalties

The Texas Attorney General is responsible for enforcement. Fines can reach up to $100,000 per violation, with daily fines of $1,000 to $20,000 for continued noncompliance.

Encouraging Innovation: The Regulatory Sandbox

TRAIGA includes a 36-month regulatory sandbox, allowing companies to test AI technologies in a controlled environment with relaxed compliance requirements.

A Side-by-Side Look at Global AI Regulations

To help organizations better understand the complexities of global AI compliance, we've compiled a comparison of key elements of the most prominent AI regulations covered in this guide:

  • EU AI Act (European Union): four-tier risk classification; fines up to €35 million or 7% of global turnover; in force since August 2024, with phased implementation through 2027.
  • Council of Europe Framework Convention (international): human rights-centered obligations across the AI lifecycle; applies to public authorities and private actors acting on their behalf.
  • PL 2338/2023 (Brazil): three-tier risk classification; fines up to R$50 million per infraction or 2% of annual revenue in Brazil; pending before the Chamber of Deputies.
  • Basic Act on AI Advancement and Trust (South Korea): duties for high-impact AI, including labeling and human oversight; fines up to KRW 30 million; enforced by the Ministry of Science and ICT.
  • AB 2013 (California, US): training data disclosure for generative AI; enforced by the California Attorney General; effective January 1, 2026.
  • SB 24-205 (Colorado, US): consumer protections against algorithmic discrimination; enforced by the Colorado Attorney General; effective February 1, 2026.
  • S. 3312 (US federal, proposed): high-impact and critical-impact classifications; civil fines up to $300,000 or twice the system's value; Secretary of Commerce as enforcement authority.
  • TRAIGA (Texas, US, proposed): requirements for high-risk AI plus a 36-month regulatory sandbox; fines up to $100,000 per violation; enforced by the Texas Attorney General.

This structured comparison provides a quick and actionable reference to help you navigate the global AI regulatory landscape and align your strategies for compliance wherever your operations are based.

Common Threads in AI Regulation

Despite their regional differences, global AI governance frameworks share several common principles. These shared threads reflect a growing consensus on the need for transparency, risk management, human oversight, and data privacy.

Transparency

Transparency is a cornerstone of nearly every AI regulation discussed. Governments recognize that users have a right to know when they are interacting with AI and how decisions impacting them are made.

User Disclosures: Laws such as California's Generative AI Training Data Transparency Act (AB 2013) and South Korea's Basic Act require clear notifications when AI is involved. Generative AI outputs must often be labeled as such.

Decision-Making Logic: Regulations like Colorado's Senate Bill 24-205 and the EU AI Act take transparency further by requiring explanations of decision-making logic.

Risk Management

A proactive approach to risk is at the heart of many AI laws. By identifying and mitigating risks, governments aim to prevent harm before it occurs.

High-Risk Classifications: Brazil's AI Bill and the EU AI Act classify AI systems based on their potential risks, imposing stricter rules on high-risk applications.

Impact Assessments: The US AI Accountability Act (S. 3312) and Texas TRAIGA mandate detailed assessments to identify potential harms.

Human Oversight

Even the most advanced AI systems must be subject to human judgment to ensure ethical and accountable outcomes.

Human-in-the-Loop Requirements: South Korea and Colorado explicitly require human oversight for high-risk AI systems.

Accountability: These frameworks emphasize human responsibility, ensuring that automated processes do not replace the accountability of decision-makers.

Data Privacy

Data protection remains a crucial element of AI governance, with many regulations aligning their requirements with broader privacy laws.

Privacy Integration: Brazil's AI Bill reinforces its alignment with the LGPD, while California's laws overlap with the CCPA. Similarly, the EU AI Act works alongside GDPR.

Minimization and Safeguards: Across the board, laws emphasize data minimization, lawful processing, and robust security measures.

The Role of Technology in Simplifying Compliance

Technology can dramatically simplify the challenge of AI compliance. With the increasing complexity of global regulations, relying solely on manual processes is no longer sustainable. Purpose-built solutions like the Modulos AI Governance Platform provide the tools to manage risk, ensure transparency, and maintain accountability.

How Modulos Aligns with Major Regulations

Modulos AI Governance Platform takes compliance to the next level by combining advanced AI governance capabilities with an intuitive platform designed to make compliance simple, scalable, and efficient.

Built-In Transparency & Data Disclosure

Transparency is no longer optional; it's a regulatory requirement. Modulos simplifies transparency compliance by centralizing your audit trails, data lineage, and system documentation.

Human Oversight & Accountability

Modulos integrates human oversight at every stage of the AI lifecycle, from design to deployment. The platform ensures auditable decision-making chains.

AI Compliance by Design

Modulos embeds compliance into every stage of AI development. From initial data gathering to model deployment, the platform incorporates regulatory requirements from the outset.

Always Up to Date with Emerging Rules

The platform continuously monitors updates to global AI regulations and automatically adjusts controls to reflect the latest requirements.

Conclusion

Global AI regulations are not just a passing trend. They're here to stay, and their complexity will only deepen as technology evolves. While the current landscape may seem fragmented, efforts to harmonize these laws through global standards like ISO frameworks or potential UN AI codes are already gaining traction.

Emerging trends, such as the governance of generative AI, the push for ethical AI development, and the challenges of cross-border data handling, highlight the direction regulation is taking. Staying ahead of these trends requires more than compliance; it demands agility and foresight.

Proactive compliance today is not just about avoiding penalties; it's about building resilience for tomorrow. By aligning with universal principles like transparency, risk management, and human oversight, organizations can navigate the nuances of regional differences while fostering trust and accountability.

Technology platforms like Modulos are no longer optional; they're essential. They enable organizations to scale their compliance efforts efficiently, integrate governance into AI lifecycles, and stay ahead of changing regulations.

Need help navigating the complexities of global AI regulations? Request a free demo to see how the Modulos AI Governance Platform can simplify compliance for your organization and turn regulatory challenges into opportunities for innovation and growth.

Download the EU AI Act Guide

Learn how to ensure your AI systems comply with the EU AI Act. This guide provides a clear overview of the regulation, mandatory compliance requirements, and how to prepare your AI operations for these changes.
