In December 2022, CEO Chris Babel and a panel of privacy industry experts discussed accelerating demand for strong regulation of artificial intelligence (AI) in 2023. At the time, privacy professionals were coming to terms with the risks presented by the massive data-gathering capabilities of new-generation AI chatbots like OpenAI’s ChatGPT.
As artificial intelligence and generative AI services rapidly entered mainstream business use, AI regulation became a critical concern for organizations navigating data protection, responsible innovation, and compliance with emerging laws.

A few months later, the pace quickened. On March 14, 2023, OpenAI released its next model, GPT-4 (also integrated as ‘Bing AI’ into Microsoft’s search engine), followed a week later by Google with the launch of its Bard AI chatbot on March 21, 2023.
These developments intensified global attention on AI governance, particularly around systems that rely on large volumes of training data, automated decision making, and the processing of personal data.
Our experts anticipated lawmakers would scramble in 2023 to address ethical and privacy concerns with AI-assisted search and other automated services.
They were right. But even they couldn’t have predicted just how quickly advances in machine learning would widen the gap between how AI is used and how policymakers regulate it. Below is an overview of key AI-focused regulations and governance frameworks around the world.
European Union (EU) AI Act
The EU Artificial Intelligence Act is the world’s first comprehensive legal framework on AI. Passed in the European Parliament on March 13, 2024, it positions the EU as a global standard-setter, much like the GDPR did for privacy in 2018.
As one of the most influential AI regulations globally, the EU AI Act is directly shaping governance standards, data protection practices, and future AI legislation across industries.

Enforcement timeline:
- August 1, 2024 – Act enters into force.
- February 2, 2025 – Ban on prohibited AI systems takes effect.
- August 2, 2025 – General-purpose AI model requirements begin.
- August 2, 2026 – Most remaining rules take effect.
Organizations covered: Any organization offering a product or service in the EU that uses AI to make or contribute to decisions, recommendations, or predictions, or to generate content.
This broad scope makes the EU AI Act a cornerstone of AI regulation for global AI developers and for AI-powered services and applications.
Regulatory focus: The EU AI Act is built around risk-based tiers:
- Unacceptable risk – a ban on all AI systems “considered a clear threat to the safety, livelihoods, and rights of people.”
Examples: social scoring by governments, untargeted scraping of facial images, and toys with voice assistance that could encourage dangerous behavior.
- High risk – strict obligations on AI systems that could pose risks to people’s health, wellbeing, life, or fundamental rights. These obligations include risk assessment and mitigation systems, transparency, logging of activity to ensure traceability, human oversight, and a high level of robustness, security, and accuracy of data and systems.
Examples: CV-sorting software for recruitment, biometric identification of people, exam scoring, and credit scoring.
- Limited risk – AI systems subject to specific transparency obligations, such as ensuring users know they are interacting with a machine and can make an informed decision to continue or stop the interaction.
Examples: customer service chatbots, generative AI tools for content creation.
- Minimal/no risk – free use of AI systems determined to be minimal-risk, which covers most AI systems used in the EU today.
Examples: email spam filters and AI-enabled video games.
The Act emphasizes human-centric design, accountability, transparency, and safety. Providers and deployers of high-risk AI must meet stringent documentation, testing, and oversight requirements. Importers and distributors must also verify compliance before placing AI systems on the EU market.
High-risk AI systems under the EU AI Act must meet rigorous governance requirements, including data protection impact assessments and safeguards for automated decision making.
Organizations operating in the EU should already be preparing for risk classification, conformity assessments, and documentation obligations.
UK Government AI regulation guidelines
The UK has taken a lighter-touch, regulator-led approach. In March 2023, the Department for Science, Innovation and Technology and the Office for Artificial Intelligence released a white paper on AI regulation. The government invited consultation through June 2023, with sector regulators tasked with implementing guidance rather than a single binding law.
Unlike the EU’s prescriptive AI Act, the UK approach emphasizes flexible, principle-based governance aligned with existing data protection laws and sector-specific oversight.
Enforcement: Regulators were given 12 months to create guidelines and tools for AI oversight, with the option for Parliament to introduce umbrella legislation after April 2024 if necessary.
Organizations covered: Any organization developing or using AI in the UK. Oversight is handled by existing sector-specific regulators, such as the Health and Safety Executive, Equality and Human Rights Commission, and Competition and Markets Authority.
Regulatory focus:
The UK government expects regulators to balance innovation with enforcement of five guiding principles:
- Safety, security, and robustness
- Transparency and explainability
- Fairness (including compliance with the Equality Act and UK GDPR)
- Accountability and governance
- Contestability and redress
While the EU has enacted binding regulation and the U.S. has shifted to an infrastructure-heavy action plan, the UK’s flexible, principle-based approach could prove more adaptable but also risks fragmentation across industries.
This approach seeks to balance AI innovation with data protection, though it raises questions about consistency of enforcement across sectors and regulators.
US AI Action Plan
The United States has dramatically shifted its approach. The non-binding 2022 Blueprint for an AI Bill of Rights has effectively been superseded: in January 2025, Executive Order 14179 revoked prior Biden-era AI directives (including EO 14110) and directed the development of a new national strategy. On July 23, 2025, the White House released America’s AI Action Plan, also referred to as Winning the AI Race.
Rather than comprehensive AI legislation, the U.S. approach emphasizes infrastructure, national competitiveness, and federal government leadership in AI innovation.
The plan outlines more than 90 federal policy actions under three pillars:
- Accelerating innovation – removing federal regulatory barriers, fast-tracking private sector adoption, and encouraging industry-driven standards.
- Building American AI infrastructure – expediting permits for semiconductor fabs and data centers, investing in workforce development (e.g., electricians, HVAC technicians, chip specialists).
- Leading internationally – exporting U.S. AI “full-stack packages” (hardware, models, applications, and standards) to allied nations, reinforcing U.S. global dominance.
This strategy deprioritizes broad AI governance rules in favor of existing federal regulations, adding complexity for organizations managing compliance across regions.
Key themes:
- Prioritizing free speech and ideological neutrality in frontier AI models used by government contractors.
- Establishing AI as a cornerstone of American military, economic, and diplomatic power.
- Reducing regulatory burdens to accelerate deployment of AI technologies.
Where the EU AI Act centers on fairness, rights, and accountability, the U.S. plan emphasizes speed, infrastructure, and strategic dominance. For global companies, this divergence underscores the complexity of navigating competing regulatory priorities and data protection obligations.
The big picture
In just two years, predictions of fragmented AI oversight have become reality. The EU AI Act now sets the global benchmark for rights-based AI governance. The U.S. AI Action Plan prioritizes infrastructure and geopolitical competitiveness. The UK continues to rely on a regulator-led, principle-based approach.
For privacy and compliance leaders, the message is clear: AI regulation is no longer theoretical. Organizations must assess their AI risk management maturity, adapt to regional divergences, and prepare for ongoing updates as lawmakers race to keep up with innovation.
A centralized privacy management platform like TrustArc helps organizations monitor evolving AI regulations, align governance programs with data protection obligations, and manage risk across jurisdictions.
FAQs: AI Regulations
What are AI regulations?
AI regulations are laws and frameworks that govern the use of artificial intelligence, addressing data protection, fairness, transparency, and accountability.
Why are AI governance regulations important?
AI governance regulations ensure AI systems operate responsibly, protect personal data, and reduce risks from high-risk AI systems and automated decision making.
How does the EU AI Act affect organizations outside Europe?
The EU AI Act applies to any organization offering AI services in the EU, making it a global driver of AI regulation law and AI governance standards.
How do AI regulations differ between the EU, UK, and US?
The EU focuses on rights-based AI governance, the UK emphasizes principles, and the U.S. prioritizes innovation and infrastructure over comprehensive AI regulation.
How can organizations prepare for future AI regulation?
Organizations should implement AI governance programs, conduct risk assessments, monitor AI regulations globally, and align AI development with data protection laws.