In December 2022, CEO Chris Babel and a panel of privacy industry experts discussed accelerated demand for strong regulation of artificial intelligence (AI) in 2023. At the time, privacy professionals were coming to terms with the risks presented by the massive data-gathering capabilities of new-generation AI chatbots like OpenAI’s ChatGPT.
A few months later, the pace quickened. On March 14, 2023, OpenAI released its next model, GPT-4 (also integrated as ‘Bing AI’ into Microsoft’s search engine), followed a week later by Google with the launch of its Bard AI chatbot on March 21, 2023.
Our experts anticipated lawmakers would scramble in 2023 to address ethical and privacy concerns with AI-assisted search and other automated services.
They were right. But even they couldn’t have predicted just how quickly advances in machine learning would widen the gap between how AI is used and how policymakers regulate it. Below is an overview of key AI-focused regulations and governance frameworks around the world.
European Union (EU) AI Act
The EU Artificial Intelligence Act is the world’s first comprehensive legal framework on AI. Passed in the European Parliament on March 13, 2024, it positions the EU as a global standard-setter, much like the GDPR did for privacy in 2018.
Enforcement timeline:
- August 1, 2024 – Act enters into force.
- February 2, 2025 – Ban on prohibited AI systems takes effect.
- August 2, 2025 – General-purpose AI model requirements begin.
- August 2, 2026 – Most remaining rules take effect.
Organizations covered: Any organization offering a product or service in the EU that uses AI to make or contribute to decisions, recommendations, or predictions, or to generate content.
Regulatory focus: The EU AI Act is built around four risk-based tiers:
- Unacceptable risk – a ban on all AI systems “considered a clear threat to the safety, livelihoods, and rights of people.” Examples: social scoring by governments, untargeted scraping of facial images, and toys with voice assistance that could encourage dangerous behavior.
- High risk – strict obligations on AI systems that could put people’s health, wellbeing, life, or fundamental rights at risk. These obligations include risk assessment and mitigation systems, transparency, logging of activity to ensure traceability, human oversight, and high standards of robustness, security, and accuracy for data and systems. Examples: CV-sorting software for recruitment, biometric identification of people, exam scoring, and credit scoring.
- Limited risk – AI systems subject to specific transparency obligations, such as ensuring users know they are interacting with a machine and can make an informed decision to continue or stop the interaction. Examples: customer service chatbots, generative AI tools for content creation.
- Minimal/no risk – free use of AI systems determined to be minimal-risk, which covers most AI systems used in the EU today. Examples: email spam filters and AI-enabled video games.
The Act emphasizes human-centric design, accountability, transparency, and safety. Providers and deployers of high-risk AI must meet stringent documentation, testing, and oversight requirements. Importers and distributors must also verify compliance before placing AI systems on the EU market.
Organizations operating in the EU should already be preparing for risk classification, conformity assessments, and documentation obligations.
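For teams starting that risk-classification work, the tier structure above maps naturally onto a simple internal AI inventory. The sketch below is illustrative only: the `RiskTier` enum, `AI_INVENTORY` catalog, and `compliance_actions` helper are hypothetical names for a first-pass triage tool, not an official classification method; the tier assignments simply mirror the examples listed above.

```python
from enum import Enum


class RiskTier(Enum):
    """EU AI Act risk tiers, ordered from most to least restricted."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: risk management, logging, human oversight"
    LIMITED = "transparency obligations: disclose that users face a machine"
    MINIMAL = "free use"


# Hypothetical internal inventory; tier assignments mirror the
# examples given in the Act's risk tiers above.
AI_INVENTORY = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "CV-sorting recruitment tool": RiskTier.HIGH,
    "credit scoring model": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}


def compliance_actions(system: str) -> str:
    """Return the headline obligation for a cataloged system (illustrative)."""
    tier = AI_INVENTORY.get(system)
    if tier is None:
        return f"{system}: not yet classified -- triage before EU deployment"
    return f"{system}: {tier.name} risk -- {tier.value}"


if __name__ == "__main__":
    for name in AI_INVENTORY:
        print(compliance_actions(name))
```

Even a rough catalog like this makes the subsequent steps (conformity assessments for anything tagged HIGH, transparency notices for LIMITED) much easier to scope.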
UK Government AI regulation guidelines
The UK has taken a lighter-touch, regulator-led approach. In March 2023, the Department for Science, Innovation and Technology and the Office for Artificial Intelligence released a white paper on AI regulation. The government invited consultation through June 2023, with sector regulators tasked with implementing guidance rather than a single binding law.
Enforcement: Regulators were given 12 months to create guidelines and tools for AI oversight, with the option for Parliament to introduce umbrella legislation after April 2024 if necessary.
Organizations covered: Any organization developing or using AI in the UK. Oversight is handled by existing sector-specific regulators, such as the Health and Safety Executive, Equality and Human Rights Commission, and Competition and Markets Authority.
Regulatory focus:
The UK government expects regulators to balance innovation with enforcement of five guiding principles:
- Safety, security, and robustness
- Transparency and explainability
- Fairness (including compliance with the Equality Act and UK GDPR)
- Accountability and governance
- Contestability and redress
While the EU has enacted binding regulation and the U.S. has shifted to an infrastructure-heavy action plan, the UK’s flexible, principle-based approach could prove more adaptable but also risks fragmentation across industries.
US AI Action Plan
The United States has dramatically shifted its approach. The non-binding 2022 Blueprint for an AI Bill of Rights has effectively been superseded: in January 2025, Executive Order 14179 revoked prior Biden-era AI directives (including EO 14110) and directed the development of a new national strategy. On July 23, 2025, the White House released America’s AI Action Plan, also referred to as Winning the AI Race.
The plan outlines more than 90 federal policy actions under three pillars:
- Accelerating innovation – removing federal regulatory barriers, fast-tracking private sector adoption, and encouraging industry-driven standards.
- Building American AI infrastructure – expediting permits for semiconductor fabs and data centers, investing in workforce development (e.g., electricians, HVAC technicians, chip specialists).
- Leading internationally – exporting U.S. AI “full-stack packages” (hardware, models, applications, and standards) to allied nations, reinforcing U.S. global dominance.
Key themes:
- Prioritizing free speech and ideological neutrality in frontier AI models used by government contractors.
- Establishing AI as a cornerstone of American military, economic, and diplomatic power.
- Reducing regulatory burdens to accelerate deployment of AI technologies.
Where the EU AI Act centers on fairness, rights, and accountability, the U.S. plan emphasizes speed, infrastructure, and strategic dominance. For global companies, this divergence underscores the complexity of navigating competing regulatory priorities.
The big picture
In just two years, predictions of fragmented AI oversight have become reality. The EU AI Act now sets the global benchmark for rights-based AI governance. The U.S. AI Action Plan prioritizes infrastructure and geopolitical competitiveness. The UK continues to rely on a regulator-led, principle-based approach.
For privacy and compliance leaders, the message is clear: AI regulation is no longer theoretical. Organizations must assess their AI risk management maturity, adapt to regional divergences, and prepare for ongoing updates as lawmakers race to keep up with innovation.