AI Regulations: Prepare for More AI Rules on Privacy Rights, Data Protection, and Fairness

In December 2022, CEO Chris Babel and a panel of privacy industry experts discussed the accelerating demand for strong regulation of artificial intelligence (AI) in 2023. At the time, privacy professionals were coming to terms with the risks presented by the massive data-gathering capabilities of new-generation AI chatbots like OpenAI’s ChatGPT.

A few months after our panel, the ante was upped: on March 14, 2023, OpenAI released its next model, GPT-4 (also integrated into Microsoft’s Bing search engine as ‘Bing AI’), followed a week later by Google’s launch of its Bard AI chatbot on March 21, 2023.

Our experts anticipated lawmakers would scramble in 2023 to address ethical and privacy concerns with AI-assisted search and other automated services.

But few could have predicted how quickly advances in machine learning and other AI innovations would widen the gap between how automation is used and the draft regulations intended to control it.

Below is a summary of some key AI-focused regulations and governance frameworks around the world.

European Union (EU) AI Act

The EU Artificial Intelligence Act is a set of regulations on the development and use of artificial intelligence across Europe, aiming to “safely harness AI’s potential”.

Proposed: April 21, 2021 (European Commission published its proposal for the EU AI Act).

Enforcement: Expected in 2025 if the EU AI Act passes in 2023, allowing for a two-year grace period. The draft AI Act provides for fines of up to €30 million per violation or 6% of global annual turnover, whichever is higher.
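
To make the “whichever is higher” rule concrete, here is a minimal arithmetic sketch. The function name is hypothetical and the figures simply restate the draft thresholds above; this is illustrative, not official guidance.

```python
# Illustrative sketch of the draft EU AI Act's maximum-fine rule:
# the higher of a fixed cap or a share of global annual turnover.
def max_eu_ai_act_fine(global_annual_turnover_eur: float) -> float:
    FIXED_CAP_EUR = 30_000_000   # €30 million per violation (draft figure)
    TURNOVER_SHARE = 0.06        # 6% of global annual turnover (draft figure)
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)

# A company with €1 billion in annual turnover faces up to €60 million,
# because 6% of turnover exceeds the €30 million fixed cap.
print(max_eu_ai_act_fine(1_000_000_000))  # 60000000.0
```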

Organizations covered: Any organization offering a product or service in the EU that uses AI to make or contribute to decisions, recommendations, or predictions, or to generate content.

Regulatory focus: The EU AI Act aims to strengthen rules for data quality, transparency, human oversight, and accountability when organizations use AI to manage or create data, particularly when personal information is processed.

The AI Act’s regulatory framework defines four levels of risk created by AI applications and sets rules for each (a minimal sketch of how these tiers might be encoded follows the list):

  1. Unacceptable risk – a ban on all AI systems “considered a clear threat to the safety, livelihoods, and rights of people.”
    Examples: social scoring by governments and toys with voice assistance that could encourage dangerous behavior.
  2. High risk – strict obligations on AI systems that could pose risks to people’s health, wellbeing, life, or fundamental rights. These obligations include risk assessment and mitigation systems, transparency, logging of activity to ensure traceability, human oversight, and a high level of robustness, security, and accuracy in data and systems.
    Examples: CV-sorting software for recruitment, biometric identification of people, exam scoring, and credit scoring.
  3. Limited risk – control of AI systems with specific transparency obligations, such as ensuring users know they are interacting with a machine and can make an informed decision to continue or stop the interaction.
    Examples: chatbots for content creation or customer service.
  4. Minimal/no risk – free use of AI systems determined to be minimal-risk, which covers most AI systems used in the EU today.
    Examples: email spam filters and AI-enabled video games.
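
A hypothetical sketch of how an organization might encode these four tiers in an internal compliance checklist, assuming nothing beyond the tier names and example systems listed above; the mapping and identifiers are illustrative, not official tooling:

```python
# Hypothetical encoding of the EU AI Act's four risk tiers for an
# internal compliance checklist. Tier names and example systems come
# from the Act's framework; this mapping itself is illustrative only.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations: risk management, logging, human oversight"
    LIMITED = "transparency obligations: disclose machine interaction"
    MINIMAL = "free use"

# Example systems named in the Act's framework, classified by tier.
EXAMPLE_SYSTEMS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "CV-sorting recruitment software": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```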

UK Government AI Regulation Guidelines

The UK Government’s Department for Science, Innovation and Technology and the Office for Artificial Intelligence released AI regulation guidelines in March 2023, inviting interested parties to participate in an open consultation until June 21, 2023.

Proposed: March 29, 2023 (UK Government’s AI regulation white paper published).

Enforcement: The UK Government has given industry regulators 12 months to release guidelines and tools for regulating AI, while keeping the option to introduce umbrella legislation after April 2024 if it decides regulators need stricter rules.

Organizations covered: Any organization developing or using AI in the UK. AI governance will be overseen by relevant existing industry sector regulators, including the UK’s Health and Safety Executive, Equality and Human Rights Commission, and Competition and Markets Authority.

Regulatory focus: The UK Government’s AI regulation white paper advocates for industry regulators to balance promoting innovation with enforcing rules for privacy, safety, and fairness.

The government’s approach gives regulators authority to update rules for AI as needed rather than waiting for top-down direction.

However, it expects regulators to consider five principles for AI regulation in their respective sectors:

  • Safety, security, and robustness – to ensure risks are managed.
  • Transparency and explainability – to ensure organizations explain when and how an AI-powered system makes decisions “in an appropriate level of detail that matches the risks posed by the use of AI.”
  • Fairness – to ensure AI is not used to discriminate or create unfair commercial outcomes. Use of AI must comply with the UK’s existing laws, such as the Equality Act or UK GDPR.
  • Accountability and governance – to ensure human oversight of when and how AI is used, with clear accountability for outcomes.
  • Contestability and redress – to ensure UK citizens have the right to dispute “harmful outcomes or decisions generated by AI.”

US Blueprint for an AI Bill of Rights

The White House Office of Science and Technology Policy began developing a Blueprint for an AI Bill of Rights in 2021, which includes a non-binding set of principles for the responsible use of AI.

Proposed: October 4, 2022 (White House published The Blueprint for an AI Bill of Rights).

Enforcement: The blueprint itself is not enforceable. On December 10, 2021, the Federal Trade Commission (FTC) filed for rulemaking authority over AI and privacy, but those powers are still under consideration.

Organizations covered: Any organization that develops and/or uses automated systems in the US that “have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.”

Regulatory focus: The Blueprint for an AI Bill of Rights builds on several guidelines that could influence future regulation, including the FTC’s AI guidance releases and the Artificial Intelligence Risk Management Framework released in January 2023 by the National Institute of Standards and Technology (NIST), part of the US Department of Commerce.

The blueprint sets out five key principles to protect US citizens’ civil rights during the development and use of AI:

  • Safe and effective systems – all automated systems should undergo pre-deployment testing, risk identification, and mitigation.
  • Algorithmic discrimination protections – all automated systems should ensure equity. Businesses should perform equity assessments when designing systems to remove discrimination risks.
  • Data privacy – all automated systems should ensure citizens’ privacy rights are protected by default and that consent is meaningfully sought and given for appropriate data collection. Individuals should also be protected from abusive data practices (such as behavior monitoring that could impact privacy rights).
  • Notice and explanation – businesses should notify users when an automated system is being used and clearly explain how and why it contributes to outcomes that impact a user.
  • Human alternatives, consideration, and fallback – where appropriate, businesses should support consumers’ right to opt out of interactions with automated systems and give them access to a person who can quickly consider and resolve any problems they encounter.