Artificial intelligence: All you need to know about the new European Union AI Act

Passed in March 2024, the European Union’s Artificial Intelligence (AI) Act aims to protect consumer rights and ensure AI applications are ethical, without placing an undue burden on businesses.

Artificial intelligence is part of our daily lives, transforming industries from healthcare to entertainment, transport to education. Streaming services can use algorithms to suggest playlists and create personalized content; AI-powered digital assistants set reminders and help manage daily tasks; online shopping systems provide recommendations based on digital history; and AI helps identify patterns of fraudulent activity in banking transactions, among many other applications.

Artificial intelligence can help personalize, target, recognize, and predict information. In many ways, it’s a huge asset to businesses and society in general and helps us solve many problems. But as AI grows more capable, it also brings challenges, particularly when it comes to privacy, fairness, ethics, accountability, and safety.

While most AI systems will pose low to no risk, certain AI systems create risks that need to be addressed to avoid undesirable outcomes.

Setting the AI standard

The European Union has long been a trendsetter in privacy law, establishing the General Data Protection Regulation (GDPR) – the toughest privacy and security law in the world – which took effect in 2018. Several countries and individual U.S. states have followed suit since.

Now, in the face of booming AI applications, the European Union has established the AI Act. Passed by the European Parliament on 13 March 2024, it is the first comprehensive legislation of its kind in the world.

“Europe is NOW a global standard-setter in AI,” Thierry Breton, the European commissioner for internal market, wrote on X (formerly known as Twitter).

What is the AI Act?

The AI Act is the first-ever comprehensive legal framework on artificial intelligence. It addresses the risks of AI and positions Europe to play a leading role globally, setting out strict requirements for both AI developers and deployers while aiming to limit the burden on businesses and to uphold fundamental rights, safety, and ethical principles.

Key principles of the AI Act include:

  1. Human-centric approach: The AI Act puts humans at the center of AI development and use. It emphasizes that AI systems should be designed to serve the best interests of people and society as a whole.
  2. Transparency: This is crucial for building trust in AI. The act requires that AI systems be transparent in their operations: users should be aware when they are interacting with an AI system, and they should understand how it works. The sketch after this list shows one way such a disclosure might look.
  3. Accountability: When something goes wrong with an AI system, there should be someone responsible. The AI Act introduces the concept of ‘provider accountability’, meaning that the individuals or organizations developing, deploying, or operating AI systems are held responsible for their actions.
  4. Safety and security: AI systems must be safe and secure for users and the broader public. The AI Act sets requirements for risk management, data quality, and cybersecurity to ensure that AI systems do not pose undue risks.
  5. Data governance: Data is the lifeblood of AI. The act establishes rules for the quality and governance of data used to train and operate AI systems, with a focus on protecting personal and sensitive information.
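
To make the transparency principle concrete, here is a minimal Python sketch of how a chatbot deployer might disclose AI involvement to users. The disclosure wording and the generate_answer and reply functions are hypothetical illustrations, not language or requirements taken from the act:

    AI_DISCLOSURE = "You are chatting with an automated AI assistant, not a human. "

    def generate_answer(user_message: str) -> str:
        # Stand-in for the real model call (hypothetical).
        return f"Echo: {user_message}"

    def reply(user_message: str, is_first_turn: bool) -> str:
        # Prepend the disclosure on the first turn so the user knows
        # they are interacting with an AI system.
        answer = generate_answer(user_message)
        return AI_DISCLOSURE + answer if is_first_turn else answer

    print(reply("What are my rights?", is_first_turn=True))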

How does the AI Act work?

The AI Act sorts AI applications into categories of risk: the riskier the application, the more scrutiny it faces.

The levels of risk are:

  • Minimal risk: Think AI-enabled video games or filters, content recommendation systems, and spam filters. The vast majority of AI applications are expected to fall into this category.
  • Limited risk: Risks associated with a lack of transparency in AI usage. These systems carry disclosure obligations: for example, users must be told when they are interacting with a chatbot, and AI-generated content must be labeled as such.
  • High risk: Technology used in critical infrastructure, essential services, education and vocational training, law enforcement, the administration of justice, and migration and border control, as well as systems intended to influence voter behavior, among others. AI systems are always considered high-risk if they carry out profiling of individuals.
  • Unacceptable risk: AI systems considered a clear threat to people’s safety and rights, such as social scoring by governments, emotion recognition in workplaces and schools, untargeted scraping of the internet for facial images, and voice-assisted toys that encourage dangerous behavior. These are banned outright.

How do I know whether an AI system is high-risk?

The AI Act clearly defines what it considers ‘high risk’ and sets out a methodology for identifying such systems within the legal framework. Given how quickly the industry evolves, the European Commission has stated that it will keep the list of high-risk use cases updated regularly.
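
To make the tiering concrete, the following minimal sketch models the four categories as an organization might record them in an internal AI inventory. The tier names mirror the list above, but the example systems and the lookup table are illustrative assumptions; real classification requires legal analysis against the act’s annexes, not a hard-coded table:

    from enum import Enum

    class RiskTier(Enum):
        # The act's four tiers, from lowest to highest scrutiny.
        MINIMAL = 1       # e.g. spam filters, AI-enabled video games
        LIMITED = 2       # transparency obligations, e.g. chatbots
        HIGH = 3          # e.g. critical infrastructure, law enforcement
        UNACCEPTABLE = 4  # banned outright, e.g. government social scoring

    # Illustrative examples only (hypothetical system names).
    EXAMPLE_SYSTEMS = {
        "spam_filter": RiskTier.MINIMAL,
        "customer_chatbot": RiskTier.LIMITED,
        "cv_screening_tool": RiskTier.HIGH,          # profiles individuals
        "social_scoring_system": RiskTier.UNACCEPTABLE,
    }

    for system, tier in EXAMPLE_SYSTEMS.items():
        print(f"{system}: {tier.name}")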

Who does the AI Act apply to?

The AI Act covers a broad spectrum of AI systems, ranging from simple chatbots to sophisticated autonomous vehicles. The framework reaches both the public and private sectors, within and beyond the EU’s borders, provided that the AI system is placed on the Union market or its use affects people within the EU.

It applies both to providers, such as the developer of a CV-screening tool, and to deployers of high-risk AI systems, such as a bank that acquires that tool. Importers of AI systems must also ensure that the foreign provider has completed the necessary conformity assessment procedure, that the system bears the European Conformity (CE) marking, and that it is accompanied by the required documentation and instructions for use.

Providers of free and open-source models are largely exempt from these requirements. The obligations also do not cover research, development, and prototyping activities conducted before market release, and the regulation excludes AI systems used solely for military, defense, or national security purposes, regardless of who carries out those activities.

What does compliance with the AI Act involve?

For organizations developing or using AI systems within the EU, compliance with the AI Act means adhering to its requirements and following specific procedures.

Some aspects of compliance include:

  • Documentation and transparency: Organizations must keep detailed documentation on their AI systems, including how they work, their purpose, and potential risks. They also need to ensure transparency in their communication with users about AI involvement.
  • Risk assessment and mitigation: High-risk AI systems require thorough risk assessments to identify potential harms. Organizations must implement measures to mitigate these risks and ensure the safety and rights of individuals.
  • Data protection and privacy: Compliance with existing data protection regulations, such as the GDPR, is essential. Organizations must handle personal and sensitive data ethically and securely.
  • Testing and quality assurance: Before deploying AI systems, organizations need to conduct rigorous testing to ensure they operate as intended and meet safety standards. Ongoing monitoring and updates are also necessary. The sketch after this list shows one way these obligations might be tracked.
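
Here is a minimal sketch of how a deployer might track these obligations internally for a single high-risk system. The record structure and field names are hypothetical, not drawn from the act’s text:

    from dataclasses import dataclass, field

    @dataclass
    class ComplianceRecord:
        # Hypothetical internal record for one high-risk AI system.
        system_name: str
        intended_purpose: str                         # documentation and transparency
        identified_risks: list[str] = field(default_factory=list)
        mitigations: list[str] = field(default_factory=list)   # risk mitigation
        gdpr_reviewed: bool = False                   # data protection and privacy
        tests_passed: bool = False                    # testing and quality assurance
        last_monitoring_check: str | None = None      # ongoing monitoring

    record = ComplianceRecord(
        system_name="cv_screening_tool",
        intended_purpose="Rank job applications for human review",
        identified_risks=["biased rankings across demographic groups"],
        mitigations=["bias audit before each model update"],
    )
    print(record)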

Does the European AI Act impact the rest of the world?

The main goal of the new EU AI Act is not just to promote trustworthy AI within Europe, but also to spread this standard globally, ensuring that all AI systems uphold fundamental rights, safety, and ethical practices.

Other jurisdictions are also acting: in China, companies are required to obtain government approval before offering AI services to the public.

The United States, meanwhile, is still developing its approach to regulating AI. Although Congress is considering new laws, some U.S. cities and states have already passed their own regulations, restricting the use of AI in areas such as police investigations and hiring practices.

How will the AI Act be enforced?

Implementing the AI Act comes with its challenges, including the need for resources, expertise, and ongoing monitoring. Additionally, as AI technologies evolve, the regulations will need to adapt to address emerging risks and opportunities.

For now, EU Member States play a crucial role in ensuring the rules are followed and enforced. Each Member State must designate one or more national competent authorities to supervise how the regulation is applied and implemented, and to carry out market surveillance.

To streamline enforcement and provide an official point of contact for the public and other stakeholders, each Member State will appoint one national supervisory authority, which will also represent the country on the European Artificial Intelligence Board.

For additional expertise and advice, an advisory forum will bring together a diverse range of stakeholders, including representatives from industry, small and medium-sized businesses, civil society, and academia.

Additionally, the Commission will establish a new European AI Office within its own structure. This office will oversee general-purpose AI models, working closely with the European Artificial Intelligence Board and supported by a panel of independent scientific experts.

How will the AI Act impact innovation?

While the AI Act introduces new responsibilities and regulations, it also aims to foster innovation and competitiveness within the EU. By providing a clear framework for ethical AI development, the act helps businesses build trust with consumers and investors, which can encourage greater adoption of AI technologies.

When does the AI Act come into force?

The European Union’s AI Act was adopted by the European Parliament in March 2024 and is expected to enter into force at the end of the legislature in May 2024, after passing final checks and receiving formal endorsement from the Council of the European Union. Implementation of the AI Act will then be staggered from 2025 onward.

What are the implications of breaking the AI Act?

Non-compliance can lead to fines ranging from €7.5 million or 1.5% of global annual turnover for less serious infringements up to €35 million or 7% of global annual turnover for the most serious ones, depending on the violation and the size of the company.
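
As a worked example, the sketch below computes the maximum possible fine at the top tier for a large company, assuming the act’s ‘whichever is higher’ rule applies to the most serious infringements:

    def max_fine_eur(annual_turnover_eur: float) -> float:
        # Cap for the most serious infringements: EUR 35 million or 7% of
        # global annual turnover, assuming "whichever is higher" applies.
        return max(35_000_000, 0.07 * annual_turnover_eur)

    # A company with EUR 2 billion in global turnover:
    # 7% of 2 billion = EUR 140 million, which exceeds EUR 35 million.
    print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000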
