Does The EU AI ACT Apply To Me? [2025 Guide]

In 2021, the European Union proposed the world’s first comprehensive artificial intelligence regulation, the EU Artificial Intelligence Act (‘AI Act’), which was formally adopted in 2024. It aims to encourage the development of trustworthy AI and reduce the risk of harm to humans.

This article will help you determine whether the AI Act applies to you and, if so, what steps you should take to remain compliant.

Why is compliance important? For starters, the heftiest fine for non-compliance with the prohibitions is up to 35 000 000 euros or up to 7% of the company’s total worldwide annual turnover, whichever is higher. Companies that fail to perform certain other obligations listed in the AI Act can be fined up to 15 000 000 euros or up to 3% of their total worldwide annual turnover, whichever is higher. It is important to note that SMEs are not exempt from these fines, but some support and guidance will be provided to help them comply, including potential graduated compliance measures and access to dedicated resources such as AI innovation hubs. Furthermore, recent indications suggest that initial enforcement may prioritize high-impact applications and larger entities, allowing for a phased implementation and greater clarity for all businesses.
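To see how the “whichever is higher” rule plays out in practice, here is a minimal sketch in Python. The category labels and the example turnover figure are our own illustrative assumptions, not terms from the Act, and actual fines are set by regulators case by case:

```python
def max_fine(annual_worldwide_turnover_eur: float, violation: str) -> float:
    """Estimate the AI Act's maximum fine ceiling for a given violation.

    The Act sets a fixed amount or a percentage of total worldwide
    annual turnover, whichever is higher. The category labels below
    are our own shorthand, not terms from the Act.
    """
    ceilings = {
        "prohibited_practice": (35_000_000, 0.07),  # violations of the prohibitions
        "other_obligation": (15_000_000, 0.03),     # e.g. high-risk system duties
    }
    fixed_amount, turnover_pct = ceilings[violation]
    return max(fixed_amount, turnover_pct * annual_worldwide_turnover_eur)

# Example: a company with EUR 1 billion turnover breaching a prohibition
print(max_fine(1_000_000_000, "prohibited_practice"))  # 70000000.0 (EUR 70 million)
```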

Scope of Application

The AI Act applies to a wide range of entities that are involved in the development, marketing, deployment and utilization of AI in the European Union.

In the previous article, we explained how the AI Act can also apply to providers and deployers of AI systems established in a third country, where the output produced by the AI system is used in the EU. For example, if an EU-based Company A contracts Company B, based outside the EU, to supply an AI system, tool or service, then Company B must comply with the standards and obligations in the AI Act.

The AI Act applies to all sectors, not just a specific industry.

It specifically applies to:

  1.   Providers – a person, company, public authority, agency or other body that develops an AI system or general-purpose AI model, or has one developed, and places it on the EU market or puts it into service under its own name or trademark, whether for payment or free of charge. This includes developers of general-purpose AI (GPAI) models, and especially of GPAI models that pose systemic risk; such models present unique challenges and carry additional obligations.
  2.   Deployers – a person, company, public authority, agency or other body that uses an AI system. This does not cover those who use an AI system for personal, non-professional activities, such as personal smart speakers, translation apps or photo-editing apps.
  3.   Importers – a person, company, public authority, agency or other body located or established in the EU that places on the EU market an AI system developed by another person or company in a third country.
  4.   Distributors – an entity in the supply chain (other than the provider or importer) that makes an AI system available on the EU market.

The AI Act also categorizes AI systems into four risk levels based on their potential harm to the safety and health of individuals and society. The four risk levels include unacceptable risk, high risk, limited risk and minimal or no risk.

The AI Act’s main concern is AI systems that pose unacceptable or high risks. AI systems that pose unacceptable risks are banned in the EU because they present serious threats to public health, safety and fundamental rights, while AI systems that pose high risks must comply with a significant number of regulatory obligations.

Common Use Cases Analysis – Does The EU AI Act Apply To You?

Since the application of the AI Act varies based on the type of AI system and its risk level, we’ve provided a breakdown of how it will apply to some use cases:

  1.   Deepfake Technology

Deepfake technology is a type of AI used to create convincing fake images, videos and audio recordings. Its greatest danger is that it can be used to spread misinformation and deceive viewers into believing the content is legitimate.

In the EU, AI deepfakes have been used for foreign-policy manipulation, political deception and election interference. For instance, in 2022, pro-Russian actors circulated a deepfake video of President Zelensky announcing a surrender, and a separate deepfake of Kyiv’s mayor was used to deceive European officials. Although these videos did not significantly affect EU-Ukraine relations, they caused widespread confusion. Then, in 2023, election manipulation became a concern during the Polish national elections, when the Civic Platform party used AI-generated videos of Prime Minister Morawiecki in campaign ads without disclosing it to the public.

The AI Act does not ban deepfake technology entirely, but it imposes strict transparency obligations on providers and deployers.
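To make this transparency duty concrete, here is a minimal sketch of how a provider might attach a machine-readable “AI-generated” disclosure to a piece of synthetic content. The schema is a hypothetical illustration, not a format mandated by the Act; in practice, providers often rely on industry schemes such as C2PA content credentials:

```python
import json
from datetime import datetime, timezone

def build_disclosure(generator: str, model_version: str) -> str:
    """Return a machine-readable label marking content as AI-generated.

    The field names here are illustrative assumptions; the AI Act
    requires machine-readable marking of synthetic content but does
    not prescribe this exact schema.
    """
    label = {
        "ai_generated": True,
        "generator": generator,
        "model_version": model_version,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label)

# Example: a sidecar label shipped alongside a generated video file
print(build_disclosure("example-video-model", "1.0"))
```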

  2.   Decision-making Systems

AI-enabled manipulative techniques and systems designed to distort human behavior or deceive people into making decisions, undermining their autonomy and free choice, are prohibited because they can pose significant risks to a person’s physical, psychological and financial well-being. This includes AI systems that use subliminal techniques (such as audio, visual or video stimuli) to manipulate and undermine a person’s decision-making and free choice without their awareness.

For a decision-making AI system not to be considered as posing an unacceptable risk, it must not impair a person’s autonomy, decision-making or free choice.

  3.   AI Content Generation Systems

What is AI-generated content? It is any content (text, image, video or audio) created by an AI model. These models are developed using machine learning algorithms that learn patterns, styles and structures from large amounts of data, and they generate new content similar to what they were trained on. Some popular models are ChatGPT, Gemini, Adobe Firefly and IBM Granite.

Although AI content generation systems are not high risk by default, they can pose different levels of risk depending on their use and impact. AI systems like ChatGPT, Claude and other AI writing tools are considered low risk because they are used to enhance creativity and productivity and to assist human professionals without affecting their decision-making. It is important to determine whether a Gen-AI tool is a GPAI model with systemic risk, because if it is, additional obligations apply. Also, if the output of these tools is used within a high-risk AI system, the final product becomes a high-risk AI system.

However, AI content generation systems can also be high risk if they produce deceptive and harmful content that can be used for fraud, misinformation or defamation; influence the decision-making of users; enable identity theft or impersonation; or cause manipulation or deception.

 

  4.   Customer Service AI

An AI system that does not affect the substance or outcome of a decision is considered low risk. For example, chatbots and virtual assistants primarily provide informational support, automate routine tasks and enhance user experience. They do not significantly influence a person’s decisions or pose serious risks to their health or fundamental rights.

However, these AI systems can be considered high risk if they give misleading information, deceive users by not disclosing that they are AI chatbots, or are used for automated decision-making, for example evaluating loan applications. An investigation by Reuters showed that AI chatbots cannot be entirely relied on for election-related news: the authors asked ChatGPT-4o, Google Gemini and Perplexity.ai basic election questions, and some of the answers were only partially correct while a few were false. A German investigative outlet also found that similar AI chatbots provided inaccurate political information and fabricated sources. By contrast, AI used in HR for automated CV screening, AI used in financial services for credit scoring, and AI used in a doctor’s office for diagnostics are all high-risk AI systems, and each requires specific compliance actions.
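One piece of the chatbot obligation, telling users they are talking to a machine, is straightforward to implement. Below is a minimal sketch with hypothetical wording; the AI Act requires that users be informed they are interacting with an AI system, but it does not prescribe specific text:

```python
AI_DISCLOSURE = ("You are chatting with an automated AI assistant, "
                 "not a human agent.")

def start_session(user_name: str) -> list[str]:
    """Open a support chat with an upfront AI disclosure.

    The greeting wording is a hypothetical example; the AI Act requires
    informing users that they are interacting with AI, not this text.
    """
    return [AI_DISCLOSURE, f"Hello {user_name}, how can I help you today?"]

for message in start_session("Alex"):
    print(message)
```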

It is important to note that the risk level of an AI system also depends on its purpose and impact.

Self-Assessment Framework

 

Key definitions:

  • AI System means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
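As a starting point for self-assessment, here is a minimal sketch of a triage helper reflecting the four risk tiers described earlier. The use-case keywords are illustrative placeholders rather than categories taken from the Act, and a real assessment requires legal analysis of the prohibitions and the high-risk use cases the Act lists:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices (banned in the EU)
    HIGH = "high"                  # heavily regulated use cases
    LIMITED = "limited"            # transparency duties (e.g. chatbots)
    MINIMAL = "minimal"            # no specific obligations

# Illustrative keyword lists only; a real assessment needs legal review.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_USES = {"cv_screening", "credit_scoring", "medical_diagnostics"}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation"}

def triage(intended_use: str) -> RiskTier:
    """Map an intended use to a provisional AI Act risk tier."""
    if intended_use in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if intended_use in HIGH_RISK_USES:
        return RiskTier.HIGH
    if intended_use in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("credit_scoring"))  # RiskTier.HIGH
```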

 

Action Steps

After completing the self-assessment, you will have determined which risk category your AI system falls under.

Your obligations vary depending on your role, i.e. whether you are a provider, deployer, importer or distributor. If your AI system poses an unacceptable risk to the health, safety or fundamental rights of a person, you cannot place it on the market in any EU country because it is banned.

It is important to note that if a deployer, distributor or importer substantially modifies an AI system, they will be considered the provider of that system. The original provider will no longer be regarded as the provider, but is obliged to supply the new provider with the technical documentation, information about the capabilities of the AI system, and technical access and assistance.

If your AI system poses a high risk, then you must follow the obligations in the AI Act, which are:

  • Develop and enforce risk management procedures
  • Utilize high-quality training, validation and testing data. This means that the data used must be relevant, representative, free of errors and complete. Documentation of data sources and data preparation is also required.
  • Maintain documentation and design logging features. Detailed documentation of the AI system, its development and its performance is required, and logs of the system’s activity must be kept (see the sketch after this list).
  • Provide clear transparency and essential user information
  • Ensure human oversight measures are built into the system and/or implemented by the users
  • Guarantee robustness, accuracy and cybersecurity
  • Establish a quality management system
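
As an illustration of the documentation and logging obligation above, here is a minimal sketch of structured event logging for a high-risk system. The record fields are our assumptions about what a useful audit entry might contain; the AI Act requires automatic logging for high-risk systems but does not prescribe this exact structure:

```python
import json
import logging

# Write structured audit records of AI system activity to a file.
logging.basicConfig(filename="ai_system_audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def log_decision(system_id: str, input_summary: str,
                 output_summary: str, human_reviewer: str | None) -> None:
    """Append one decision event to the audit log as a JSON record."""
    logging.info(json.dumps({
        "system_id": system_id,
        "input": input_summary,
        "output": output_summary,
        "human_oversight": human_reviewer,
    }))

# Example: record a credit-scoring decision referred to a named reviewer
log_decision("credit-scorer-v2", "applicant #1042 features",
             "score=0.62, referred for review", "analyst_jane")
```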
Monica Aguilar

Monica is a legal and business professional with varied experience across law, media and corporate governance. She began her career in journalism, working as a radio host, a business journalist in television and print, and a television producer, before moving into commercial law. As a lawyer in Fiji, she advised on corporate governance, foreign investment, mergers and acquisitions, contract law and regulatory compliance.
