Romania: EU Artificial Intelligence Regulation: What companies need to know

On 1 August 2024, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence (the AI Regulation) officially entered into force. Its provisions will apply in phases: some requirements become mandatory from February 2025, while most of the rules governing artificial intelligence technologies, as detailed in the text of this European legislation, apply only from August 2026, with certain obligations for high-risk systems following even later.

The Regulation is directly applicable in Romania, without any further transposition measures, and affects commercial and institutional providers and users of artificial intelligence systems used in the EU, regardless of where those providers and users operate or where the systems are developed.

It introduces a specific legal framework for high-risk AI systems, with a significant impact on companies in sectors such as finance (banking, insurance companies), human resources, technology, pharma, healthcare and utilities.

The aim of this new regulation is to create a unified European legislative framework to ensure that artificial intelligence systems used in the European Union comply with citizens’ fundamental rights and ensure a high level of security.

Through specific data, development, deployment and monitoring requirements, the new legislation aims to minimize the risk of algorithmic discrimination, broadly defined as any systematic error that favors certain groups over others.

What does “artificial intelligence system” (AI system) mean?

Within the meaning of Article 3(1) of the Regulation, an ‘artificial intelligence system’ is a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.

Who does it apply to?

The new legislation on artificial intelligence has major implications both for providers of artificial intelligence systems, who must comply with rigorous technical and ethical standards, and for the end-users of these systems, who are obliged to implement security measures and ensure transparency in their use. These obligations apply regardless of the physical location of the equipment the systems run on.

The use of an artificial intelligence system located outside the European Union does not exempt the user from the obligation to comply with European legislation on artificial intelligence when the effects of the use occur on EU territory.

Furthermore, importers, distributors and manufacturers of artificial intelligence systems have specific responsibilities to verify compliance and inform end-users.

Apart from a handful of narrow carve-outs (for example, systems used exclusively for military or national-security purposes, or purely for scientific research and development), any company operating on the European market and using artificial intelligence must comply with this legislation.

What are the criteria for classifying artificial intelligence systems under the European Regulation?

The AI Regulation introduces a methodology for classifying AI systems, called the ‘pyramid of criticality’, based on a risk-based approach that aims to strike a balance between the need for regulation and its impact on those involved.

This hierarchical structure allows an assessment of the risks associated with each system, from those with minimal impact to those with the potential to seriously affect fundamental rights. It is structured into the following risk levels:

  • Artificial intelligence systems that pose an unacceptable risk are strictly banned from the European market. This category covers AI practices that can lead to exploitation, subliminal manipulation or disproportionate harm to certain social groups, such as ‘social scoring’ systems used to marginalize or discriminate against people, as well as certain biometric systems.
  • High-risk AI systems are (i) AI systems intended to be used as safety components of products subject to third-party conformity assessment, or used in the management and operation of certain critical infrastructures, or (ii) stand-alone AI systems that have fundamental-rights implications and are explicitly listed in the Regulation. These must comply with most of the obligations under the AI legislation. They are generally single-purpose or limited-purpose AI systems that interact with people in education, the workplace, public services and similar contexts. Mandatory measures include implementing risk-management systems, record keeping, data governance, ensuring transparency, human oversight and compliance with strict cybersecurity standards.
  • AI systems with limited risk, such as customer-service chatbots, are subject to simplified requirements, focused mainly on transparency: users must be made aware that they are interacting with an AI system.
  • Low-risk AI systems, such as email spam filters, are not subject to detailed regulation under the AI Act.
  • The AI Regulation also distinguishes between general-purpose AI models (‘GPAI’, e.g. the models behind ChatGPT) and AI systems, applying specific rules to each. While GPAI models are subject to transparency and documentation obligations, AI systems built on GPAI models can be classified as high risk, depending on their impact on fundamental rights. Since GPAI models are regulated separately from AI systems, a model on its own will never constitute a high-risk AI system, because it is not an AI system; a GPAI system built on a GPAI model, on the other hand, may well constitute a high-risk AI system.

General-purpose AI models do not fall neatly into a single risk category. ChatGPT, for example, while appearing to be just a text generator, can produce harmful content because it has no genuine understanding of ethics or morality. Without proper oversight, such a system can generate responses that mislead or even endanger users.
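To make the tiered logic above more concrete, the sketch below models the ‘pyramid of criticality’ as a simple triage function. It is an illustration only, not a legal assessment tool: the input flags and category labels are simplified assumptions drawn from the tiers described above.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice - banned from the EU market"
    HIGH = "high risk - most AI Act obligations apply"
    LIMITED = "limited risk - transparency obligations"
    MINIMAL = "minimal risk - no specific AI Act obligations"

def classify_ai_system(
    is_prohibited_practice: bool,      # e.g. social scoring, subliminal manipulation
    is_product_safety_component: bool, # safety component subject to third-party conformity assessment
    affects_fundamental_rights: bool,  # stand-alone use case explicitly listed in the Regulation
    interacts_with_humans: bool,       # e.g. a customer-service chatbot
) -> RiskTier:
    """Simplified triage mirroring the risk tiers described above.

    This is not a legal assessment: a real classification must be made
    against the Regulation's annexes, with legal advice where needed.
    """
    if is_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if is_product_safety_component or affects_fundamental_rights:
        return RiskTier.HIGH
    if interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a CV-screening tool used in recruitment (a workplace use case)
print(classify_ai_system(False, False, True, True))  # RiskTier.HIGH
```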

Interaction with other EU legislation and next steps for Member States

The EU takes a layered approach to regulating artificial intelligence. It builds on existing laws such as the General Data Protection Regulation (‘GDPR’) and more recent acts such as the Digital Services Act (Regulation (EU) 2022/2065, ‘DSA’) and the Digital Markets Act (Regulation (EU) 2022/1925, ‘DMA’), and the AI Regulation will complement this framework.

For example, the GDPR already restricts important decisions being taken solely by automated means, such as the dismissal of employees, and gives individuals the right to meaningful information about how such algorithmic decisions affecting their lives are made.

The DSA and DMA focus on large online platforms. The DSA requires these platforms to be more transparent about how their algorithms work and to give users more control over the content they see. The DMA aims to increase competition among large platforms and restricts how they may use their algorithms to favor their own products or services.

In essence, the EU seeks to create a safer and fairer online environment, in which AI is developed and used responsibly and transparently, protects citizens’ rights and promotes a healthy, competitive digital economy.

To implement the AI Regulation, Member States are required to designate a specialized national authority by 2 August 2025. This authority will be responsible for overseeing compliance at local level and cooperating with the European Commission to ensure uniform application of the rules across the Union.

What does a company planning to use artificial intelligence need to know?

The AI Regulation complements and does not replace the existing EU legal framework. Thus, existing EU regulations on data protection, product safety, consumer protection, social policy and labor law remain in force and continue to apply. Organizations using artificial intelligence systems must ensure compliance with both these existing laws and the new requirements imposed by the AI Act.

The AI Regulation also provides for severe penalties, comparable to those for data protection breaches: fines for the most serious violations can reach €35 million or 7% of worldwide annual turnover, whichever is higher.

For SMEs, the sanctioning mechanism is more flexible: small and medium-sized companies that find it difficult to comply with the complex AI rules will face a less drastic approach, with fines proportionate to their ability to pay.
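As a quick illustration of that arithmetic, the upper bound of a fine for the most serious infringements is the higher of the two caps. The function below is a minimal sketch assuming only the €35 million / 7% thresholds mentioned above:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious AI Act infringements:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# A company with EUR 1 billion in worldwide turnover faces a cap of EUR 70 million.
print(f"{max_fine_eur(1_000_000_000):,.0f}")  # 70,000,000
```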

To comply with the new AI regulations, companies need to start by identifying all the AI systems and models they are using or developing.

Such an inventory can be done by analyzing existing software and applications or by consulting with IT and risk departments. This initial inventory is important even for companies that do not currently consider themselves to be using AI, as this is likely to change in the coming years.

To effectively manage these changes in legislation, companies should consider several steps such as:

  • Identify artificial intelligence systems:

The first step is to make a list of all programs and equipment used in the company and check which ones use artificial intelligence.

  • Assess the applicability of the AI Regulation:

Check whether the AI systems you have identified must comply with the new EU rules on artificial intelligence, in particular if they are placed on the market or used within the EU, or their output affects individuals in EU countries.

  • Classification of systems:

Establish which specific rules each AI system must comply with, depending on how risky it is considered.

  • Determine the role of these systems within the company:

Identify the concrete steps you need to take to ensure that you are using artificial intelligence systems in compliance with the law. For systems classified as high-risk, identify the organization’s exact role in the process (provider, deployer or other) in order to determine the related responsibilities.

  • Develop a compliance plan:

Create a plan to help ensure that your company’s activities comply with the requirements of the AI Regulation; one way to keep track of these steps is sketched below.
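A lightweight way to support the steps above is a structured inventory of AI systems. In the sketch below, the field names, roles and example entries are illustrative assumptions, not terms defined by the Regulation:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of a company-wide AI inventory (illustrative fields only)."""
    name: str
    vendor: str
    in_eu_scope: bool          # step 2: does the AI Regulation apply?
    risk_tier: str             # step 3: unacceptable / high / limited / minimal
    company_role: str          # step 4: provider, deployer, importer, distributor...
    compliance_actions: list[str] = field(default_factory=list)  # step 5

inventory = [
    AISystemRecord(
        name="CV screening tool",
        vendor="example-hr-vendor",  # hypothetical vendor for illustration
        in_eu_scope=True,
        risk_tier="high",
        company_role="deployer",
        compliance_actions=["human oversight procedure", "logging", "staff training"],
    ),
    AISystemRecord("spam filter", "email-provider", True, "minimal", "deployer"),
]

# Surface the systems that need a detailed compliance plan first
for record in inventory:
    if record.in_eu_scope and record.risk_tier == "high":
        print(record.name, "->", ", ".join(record.compliance_actions))
```

Keeping such an inventory in a machine-readable form makes it easy to re-run the triage whenever a new system or vendor is adopted.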

AI Regulation compliance deadlines:

The AI Regulation came into force on 1 August 2024, but provides a transition period of almost two years, until August 2026, for companies to adapt to the new requirements, with the following compliance deadlines:

  • 6 months (from 2 February 2025): the ban on AI practices deemed to pose an unacceptable risk applies.
  • 12 months (from 2 August 2025): general-purpose AI (GPAI) models become subject to strict rules, except for those already on the market, which are granted a further two-year adaptation period.
  • 24 months (from 2 August 2026): most provisions of the AI Regulation apply, establishing a clear legal framework for the development and use of artificial intelligence.
  • 36 months (from 2 August 2027): high-risk AI systems embedded in regulated products will have to comply with additional requirements.
  • High-risk AI systems used by public authorities and already on the market before the rules apply must be brought into compliance by 2 August 2030.
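For planning purposes, these milestones can be kept in a small machine-readable timeline. The sketch below encodes the application dates that follow from entry into force on 1 August 2024; the one-line summaries are shorthand, not legal text:

```python
from datetime import date

# Application milestones counted from entry into force (1 August 2024)
AI_ACT_MILESTONES = {
    date(2025, 2, 2): "ban on prohibited AI practices applies",
    date(2025, 8, 2): "rules for general-purpose AI (GPAI) models apply",
    date(2026, 8, 2): "most provisions of the AI Regulation apply",
    date(2027, 8, 2): "obligations for high-risk AI systems embedded in regulated products apply",
}

def obligations_in_force(on: date) -> list[str]:
    """List the milestones that already apply on a given date."""
    return [text for deadline, text in sorted(AI_ACT_MILESTONES.items()) if on >= deadline]

print(obligations_in_force(date(2026, 9, 1)))  # first three milestones apply
```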

What next?

The AI Regulation is a first step towards a comprehensive regulation of artificial intelligence, but its impact on other related areas will be largely determined by future implementing acts, both at European and national level.

Issues such as intellectual property, data protection, cybersecurity and human resources will be profoundly influenced by how the Regulation is interpreted and applied.

Given that the AI Regulation provides for around 20 pieces of secondary legislation, EU countries and industry stakeholders may still exert significant influence on its implementation, as delegated acts, in particular on high-risk systems, are still to be developed.

Although Romania has a National Strategy for Artificial Intelligence, the specific national legislative framework is still under development. The EU Regulation on AI, on the other hand, provides a much more comprehensive set of rules and standards, with a particular emphasis on transparency, accountability and ethics, so Romanian legislation will need to align with these fundamental principles to ensure public trust in AI technologies.
