AI tools in HR often violate labour law; fines of up to EUR 35 million may be imposed

AI tools in HR often violate labour laws – they discriminate, collect prohibited data and invade privacy. From August 2026, the EU will impose fines of up to EUR 35 million.

The implementation of artificial intelligence (“AI”) systems in HR contexts presents employers with fundamental new legal issues. These require a careful analysis of how existing legislation interacts with the possibilities unlocked by AI technologies, as well as thorough preparation for the full applicability of the so-called AI Regulation (EU Regulation 2024/1689 on artificial intelligence – the “AI Regulation”) from 2 August 2026.

Existing Labour Law Framework and AI

The Czech Labour Code (Act No. 262/2006 Coll.) does not contain any AI-specific rules, but its existing provisions create significant barriers to the deployment of these technologies. A common misconception among employers is that the absence of specific legislation implies a legal vacuum; the opposite is true. The general provisions of the Labour Code fully apply to AI systems, and violations of these rules carry the same consequences as they would in traditional HR processes.

The specific impact of these legal restrictions can be illustrated by the following provisions.

Equal Treatment and Anti-Discrimination (Section 16)

Section 16(2) of the Labour Code sets out a list of protected characteristics relating to employees, and employers must guarantee equal treatment regardless of these characteristics at all stages of the employment relationship.

Section 16(3) of the Labour Code goes on to explicitly address indirect discrimination, i.e. a situation where an apparently neutral criterion places persons with a certain characteristic at a disadvantage. This is a fundamental problem for most current AI systems trained on historical data, as such data often reflects discriminatory patterns.

A typical example is an algorithm that penalises gaps in a job seeker’s employment history, i.e. “gaps in their CV”, which indirectly discriminates against women, who are more likely to interrupt their careers due to motherhood.

Data Processing Limits Before and During Employment (Sections 30, 312, 316)

Restrictions on Pre-Employment Data Collection (Section 30(2))

Section 30(2) of the Labour Code limits the scope of data processed prior to the commencement of employment exclusively to information directly related to the conclusion of the employment contract, thereby excluding common practices of AI tools such as:

  • social media scraping
  • psychological profiling
  • collecting data on family circumstances

Moreover, under Section 316(4) of the Labour Code, employers may not request the above-mentioned information, nor may they obtain it through third parties.

Employee Access to AI-Generated Records (Section 312(3))

Section 312(3) of the Labour Code guarantees employees the right to access all documentation in their personal file, including AI-generated evaluations. According to Article 26 of the AI Regulation and Articles 13 and 15 of the GDPR, employers are therefore obliged to ensure the full transparency and explainability of automated decisions. Many current AI systems are unable to meet this requirement.

Monitoring of Employees (Section 316)

The strictest limits on the use of AI are set out in Section 316 of the Labour Code, which regulates the monitoring of employees. This provision permits only reasonable checks on the use of the employer’s work equipment, prohibits intrusion into an employee’s privacy through the monitoring or review of their communications without a serious reason, and requires employees to be notified in advance where such a reason exists.

These requirements fundamentally limit the scope for deploying AI systems for productivity analysis or communication monitoring, as most of these technologies rely on the continuous collection and analysis of data on employee behaviour. Such continuous monitoring typically exceeds the limits of reasonable checks and intrudes on employees’ privacy in the absence of a serious reason arising from the specific nature of the employer’s activities.

High-Risk AI Classification Under the AI Regulation

The AI Regulation further tightens the legal framework: Annex III classifies as high-risk all AI systems intended for use in the recruitment or selection of natural persons, in particular for publishing targeted job offers, analysing and sorting job applications, and evaluating candidates.

Also classified as high-risk are all AI systems intended for use in making decisions that affect employment conditions, career advancement or the termination of contractual employment relationships; in assigning tasks based on individual behaviour, personality traits or characteristics; or in monitoring and evaluating the performance and behaviour of individuals within these relationships.

Employers, acting as deployers of high-risk AI systems, are then obliged under Article 26 of the AI Regulation to entrust human oversight to persons with the necessary competence, to continuously monitor the systems’ operation, and to inform the provider and the competent authorities in the event of risks or incidents.

They are also required to keep the automatically generated logs under their control for a period appropriate to the system’s purpose, and in any event for at least six months, and to inform employee representatives and affected workers that they will be subject to an AI system before it is put into use in the workplace.

This classification applies from 2 August 2026. The AI Regulation provides for fines of up to EUR 35 million or 7% of the company’s global annual turnover, whichever is higher, for the most serious infringements (prohibited AI practices), while non-compliance with the obligations relating to high-risk systems can be fined up to EUR 15 million or 3% of global annual turnover.

GDPR and Data Protection Impact Assessment

A further obligation may arise from Article 35 of the GDPR, which requires a data protection impact assessment where the processing of personal data, for example through AI technologies, is likely to result in a high risk to the rights and freedoms of individuals.

Conclusion

Taken together, this is a comprehensive set of new obligations. Meeting them before August 2026 requires more than mere formal compliance with legislative requirements; it demands a deep understanding of how AI tools interact with Czech labour law, the GDPR and the broader European regulatory framework.

Employers must therefore begin thorough preparations well in advance and secure the appropriate expertise to successfully meet these requirements.
