AI in Recruiting: What the EU AI Act Means in Practice – and What It Doesn’t


May AI still be used in recruiting in the future? And if so, under what conditions? What many HR leaders are currently grappling with is no longer a theoretical debate. With the EU AI Act, the European Union has introduced, for the first time, a binding legal framework for the use of artificial intelligence, including in human resources. For companies, this primarily means one thing: gaining orientation, understanding the risks, and critically reviewing existing processes.


What the EU AI Act Regulates in Principle

The EU AI Act is not an abstract ethical guideline, but a concrete product regulation. It defines which AI systems are permitted in the European Union, which are prohibited, and under which conditions high-risk systems may be used. The regulation establishes a harmonized legal framework across all 27 EU member states and applies both to placing AI systems on the market and to their use in everyday business operations.


The EU AI Act follows a risk-based approach. What matters is not which technology is used, but what impact it has on people and their fundamental rights. The greater the potential consequences of an AI-driven application, the stricter the regulatory requirements.


In principle, the EU AI Act distinguishes between four risk classes:

  • Minimal risk: for example, many spam filters or translation tools.

  • Limited risk: AI systems subject to transparency obligations, such as chatbots where users must be informed that they are interacting with AI.

  • High risk: AI systems that significantly influence key areas of people’s lives – including education, creditworthiness, justice, and employment/recruiting.

  • Unacceptable (prohibited) risk: certain practices such as manipulative systems, social scoring, or emotion recognition in the workplace context (with narrow exceptions).


For HR in particular, the following is crucial: AI systems used for “employment, workers management and access to self-employment” – including recruiting, selection, promotion, or termination – are, in practice, almost always classified as high-risk, especially when profiling is involved.


This high-risk classification has significant practical consequences. It entails extensive obligations, including registration, technical documentation, continuous monitoring, risk management, and so-called AI vigilance processes. The associated administrative and financial burden is substantial, particularly for HR teams and mid-sized companies. In practice, this often makes the use of high-risk AI systems economically unattractive or only limited in scalability – despite their potential benefits.


How Recruiting Is Affected by the EU AI Act

Primarily affected are systems that influence pre-selection or decision-making processes or that profile candidates. These include, among others:

  • AI-based analysis and evaluation of CVs

  • Matching algorithms and candidate ranking systems

  • Tools that automatically generate shortlists or recommendations

  • Chatbots in the application process that pre-qualify or filter responses

  • AI-supported interview or behavioral analyses, especially for profiling and scoring


The guiding principle is clear: the closer an AI system comes to the actual hiring decision and the more intensively it profiles individuals, the more likely it falls into the high-risk category – and the stricter the requirements for transparency, control, and documentation.


In addition, AI systems for emotion recognition in the workplace – such as tools that infer emotions from facial expressions or voice for performance or suitability assessments – are generally prohibited.


What Companies Need to Consider Now

The EU AI Act does not prohibit AI in recruiting. AI-supported screening, matching, or evaluation tools remain permissible in principle, but are subject to strict regulation as high-risk systems. What matters most is a clearly structured and accountable decision architecture.


For companies, this results in several key requirements:

  • Transparency: It must be clear how AI outputs are generated and which factors influence scoring results.

  • Documentation: Criteria, data sources, versions, and evaluation logic must be documented and auditable.

  • Traceability: Decisions must remain reviewable by humans; logs and results must be traceable.

  • Human oversight: Final decisions must not run unnoticed through automation; genuine human oversight is mandatory, not a box-ticking exercise.

  • Clear responsibilities: Accountability for errors, bias, or complaints must be clearly defined – both for providers and for deploying organizations.


The EU AI Act does not demand perfect AI – but responsible, explainable, and controllable use of high-risk systems.


What Remains Possible (and Sensible)

The use of AI in recruiting is not being abolished, but reclassified. Systems that support rather than independently decide remain not only permitted, but highly valuable – especially when combined with clearly defined human decision-making processes.


This includes, in particular, applications that:

  • Structure, normalize, and make application documents comparable without automatically issuing acceptances or rejections

  • Pre-sort and prioritize large volumes of applications while final decisions remain with humans

  • Provide transparent analysis and decision support (e.g. skill profiles, deviations, consistency checks)

  • Improve efficiency without crossing into prohibited practices such as emotion recognition or opaque black-box decisions


In short: AI may support, structure, and prepare – but must not replace responsible human decision-making.


Conclusion

The EU AI Act does not change whether AI may be used in recruiting – high-risk systems in HR are explicitly permitted, but subject to strict requirements. What it does change is how companies must assume responsibility: through clear governance, documented processes, genuine human oversight, and clearly defined boundaries around prohibited practices such as emotion recognition in the workplace.


Transparency, traceability, and clear decision structures thus become quality benchmarks of modern HR work. Companies that use AI responsibly not only meet regulatory requirements – they also strengthen trust among candidates, business stakeholders, and employees.


Disclaimer: This article does not constitute legal advice and does not replace an individual legal assessment. Despite careful research, no guarantee is given for completeness or accuracy.