

Don’t allow AI to erode workplace trust
Síobhra Rush, employment law partner at Lewis Silkin

19 Jun 2024 / employment


The key workplace decisions of recruitment, performance, selection, promotion and termination are considered high-risk, and therefore subject to stricter rules, under the new EU AI Act, writes Síobhra Rush, employment law partner at Lewis Silkin.

The act, the world’s first comprehensive AI legal framework, has been formally adopted by the Council of the EU, with the key compliance obligations to be staggered over the next two years. The Department of Enterprise, Trade and Employment has sought submissions from interested parties regarding the implementation of the act.

The general expectation is that the act will become the international default, much like the GDPR, which became a model for many other laws on a global scale.

Bias within AI systems

The potential for AI systems to ‘bake in’ discrimination and bias is well recognised.

Hiring decisions using AI could therefore result in outcomes that are open to legal challenge.

Detecting and addressing the risk of discriminatory outcomes is a multi-stakeholder issue.

Provider assurances will be a key part of the procurement process and deployers must ensure that input data is representative and relevant.

Many employers will go further, putting in place bias audits and performance testing to mitigate these risks.

Similarly, ensuring that AI-supported decisions can be adequately explained is critical to maintaining trust in AI systems and enabling individuals to effectively contest decisions based on AI profiling.

Proliferating tools

Novel AI tools are proliferating in areas such as recruitment, performance evaluation, and monitoring and surveillance.

The act categorises these common use cases as automatically high risk.

Lower-risk scenarios are those where the AI performs narrow procedural tasks or merely improves the result of a previously completed human activity.

This breadth reflects the wide range of AI systems already in use as workplace tools, which is worth bearing in mind when assessing the potential reach of the act.

Each stage of the recruitment process can now be supported by AI systems: generative AI drafts job descriptions, algorithms determine ad targets, and candidates might interact with a chatbot when submitting their application.

Selection, screening and shortlisting supported by AI systems present legal and ethical risks. Assessments and even interviews may now have less human input.

The collection and objective analysis of employee data means that AI is already widely used as a performance management tool.

Monitoring technology has the potential to provide a safer workplace (for example, tracking delivery drivers’ use of seatbelts and speed) but could also be overly intrusive and erode trust by monitoring keystrokes and work rate.

Rigorous end

Employment AI use cases are very likely to fall into the more rigorous end of the act’s requirements, but the consequent obligations will hinge on whether the employer is a ‘provider’ or a ‘deployer’ of the AI system.

Most employers will be considered deployers, with the developer of the AI system being the provider.

Providers have extensive obligations:

  • Maintaining a risk-management system,
  • Ensuring training data meets quality criteria,
  • Providing monitoring logs,
  • Designing tools that can be overseen by humans, and
  • Registering the tool on an EU-wide database.

The requirements for deployers are somewhat less onerous (and less costly) but will still require significant planning:

  • Using AI in accordance with provider instructions,
  • Assigning trained human oversight,
  • Limiting data use to that which is relevant and sufficiently representative,
  • Monitoring the system and flagging incidents to the provider,
  • Log-keeping,
  • Informing workers’ representatives and affected workers that they will be subject to an AI system ahead of its use, and
  • Conducting a fundamental rights impact assessment prior to use (if the deployer is providing a public service or operating in banking or insurance).

Data request

Deployers may also be required to comply with a data request for an explanation of the role of AI in a decision which has impacted an affected person’s health, safety or fundamental rights.

However, the lack of settled practice as to what such an explanation looks like may make compliance more difficult.

Employers should be aware that a deployer may be deemed a provider, with more onerous obligations, if it:

  • Substantially modifies an AI system,
  • Modifies its intended purposes, or
  • Puts their name or trademark on it.

Lighter transparency obligations apply to use cases deemed limited risk, for example the requirement that users are informed that they are interacting with an AI tool.

Certain uses are banned outright, such as inferring emotions in the workplace and biometric categorisation that uses personal physical characteristics to infer race, political views, religion or sexual orientation.

How should employers prepare?

Deployers should:

  • Understand what’s in scope with an audit of tools, both in use and planned,
  • Identify what needs to change in existing practices and processes to comply with the act,
  • Update record-keeping, training, and internal policies and procedures, giving due weight to the overlap between the GDPR and the act,
  • Provide training and awareness for those using AI systems on their new obligations,
  • Conduct due diligence on contracts along the full length of the supply chain,
  • Inform employee representatives that workers will be subject to an AI system, and
  • Understand AI preferences and the importance of explainability.
Síobhra Rush is an employment law partner at Lewis Silkin.