It's off to work we go
AI is changing how we recruit, manage, retain, and dismiss employees. So what are the potential benefits, challenges, and top tips for success for your organisation and your clients? Elaine Morrissey interrogates The Machine
Artificial intelligence (AI) is changing how we work, from the advent of AI-assist tools to the automation of tasks and the need to upskill for this new way of working.
It is also changing how we recruit, manage, retain, and dismiss employees. AI tools are now available to assist across the employment life-cycle – from hiring to performance management to termination.
One existing use-case is candidate CV screening – the AI tool recommends the top candidates based on their CVs. This can save significant resources in working through high volumes of CVs.
Another example is candidate-assessment screening. The candidate completes an online assessment, including a video and competency and/or role-based questions, which automatically generates a score/rating. The organisation can then decide to move the top-scoring candidates to the next round of the recruitment process.
As part of the employment life-cycle, employee retention absorbs significant resources, and organisations are looking to employee-turnover-prediction tools to assist in managing employee engagement and retention. Such tools can provide useful data points for those who manage large teams and global/remote teams.
Benefits
In short, the benefits are time saved and better data points on which to make more informed decisions. Freeing up resources is always a priority. It allows employees to focus on more complex issues and can also prevent ‘task fatigue’ – where employees tire of doing the same or similar tasks.
There are several other benefits. If an organisation can recruit faster, it will keep top candidates engaged. In competitive markets, or for highly sought-after talent, organisations need to act promptly and be seen to be sophisticated – including having a sleek hiring process.
Are you concerned that the AI tool might miss a good CV, overlook nuances in a CV, or be unfair or biased? AI in the employment context is categorised as ‘high-risk AI’ under the EU AI Act.
This means that the AI system and process should be subject to rigorous checks and balances. Done correctly, AI has the potential to be (dare we say it) fairer and more consistent: each candidate is assessed in the same way.
The same approach applies to all, with no human bias – noting the challenge of building AI models that avoid or limit bias. All of this, however, is predicated on the AI model being designed appropriately. Human decision-makers are not subject to the same checks and balances as an AI tool, and bias is a human trait.
But overall, organisations can save precious resources, move faster, be consistent in approach, and make better decisions based on data points/predictions.
Challenges
If only it were all so easy! There are several challenges that organisations will need to consider. We will focus on four:
The law
Obligations under the EU AI Act and other applicable legislation must be considered – for example, the GDPR and employment legislation – and, if the initiative is global, the obligations in each relevant jurisdiction.
From an EU AI Act perspective, we have time: it will be circa summer 2026 (Q2/Q3 2026) before the act applies to AI in the employment context. However, all existing legislation already needs to be considered.
In general, subject to some limited derogations, AI in the employment context is considered to be ‘high-risk AI’. High-risk AI requires a very considered approach due to the onerous obligations that come with the development or use of such a tool.
While there are several derogations, as with any legislative derogation, they must be interpreted narrowly. For example, an AI system intended only to ‘perform a narrow procedural task’ is not considered high-risk – but using AI to screen candidates falls squarely within the high-risk categorisation.
While there is a two-year lead-in time, organisations considering the development or use of any high-risk AI tool need to commence their compliance journey now.
If your organisation or client is intending to ‘deploy’ an AI tool in the employment context, the following are a sample (not exhaustive) of the obligations that need to be considered:
- Technical and organisational measures (TOMs) – to ensure the AI system is being used in accordance with instructions provided by the provider (developer).
- Human oversight – high-risk AI systems need to be designed and developed in such a way that they can be effectively overseen by humans. Further, deployers (users) must ensure that the people assigned to oversee high-risk AI systems have the necessary “competence, training and authority as well as the necessary support”.
- Inform workers’ representatives and the affected workers that they will be subject to the system.
- A data-protection impact assessment (DPIA) will need to be carried out.
- A fundamental-rights impact assessment (FRIA) – this new kid on the block – also needs to be completed.
Does this tool fit into the AI strategy?
While it sounds obvious, it is vital that the purchase of any AI tool fits the organisation’s strategy. Many organisations have a plan for AI; many others are still working on one and considering what to prioritise.
With the avalanche of AI solutions, and a lot of noise and turbulence, it’s important for an organisation to scrutinise where it is – and where it wants to be.
For example, an organisation’s AI strategy may be all about using AI to save money. It may be focused on resolving bottlenecks, or on revenue generation – for example, developing the most sought-after AI tool for hospital-appointment management.
That strategy will have considered revenue-generating options, return on investment, and so on. It will not be static, but will likely change in the very fast-paced environment of AI.
An organisation may be focused on AI tools delivering efficiencies. However, while several out-of-the-box solutions may be marketed very cleverly and at relatively low cost, organisations need to be aware of the cost of compliance and the impact on employees.
It is important for organisations to plan for AI success and scrutinise what may look like a ‘quick/easy win’ to avoid AI fails.
People
While AI is great, an organisation’s biggest asset is still its people. Considering the impact on employees and bringing employees on that journey with you is key to success.
Organisations will have employees using the tool – say, HR and people leaders (managers) – and candidates and employees potentially affected by it. Training and awareness are key. This all feeds into people trusting the AI systems and their employers, which, in turn, feeds into the successful deployment of AI in an organisation.
There is specific reference to AI literacy in the EU AI Act, including: “In order to obtain the greatest benefits from AI systems while protecting fundamental rights, health and safety, and to enable democratic control, AI literacy should equip providers, deployers and affected persons with the necessary notions to make informed decisions regarding AI systems.”
Based on the currently available draft, article 4 on AI literacy states: “Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on which the AI systems are to be used.”
It will not be the same requirement for all employees or all organisations. Organisations will need to look at the gaps and see how best to fill them – for example, internal webinars, on-demand training, external courses, bespoke training, etc.
Organisations will need to consider an AI-literacy programme to ensure appropriate and tailored training. This programme will need to be kept under regular review to ensure it remains fit for purpose and develops as AI – and its benefits and challenges – develop.
Data-subject requests, complaints, and regulators
While the specific obligations under the EU AI Act need to be adhered to, there are other elements to consider. As good lawyers, we are thinking about what can go wrong and how to limit its occurrence or its impact.
The potential ramifications from an AI solution in the employment space include:
- Data-subject-rights requests (DSRRs) – there may be an increase in candidates, employees, and former employees submitting DSRRs, and an increase in the complexity of these requests. A disgruntled candidate may seek copies of assessments and results, and then details of the AI tool, including human-oversight measures. DSRRs must be complied with within a tight one-month timeline and can be resource-heavy.
- Complaints/claims – as above, organisations may face complaints or claims in relation to the processing of personal data to train the model, queries in relation to output, and compliance with legislation.
- Multiple regulators – organisations may also be dealing with the issue from multiple perspectives and regulators – for example, under the GDPR, the AI Act, employment legislation, and jurisdiction-specific legislation.
Training and trust
To minimise these issues, it is back to the training, awareness, and transparency piece. Trust, transparency, and literacy are mentioned almost 100 times in the EU AI Act – that speaks volumes about the focus of the legislation.
There may be a two-year lead-in time for high-risk AI in the employment context, but that deadline will fast approach. While there are specific obligations for high-risk systems, it’s so important to bring your employees along on the organisation’s AI journey.
AI can be a brilliant tool/assistant for employees. Just set up your organisation/clients for success.
Elaine Morrissey is a member of the Law Society’s Intellectual Property and Data Protection Law Committee. She is senior manager (privacy) and assistant global DPO at ICON plc.
FOCAL POINT
THE FINAL COUNTDOWN
- Align with your organisation’s AI strategy.
- Scrutinise the solution: what return on investment will it provide? What is the true cost of compliance? What might the hidden costs be – for example, claims from disgruntled employees or candidates?
- If purchasing a solution, scrutinise the contract.
- Review existing candidate- and employee-privacy notices. They will likely need to be updated as a first step.
- Engage with your employees (know-how, awareness).
- Consider your organisation’s role – for example, provider, deployer – to ensure understanding of obligations under the EU AI Act.