Artificial intelligence continues to reshape how UK employers recruit, monitor and manage staff. What began as simple automation has evolved into sophisticated tools that influence hiring decisions, productivity assessments and even the early stages of disciplinary processes.
While these technologies offer genuine efficiency benefits, they also raise important legal and ethical questions. Employers must now ensure that their use of AI complies with existing employment and data protection laws, while employees should be aware of their rights in an increasingly digital workplace.
AI is now routinely used in CV‑screening, candidate ranking, performance scoring and workforce‑planning systems. Its reach is expanding, and once algorithms begin to influence decisions about real people, the full weight of employment law comes into play.
A key concern for 2026 is the persistence of algorithmic bias. AI systems trained on historical data can inadvertently replicate outdated or discriminatory patterns, resulting in unfair outcomes. This can manifest in subtle ways, such as deprioritising applicants with career breaks, misinterpreting facial expressions during assessments, or filtering out experience commonly associated with women.
The Equality Act 2010 is clear: employers are responsible for discriminatory outcomes, regardless of whether those decisions originate from human judgment or an automated system.
Data protection obligations also remain central. AI tools often rely on large volumes of personal data, meaning employers must comply with UK GDPR requirements around fairness, transparency and purpose limitation. Employees have the right to understand how their data is being used and to challenge decisions made solely by automated systems.
Employers therefore need to ensure that any AI‑driven process includes meaningful human oversight and that decisions can be clearly explained. Regulators, including the ICO, have made it clear that organisations must be able to justify the use of AI and maintain accountability for the decisions it influences.
AI‑powered monitoring systems have also become more common, tracking everything from keystrokes to productivity patterns. While such tools may support operational efficiency, they can raise significant concerns around privacy, psychological wellbeing and fairness.
Employers must balance legitimate business needs with the rights of employees and be transparent about any monitoring taking place. Excessive or intrusive surveillance is likely to create both legal risks and workplace tensions.
A recent reminder of AI’s limitations came from the case of Fortis v Krafton, where a company reportedly relied on ChatGPT rather than seeking legal advice when attempting to avoid an earn‑out provision. The AI‑generated suggestions failed to recognise that the proposed strategy would amount to a breach of contract, did not consider duties of good faith or fair dealing, and offered no assessment of litigation risk or potential damages. Crucially, at no stage did the system advise consulting a lawyer. The company ultimately faced significant liability, a costly outcome that could likely have been avoided had it obtained professional advice. The case illustrates an important principle: while AI can assist with information gathering, it cannot replace the strategic, contextual and risk‑focused guidance that qualified lawyers provide.
At present, the UK does not have a dedicated piece of legislation governing AI in the workplace. Instead, employers must navigate a patchwork of existing laws and regulatory guidance, including the Equality Act, UK GDPR, the Employment Rights Act and ACAS Codes of Practice. In this environment, proactive risk management is essential. Employers should be conducting Data Protection Impact Assessments, auditing AI systems for bias, updating policies to reflect technological change and ensuring that all AI‑influenced decisions receive proper human review.
From a solicitor’s perspective, disputes involving AI are already beginning to appear, particularly where individuals feel decisions lack fairness, explanation or transparency. Algorithms may seem neutral, but without proper oversight they can amplify risk rather than reduce it.
As AI becomes more embedded in everyday HR processes, both employers and employees benefit from understanding how the law applies and what safeguards must be in place.
If you or your organisation require advice on AI‑driven decision‑making, employee monitoring, discrimination risks or any aspect of workplace technology, please get in touch. The initial conversation with us is free of charge. Our whole team is extremely friendly and experienced, not just locally but nationally too. You will be in safe hands. Contact Us or call 0116 212 1000.