- 28 Jul 2025
- Law Blog
- Employment Law
Artificial Intelligence (AI) is rapidly transforming workplaces across the UK, providing increased efficiency, enhanced decision making and the automation of many routine tasks. From recruitment tools that can scan CVs to AI-driven performance management systems, there is a wide range of potential benefits for employers. However, these advances in AI technology also bring legal risks and responsibilities, so it is important for UK employers to understand how AI can be used lawfully and ethically.
Understanding AI in the Workplace
- AI systems in the workplace take many forms, which include:-
- Automated hiring and screening tools;
- Employee monitoring and productivity trackers;
- Chatbots for HR support;
- Predictive analytics in workforce planning;
- Decision-making tools for matters such as promotions, redundancies or disciplinary action.
These technologies can improve productivity and reduce human bias, but they also bring legal challenges.
Key Legal Risks for Employers
Discrimination and Bias
Although AI can help reduce human bias, it is limited by the data it is trained on. If historical data includes discriminatory patterns, there is a risk that AI will replicate, or even amplify, those biases. For example, recruitment tools trained on biased hiring data may unfairly exclude candidates based on gender, ethnicity or age. The legal risk is that the employer may inadvertently breach its obligations under the Equality Act 2010 not to discriminate. It is therefore essential that employers ensure any AI systems are thoroughly tested for bias and regularly audited.
Data Protection and Privacy
AI systems usually rely on personal data, and sometimes on sensitive employee information. Under the UK GDPR and the Data Protection Act 2018, employers must process data lawfully, transparently and securely, so the use of AI systems brings a risk of non-compliance with these data protection obligations. A responsible employer should therefore conduct a data protection impact assessment before implementing AI tools and provide clear notices to employees explaining how AI is used in the processing of their data.
Automated Decision Making
Under the UK GDPR, individuals have the right not to be subject to a decision based solely on automated processing that significantly affects them, such as in recruitment or dismissal. The risk to the employer is that unlawful use of automated decision making can amount to a breach of the UK GDPR. To mitigate this risk, employers should ensure there is meaningful human oversight of such decisions.
Employment Law Obligations
Using AI in the workplace can also change job roles, affect employment contracts and, through efficiency gains, lead to redundancies. The indirect legal risk is a failure to consult staff or trade unions on these matters, so employers must follow proper consultation processes, especially where AI changes employment terms or leads to restructuring.
The Case of Manjang v Uber Eats (2025)
This case, which settled without a determination by the Tribunal, involved an Uber Eats driver who brought a claim of indirect discrimination based on his Black ethnicity after repeated AI facial recognition failures in the Uber app ultimately led to his account being suspended. The case highlights the risk of biased biometric systems and emphasises the need for employers to ensure transparency, fairness and, in particular, human oversight when using AI technology.
Best Practices for Employers
- To mitigate legal risks and build trust, employers should take the following steps:-
- Carry out risk assessments for all AI systems used in HR or workforce management;
- Ensure transparency with employees regarding how AI is used;
- Seek legal and compliance advice early in the procurement or development of any AI technology;
- Ensure that HR and management staff responsible for using AI are trained on both its use and the relevant legal obligations;
- Consider implementing a designated AI policy which sets out guidelines on use, together with the consequences of breaching those guidelines;
- Ensure ongoing monitoring of AI systems to identify unintended consequences or bias.
The Role of Regulation
The UK Government has so far adopted a pro-innovation approach to AI regulation, focusing on principles rather than prescriptive rules. However, as explained above, employers still have existing obligations under data protection legislation, the Equality Act 2010 and general employment law to navigate.
In future, regulatory bodies such as the Information Commissioner's Office and the Equality and Human Rights Commission are expected to take a more active role in AI oversight. It is essential therefore that employers stay up to date with guidance and best practice from these organisations.
If you require employment law advice on your obligations regarding AI in the workplace, or help implementing an AI policy for your staff, please do not hesitate to contact us on 0800 542 4245.