‘Organisations need to be aware of these risks and how best to mitigate them,’ partners say
The use of artificial intelligence (AI) in HR hiring and firing processes could lead to harder-to-defend liability claims.
That was according to Clyde & Co, which said in a statement yesterday (2 January 2024) that using AI in such situations raised the risk of decisions being challenged on the grounds of whether they could be considered fair or reasonable.
The law firm highlighted that firms had begun using AI as it can deliver “huge benefits” for generating efficiencies and cutting costs.
For example, it said businesses were using AI to manage existing employees, select candidates for promotion, monitor staff activity and to streamline dismissal decisions in redundancy situations.
However, James Major and Dino Wilkinson, who are both partners at Clyde & Co, said that AI tools “are often criticised for reflecting the prejudices – conscious or otherwise – of those who did the programming or a historic data set tainted by biases”.
“Consequently, they may inadvertently perpetuate discrimination in the decisions they produce,” they added.
Case
The duo issued the warning following the Schufa ruling handed down by the European Court of Justice on 7 December 2023.
Schufa is a German private credit agency that rates individuals’ creditworthiness.
The judgment held that using AI to create a probability value based on personal data constitutes “automated individual decision-making” for the purposes of the GDPR, meaning that individuals must be informed and may have a right to human intervention in such decisions.
In turn, Major and Wilkinson said it was “likely that we will see an increase in claims, which are harder to defend, from people who have missed out in the selection process or been singled out due to a decision made by AI”.
“Organisations need to be aware of these risks and how best to mitigate them,” they added.
“This will include engaging with AI developers to make sure that the products are as strong and robust as possible and that they have actively addressed discrimination risks.
“Companies should also be implementing clear policies on the use of AI within the organisation.”