‘Balancing the excitement of the potential of AI with the cutting-edge associated risks is crucial’, says head of innovation
Artificial intelligence (AI) could bring “enormous potential benefits” to the insurance industry, but regulators should implement a bespoke AI maturity framework to assist its “safe and responsible” adoption, says global law firm Kennedys.
Responding to the recent discussion paper from the FCA, Bank of England and Prudential Regulation Authority – published on 11 October 2022, with responses closing on 10 February 2023 – Kennedys said that this proposal would represent a “risk-based approach that achieves fair and equitable customer outcomes whilst being an enabler for insurers”.
AI maturity refers to the degree to which organisations have understood the technology’s capabilities and risks, with an AI risk maturity framework referring to a framework for its safe implementation.
The regulators’ discussion paper asked relevant stakeholders for their opinions on the potential benefits, risks and harms of AI – it questioned whether there were any regulatory barriers to the safe and responsible adoption of AI in UK financial services and how current regulation could be clarified with respect to AI.
In a section describing the potential benefits and risks of adopting AI into UK financial services, the regulators explained that “if misused, [AI] may potentially lead to harmful targeting of consumers’ behavioural biases or characteristics of vulnerability, discriminatory decisions, financial exclusion and reduced trust”.
A spokesperson for Kennedys told Insurance Times that regulators should prioritise “improved consistency, efficiency and enhanced customer experience” when designing any regulations.
They explained: “AI can be used to provide personalised and convenient customer experiences, such as through the use of chatbots or personalised financial advice.
“However, the benefits that AI can bring to the customer need to be balanced with the risks – maturity frameworks have been developed by the likes of Microsoft, IBM and Gartner to guide advances in AI and manage those risks.
“These frameworks are high level, however, and Kennedys argues that the financial services industry needs a more tailored approach that would allow insurers to differentiate between different types of activity and classes of business – and also between specialty and general insurance.”
Framework details
The framework suggested by the law firm prioritises six elements – these are data collection, training, testing or monitoring, explainability, transparency and legitimate use.
Training AI algorithms, for example, would require a risk assessment and defined methodology – the training data set would be auditable to limit and prevent bias, allowing the user to articulate the coverage or volume of data to create “safe and responsible” algorithms.
Explainability would be achieved via clear articulation of outcomes to mitigate irresponsible and unsafe use of algorithms – this is vital, said Kennedys, in relation to “legal, ethical and reputational risks”.
The firm explained: “Take the use of an AI system to determine a customer’s car insurance premium – historically, insurers have regarded women as safer drivers than men.
“In recent years, however, it has not been possible for insurers to use gender as a variable.
“The criteria of fairness on the one hand and inclusiveness on the other need to be carefully balanced and the output of any AI system clearly understood.”
Richard West, Kennedys partner and head of innovation, said: “Balancing the excitement of the potential of AI with the cutting-edge associated risks is crucial – it will be vital to bring stakeholders and public opinion along as products develop.”