While the onus for responsible use of AI currently lies with the regulator, should the insurance sector be addressing these concerns as a matter of priority?
The UK government is attempting to take a leadership position on artificial intelligence (AI), with plans for the country to become the “global hub of AI”.
With more and more insurance firms exploring and adopting this technology, the issue of whether companies may need to put an AI insurance policy in place has arisen.
Alongside the potential need for new products, concerns over regulation of the impact of AI remain paramount in the discussion of the technology’s impact.
For example, figures published in August 2023 by technology solution provider FIS revealed that some 63% of insurance executives in the UK alone were investing in AI and machine learning.
With more firms using large language models (LLMs) and other forms of AI in their businesses – and with knowledge of the potential for AI to hallucinate and provide inaccurate information – policies to insure against potential complications have been front of mind for many.
James Teare, commercial and technology partner at law firm Bexley Beaumont, told Insurance Times: “One of the major concerns with LLMs like ChatGPT or Google Bard is [the question] ‘are they secure or not?’
“That is one of the issues about using ChatGPT and Google Bard – is whatever you upload compromised in some way?”
Issues with utilising AI technology in business could include these data privacy concerns, as well as the risk of providing customers with false information.
Teare continued: “There’s issues with data, ownership and confidentiality.
“If you are uploading information which belongs to your client, there is a strong risk that you are going to be breaching agreements with clients, [attracting] regulatory interest and possible litigation.
“Using generative AI in any kind of reckless way is going to cause you a reputational issue.”
In a case where sensitive data is being handled, Teare recommended a “closed environment” to guard against any potential breaches.
However, he admitted that the safest thing to do was just “not give ChatGPT anything”.
Responsible use
For Teare, holding an insurance policy that covers AI risks and guards against confidential information being leaked or stolen would support the productive use of AI.
But what might this AI insurance policy look like?
Enter Lloyd’s Lab and insurtech startup Armilla Assurance.
On 2 October 2023, Armilla launched a warranty product for AI models.
Karthik Ramakrishnan, chief executive and cofounder of Armilla, explained: “One of the biggest challenges for businesses when they are procuring models from third parties is knowing how well these models work.”
Armilla does two things in this instance – it tests these models and then provides an assessment report to determine whether the model delivers the results the firm wants, as well as the probability of it failing in the future.
Ramakrishnan added: “This is the first product that we’re launching that allows the buyer to get confidence in an AI model – that it’s been vetted by an independent third party based on industry standards.”
Armilla’s staff collectively hold more than a decade of experience in AI and the firm aims to “ensure that AI is adopted responsibly”.
Ramakrishnan explained: “There’s an inherent aspect of uncertainty with the outputs of these models, which is where the real risks are coming from. If you take ChatGPT, it will never give the same answer every single time.”
AI’s probabilistic nature stands in contrast to the “deterministic behaviour” of traditional software, where written code determines outputs within predefined bounds.
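To make the distinction concrete, the toy sketch below – purely illustrative, and not drawn from any of the firms mentioned – contrasts a deterministic calculation, which always returns the same answer for the same inputs, with the kind of temperature-based sampling generative models rely on, where repeated calls over identical inputs can produce different outputs.

```python
import math
import random

def deterministic_premium(base_rate: float, risk_factor: float) -> float:
    """Traditional software: the same inputs always produce the same output."""
    return round(base_rate * risk_factor, 2)

def sample_next_token(logits: dict, temperature: float = 0.8) -> str:
    """Toy LLM-style decoding: softmax over scores, then random sampling.
    Identical inputs can return different tokens on different calls."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_score = max(scaled.values())
    weights = [math.exp(s - max_score) for s in scaled.values()]
    return random.choices(list(scaled.keys()), weights=weights, k=1)[0]

print(deterministic_premium(500.0, 1.2))   # always 600.0
token_scores = {"approve": 2.0, "refer": 1.5, "decline": 0.5}
print([sample_next_token(token_scores) for _ in range(5)])  # varies from run to run
```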
Therefore, government regulation around AI is aimed at ensuring AI developers are “on top of this uncertainty”, Ramakrishnan said.
In March 2023, the government issued five new principles to regulators, including the FCA and PRA, to ensure the ethical use of AI by the corporate world.
These principles were safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
Later, on 12 July 2023, the FCA published its own regulatory approach to AI and big tech.
At the time, its chief executive Nikhil Rathi said the regulator “still has questions to answer about where accountability should sit – with the users, the firms or the AI developer?”.
He added: “Any regulation must be proportionate enough to foster beneficial innovation but robust enough to avoid a race to the bottom.”
Better approach?
On a similar note, Robin Gilthorpe, chief executive of AI-focused insurtech Earnix, said the “better approach” to AI regulation was for industry and corporate leadership bodies to establish principles about how AI models are built.
He explained that one issue with implementing new technology was that it can deliver good results in one area and then be unhelpfully applied to others.
Gilthorpe explained: “The reason is that different types of AI are appropriate in different use cases and these technologies should sit in a governance structure.
“It’s incumbent on all firms to ensure that the inputs they are using are rational, permitted and non-discriminatory.”
However, datasets are often based on legacy behaviour and may not be free from bias.
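As a rough illustration of how a firm might screen such legacy data before letting a model learn from it – a minimal sketch with hypothetical column names and an arbitrary threshold, not a description of any insurer’s actual controls – average outcomes can be compared across groups:

```python
import pandas as pd

# Hypothetical legacy pricing records; column names and figures are illustrative only.
history = pd.DataFrame({
    "postcode_group": ["A", "A", "A", "B", "B", "B"],
    "quoted_premium": [520.0, 540.0, 515.0, 690.0, 710.0, 705.0],
})

# Compare average outcomes by group: a large gap flags legacy bias that any model
# trained or prompted with this data would be likely to inherit.
group_means = history.groupby("postcode_group")["quoted_premium"].mean()
gap_ratio = (group_means.max() - group_means.min()) / group_means.min()
print(group_means)

if gap_ratio > 0.10:  # illustrative 10% threshold, not a regulatory figure
    print(f"Gap of {gap_ratio:.0%} between groups – review before use.")
```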
Like Teare, Gilthorpe warned that firms should be “careful about what the feedstock [for AI programmes] is, particularly in the LLM case”.
“There are some generalisable principles worth thinking about, as AI is a very powerful technology. You need to make sure it is used judiciously and not overused,” Gilthorpe added.
Earnix utilises “dynamic monitoring” to ensure that models perform as intended, guarding against “significantly different outcomes” from those originally modelled.
Gilthorpe explained: “Being able to have that monitoring capability is important because sometimes these things are baked into the data, sometimes they’re baked into the behaviour – that’s particularly the case when using historical data.”
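Earnix has not published how its monitoring is implemented, but a minimal sketch of output monitoring in this spirit – with every function name, figure and threshold assumed for illustration – might compare live model outputs against the distribution observed when the model was originally validated:

```python
import statistics

def has_drifted(baseline: list, live: list, tolerance: float = 0.15) -> bool:
    """Flag when live model outputs shift materially from the validated baseline.
    Returns True if the relative change in mean output exceeds the tolerance."""
    baseline_mean = statistics.mean(baseline)
    live_mean = statistics.mean(live)
    return abs(live_mean - baseline_mean) / abs(baseline_mean) > tolerance

# Baseline: predicted claim costs recorded when the model was signed off (illustrative).
baseline_outputs = [410.0, 395.0, 420.0, 405.0, 415.0]
# Live: this week's predictions, possibly reflecting changed data or behaviour.
live_outputs = [480.0, 495.0, 470.0, 505.0, 490.0]

if has_drifted(baseline_outputs, live_outputs):
    print("Model outputs have drifted – escalate for review before relying on them.")
```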
Data bias has previously reared its head in the insurance sector where motor premiums for ethnic minorities are concerned. This has been flagged for two years running by Citizens Advice reports published in March 2022 and 2023.
Not clear cut
However, for Andre Symes, group chief executive at Genasys, in certain cases “the use of AI at scale is a real challenge for the industry because insurance is rarely clear cut”.
He stressed that the use of responsible technology should be to “help insurance businesses offer the best policy price and best customer experience”.
Symes added: “The data you receive when applying AI for decision making is only as good as the data entered.
“The issue is that AI works to find the optimal route, which means that empathy just doesn’t factor within this process.”
He noted that the “best decisions rely on the human element” and that “augmenting the data with more complex emotive reasoning” is something AI still cannot match.
In conclusion, Teare added: “Fair enough, we didn’t have policies in place ahead of ChatGPT – nobody knew that would happen.
“But now that we have been forewarned, it would be practically negligent [not to].”