‘You have to know its limitations and risks,’ says knowledge lawyer
Insurance firms have been told that they “really have to know how to use” artificial intelligence (AI) due to the risks it poses.
The warning came during a discussion about AI at law firm RPC’s Global Access Week event, with Olivia Dhein, knowledge lawyer at RPC, stating that AI can be “a very powerful tool”.
AI is becoming more popular within insurance, with figures from technology solutions provider FIS, published in August 2023, finding that 63% of UK insurance executives were investing in the technology.
Insurers have experimented with it for claims, quotes and underwriting processes, and have also used it for back office functions, compliance checks and fraud detection.
However, with concerns having been raised in the past over issues such as accuracy, Dhein said it was important that insurance firms considered “the quality of thought around how you use AI”.
“You really have to know how to use it,” she said.
“You have to know its limitations and risks. It’s a very powerful tool.”
Risks
The discussion followed the launch of a new code of conduct for the use of AI within the insurance industry.
The initiative does not impose new regulations on firms but aims to establish a standard of responsibility when they are using AI for claims settlements.
This should help mitigate risks. Sean McGarry, partner at law firm Miller Thomson, said that firms needed to be aware of a range of risks, including AI hallucinating or fabricating answers.
For example, ChatGPT creator OpenAI was hit with a defamation lawsuit in June 2023 after a Georgia man claimed the chatbot had generated a false legal summary about him.
McGarry felt that firms using technology in the right way could catch out any mistakes made by AI.
“[If] AI is generating something that is not true, this isn’t a problem that cannot be solved with technology,” he said.
While McGarry also noted that “the potential cost savings for going through a large volume of documents” can be a benefit of AI, he questioned its ability to assess specialised risks.
“Lloyd’s of London has a tendency to underwrite and insure more specialised risks,” he said.
“We struggled to see how AI is going to be used to replace a human who’s actually looking at that unique risk, what the underwriting implications are for that particular risk and how to quantify the premium.”
But given how quickly the technology is evolving, McGarry said it would be interesting to see how AI adapts to more specialised insurance products.