Experts discuss artificial intelligence during Insurance Times’ Fraud Charter
Computer programmes could soon become the main way to perpetrate fraud from across the globe, according to Paul Holmes, partner at legal services firm DWF.
Speaking during Insurance Times’ Fraud Charter roundtable event last week (16 May 2023), Holmes said the shift was “worrying” during a discussion about artificial intelligence (AI).
One AI-related risk being weighed by the insurance sector was a potential uptick in fraudulent activity from hackers empowered by the technology.
And in March 2023, cyber security firm Darktrace reported the emergence of more convincing and complex AI scams.
Holmes said: “From a fraud perspective, the worrying thing is we are genuinely very, very close to a period where you won’t need warehouses full of people in whatever country to perpetrate frauds, you can just set a computer programme to do it”.
Regulation
Meanwhile, the growing use of data science continues to transform the way the insurance industry analyses the risks it covers.
Recently, it was reported that insurer Zurich had been experimenting with ChatGPT and exploring how it could use AI technology for tasks such as extracting data for claims and modelling.
In a government whitepaper entitled A pro-innovation approach to AI regulation, published in March 2023, the government set out five principles for regulators – including the FCA and PRA – to ensure the ethical use of AI by insurance firms.
These were: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
Speaking about regulating AI, Dan Mount, principal of online safety at Ofcom, said: “When regulating AI or algorithms, there’s generally a challenge – [there are] a lot of neural networks and complicated algorithms [and] you can’t really just look at the code and understand what they’re doing.
“It’s more about what was the purpose they were created for and what is the training data they were trained on – and then what is the outcome.
“Looking at the input and outputs of these systems is, from a regulatory standpoint, more effective than just ‘show us the code and hand over the algorithm.’”