’Those insurers that don’t invest or haven’t invested in this sort of technology will be left behind, if they haven’t been already,’ says head of claims counter-fraud

Artificial intelligence (AI) has provided numerous benefits to the insurance sector – but greater access to it has begun an arms race.

Even at this early stage of adoption and implementation, many insurers are finding benefits in more efficient data modelling, document ingestion, chatbots and claims management – to name just a few use cases. 

However, the explosion in access to AI – and particularly generative AI – over the past few years, marked by the public release of OpenAI’s ChatGPT, has also created new problems for insurers where fraud is concerned. 

Insurance fraud is a perennial challenge for the sector and insurers have developed sophisticated systems to counter it – but the novel capabilities AI provides to both opportunistic fraudsters and determined criminals have created new hurdles to overcome.

Generating challenges

In just one typology of AI-powered insurance fraud – the use of AI-generated deepfakes to commit identity fraud – recent Signicat research, published on 21 February 2025, showed a 2,137% increase in incidents. 

And more broadly across fraud typologies, counter-fraud experts from the sector have also noted that the technology is driving new challenges for insurers. 

Mark Allen, the ABI’s head of fraud and financial crime, told Insurance Times: “In simple terms, insurance fraud is currently being driven both by individuals and by technology, including the increased use of AI.

“AI now offers a very cost-effective means of manipulating images far more convincingly than before – and it’s a lot more convenient and cheaper for criminals to do that than to stage physical accidents.” 

For example, Allen noted that counter-fraud teams across the sector were seeing examples of total-loss motor claims being supported by AI-doctored images of vehicles from scrap yards, as well as increasing numbers of documents to support fraudulent claims that had been created via generative AI. 

Pete Ward, Aviva’s head of claims counter-fraud, added: “We’ve seen a rise in manipulated images and documents in support of opportunistic claims, which can be quite difficult to detect because the software to detect this is also evolving.”

In another use case, Ward said that Aviva had noticed ghost brokers – fraudsters who imitate insurance brokers in order to sell fraudulent insurance policies – were now using generative AI to make themselves appear much more legitimate via professional-looking images and marketing. 

Ben Fletcher, director of financial crime at Allianz UK, confirmed that his firm was also dealing with these developments, adding that AI was providing fraudsters with more opportunities to commit fraud. 

He explained: “If you go back to the basic fraud triangle, people need a rationale, a motive and an opportunity to commit fraud. 

“Generative AI absolutely means that the opportunity to make a fraudulent claim or take out a fraudulent policy is easier, because the tools are more readily available.” 

Fighting back 

Combating this rise in AI-powered fraud has required that insurers adopt AI-powered technologies themselves to augment their counter-fraud teams’ capabilities. 

Laura Horrocks, customer success manager at fraud-focused technology supplier Shift Technology, said: “AI is a double-edged sword because, while insurers are going to benefit from the many different aspects of using it to improve customer journeys and fraud detection, fraudsters are also able to be much more agile.

“Deepfakes are one of those areas that, even with the best people in the world, it’s going to be really difficult to detect. The examples that we are now seeing are far more convincing than when we first became aware of the issue a few years ago, so there needs to be technology there.” 

Counter-fraud teams at insurers are aware of the arms race they are now in with fraudsters over the use of this technology – and their pace of adoption is speeding up as a result.

Fletcher explained that Allianz had been using various forms of AI for six years to supplement fraud detection models and was continuing to invest, with machine learning providing “an ability and scale to detect fraud that [Allianz] wouldn’t [have] been able to by doing it manually”.

Ward added: “Most insurers, and certainly the larger ones, have invested quite heavily in advanced analytics models to assist with fraud detection and Aviva’s no different. 

“Those insurers that don’t invest or haven’t invested in this sort of technology will be left behind, if they haven’t been already.” 

And while this technology has been invested in, the insurers Insurance Times spoke to all admitted that the tools available to counter the increase in AI-powered fraud did not yet cover all bases, with more development needed.

Asymmetric conflict

The other factor limiting insurers in their fight against AI-powered fraud is the asymmetry of the constraints each side faces.

Fraudsters, as criminals, obviously do not comply with regulations around the use of technology and have no interests to consider apart from their own. 

Insurers, on the other hand, must ensure that their counter-fraud activities do not unduly impinge on the customer journeys of legitimate customers and must also comply with regulations around the use of technology and AI. 

Allen explained: “There is a tension between protecting against fraud and making sure that the customer journey is seamless and frictionless.”

AI can help to keep counter-fraud processes unobtrusive by screening claims for markers of fraud and putting only flagged cases in front of counter-fraud professionals.

Horrocks added: “With AI, customers wouldn’t need to know about document screening processes in the digital journey at all and most customers would recognise the need for this, as long as it didn’t make their experiences worse.” 

This approach, however, would require a regulatory regime that allowed insurers to utilise AI technology without falling foul of various regulations, such as the General Data Protection Regulation’s rules on storing personal information or the Consumer Duty’s requirement to treat customers fairly.

The use of AI is currently governed by existing regulations, but a Treasury Select Committee has issued a call for evidence on specific regulation for the technology, with the government having previously indicated the need for updated regulation that balances the technology’s opportunities with protecting consumers.

Allen explained: “It’s absolutely vital that government gets the AI regulatory framework right because, without that, we’re operating with one hand tied behind our back.

“The predicament that we face in the fraud arena is that fraudsters are agile and have no regard for the law.

“Conversely, insurers have to act in accordance with the law, so we need a balanced framework that, on the one hand, gives government and regulators the confidence that insurers are using AI ethically to protect consumers, but on the other hand allows us to at least level the playing field.”