‘The balance between fostering responsible AI development and hindering innovation is critical’ when it comes to new regulation, says chief technology officer

Now that Keir Starmer and the Labour government are getting comfortable in Number 10, following victory at the general election on 4 July 2024, the party can use some of its newfound clout to action its technology strategy, which it launched in May 2024.

Labour’s approach is keenly focused on having a responsible strategy for the use of artificial intelligence (AI), to ensure that the UK can remain a global technology superpower.

Two key facets of this include building public trust in AI and establishing a regulatory innovation office centred on the technology.

Labour’s manifesto, which was published in June 2024, stated that voluntary AI safety commitments for tech companies would be made statutory if the party came into power.

If enacted, this action would mean that AI developers would be required to release safety data.

But what does the new government’s approach to AI mean for the insurance industry?

Speaking exclusively to Insurance Times, Vinod Singh, co-founder and chief technology officer of insurtech Concirrus, which has been using predictive AI for a decade, said: “I wouldn’t be surprised if UK regulation [becomes] almost the same as the European Union AI Act.

“I am a bit concerned that [the government] might go a bit too far in terms of restricting AI. If [it goes too] far, we could get left behind as others, such as Singapore or China, are taking a balanced approach to AI.”

Singh added that increased regulation around AI could make it harder for small, AI-focused insurance startups to meet compliance standards, in turn bumping up business running costs.

Miqdaad Versi, partner at insurance consultancy Oxbow Partners, noted that “carriers, brokers, industry bodies, regulators and the government all have a part to play in working together to facilitate, enable and accelerate the realisation of AI and generative AI’s potential in insurance.”

Building trust

Labour’s ambition to build public trust in corporate AI usage is a good move, according to Nutan Rajguru, head of analytics for UK and Europe at data and analytics firm Verisk.

She said: “In terms of building public trust and competence in AI, it’s important that there is some form of regulation that helps the public have confidence.”

This is particularly important considering there has been “scaremongering about what AI can or can’t do and how many jobs it is going to take”, added Graeme Howard, non-executive director at technology consulting firm Esynergy.

He continued: “Insurance is a grudge purchase. The transparency of how data is being used and how models are created is important. The more that can be demonstrated, the more trust will grow.”

For Singh, a primary way the insurance sector can develop trust around AI models is to eliminate “hallucination” – where AI models generate incorrect or misleading results.

He said: “One of the biggest problems with generative AI is hallucination. We have spent time and money making sure it does not hallucinate – that’s a core part of our strategy. When you are talking to an AI bot, it should be clear. Building trust means that you have to be open.”

Rajguru agreed: “It’s very important as data scientists that we check the quality of the data – [whether] the distribution is representative and up to date – and that the results from the models are fair and reasonable.”

‘Good idea on paper’

Currently, there is no formal regulation in place around the use of AI in insurance. However, an industry working group – led by Jel Consulting director Eddie Longworth – did establish a voluntary code of conduct in January 2024 around the use of AI in the claims sector.

The three principles that underpin this initiative are fairness, accountability and transparency.

Singh believes, therefore, that building a regulatory innovation office is a “good idea on paper”.

He continued: “I don’t know how effectively it will run on the ground – it remains to be seen. Regulatory bodies are very important to take bad actors out of the game. The balance between fostering responsible AI development and hindering innovation is critical.”

Rajguru is a personal signatory of the AI code of conduct.

She said: “One of the advantages of that principles-based approach is that we were able to deliver it very quickly. Developing regulation, you have to be very careful that it is done in such a way that achieves the aims, but does not have unintended consequences – that takes time to do.

“I hope that the new government will put in some measures that will help build public confidence and take a similar approach with underpinning good practice with the correct principles, rather than being too prescriptive.”

Data

Labour additionally intends to amend the UK’s planning policies to enable the construction of more data centres, addressing a current shortage and helping to support the growing demand for AI.

The party initially planned to build a data centre on a former quarry next to the M25, but the idea was shot down due to the potential disruption it could cause to the motorway.

For Rajguru, these plans are positive. In particular, she approved of the party’s intention to form a national data library because this “recognises that data is a key ingredient to developing AI”.

She said: “It’s great to see recognition [from the government] that data is key to driving AI. As a data scientist, I understand that getting access to good quality data is often the key barrier to the development of new AI applications.”

However, Rajguru warned that “data is very precious” because of the personal information it can contain, meaning the insurance industry has “serious responsibilities to safeguard people’s data”.

She continued: “Ensuring that we have the right consent to use data to develop AI is important. It can be very difficult to get access to the right level of data.”
