‘The real risks are with the use of AI, not with the technology itself,’ says chief executive
The “UK has got its approach right” when it comes to regulating artificial intelligence (AI), according to Brian Mullins, chief executive of AI firm Mind Foundry.
In January 2024, a group of 127 industry specialists in the UK launched a voluntary code of conduct around the use of AI in the insurance claims sector. This is based on three principles – fairness, accountability and transparency.
In contrast, the European Union (EU) passed the world’s first comprehensive AI law this month (13 March 2024), known as the AI Act. The legislation takes a risk-based approach.
Splitting AI systems into four risk categories – minimal, limited, high and unacceptable – the AI Act aims to foster the use of trustworthy AI across Europe, providing firms that use AI with clear requirements and obligations around the technology’s use.
Elements of the act will start coming into force for EU member states over the next three years.
For example, governance rules and the obligations for general purpose AI models will become applicable across member states in the next 12 months, while the rules for AI systems that are embedded into regulated products will be effective after 36 months.
Mullins told Insurance Times that he felt the UK had taken a better approach to AI regulation compared to the EU, enabling bodies such as the Competition and Markets Authority (CMA) to “regulate industries’ use of AI, rather than regulating the technology itself”, which is “the case with the EU now”.
He explained that while the EU has “correctly identified the risk associated with AI”, the “real risks are with the use of AI, not with the technology itself”.
He continued: “This regulation will put a significant strain on innovation and competition within the region. When you punish business, you push it away.
“It is this approach that is going to help the UK develop pioneering AI products and stay at the forefront of global innovation. Regulate the use, not the technology.”
Leading on standards
Mullins said that the AI Act indicated “an attempt by the EU to be at the forefront of global standards on AI regulation”.
He added that the act “has an overly broad and ambiguous scope, making it harder for companies to decipher the extent to which they can build new AI applications”.
Businesses that operate in the EU may end up “moving their operations and sales to regions with a more favourable regulatory environment”, Mullins continued.
Marcus Evans, partner and European head of data privacy at global law firm Norton Rose Fulbright, said: “Businesses can expect more detail in the coming months on the specific requirements [of the AI Act] as the EU Commission establishes and staffs the AI office and begins to set standards and provide guidance on the act.”
For Melissa Collett, chief executive of trade association Insurtech UK, the EU’s AI Act is “the first piece of binding legislation in this area”.
She continued: “It will have implications that insurers, intermediaries and technology service providers to the insurance sector will have to consider.”
Collett explained that the act is specifically aimed at “high risk AI systems”, therefore “its impact for insurtechs will depend on their specific role – provider or deployer – the nature of the AI system and how it is used”.
Insurtech UK will be looking to discuss the UK government’s plans “in terms of rolling out its own AI regulatory framework in the future”, she added.
Potential problems?
Roi Amir, chief executive of insurtech Sprout.ai, said that it was “important that the [EU’s] approach [to AI] does not stifle innovation by dividing risks in the wrong category [in fields] where AI is extensively used”, such as claims processing and fraud management.
For Amir, “the focus should be on fairness in AI decision-making and consumer protection”.
He explained that this can be achieved by requiring “greater transparency from insurers and insurtechs [around] where and how AI is being used in the insurance process, enabling customers to understand and trust AI”.