The professional body emphasised that insurance ‘professionals should always be prepared to take accountability for the outcomes created by AI’.
Membership body the Chartered Insurance Institute (CII) has urged the UK government to ensure that institutions and individuals are held accountable for artificial intelligence (AI) driven decision-making in financial services.
In a submission to the Treasury Select Committee last month (March 2025), the professional body – which represents more than 120,000 members – argued that British firms must be responsible for the AI algorithms they deploy, even where the decision-making process is not fully explainable.
The CII stressed that accountability around AI usage in financial services should not be optional and must be underpinned by rigorous validation and testing, to identify and mitigate discriminatory outcomes.
It further recommended that the results of such assessments should be made publicly available, to build consumer trust.
Matthew Connell, director of policy and public affairs at the CII, said: “While AI has been employed within insurance for many years, it is important that we continuously assess how it can be optimised for both professionals and consumers.
“We welcome the opportunity to offer recommendations to the Treasury Select Committee and utilise the extensive consumer research carried out by the CII to inform this work on AI in financial services.”
Responsible AI adoption
In February 2025, the Treasury Select Committee launched a call for evidence that sought to better understand how the financial services sector could utilise AI while still protecting consumers against potential risks. The deadline for submissions was 17 March 2025.
At the time, committee chair Dame Meg Hillier said: “Successive governments have made clear their intention to embed and expand the use of AI to modernise the economy.
“My committee wants to understand what that will look like for the financial services sector and how the City might change in the coming years as that transformation gathers pace.
“It’s critically important [that] the City can capitalise on innovations in AI and continue to be a world leader in finance. We must, though, also be mindful of ensuring there are adequate safeguards in place to mitigate the associated risks, particularly for customers. This piece of work will allow us to see the full picture.”
As part of its submission to this inquiry, the CII advocated for a sector-wide skills strategy, to equip professionals at all levels with the knowledge required to use AI responsibly. It said that this education should include awareness of the potential harms that can result from AI mismanagement, as well as highlight the benefits of adopting emerging technologies.
The body’s recommendations were supported by findings from its Public Trust Index – its long-standing consumer research that has tracked customer attitudes over several years. This research indicated that AI can support improvements in areas valued by consumers and SMEs, including cost, protection, ease of use and confidence in products.
The CII has already undertaken work around the use of AI in insurance. This includes, for example, its Digital Companion to the Code of Ethics and Addressing Gender Bias in Artificial Intelligence papers, both of which provide practical guidance to firms on implementing AI responsibly.
The institute has also developed a number of educational resources that aim to help professionals navigate the risks and opportunities of AI, including introductory courses to data science and AI, continuing professional development materials and thought leadership content.