After the January 2024 introduction of an industry-wide code of conduct for the use of AI in the claims sector, the actuarial profession is now following suit thanks to a regulatory update

By Jon Guy

There has been much debate over the use of artificial intelligence (AI) within the insurance sector – and nowhere more so than in the actuarial sciences.

The rise of generative AI – sparked by the launch of ChatGPT in 2022 – has created a difficult balancing act between the benefits the technology can bring and the risks it may pose for insurers.

Generative AI has the ability to create new content, such as text, images, music, audio and videos.

Given the huge reliance that the insurance industry places on actuaries, the arrival of a type of AI that is equipped to analyse risks at a pace beyond human capability would always be a major attraction for market participants – but regulators have been keen to ensure the associated risks are managed.

This month (October 2024), the Financial Reporting Council (FRC) has published a revised Technical Actuarial Guidance: Models document, to support the growing use of AI and machine learning (ML) in actuarial work.

This update aims to provide actuaries with examples relating to model bias, understanding and communication, governance and stability when using AI or ML models in the application of the principles-based Technical Actuarial Standard 100 (TAS 100).

Mark Babington, executive director of regulatory standards at the FRC, explained: “As the use of artificial intelligence and machine learning increases in actuarial work, it remains essential to keep pace with the rise of new technologies and emerging opportunities.

“Practitioners also need to ensure associated risks are being appropriately managed and [our] updated guidance will help actuaries navigate these challenges while producing quality actuarial work.”

Reserved reaction

The market has given the FRC’s refreshed guidance a guarded welcome.

Greig Bingham, head of financial modelling at independent financial services consultancy Broadstone, said: “I welcome the changes to the guidance to include specific references to AI [and] ML as these areas are becoming increasingly prevalent in actuarial work.

“In particular, there are some learnings from other communities – such as data science – around specific ML techniques that will prove useful additions.”

However, actuaries remain fairly relaxed about the role AI can and will play in the near to medium term.

As with any technology, the quality of the output depends entirely on the quality of the data used to train and inform it – and that data will all too often contain a degree of bias that influences the output.

The actuarial role is full of nuances that remain beyond the reach of current AI systems. In my view, it is appropriate that AI’s role is treated with caution and a recognition of its limitations.