Artificial intelligence is making huge inroads into occupations once reserved for professionals, including selling insurance. But in areas such as medicine, who pays if something goes wrong?
Insurers and brokers understand the myriad benefits of artificial intelligence (AI) and machine learning. Whether it is improving the claims process or streamlining quotes, AI and machine learning are revolutionising the way insurance is sold and administered.
And outside of insurance, artificial intelligence is already gathering pace across a variety of real-world applications, but this increasing reliance on technology carries risks.
In medicine, for example, AI is being used to analyse tumours and moles, and research suggests it can do so faster and more accurately than human doctors.
While there is great potential to save lives, there is still a significant risk that things could go wrong, through misdiagnoses, referrals for unnecessary operations and procedures, or even crimes such as biohacking, which could lead to the hijacking or malfunction of medical devices such as pacemakers.
Complex nature
But Keoghs professional indemnity partner Christopher Stanton says the complex nature of the medical world, which includes equipment manufacturers, technology and software developers (including AI developers), as well as the medical professionals themselves, creates an issue for insurers.
“If something does go wrong with the use of an AI diagnostic programme, what insurance policies might respond?” he asks. “You may well have a cyber policy, or medical malpractice cover for the doctors involved. The AI provider and the software designers may well have professional negligence policies, and the manufacturers of the equipment might also have product liability or public liability cover.
“The issue is that there are a lot of entities that may be liable, and quite a significant number of insurance policies could respond. This could lead to a big dispute as to which policy provider is liable in the event of a claim being made, and this applies across a whole range of different sectors too.”
When software goes wrong
Many professional services and financial services firms are also using AI to their advantage, but there are risks of the software going wrong, whether through accidental bias entering the system, errors in programming or deliberate manipulation of the software to deliver particular results.
Stanton says this increasing and more widespread use of AI presents a particular problem for insurers, especially if the insurance industry fails to keep pace with the rate of change.
“Machines are increasingly doing the jobs that were originally done by professionals,” he says. “Here at Keoghs we have our own AI lawyer, which reviews documents and is capable of giving informed advice to clients, based upon machine learning and automation.
“That is already happening in the world of professional services and AI is also being used within financial institutions, and one of the big questions is whether regulators and insurers are actually able to keep up with the rapid developments we are seeing.”
One area where insurers could be falling behind the curve is their policy wordings.
“Policy wordings do need adapting and updating,” Stanton says. “The whole problem with AI revolves around the definition of what the insuring event is, but also around exclusions under the policy and what other policies there could be that might also respond.
“Insurers need to look very carefully at their wording and ensure that they understand the risk profile of their policyholder, and that their policies are fit for purpose.”
Because if insurers fail to keep up with the advances being made in AI, the risks facing the insurance industry could be huge.
Digital Risks insurance lead for emerging technologies Ben Davis says the way machine learning works also makes systemic risk more likely.
“Machine learning works by processing a large pool of data as a learning technique,” he says. “The risk, however, is that hackers could get into that pool of data and perform data poisoning, corrupting the data to trick the AI into learning things incorrectly.
“This is a big risk when you have AI training newer AI, which does not necessarily have human oversight. This could potentially lead to a large loss because the AI has been manipulated by a hacker.”
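To illustrate the mechanism Davis describes, the sketch below shows one simple form of data poisoning, label flipping, in which an attacker corrupts a slice of the training pool so the model learns the wrong patterns. The synthetic dataset, scikit-learn model and 30% poisoning rate are purely illustrative assumptions, not details of any real system referenced in this article.

```python
# Illustrative sketch of label-flipping data poisoning.
# Assumes NumPy and scikit-learn are installed; all figures are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# A synthetic "pool of data" the model learns from
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Baseline: model trained on clean data
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:", accuracy_score(y_test, clean_model.predict(X_test)))

# Attacker flips the labels of 30% of the training pool (the poisoning step)
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

# The same model retrained on the poisoned pool now "learns things incorrectly"
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

Even this toy example shows how quietly corrupting the training pool degrades what the model learns, which is the kind of manipulation Davis warns could go unnoticed when AI trains newer AI without human oversight.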
Driverless cars
And Stanton points to driverless cars as another example of how insurers could face systemic risk from AI.
“If that technology went wrong and there were a number of crashes as a result, that could be an enormous claim,” he says. “Any insurer will need to look closely at the aggregation clause in these types of policies. If you have all these claims arising from a seemingly related problem, then as an insurer you are going to try to aggregate them into one claim to limit your liability under that particular policy.
“So the broker and the insurer need to sit down and ensure that the policy is fit-for-purpose, because no insurer is going to want to give open-ended indemnity [for these types of AI risks] and effectively write a blank cheque, but equally the insured needs to be confident that they will have cover if something does go wrong.”