What’s better at uncovering lies: a program that analyses tremors in a customer’s voice or a seasoned investigator? Katie Puckett explores the possibilities
Try this: make up a story about where you last parked your car. Now start from the end and tell it backwards. Now go back to the moment you locked the car – which pocket did you put your keys in? Where did you go afterwards? What did you have for breakfast that morning?
No matter how good a liar you are, you will almost certainly have shown subtle signs of stress during your performance. But what is the best way to detect those signs? There are two favoured methods: specialist claims handlers who ask questions to home in on areas of inconsistency and software that analyses “micro tremors” in a claimant’s voice.
The second technique is by far the more controversial. Firms have been using voice stress analysis (VSA) since 2001. And while the customer experience is not as bad as having Robert De Niro hook your fingers to a lie detector, the industry remains divided. Some swear the technology has saved them millions of pounds; others say their own staff outperformed it.
To add fuel to the fire, researchers from the universities of Stockholm and Gothenburg in Sweden have just published a paper claiming VSA is no more reliable than the toss of a coin and that it is “at the astrology end of the validity spectrum”.
As fraudulent claims rise in the recession, anything that might help reduce unnecessary payouts will provoke renewed interest – but is VSA really the answer?
Voice stress analysis, or voice risk analysis, is used not only by insurers but also by 25 councils to assess benefits claims. The best known technology provider is Digilog, which has 14 insurance customers, sometimes supplied via a Capita division called Teceris Validation Services.
The technology is used alongside a detailed interview technique known as cognitive questioning, or sold by Digilog as “narrative integrity analysis”. Claims handlers lead callers through a scripted interview to get their account of, say, the theft of a car. The theory is that people who are lying about all or some of their story will struggle, become defensive or crumble under the pressure, and their voices will betray them.
Esure has been using Digilog’s VRA software for several years, first as an outsourced service and now with a trained in-house team. An Esure spokesman says the company uses the software for between 5% and 10% of claims – generally for stolen and unrecovered or burnt-out cars, which are instances where fraud is relatively common.
“If you’ve had something stolen, it’s a traumatic incident and people need to talk about what happened because they feel wronged,” says the spokesman. “Victims of crime will talk at length about the circumstances and be able to go into amazing detail.
“But that doesn’t happen when someone’s making a fraudulent claim. We get them to tell us the full story. We go from front to back, back to front and from the middle outwards in both directions. When you’re questioned about the wider circumstances, you have to be extra creative to make up the story and cross-reference it against what’s already been said. It’s incredibly stressful and it puts the brain into overdrive.”
He adds: “If people are exhibiting particular signs of stress about one question, we keep digging. If, for example, they show high stress when we ask the last time they saw the car outside their home, the person monitoring it can ask more detailed questions.”
But surely, it can be argued, if your car has been stolen your blood pressure would be pretty high anyway? And not everyone enjoys discussing the details of a traumatic experience. Some people are just nervous on the phone. To counter this, the software takes a background reading of stress in the person’s voice in the first three or four minutes of the call. This provides a baseline to compare against.
“We get them to tell us an undeniable truth,” says the Esure spokesman. “We ask them for their name – and then we get them to spell it. They could be lying about their name, but no one can lie about the spelling of a word.”
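Stripped of the proprietary signal processing, the baseline idea described above is simple: measure the caller’s stress during the neutral opening minutes, then flag any question where the reading jumps well above that personal baseline. The toy sketch below illustrates only that comparison step; the scores, threshold factor and function name are invented for illustration, and no real VSA scoring algorithm is implied.

```python
def flag_stress(baseline_scores, question_scores, factor=1.5):
    """Flag questions whose stress reading exceeds the caller's own
    baseline (mean of the opening minutes) by a given factor."""
    baseline = sum(baseline_scores) / len(baseline_scores)
    return [q for q, score in question_scores.items()
            if score > baseline * factor]

# Opening small talk (name, spelling it out) gives a per-caller baseline
# of about 2.0 on this invented scale; threshold is then 2.0 * 1.5 = 3.0.
baseline = [1.8, 2.1, 2.0, 2.1]
questions = {
    "where parked": 2.2,
    "last saw car": 4.5,   # spike relative to this caller's baseline
    "keys location": 2.0,
}
print(flag_stress(baseline, questions))  # ['last saw car']
```

The point of the per-caller baseline is exactly the objection raised above: an absolute stress threshold would flag every nervous or upset caller, whereas comparing against the caller’s own opening minutes only flags questions that stand out for that individual.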
If that still doesn’t sound foolproof, insurers that use voice stress detection techniques say they would never turn down a claim on the basis of a suspicious-sounding interview alone – it simply leads them to investigate further.
Proponents say the technology’s main benefit is that it speeds up the validation process and payment for the honest majority of claimants.
Provident Insurance has been using Digilog’s software since 2005 and has a team of nine people who use the program. All claims go through a primary screening and 30% go through to a more detailed vetting within three days. After the secondary investigation, 40% of claims are paid immediately.
Adrian Furness, claims director at Provident, says it has cut the time taken to investigate most claims from months to days. “It’s a lot faster than the traditional method where an external investigator, ex-CID, takes a month to set up a four-hour interview at someone’s home, which is very intrusive and a real interrogation.”
Provident still employs external investigators but only for between 5% and 10% of claims. Since 2005, it has repudiated 12,000 claims worth £6m – a dramatic rise from 9% to 29% of all claims.
Furness’s other measure of success is that many people deemed high-risk at the first stage will abandon their claim or simply never call back. But it’s hard to shake the suspicion that the really effective part of this fraud detection method is the cognitive questioning, which is also used as a standalone technique. After all, professionals such as police and social workers spend their days interviewing people who are lying to them and they don’t usually use software to work it out.
It’s these people who are employed by TCF Corporate, which has clients such as RSA, Equity and KGM. It employs 40 people from a range of backgrounds, including experts in linguistics and human behaviour.
RSA uses TCF’s services as an overspill from its own fraud team, which undergoes several weeks of training in simple psychology and spotting subtle errors in normal conversation.
“A good chunk [of claims] are settled at the end of that phone call,” says John Beadle, the insurer’s counter-fraud manager. “Some are genuine but had indicators for some reason. After further explanation we decide they can be settled. Another chunk will say ‘I’m lying’ as a result of the process. Something comes out during the conversation that will make it clear the claim won’t be paid – like they left the keys somewhere. And a chunk won’t pursue it. A lot of people don’t like to lose face. They say, ‘Let me have a think, you won’t believe this but I’ve found those items’.”
Richard Sheridan is managing director of Teceris, a claims management company. He does not challenge the importance of the interview technique to the success of VSA. But he says using the technology speeds up the process, reducing customer complaints. “It can take seven or eight minutes to validate, whereas if it’s a complicated scenario with a cognitive interview it could be 20 minutes,” he says.
But how customer-friendly – and indeed fair – is it to subject claimants to the equivalent of a lie detector test? Sheridan says a recent survey of Birmingham council’s benefits service found 98% of users satisfied. It’s a little different in a commercial market where customers can choose between many competing firms, however.
Conversely, devotees of cognitive interviewing believe it can be a positive experience. Tony Jones, client services director for TCF Motor, compares it to a “cup of tea and a conversation” – a possibly welcome change for victims forced to battle with impersonal call centres.
“You can actually be extremely empathetic to them on the phone, with lots of good customer behaviour,” says RSA’s Beadle. “It’s a way of talking to people that’s not threatening in any way, listening to someone’s story and drawing them out. In the drive for technology, we shouldn’t forget the actual human element and the need to have good, skilled, well-trained staff.”
VSA and cognitive interviewing are undeniably intensive, but there is a more prosaic use of technology that can help insurers to identify high-risk claims before they happen, leaving policyholders none the wiser. Many insurers use varying forms of profiling to flag up high-risk policyholders or claims from tell-tale signs. For example, high-value claims made in the first few months of a policy might automatically prompt further investigation. Or, a series of policies registered with the same mobile number might indicate an organised network of criminals.
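The two profiling rules mentioned above can be sketched in a few lines. This is a hypothetical illustration only: the claim fields, the £10,000 and 90-day thresholds, and the function names are all assumptions, not any insurer’s actual rules.

```python
from collections import Counter
from datetime import date

def profile_claims(claims):
    """Flag claims matching two simple high-risk rules."""
    # Rule 2 needs a cross-claim view: count policies per contact number.
    phone_counts = Counter(c["phone"] for c in claims)
    flagged = []
    for c in claims:
        reasons = []
        # Rule 1: high-value claim in the first few months of the policy.
        policy_age_days = (c["claim_date"] - c["policy_start"]).days
        if c["amount"] > 10_000 and policy_age_days < 90:
            reasons.append("high value, new policy")
        # Rule 2: the same phone number registered across several policies
        # may point to an organised network.
        if phone_counts[c["phone"]] >= 3:
            reasons.append("shared contact number")
        if reasons:
            flagged.append((c["claim_id"], reasons))
    return flagged

claims = [
    {"claim_id": "A1", "phone": "07700", "amount": 12_000,
     "policy_start": date(2009, 1, 1), "claim_date": date(2009, 2, 1)},
    {"claim_id": "A2", "phone": "07711", "amount": 500,
     "policy_start": date(2007, 1, 1), "claim_date": date(2009, 2, 1)},
]
print(profile_claims(claims))  # only A1 is flagged
```

Rules like these run silently against policy and claims data, which is why profiling can filter claims for closer attention without the policyholder ever noticing.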
Insurers can buy off-the-shelf software packages that will analyse trends in their own data. Web-based systems can compare policy or claims details with a wider network of insurers. Data-sharing facilities run by organisations such as the Insurance Fraud Bureau can also help.
“A lot of what we do is interrogation of data,” says Allan Clare, head of financial crime for Royal Bank of Scotland Insurance. “Different claims require different solutions. You could not apply cognitive interviewing or voice stress analysis to every customer, but you can do profiling first to filter off high-risk and medium-risk claims.”
Groupama uses Hunter software for all of its motor and fleet policies, and plans to use it for household and commercial claims in the future. “The key is identifying fraud at the front end,” says Stephen Teeling, counter-fraud manager.
“If you take a new policy from someone who is an organised criminal making claims left, right and centre, if we can find the link, we can prevent claims happening.”
But one thing is certain: fraud is on the rise. TCF Corporate, for example, says its fraud detection rate has risen sharply from 11.25% of motor claims in 2007 to 18.6% in 2008, in a sample of 12,500 claims referred from insurers.
So whatever you think about the merits of anti-fraud technologies, the coming months are sure to test the viability of each method more than ever before.