Managing director says that to stay at the forefront of technology innovation and implementation, businesses must be ‘prepared to evolve’ and have ‘the nerve’ to try new things
Removing the fear of failure is key to developing great technology, according to Nik Ellis, Insurance Times’ 2024 Technology Champion.
Ellis, who is managing director at automotive expert witness service provider Laird Assessors and a director at technology business Swiftcase, was crowned Technology Champion of the Year at Insurance Times’ annual Tech and Innovation Awards – this year held on 19 September 2024 at the Royal Lancaster London.
Laird Assessors was shortlisted in a further two categories at the event. These were Best Use of Artificial Intelligence (AI) – Service Provider and Best Use of Technology for Customer Experience.
For Ellis, this recognition underscored Laird Assessors’ commitment to adopting and leveraging technology – such as AI and telematics – to enhance its service proposition, facilitated by the cloud-based, AI-driven workflow platform provided by Swiftcase.
The aim of this digital transformation focus, according to Ellis, has been to ensure that Laird Assessors’ clients receive a faster, more efficient and more accurate service.
Following his award win, Ellis tells Insurance Times: “We put a lot of effort into technology. [Laird Assessors is] a very small, agile firm, so we allow our techies to do experimental things.
“We encourage failure. If you’re going to fail, fail fast. An award like [the Technology Champion accolade] is a bit of positivity and we’re very proud of it – especially as we were up against a couple of giants.”
Ellis adds that his victory at the 2024 Tech and Innovation Awards demonstrates that AI implementation is not all about “the survival of the fittest”, but that successfully utilising great tech comes down to businesses being “prepared to evolve”.
He continues: “We’ve embraced new technology since our inception [24 years ago].
“In the last two or three years, we’ve been gifted so many new bits of technology – especially AI. I know it’s a big buzzword at the moment, but we’ve got a dozen bits of AI sitting within our system and we’ve got huge plans for the future.
“We’re quite edgy and we have the nerve to actually try things and not be afraid of failure.”
Customer conversations
Perhaps unsurprisingly, Ellis speaks with evangelical zeal and in detail about potential technology use cases that can help both insurers and consumers – for example, by implementing technology tools that support greater transparency and better communication.
One way this can be achieved, he explains, is via back-end system automation that can reduce staff costs or allow employees “to take on a lot more work per staff member”.
He continues: “We developed our chatbot about three years ago to deal with total losses. That was highly successful.
“[Nowadays], people shy away from [using] the phone – especially generation Z. We’ve found that people like the ability to deal with any critical loss 24/7 and we have an 85% success rate with our AI bots.
“Of course, [the chatbots] just keep getting better as they get more natural language in. I hope that insurers will one day have their own version of that.
“Technology has changed the world from ‘I wonder if we can [do that]’ to ‘what shall we do now?’ and ‘where can automation take us?’ It has made pretty much everything possible.”
AI regulation
At the end of January 2024, an industry working group revealed a voluntary code of conduct for the use of AI in the claims sector.
Although the code is principles-based and not mandatory, it does perhaps indicate the direction of travel – and that more stringent regulation around AI in insurance could be around the corner, following in the footsteps of action taken by the European Union.
Sharing his view on whether AI should be regulated, Ellis says: “My personal view is that the cat is well and truly out of the bag on AI regulation. It’s like trying to regulate the internet – it won’t really happen.
“However, we have a massive responsibility if we’re using generative AI in our learning models to make sure that when we train them, we absolutely concentrate on the ethics that we would instil in our staff.
“That’s going to be hugely important for insurers, to make sure they’re non-discriminatory.
“Especially [considering] proprietary AI, the stuff we build and train ourselves, we have to be incredibly careful where that data comes from to ensure no bias.”