Eleanor Brodie, data science manager at LexisNexis Risk Solutions, tells Insurance Times about the biggest misconceptions about data in the industry and why tapping into it could be an asset
What are the biggest misconceptions you encounter about data for insurance risk assessment?
A big misconception in the industry is that big data and analytics-driven products will replace human capital. Data-driven products allow insurance companies to streamline existing processes and are meant to complement, not replace, their workflow. Human judgment and expertise will always be required to set the foundation, to ensure risk is accurately priced and aligns with each company’s business strategy. However, data-driven insights can assist with the decision process.
Another misunderstanding about data science is that there is a simple formula where all data is put into a magic funnel that draws out the desired outcome. In reality, before any predictive model can be built or nugget of insight can be gleaned, data generally needs to be enriched, filtered and structured, and this process relies heavily on the quality of the initial data sources and how they are modelled.
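To make that point concrete, the sketch below shows what a minimal "enrich, filter and structure" step might look like before any modelling. It uses pandas and hypothetical column names (quote_date, postcode, vehicle_age, claims_count), and illustrates the general idea rather than any particular LexisNexis pipeline.

```python
# Illustrative only: a minimal enrich/filter/structure step on raw quote
# records, with hypothetical column names.
import pandas as pd

def prepare_quotes(raw: pd.DataFrame, enrichment: pd.DataFrame) -> pd.DataFrame:
    df = raw.copy()

    # Filter: drop records that cannot sensibly be used for modelling.
    df = df.dropna(subset=["postcode", "quote_date"])
    df = df[df["vehicle_age"].between(0, 50)]

    # Enrich: join an external attribute set (e.g. area-level risk scores)
    # keyed on postcode.
    df = df.merge(enrichment, on="postcode", how="left")

    # Structure: derive model-ready features with explicit types.
    df["quote_date"] = pd.to_datetime(df["quote_date"])
    df["quote_month"] = df["quote_date"].dt.month.astype("int8")
    df["has_prior_claim"] = (df["claims_count"] > 0).astype("int8")

    return df
```

The quality of the output here is bounded by the quality of the raw records and the enrichment source, which is exactly why the preparation stage matters so much.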
How much has changed for the data science team at LexisNexis Risk Solutions in the past year both operationally and in terms of focus as a direct consequence of the pandemic?
Aside from the adjustment to homeworking, which has entailed lots of video calls with customers and colleagues, the past year has certainly fuelled the appetite for more and more data, attributes and scores. Insurance providers are hungrier than they have ever been to evaluate new data-driven solutions to better understand and segment their customers.
A significant challenge for insurance providers is any sudden change in consumer behaviour. We all do things we do not even think about as part of our daily routines. Shopping for insurance, driving to work and going to the supermarket are just a few activities that simply happened until the pandemic took hold. With very little notice, insurance providers needed to augment and expedite their ability to serve their prospects and customers virtually.
This includes prefill solutions, data-driven underwriting, pricing applications and contactless claims processing. Fortunately, we have been developing these solutions for years, so we have been in a good position to help insurance providers interact effectively with their customers and understand how risks have changed as a direct consequence of the pandemic.
Data is often called ‘the new gold’, but what process does the business go through to assess the market opportunity and bring datasets to insurers and brokers to ensure they are satisfying an insurance market need?
We start by building an analytical prototype to validate an idea from our colleagues in the home, motor, or commercial teams. These ideas come from the constant dialogue we have with customers on the pain points they need solved. Our new Covid-19 attributes are a good example of this – we could see the value the market could gain from understanding changes in motor policy behaviour during the first lockdown versus changes outside of that time.
Once we have proved the concept, we start the product development work which may leverage millions of data points. We then create the final specs for technology to implement. As we near implementation, we develop the attributes or inputs into the solution and our audit team works with technology to ensure the final product performs as expected.
As the product is being developed, we give our customers the opportunity to test the solution on their own data. This might be through actionable insight studies, or we may perform retro validation tests through our batch team.
Closer to launch, we look at any required regulatory documents on the solution’s inputs, outputs and overall performance. Then, following launch, we monitor the attributes and scores to ensure they continue to perform as expected.
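As one illustration of what post-launch monitoring of a score can involve, the sketch below computes a population stability index (PSI), a common drift check that compares the current score distribution against the development sample. This is a generic technique shown for context, not a description of the team's own tooling.

```python
# Generic drift check: population stability index (PSI) of a continuous
# score against its development (baseline) sample.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare the current score distribution to the baseline distribution."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip current scores into the baseline range so nothing falls outside
    # the outer bins.
    actual = np.clip(actual, cuts[0], cuts[-1])
    e_frac = np.histogram(expected, cuts)[0] / len(expected)
    a_frac = np.histogram(actual, cuts)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# A common rule of thumb is that a PSI above ~0.25 warrants a model review.
```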
It’s a well-oiled process and can be hugely rewarding when you find a new data attribute that you know can solve a pain point for the market.
Given the increasing demand for data scientists across many industries, what makes working with insurance data attractive to candidates and how do you attract and nurture this talent?
Competition for data scientists is indeed hot, and while there are more colleges delivering great talent into the marketplace, I think demand will exceed supply for some time yet.
I might be biased, but we do have a lot more to offer good data scientists: we have what they dream of, good quality data and lots of it. Our data scientists work with literally hundreds of millions, and often billions, of records to solve our customers’ problems. Other companies are limited in the breadth and depth of their data and in their ability to pull it all together in a commercially viable application; this is what we do every day, and that is exciting to a candidate.
Our Data Science Rotational Program (DSRP) sees graduates from a wide array of disciplines, including but not limited to mathematics, statistics, computer science, data science, physics, financial math, actuarial science, and engineering join LexisNexis for a two-year cycle through four different teams.
This experience provides a robust, hands-on journey from data access and data analysis through model building to model implementation. Right now, we have six DSRP team members in this programme globally, and we typically hire for three new positions each year.
How can brokers and insurers put themselves in the strongest position to use data as part of the customer journey – from quoting to claim as we head into 2021?
Look for data enrichment solutions that can be pulled into existing processes seamlessly and efficiently. Calling out to multiple data sources is inefficient and can lead to a poor customer experience, so look for platform solutions that offer the widest choice of data. You may not want that whole choice today, but you do want the flexibility to expand in the future.
Consider joining data sharing initiatives which will help you gain a view of the industry’s experience of an individual, vehicle or location. Shared motor policy history data has already established links between policy behaviour and claims losses. The next development will focus on shared claims data.
Also, conquer the challenge of creating a single customer view. Many insurance providers have gone through merger and acquisition activity over the past few years, which has made customer database management a real headache. It means you can end up with multiple records for the same customer. When you are able to pull those disparate records together through linking and matching technology to create one consolidated view, you open up cross-sell and upsell opportunities based on a much clearer understanding of the customer. It also helps you gain the most value from data enrichment solutions.
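As a toy illustration of the linking-and-matching idea, the sketch below normalises a few key fields and uses a fuzzy name comparison to decide whether two records describe the same customer. Real entity resolution, including the technology referred to above, is considerably more sophisticated.

```python
# Toy record-linkage sketch: block on date of birth and postcode, then
# compare normalised names with a simple fuzzy ratio.
from difflib import SequenceMatcher

def normalise(record: dict) -> dict:
    return {
        "name": " ".join(record["name"].lower().split()),
        "dob": record["dob"],                               # assume ISO format
        "postcode": record["postcode"].replace(" ", "").upper(),
    }

def same_customer(a: dict, b: dict, threshold: float = 0.85) -> bool:
    a, b = normalise(a), normalise(b)
    if a["dob"] != b["dob"] or a["postcode"] != b["postcode"]:
        return False                                        # cheap blocking key
    return SequenceMatcher(None, a["name"], b["name"]).ratio() >= threshold

print(same_customer(
    {"name": "Jon  Smith", "dob": "1980-02-01", "postcode": "ab1 2cd"},
    {"name": "John Smith", "dob": "1980-02-01", "postcode": "AB1 2CD"},
))  # True: the two records are consolidated into one customer view
```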
Finally, for insurance providers to truly maximise and leverage the opportunities new data sources will bring through data enrichment at point of quote and renewal, it’s worth going back to their initial data sources.
This can start with refreshing the initial data model. Refreshing the data will identify the behaviours used in rating the risk, how they have changed and how they should be adapted for the current market. Without this crucial first step, adding in new data could duplicate effort by capturing behaviours that existing models already account for.
It is also worth viewing new data sources as possible replacements for existing data sets rather than as an add-on to the data currently used. Insurance providers need to look at the incremental benefit a new data source will bring.
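One simple way to quantify that incremental benefit is to compare hold-out performance of a model with and without the candidate attribute. The sketch below assumes a scikit-learn workflow with hypothetical feature matrices and is one possible approach, not a prescribed method.

```python
# Sketch: does a new attribute improve hold-out AUC beyond existing features?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def incremental_auc(X_existing, X_new_attr, y):
    """Return (baseline AUC, AUC with the new attribute added)."""
    X_full = np.column_stack([X_existing, X_new_attr])
    Xe_tr, Xe_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
        X_existing, X_full, y, test_size=0.3, random_state=0
    )
    base = LogisticRegression(max_iter=1000).fit(Xe_tr, y_tr)
    full = LogisticRegression(max_iter=1000).fit(Xf_tr, y_tr)
    return (
        roc_auc_score(y_te, base.predict_proba(Xe_te)[:, 1]),
        roc_auc_score(y_te, full.predict_proba(Xf_te)[:, 1]),
    )

# If the two AUCs are nearly identical, the new source may mostly duplicate
# behaviours the existing data already captures.
```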
As the industry continues to rise to the challenge of pricing in a highly competitive market, maximising the opportunities of its data models to create clearer insights and a strategic advantage over competitors is vital.