ChatGPT holds both opportunities and risks for the insurance sector – and for the journalists who write about it
By Clare Ruel
It’s the thought everyone has had – can OpenAI’s ChatGPT make my role redundant?
ChatGPT – which stands for Chat Generative Pre-trained Transformer – is an artificial intelligence (AI) chatbot that uses natural language processing to create human-like conversational dialogue. For example, it can respond to questions, write music and code, compose emails and, of course, write articles.
Various firms in the insurance industry have also begun experimenting with the technology to explore its uses in the sector – including Zurich and Consilium.
Ironically, in my case as technology editor here at Insurance Times, many newsrooms have also begun to explore how Google Bard – another generative AI chatbot – or ChatGPT could assist journalists in writing short news stories, with the Daily Mirror and Daily Express already making use of the latter.
In a similar way to how the insurance industry is deploying ChatGPT for basic and administrative tasks to allow staff to focus on their roles, AI could free up time for journalists so that we could focus attention on more complex tasks, such as feature writing, news analysis and thought leadership.
At an Ignite Software Systems event about the launch of an OpenAI-powered chatbot earlier in June, talk got underway about what might happen if ChatGPT were to replace insurance journalists – and whether news stories might lose credibility if readers knew they had been written by a bot.
One attendee said that he valued the voice, knowledge and writing style of particular journalists and noted that knowing a bot had written a story could discredit their work.
Just as the saying goes that a journalist is only as good as their sources, is a chatbot only as good as the information fed into it? And do journalists or insurers control what AI spits out?
For the insurance industry, this question has been front and centre as more insurers and insurtechs experiment with AI technology.
This risk was marked by the so-called ‘godfather of AI’, Geoffrey Hinton, when he quit Google at the beginning of May this year – a company where he had worked for more than a decade.
The reason for his resignation, by his own account, was so that he could speak freely about the risks of the technology he helped create, such as the threat of it upending jobs.
At the time, Hinton also voiced his concerns that AI needed appropriate guardrails and regulations to mitigate the potential for the technology to be exploited by bad actors.
Liable or not?
Turning our attention back to insurance journalism, how could readers distinguish between what a journalist had written and what AI had produced?
For the moment, writing about insurance remains too niche for AI programmes to effectively contribute, but this isn’t to say that the technology won’t eventually be able to produce the quality content everyone has come to expect from Insurance Times.
However, while a chatbot could eventually produce an article, there are concerns about misinformation.
It has been well reported that AI chatbots ‘hallucinate’ – or make up information with no basis in reality.
This is a problem for journalism as the first rule of reporting is accuracy – something that is also particularly important for the insurance sector.
If an AI chatbot were to hallucinate a claim, invent a customer or brand a dangerous risk as safe, who would be to blame?
Much like in the publishing world, responsibility would lie with the human involved in the process.
In the insurance industry, I can see that AI holds endless opportunities for insurtech – as reiterated by prime minister Rishi Sunak this week in his keynote speech at London Tech Week.
In journalism, however, AI does hold opportunities for basic tasks, but safety and proper regulation must be considered.
One question that fascinates me as technology editor is, could the very technology I write about take the jobs of the people I write for?
Not yet – in the same way that I doubt ChatGPT could replace journalists, I also doubt it could currently replace the vital work that humans in the insurance sector complete.
But maybe I shouldn’t speak so soon…