Context by Cohere
How Insurers Can Unlock Their Data Vaults with AI


Early adopters across the insurance sector are racing to gain a share of an expected $70 billion windfall from using generative AI.


Insurers have long seen the value in harnessing unstructured data to enhance decision-making and reduce costs. According to some estimates, up to 80% of insurer data is buried in reports, emails, transcripts and other documents. Generative AI’s breakthrough in processing and analyzing massive volumes of unstructured data has now opened the floodgates. Insurers can finally capitalize on this data, and they already have a long list of use cases to pursue.

Yet, despite their enthusiasm, many insurers find themselves at a standstill. One thing holding them back is sheer choice overload. There are dozens of ways for insurers to use AI across the entire insurance life cycle, from risk assessment and policy personalization to customer service, each a potential route to a share of the up to $70 billion in value that McKinsey expects the industry to unlock.

Sample of Insurance Use Cases

Those who overcome this decision paralysis often face another roadblock. After tapping a general-purpose large language model (LLM) for a proof of concept (POC), they soon find themselves disillusioned. Either performance falls short, because a model that doesn't speak the industry's language produces outputs lacking accuracy and precision, or costs spiral once an effective POC moves into production.

How can insurers facing these challenges join early AI adopters who are already seeing significant results and are poised to more than double their revenue growth in the next two years? Let's take a look at how some insurers are already using AI, and how those who are new to the technology can lay the groundwork at their organizations.

First, Take Inspiration from Early Adopters

We’re beginning to see two broad categories of AI-powered solutions stand out among early adopters in the insurance sector: summarization and knowledge assistants. 


AI Summarization

While summarizing information may sound like a mundane use case, it's actually quite powerful: it can significantly accelerate and improve decision-making. Specialty insurer Convex, for instance, is using Cohere's AI solutions to extract unstructured data from medical, building, engineering, and other reports when evaluating and underwriting risk. Summarizing with Cohere Command has cut the time it takes to distill a 100-page engineering report into a three-page summary from 10 days to 10 minutes. The upshot is that their underwriters can now assess risks both faster and more accurately.

AI Knowledge Assistants

AI knowledge assistants, powered with state-of-the-art search and retrieval language models, can streamline tasks by finding information buried in insurer data, accurately answering questions based on it, and even creating entirely new content. Add in multilingual capabilities, and these assistants can support global teams like never before. 

In claims, such knowledge assistants are already helping to automate key tasks for both customers and claims staff, such as making it easier for a policyholder to submit a notification of a loss, and enabling claims staff to find information faster when speaking with customers. Knowledge assistants are also good at correlating the risk factors and costs from assessment reports to improve claims resolution processes. 

In customer operations, the area in which McKinsey estimates insurers will see the highest AI-powered productivity gains, knowledge assistants also drive greater efficiency and productivity. One company used the technology to create a chatbot that answers routine customer questions online; it has reduced response times to seconds and freed human agents to focus on more complex customer requests. Others equip their customer service agents with an AI knowledge assistant that helps them find and interpret process and policy information, enabling them to answer customer policy questions more effectively.

There’s no one-size-fits-all formula when it comes to AI. Where insurers focus their efforts first will typically be driven by their organization’s business strategy and goals as they build out their competitive advantage (for example, enabling the industry’s fastest claims processing, best policy pricing, or most comprehensive coverage). But these summarization and knowledge assistant use cases offer a glimpse of where insurers have started to make progress.

Second, Don’t Forget the Pre-Work 

When transitioning to cloud computing, companies have to prepare their networks, optimize their IT infrastructures, and make other changes in order to reap the full benefits of the cloud. Likewise, insurers might want to do some prep work before deploying AI solutions. This includes identifying the right approach for their particular use case, upgrading their search and retrieval stack, and fine-tuning their chosen model for specific tasks and domain-specific jargon. 

Understand the Tradeoffs 

Many backend processes that make a knowledge assistant possible don’t require a gigantic model — and the associated costs. There are numerous levers that organizations can pull to reduce the cost of AI solutions. Many early adopters, for instance, combine different types of large language models and tools to reduce costs. Convex, for example, used Cohere Embed to semantically search their engineering reports. They then used Cohere Command with retrieval-augmented generation (RAG) to retrieve relevant information from reports and generate answers in natural language. This combination enabled them to gain the necessary levels of accuracy at a cost that made sense for the organization.

Optimize Search Results

AI knowledge assistants depend on RAG technology to deliver optimal search results. With RAG, LLMs can reference authoritative, proprietary data sources, such as policy and claims management systems, in real time. This can significantly increase the accuracy of LLM outputs and minimize incorrect outputs (known as hallucinations).
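The grounding step at the heart of RAG can be sketched in a few lines: retrieved passages are placed into the prompt so the model must answer from them rather than from memory. The template, passage text, and function name below are illustrative assumptions, not Cohere's actual RAG interface, which handles retrieval and citation internally.

```python
# Hypothetical sketch of grounding an LLM prompt in retrieved documents.
# A real RAG system would fetch passages from a policy or claims system
# and send this prompt to a generative model.

def build_rag_prompt(question, retrieved_passages):
    """Assemble a prompt that restricts the model to the given sources."""
    context = "\n\n".join(
        f"[Source {i + 1}] {p}" for i, p in enumerate(retrieved_passages)
    )
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, say so.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

passages = [
    "Policy 8841 covers water damage from burst pipes up to $50,000.",
    "Claims must be filed within 60 days of the loss event.",
]
prompt = build_rag_prompt("Is burst-pipe damage covered?", passages)
print(prompt)
```

Because the answer is tied to numbered sources, the application can also show policyholders or adjusters exactly where a statement came from.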

For RAG to work well, insurers must optimize their solutions’ search and retrieval capabilities. This includes transforming large volumes of complex, unstructured text into numerical vectors (called embeddings) that aim to capture the meaning of the text. The vectors are stored in a database, where each incoming query, itself converted into a vector, can be compared against them for similarity. This process, known as semantic search, provides an added level of precision and accuracy beyond legacy systems based on keyword search.
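The mechanics can be seen in a toy sketch. A tiny bag-of-words vectorizer stands in for a real trained embedding model (which would capture meaning far better), but the flow is the same: embed the documents, embed the query, and rank by vector similarity.

```python
# Toy semantic-search sketch. The embed() function is a stand-in for a
# trained embedding model; only the similarity mechanics are real.
import math
from collections import Counter

def embed(text, vocab):
    """Map text to a vector of term counts over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is empty)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "engineering report on structural fire risk",
    "quarterly marketing newsletter",
]
vocab = sorted({w for d in docs for w in d.lower().split()})
doc_vecs = [embed(d, vocab) for d in docs]

query_vec = embed("fire risk report", vocab)
scores = [cosine(query_vec, v) for v in doc_vecs]
best = max(range(len(docs)), key=lambda i: scores[i])
print(docs[best])  # the engineering report scores highest
```

In production, the document vectors would come from an embedding model and live in a vector database, so the comparison scales to millions of reports.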

Legacy keyword search can also be optimized by using a reranking model to reorder search results based on relevance. This can reduce the number of irrelevant results when a claims adjuster, for example, is searching for specific documents about a past claim.
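The reranking step can be sketched as follows. The overlap-based scorer here is a hypothetical stand-in for a trained reranking model, which would score each query-document pair jointly; only the reorder-and-truncate flow is the point.

```python
# Sketch of reranking keyword-search hits by relevance. overlap_score()
# is a toy stand-in for a trained reranking model.

def rerank(query, hits, score_fn, top_n=3):
    """Reorder search hits by descending relevance score."""
    return sorted(hits, key=lambda doc: score_fn(query, doc), reverse=True)[:top_n]

def overlap_score(query, doc):
    # Stand-in scorer: fraction of query terms present in the document.
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / len(q_terms)

hits = [
    "claim 1042 auto glass replacement invoice",
    "claim 2210 water damage adjuster report 2021",
    "claim 2210 water damage final settlement letter",
]
top = rerank("claim 2210 water damage report", hits, overlap_score)
print(top[0])  # the adjuster report matches every query term
```

Because reranking sits on top of an existing keyword index, it is often the cheapest first upgrade: no re-indexing of documents is required.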

Fine-tune the Model

General-purpose LLMs can generate responses on a broad range of topics. But they often need some level of customization, called fine-tuning, which augments the model with in-depth industry knowledge to provide more accurate and relevant responses. This is the case for insurance and other financial services. Fine-tuning enables an insurer to train an LLM on their underwriting guidelines, policy documents, call center transcripts, and other enterprise data. In fine-tuning their models, insurers will need to create training data sets that provide relevant and representative data for each specific domain or task, and then train the models on that data (testing, evaluating the outputs, and retraining along the way). Cohere offers a simple fine-tuning dashboard that enables enterprises to easily manage and run their fine-tuning projects, including a pricing calculator.
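Preparing such a training set might look like the sketch below. The `prompt`/`completion` field names, the example records, and the 90/10 train/holdout split are illustrative assumptions; the required format and split depend on the fine-tuning platform used.

```python
# Illustrative sketch of assembling a fine-tuning dataset as JSONL.
# Field names and records are hypothetical examples.
import json
import random

examples = [
    {"prompt": "Summarize the loss description: burst pipe flooded basement.",
     "completion": "Water damage claim: burst pipe, basement flooding."},
    {"prompt": "Classify the line of business: commercial fleet collision.",
     "completion": "Commercial auto."},
    {"prompt": "Summarize the loss description: hail dented roof panels.",
     "completion": "Property claim: hail damage to roof."},
]

random.seed(0)
random.shuffle(examples)

# Hold out a slice for evaluating the fine-tuned model's outputs.
split = max(1, int(0.9 * len(examples)))
train, holdout = examples[:split], examples[split:]

with open("train.jsonl", "w") as f:
    for ex in train:
        f.write(json.dumps(ex) + "\n")

print(len(train), len(holdout))
```

The holdout slice supports the test-evaluate-retrain loop described above: outputs on held-out prompts reveal whether the model has actually absorbed the domain language.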

Third, Don’t Wait to Get Started with AI

Insurers hold an enormous amount of data that can unleash immense possibilities for AI-powered productivity and growth. While the options can seem overwhelming, early movers offer some direction on where to get started. Now is the time to dive in.

To get started, talk to sales.