How Generative AI and LLMs Unlock Greater Workforce Productivity


AI has been around for many decades, making steady progress in automating or augmenting well-defined, specialized tasks. Until recently, AI-powered applications were expensive to build in-house, requiring a highly skilled team and months or years of development. Businesses could only pursue such projects for use cases where the return was high enough to warrant the time and effort. For the vast majority, AI was out of reach altogether.

Generative AI is a Different Kind of AI

Over the past several years, AI has reached a tipping point. The advent of capable large language models (LLMs), and the launch of ChatGPT as an interface to LLMs in November 2022, have brought generative AI capabilities to users and businesses alike. Hundreds of millions of people have experienced for themselves how these models can write, summarize, and converse with almost human-like quality. Some models even tout the capability to pass professional legal and medical entrance exams. Businesses across industries are seeing AI as a strategic technology that can benefit their workflows, workforce, and bottom line. Many are actively implementing their first solution, while others are building entirely AI-first businesses.

With progress comes innovation, and businesses today have more choice than ever. They are no longer limited to off-the-shelf, purpose-built AI solutions or AI features bolted onto familiar software. They can go straight to the source and use an enterprise-grade, managed LLM platform (like Cohere) to build a bespoke solution. Such platforms give development teams the flexibility to integrate an LLM into their own applications through a simple API, or through private deployments when privacy and security are critical. This makes LLMs accessible, customizable, and trainable by any software developer, regardless of prior experience with AI.
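To make the "simple API" point concrete, here is a minimal sketch of what integrating a managed LLM into an application might look like, using the Cohere Python SDK. The exact client surface, response fields, and model behavior depend on the SDK version you install, so treat this as illustrative rather than canonical.

```python
# pip install cohere
# Minimal sketch: calling a managed LLM platform from application code.
# Assumes a COHERE_API_KEY environment variable; exact client methods and
# response fields can vary across SDK versions.
import os

import cohere

co = cohere.Client(os.environ["COHERE_API_KEY"])

# Ask the model to draft a short piece of workplace content.
response = co.chat(
    message="Draft a two-sentence status update summarizing this week's support-ticket trends."
)

print(response.text)
```

The same pattern extends to summarization, classification, and embedding endpoints; the point is that a few lines of application code, rather than a dedicated ML team, are enough to start experimenting.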

The combination of LLM capability and accessibility has resulted in a Cambrian explosion of new applications, plug-ins, and capabilities being built every day. Arguably, this is the third great revolution in human-computer interfaces, the first being the advent of the World Wide Web (1993, with ~$3.3 trillion of global growth created [1]), and the second, the smartphone (2007, with ~$4.5 trillion of economic value created [2]). LLMs have similar potential to unlock economic value and growth, with early impact estimates of $1 trillion, or 4% of GDP, in the U.S. alone [3]. Extrapolating to other developed economies, the likely impact is similar. Given the much lower upfront investment in infrastructure required (users don't need to buy expensive new devices, and no one needs to lay fiber-optic cable this time) and a far larger developer base, it's reasonable to expect much faster adoption.

New Studies Show LLMs’ Impact on Productivity

One of the key benefits of LLM-powered workflows — cutting across business units — is increased productivity, primarily for knowledge workers like marketers, support agents, community managers, sales representatives, or software developers. LLMs automate or augment repetitive and low-level tasks, freeing workers to focus on responsibilities that provide higher value to the business. Not only does this create happier, more engaged employees, but it also allows businesses to better allocate their resources and grow the potential of their workforce.

So, how big is this trend? Several new academic and industry studies quantify the expected impact. They find productivity gains of up to 50% on certain knowledge-based tasks today, with much more expected as these models mature and gain full read/write access to enterprise data and applications. We supplement these studies with conversations we've had with our own users.

The following are key findings from recent studies conducted by MIT [4], NBER [5], Goldman Sachs [6], OpenAI/University of Pennsylvania [7], and Accenture [8]:

  1. Many developed-world occupations will be substantially impacted through work augmentation or replacement. While the impacts vary greatly by occupation, LLMs have the potential to affect 80% of the U.S. workforce and, according to a report by Goldman Sachs, to raise global GDP by 7%. A study from Accenture covering 22 occupation groups finds that 40% of all tasks today have high potential for automation or augmentation, with the greatest potential in Financial Services, Software & Platforms, Energy, and Communications & Media.
  2. Potential productivity gains are significant. Even with relatively early implementations of LLMs not grounded in enterprise context, a recent MIT study showed that worker productivity on typical white-collar tasks could improve by over 50%, with coding tasks being completed twice as quickly using AI. Results vary considerably across functions: in customer support, studies have seen agents resolve 14% more issues per hour with AI assistance, and economists observed a 10-20% increase in productivity. In sales, sales development representatives (SDRs) can send 15 customized, researched outreach emails per hour with AI assistance versus three per hour manually, a fivefold difference. As models improve and are able to access and synthesize enterprise data more holistically, these numbers will increase further.
  3. Less productive or newer workers tend to benefit disproportionately. In the MIT study of white-collar tasks, not only were all workers more productive with AI, but the gains were far greater for workers who were initially less productive, and their rate of improvement was higher. The same outcome appeared in the NBER research on call center productivity, in which operators could choose AI-generated responses to support inquiries. In that study, the model was continuously improved using the answers of highly skilled operators, so, in effect, the tacit knowledge of the most skilled workers was shared across the entire workforce.
  4. Employee satisfaction and output quality also improve. As with productivity, quality and satisfaction gains skew toward newer or lower-skilled workers, with highly skilled workers showing little improvement, especially in output quality. Part of the satisfaction gain comes from the reduction in mundane tasks: according to a customer support leader we spoke to, automating post-call summarization led to a significant increase in agent satisfaction, and the AI-written summaries were also more consistent than agent-written ones. Improving employee satisfaction also improved retention; for high-turnover roles like call center operators, replacement and training costs can reach $10,000-$20,000 per employee, suggesting substantial savings here too.
  5. Large improvements were seen even with relatively simple baseline models. The models in these studies lacked domain or task customization and had limited context; in other words, they generated responses without drawing on domain knowledge or user data, so their outputs could not be specific to the business or the customer. With that capability added, productivity gains would be higher still.

Productivity Will Only Increase Over Time

These productivity gains are enticing for businesses looking to harness the capabilities of LLMs. So, what's next? We believe these language models will begin to power Knowledge Assistants that act as intelligent companion tools for knowledge workers. Knowledge Assistants will evolve over three phases, with productivity gains far higher than today's baseline.

Figure 1: Expected evolution of Knowledge Assistant impact
  • Phase 1 is where we are today. Large language models are being trained and deployed in point solutions where it's straightforward to train or improve the model, or used through front-end tools to ideate, write, or improve text. Contextual use of corporate data is minimal or not happening yet.
  • Phase 2 will leverage retrieval-augmented generation (RAG). Large language models will have full access to relevant corporate data, and users will interact with them like a chatbot (e.g., “Summarize the last five reports from our Scranton office and identify our best salespeople,” or “What are the top five paper products by sales, and which have the highest gross margin?”). This is a true Knowledge Assistant that efficiently helps search, synthesize, and report on almost anything a human would otherwise do (see the sketch after this list). While wide-scale deployment will likely span several years, we expect the productivity gains to be a low multiple of those of Phase 1, given the increased capability.
  • Phase 3 is when the Knowledge Assistant can take action on behalf of a worker. The Knowledge Assistant will intelligently and reliably interact with enterprise systems (e.g., “Send 500 reams of 80-pound stock from the Syracuse branch and invoice at no discount”). This will take time given the diversity of enterprise systems and interfaces involved, but we expect the language models will be able to create and understand the required formats relatively quickly (they can already read and write common communication formats like JSON). Wide-scale deployment will lag behind Phase 2, but earlier adoption is possible for specialized use cases where paperwork volume is a burden (e.g., maintenance of capital equipment, healthcare) or where a speech/text interface improves worker safety. Further productivity gains are much harder to estimate, but will likely be a meaningful increase over Phase 2.
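As a rough illustration of the Phase 2 pattern described above, the sketch below retrieves a few internal documents and passes them to the model as grounding context. The `search_reports` retriever is a hypothetical stand-in for whatever search index or vector database a team actually uses; the client call follows the same Cohere SDK pattern as the earlier sketch and is subject to the same version caveats.

```python
# Minimal retrieval-augmented generation (RAG) sketch for Phase 2.
# `search_reports` is hypothetical; in practice it would query a search
# index or vector database over enterprise documents.
import os

import cohere

co = cohere.Client(os.environ["COHERE_API_KEY"])


def search_reports(query: str, top_k: int = 5) -> list[str]:
    """Hypothetical retriever returning the most relevant report snippets."""
    return [
        "Scranton office, Q3 report: paper sales up 12% quarter over quarter.",
        "Top sales representatives by revenue: ...",
    ][:top_k]


question = "Summarize the last five Scranton reports and identify our best salespeople."
snippets = search_reports(question)

# Ground the model's answer in the retrieved snippets.
prompt = (
    "Answer the question using only the context below.\n\n"
    "Context:\n"
    + "\n".join(f"- {s}" for s in snippets)
    + f"\n\nQuestion: {question}"
)

response = co.chat(message=prompt)
print(response.text)
```

Phase 3 would extend this pattern by having the model emit a structured action, for example a small JSON payload describing an order, which downstream systems would validate and execute.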

Next Steps for Business Leaders

Clearly, a major technical disruption is happening, and business leaders are paying attention. Every business has its own priorities, but if your business is heavily exposed to AI disruption, it’s worth starting the journey early given that it will take time to develop a strategy and plan. Wait too long, and your competitors will race ahead with AI and gain a substantial advantage in the marketplace.

So, how do you get started? First, we recommend getting hands-on with LLM-powered tools and exploring their capabilities to get a sense of the power and potential of this new technology. You can test Cohere's content generation, summarization, classification, and embedding capabilities in the Cohere Playground, or try a variety of other publicly available generative AI technologies. Getting a feel for them will help you imagine how AI could transform different areas of your organization, so that you can reap the benefits of greater productivity, better resource allocation, and new opportunities for your business.

Get started with Cohere by contacting our sales team to discuss how our LLM platform can benefit your business. Technical users can also create a free account to try Cohere in code.


1.  Manyika, J., & Roxburgh, C. (2011, October 1). The Great Transformer: The impact of the internet on economic growth and prosperity. McKinsey & Company. https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-great-transformer

2. Kenechi Okeleke, S. (2023). The Mobile Economy 2022. GSMA. Retrieved 6 June 2023, from  https://data.gsmaintelligence.com/research/research/research-2022/the-mobile-economy-2022

3. Tunguz, T. Which Increases Productivity More: The Advent of Personal Computer or a Large-Language Model? Retrieved 6 June 2023, from https://tomtunguz.com/llm-impact-gdp/

4. Noy, S., & Zhang, W. (2023). Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence. SSRN Electronic Journal. doi: 10.2139/ssrn.4375283

5. Brynjolfsson, E., Li, D., & Raymond, L. (2023). Generative AI at Work. NBER. doi: 10.3386/w31161

6. Surrender your desk job to the AI productivity miracle, says Goldman Sachs. (2023). Financial Times. Retrieved 6 June 2023, from https://www.ft.com/content/50b15701-855a-4788-9a4b-5a0a9ee10561

7. GPTs are GPTs: An early look at the labor market impact potential of large language models. (2023). OpenAI. Retrieved 6 June 2023, from https://openai.com/research/gpts-are-gpts

8. A new era of generative AI for everyone. (2023). Accenture. Retrieved 6 June 2023, from https://www.accenture.com/content/dam/accenture/final/accenture-com/document/Accenture-A-New-Era-of-Generative-AI-for-Everyone.pdf
