How to Avoid the Pitfalls of Generative AI Projects

Working with large language models requires a different approach compared with traditional AI strategies of the past. Those who adapt will reap transformative gains.


As interest in generative AI technology escalates, enterprises are expanding their technology capabilities to include large language models (LLMs). With generative AI's estimated economic windfall running as high as $7.9 trillion annually, these early adopters are hoping to grab an outsized portion of the benefits. In doing so, they may be tempted to apply practices they followed for previous-generation AI projects, only to discover those approaches are ill-suited to LLM technologies.

Nearly every top executive plans to pour money into generative AI. It’s not surprising. Research shows that AI can boost employee performance by up to 40 percent. To capitalize on these promising investments, companies may need to re-evaluate their approach. The rewards of deploying generative models successfully are huge, but realizing those rewards takes strategic planning across the whole value chain. Rethinking workflow, team structure, and end goals can help organizations maximize the benefits.

In this article, we identify the key differences between LLM-based and traditional AI projects and provide a set of guidelines to avoid some of the pitfalls.

Embracing the New

The stakes are high, and compared with previous AI projects, IT leaders face unfamiliar challenges when implementing LLM technologies. Those embarking on a generative AI project must take into account the key differences between LLM-based AI and traditional AI projects. Below, we’ve split the differences into the benefits versus the challenges that come with generative AI projects. 

The benefits of LLM-based projects:

  • Low upfront cost: Traditional AI models tend to be built from scratch. For a given use case, such as predictive maintenance, the model usually requires extensive training and multiple iterations before it is ready for production. By comparison, LLM implementations start with a pre-trained language model that already understands language and can be customized quickly for specific use cases, such as generating reports or contracts. Currently, some LLMs are open source while many offer a pay-as-you-go cost model, which again reduces the upfront costs for these projects.
  • Time to value: Many generative AI projects generate value quickly because they require less training data to be useful. Examples include AI chatbots grounded to existing company knowledge. A retailer, for instance, could quickly build a proof-of-concept AI assistant that is grounded to a product catalog and can assist with sales in the field.
  • Technology progress: Unlike previous AI technologies, where models were incrementally upgraded over the span of several years, the LLM landscape of options and capabilities is improving within weeks. New embedding and generative models are released frequently, each with better performance and enhanced capabilities. Additionally, model chaining or orchestration, the sequencing of multiple models and toolkits such as semantic search with retrieval-augmented generation (RAG), can drive further innovation; a minimal sketch of this pattern follows the list below. For example, an insurance provider can now automate part of the claims journey by grounding a combination of embedding and generative models to the best sources of internal information. These types of enterprise solutions are being tested today.
  • Skills spectrum: Unlike traditional AI projects, which require highly trained machine learning (ML) engineers even for the smallest of projects, working with LLM technologies does not always require that level of expertise. Smaller, less complex LLM deployments can be executed with LLM prompting or software engineering knowledge alone, without extensive ML expertise. Although the current AI boom has dramatically increased demand for LLM experts and supply has not yet caught up, we are seeing rapid upskilling and growing interest in skills development in both academia and industry.
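
To make the chaining idea concrete, here is a minimal Python sketch of the RAG pattern described above: embed a handful of internal documents, retrieve the best match for a query, and ground the generative model to it. The model names, document snippets, and exact SDK parameters are illustrative assumptions; check your provider's current documentation before relying on them.

```python
# A minimal sketch of model chaining with retrieval-augmented generation (RAG).
# Model names, parameters, and documents are illustrative, not prescriptive.
import os

import cohere  # pip install cohere
import numpy as np

co = cohere.Client(os.environ["COHERE_API_KEY"])

# 1. Embed a small internal knowledge base (production systems would use a vector DB).
docs = [
    "Claims for water damage require photos and a plumber's report.",
    "Auto claims under $1,000 are eligible for fast-track approval.",
]
doc_vecs = np.array(
    co.embed(texts=docs, model="embed-english-v3.0", input_type="search_document").embeddings
)

# 2. Embed the user's question and retrieve the most similar document (semantic search).
query = "Can a small auto claim be approved quickly?"
q_vec = np.array(
    co.embed(texts=[query], model="embed-english-v3.0", input_type="search_query").embeddings[0]
)
scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
best_doc = docs[int(np.argmax(scores))]

# 3. Ground the generative model to the retrieved document to reduce hallucinations.
response = co.chat(message=query, documents=[{"title": "claims-policy", "snippet": best_doc}])
print(response.text)
```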

The challenges of LLM-based projects:

  • High operating costs: Driven largely by the scarcity of GPUs, extensive training, and inference (runtime), ongoing deployment of generative AI projects can get very expensive very quickly. We are noticing more customers beginning to scale back their POCs, moving away from the largest models and toward smaller, lower-cost options that provide a more tailored solution without the high price tag.
  • Security considerations: Often the data needed to train or test a model is some of the most sensitive an organization holds, especially since it is human-readable. It can include contract language, customer details, or even information subject to privacy laws, such as medical records. Fortunately, hosted secure deployment options are now available (e.g., Amazon SageMaker, Amazon Bedrock, Oracle Cloud Infrastructure, Google Cloud Platform, and Microsoft Azure). While most LLM technologies are available through cloud-based API systems, which may come with additional data security concerns, implementations can be sandboxed to offer a more secure solution. Currently, Cohere is the only LLM provider available across all major cloud platforms.
  • Risk analysis: The risk profile for generative AI solutions may be higher than for traditional AI projects, as LLMs are prone to hallucinations: answers that are factually wrong or nonsensical. Depending on the use case, this can be a major concern. In addition, LLM output can contain ethnic bias and even profanity. The use of retrieval-augmented generation, where models are grounded to proprietary knowledge sources, has been shown to reduce hallucinations.
  • POC delays: We’ve all been to POC purgatory, and generative AI projects unfortunately face an abundance of delays as well. As with traditional AI projects, concerns about user acceptance, quality controls, and branding implications can stall even the best projects, and some never make it into the real world. Defining “production-ready” objectively with LLMs isn’t always straightforward, because measuring accuracy with language is difficult. Setting a target accuracy of, say, 97 percent is one thing; measuring it is another (a toy evaluation sketch follows this list). The tools and methodologies for measuring output quality are still in their infancy, which is of little comfort if your users or your boss spot an answer that’s incorrect or, worse, offensive.
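
To illustrate the measurement problem, the sketch below scores model answers against a tiny set of expected key facts. The test cases and the keyword-match scoring rule are deliberately simplistic placeholders; real evaluations typically combine human review, reference answers, and task-specific metrics.

```python
# A toy harness for putting a number on "production-ready": score model answers
# against expected key facts. Test cases and the scoring rule are placeholders.
from typing import Callable

TEST_CASES = [
    {"prompt": "What is our refund window?", "must_contain": ["30 days"]},
    {"prompt": "Which plan includes phone support?", "must_contain": ["premium"]},
]

def evaluate(generate: Callable[[str], str], target: float = 0.97) -> bool:
    """Return True if the model clears the target accuracy on the test set."""
    passed = 0
    for case in TEST_CASES:
        answer = generate(case["prompt"]).lower()
        if all(fact.lower() in answer for fact in case["must_contain"]):
            passed += 1
    accuracy = passed / len(TEST_CASES)
    print(f"accuracy: {accuracy:.0%} (target: {target:.0%})")
    return accuracy >= target

# Wire in any real model call here; a canned stub keeps the sketch self-contained.
evaluate(lambda prompt: "Refunds are accepted within 30 days of purchase.")
```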

Guidelines for Success

To cope with this complicated landscape, we’ve identified several practices that can help you implement a generative AI project and improve its outcomes.

Pick a Good Project

Choosing the right generative AI project is all about calculated risks and rewards. As with any new technology, IT leaders should look for quick wins: low-risk opportunities with the potential for rapid, high-impact returns. LLMs present many tempting options. Examples of low-risk, high-reward use cases include personalized sales outreach emails and agent-assist tools or co-pilots, which in some cases have increased agent productivity by 14 percent.

With good judgment and a strong risk-demand framework, companies can find the LLM project that strikes the perfect balance: low barriers to entry with accessible datasets, customization for the business, built-in oversight, and a high likelihood of significant benefits.

Before committing to any one particular solution, IT leaders may need to familiarize themselves with the rapidly evolving vendor ecosystem. They should also develop guidelines on which AI capabilities to build in-house versus outsource. This strategic insight into the options and business needs will help identify the best solutions to invest in long-term.

Gather the Best Support

Rally the troops before launching any new AI project, and carefully pick an LLM initiative with low barriers to approval. Seek out use cases where accuracy, safety, and security can be measured clearly. This data-driven approach reduces perceived risk and overcomes stakeholder skepticism. Non-customer-facing applications are often an easier sell to wary branding or product groups. With thoughtful preparation, you can circumvent roadblocks and objections.

Garnering broad organizational support is essential. Identify receptive leaders who share the vision and can serve as your champions. Arm them, and yourself, with irrefutable metrics that speak for themselves.

Build the Right Team

Generative AI project teams draw on a more diverse and cross-functional talent pool than traditional AI efforts. Beyond machine learning (ML) engineers and software engineers, you will need prompt engineers, who are skilled at crafting text to be interpreted by an LLM. The team should also include members who understand the business case for the generative AI project, to help keep everyone aligned with its scope. Given the current talent crunch, you might consider upskilling existing talent to meet the demand. Fortunately, many low-cost or free courses now exist, including Cohere's LLM University.

Generate Feedback Quickly

Technological advances in LLMs depend, to some extent, on getting user feedback, and lots of it. Unlike traditional AI, where the focus was on numerical sources of data, LLM projects usually involve conversational language. Previously, feedback could only be obtained through sophisticated ML number crunching. Today, customers who use LLM-based tools are often an excellent source of feedback for improving the models quickly.

The AI project team should actively establish ways to collaborate regularly with its end users. A simple way to do this is to encourage rapid-fire feedback: a clickable thumbs-up or thumbs-down button next to each result is enough to capture a positive or negative signal (a minimal sketch of such a feedback log follows below). The ML team can then diagnose performance issues and retrain the model. Users can also be encouraged to send direct text edits to the AI project team, further speeding up the process.
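
As an illustration, this sketch logs each thumbs-up or thumbs-down, plus any optional user edit, to a simple JSONL file that the ML team can mine for retraining. The file name, record fields, and storage choice are assumptions made for the example; most teams would route these events into their existing analytics pipeline.

```python
# A minimal sketch of capturing thumbs-up/thumbs-down feedback per response.
# The JSONL file and record fields are illustrative assumptions.
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class FeedbackRecord:
    prompt: str
    response: str
    rating: str  # "up" or "down", from the buttons next to each result
    edited_text: str | None = None  # optional direct correction from the user

def log_feedback(record: FeedbackRecord, path: str = "feedback.jsonl") -> None:
    """Append one feedback event; the file can feed retraining or manual review."""
    entry = asdict(record) | {"timestamp": datetime.now(timezone.utc).isoformat()}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_feedback(FeedbackRecord(
    prompt="Summarize this contract clause.",
    response="The clause limits liability to direct damages.",
    rating="down",
    edited_text="The clause caps liability at direct damages up to the fees paid.",
))
```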

Bolt the Solution to Existing User Journeys

Even with the right project and the perfect team in place, without customers, your project may struggle. Part of successfully implementing a generative AI project is identifying how, and for what, customers want to use the technology. To encourage customer engagement and early adoption, try using a pre-existing or familiar interface that most employees or customers can quickly pick up without further training. Many of the most useful LLM implementations are integrated into current workflows, such as customer support and CRM systems. For example, one popular use case is creating a Slack bot for your LLM solution, which lets users interact with the bot in an established and comfortable environment; a minimal sketch follows below. Users can query the bot directly to trigger a response, allowing the model to answer questions for everyone in the channel.
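
As a concrete example of this pattern, here is a minimal sketch of a Slack bot that forwards @-mentions to an LLM and replies in a thread. It assumes Slack's Bolt framework running in Socket Mode and the standard environment variables for tokens; the model call and error handling are simplified for illustration.

```python
# A hedged sketch of an LLM-backed Slack bot using Slack Bolt in Socket Mode.
# Tokens, model behavior, and error handling are simplified assumptions.
import os

import cohere  # pip install cohere
from slack_bolt import App  # pip install slack-bolt
from slack_bolt.adapter.socket_mode import SocketModeHandler

co = cohere.Client(os.environ["COHERE_API_KEY"])
app = App(token=os.environ["SLACK_BOT_TOKEN"])

@app.event("app_mention")
def answer_mention(event, say):
    """When a user @-mentions the bot, pass the message to the LLM and reply in-channel."""
    reply = co.chat(message=event["text"])
    say(text=reply.text, thread_ts=event.get("ts"))  # reply in a thread to keep channels tidy

if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```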

Focus on Security

In contrast to traditional AI implementations, a security breach of a generative AI project can be more consequential. In the past, a breach might have leaked non-sensitive numerical data that many would have struggled to understand or use. An LLM-based breach, by comparison, could expose human-readable confidential documents, with far greater consequences.

This broader security risk calls for more safeguards right from the start, including for POCs. Cloud AI services such as Google Cloud Platform, Oracle Cloud Infrastructure, Amazon Bedrock, Amazon SageMaker, and Azure offer an easy, secure onboarding path where teams can build solutions while maintaining complete control over training and runtime data. For customers requiring even more protection, virtual private cloud (VPC) and on-premises options are also offered; however, these require preparation work and ongoing expenses that should be considered when choosing the right project.


Understanding how generative AI projects differ from traditional AI is a great first step toward avoiding unnecessary pitfalls. LLM-based projects can be affordable, can deliver value faster, and have the potential to generate substantial competitive advantage, provided you start early. Building the execution muscle and exploring the many available use cases for these technologies can set you on the right path.


About the Authors

Neil Shepherd is Cohere’s Head of Growth.

Shaun Hillin is Cohere’s Global Head of AI/ML Architecture.

Vivek Muppalla is Cohere’s Director of Engineering.


Explore what's possible in Cohere's playground. Try it today. 
