
Using the Command Model in Amazon SageMaker Studio
Cohere's Command model is now available on Amazon SageMaker JumpStart.
At Cohere, we are committed to making it easy for our customers to use our cutting-edge large language models (LLMs). With Amazon SageMaker Studio, teams can access our Command model, available on Amazon SageMaker JumpStart.
About Cohere’s Command Model
The Command model is an instruction-following text generation model, enabling a wide range of use cases spanning multiple industries. With this foundation model, you can quickly and easily add advanced AI capabilities to your projects without worrying about the complexities of training your own models or building the underlying infrastructure for serving and operating LLMs.
Customers can complement Command's generative capabilities with our embedding models, also available on AWS. The following is a summary of our available models on AWS to date.
Generative Models
Embedding Models
Early adopters are already leveraging these generative models to enhance their businesses, either by building AI-first products or extending the impact of existing products. These companies also recognize that, to be successful, all interactions with LLMs, such as data exchanges and user sessions, must occur in a secure environment. Security matters for a variety of reasons: for example, to protect data IP as a key differentiator, or to serve companies that want to provide security assurances to their customers.
About Amazon SageMaker JumpStart
Amazon SageMaker JumpStart is a machine learning (ML) hub with foundation models, built-in algorithms, and prebuilt ML solutions that you can deploy with just a few clicks.
Serving Cohere’s foundation models via Amazon SageMaker JumpStart comes with a number of benefits, including:
- Access to Cohere's LLMs in a fully private environment, ensuring that your users’ data remains secure and confidential.
- Access through Amazon SageMaker Studio, which covers aspects like geo-replication, responsive scaling, and monitoring. This means that you can focus on your core business objectives without having to worry about infrastructure provisioning.
- Billing on a per-instance basis based on hours of use, so you only pay for the resources you use. This provides a cost-effective way to access Cohere's LLMs.
- Ability to spend AWS credits and leverage Savings Plans.
Exploring What’s Possible
The Command model makes a broad set of language-based, generative AI use cases possible. Here are some sample areas where companies can elevate their products and unlock new use cases that enhance their end users' experience and deliver quality, efficiency, and productivity gains in their workflows.
- Automating tasks: Consistently produce outputs of a certain format and quality. Examples: writing ad copy, generating product descriptions, and extracting key information from documents.
- Accelerating writing: Draft the first or even the final version of a document. Examples: composing emails, writing reports, and producing marketing copy.
- Brainstorming ideas: Generate the outline of a document instead of working off a blank canvas. Examples: generating product pitches, developing presentation outlines, and structuring reports.
- Condensing information: Transform a dense or complex piece of text into a simpler, more accessible form. Examples: summarizing transcripts, simplifying technical explanations or error logs, and turning news articles into bullet points.
- Improving writing: Enhance an existing body of text. Examples: making a passage more coherent, fixing writing errors, and transforming meeting minutes into action plans.
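In practice, each of these use cases comes down to sending the Command endpoint a well-structured prompt. The following is an illustrative sketch; the template strings and the `build_prompt` helper are hypothetical conveniences for this post, not part of the cohere-sagemaker library. The resulting string would be passed as the `prompt` argument to `co.generate()` (shown later in this post).

```python
# Illustrative prompt templates for a few of the use cases above.
# These are hypothetical examples, not part of cohere-sagemaker.
TEMPLATES = {
    "summarize": "Summarize the following passage in three bullet points:\n\n{text}",
    "improve": "Rewrite the following passage to be clearer and more concise:\n\n{text}",
    "brainstorm": "Generate an outline for a document about: {text}",
}

def build_prompt(task: str, text: str) -> str:
    """Fill in the template for the given task."""
    return TEMPLATES[task].format(text=text)

prompt = build_prompt("summarize", "Large language models are ...")
# The prompt would then be sent to the deployed endpoint:
# response = co.generate(prompt=prompt, max_tokens=200, temperature=0.3)
```

Keeping prompts in named templates like this makes it easy to iterate on wording for each use case without touching the inference code.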
Getting Started
To get started with the Command model on SageMaker JumpStart, follow these steps:
- In the AWS Console, go to Amazon SageMaker and click Studio.
- Then, click Open Studio. If you don't see this option, you first need to set up a SageMaker domain.
- A new JupyterLab tab will open. Look for Prebuilt and automated solutions and click JumpStart.
- A list of models will appear. In the Foundation Models: Text Generation category, look for Cohere Command and then click View notebook.
- This will open up a sample notebook to get started with the model. To run the notebook, your organization must first subscribe to the Command model.
The notebook goes through an example of creating an endpoint (the complete notebook is here), which involves the following steps:
Step 1: Import the required libraries
!pip install cohere-sagemaker
from cohere_sagemaker import Client
import boto3
Step 2: Define the Command model’s product ARN
Select the product ARN while creating a deployable model using Boto3.
# Map the ARNs (available in 16 regions at the time of writing)
model_package_map = {
    "us-east-1": "arn:aws:sagemaker:us-east-1:865070037744:model-package/cohere-gpt-xlarge-v1-2-4d938caa0259377e94c4eb5bf6bc365a",
    "eu-west-1": "arn:aws:sagemaker:eu-west-1:985815980388:model-package/cohere-gpt-xlarge-v1-2-4d938caa0259377e94c4eb5bf6bc365a",
}

region = boto3.Session().region_name
if region not in model_package_map:
    raise Exception(f"Current boto3 session region {region} is not supported.")
model_package_arn = model_package_map[region]
Step 3: Create an endpoint
co = Client(region_name=region)
co.create_endpoint(
    arn=model_package_arn,
    endpoint_name="cohere-gpt-xlarge",
    instance_type="ml.p4d.24xlarge",
    n_instances=1,
)
# You will get "---------!" as the output. This is expected.
Step 4: Run inference on the endpoint
prompt = "Write a creative product description for a wireless headphone product named the CO-1T"
response = co.generate(prompt=prompt, max_tokens=100, temperature=0.9)
print(response.generations[0].text)
SAMPLE RESPONSE:
The CO-1T is a sleek and stylish wireless headphone that is perfect for on-the-go listening. With a comfortable and secure fit, these headphones are perfect for all-day wear. The wireless design allows for easy movement and convenience, while the crisp sound quality ensures that you can enjoy your favorite tunes without any distractions. The CO-1T is also equipped with a noise-canceling microphone, so you can take calls and texts without any interference.
Step 5: Delete the endpoint
Note: You can see all existing endpoints by going to SageMaker -> Inference -> Endpoints in the AWS console.
co.delete_endpoint()
co.close()
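Besides the console view mentioned above, endpoints can also be listed programmatically with the boto3 SageMaker client's list_endpoints API. A minimal sketch follows; the `cohere-` name filter is an assumption based on the endpoint name used earlier in this post.

```python
def list_cohere_endpoints(sm_client=None):
    """Return the names of SageMaker endpoints whose names contain 'cohere-'.

    By default this creates a real SageMaker client (requires AWS
    credentials); any object with the same list_endpoints() shape
    can be passed in instead.
    """
    if sm_client is None:
        import boto3  # only needed when no client is supplied
        sm_client = boto3.client("sagemaker")
    # NameContains filters endpoints by a substring of the endpoint name
    endpoints = sm_client.list_endpoints(NameContains="cohere-")["Endpoints"]
    return [e["EndpointName"] for e in endpoints]

# Example (requires AWS credentials):
# print(list_cohere_endpoints())
```

This is handy for cleanup scripts that verify no billable endpoints were left running after an experiment.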
Final Thoughts
We believe that making our models available through Amazon SageMaker will greatly simplify the process of deploying LLMs for our customers, and it will especially open the door to use cases that deal with sensitive data. And now, with the addition of the Command model available in Amazon SageMaker Studio via Amazon SageMaker JumpStart, building applications just got much easier. We are excited to see how you will use our models, deployed through Amazon SageMaker, in your new and innovative projects.
Get started with the Command model via Amazon SageMaker Studio now.