Cohere has entered the competitive race of releasing LLMs with its newest offering – Command A. Its earlier model, Command R+, was launched in August 2024, followed by Command R7B in December 2024. Now, with Command A, Cohere has made a strong comeback, introducing a state-of-the-art generative language model tailored for enterprise use cases. Optimized for high performance with minimal hardware demands, Command A provides a cost-effective and efficient solution for businesses. It joins Cohere’s suite of models, renowned for their scalability and robust performance across a wide range of applications. Let’s learn more about it in this article!
What is Cohere Command A?
Command A is a powerful 111B-parameter model with a context length of 256K, allowing it to handle far longer documents than most leading models. It excels in areas such as tool use, retrieval-augmented generation (RAG), agents, and multilingual use cases. The model is designed to be highly efficient, requiring only two GPUs (A100s/H100s) to run, which is significantly fewer than other comparable models.

New Features:
- Web Search
- Python Interpreter
- API Integration
- Database Interaction
- Retrieval-Augmented Generation (RAG)
- Agents and Complex Reasoning
- Multilingual Support (23 languages)
- Enterprise-Grade Security
Performance and Benchmarks
Cohere Command A is a top large language model (LLM) that stands out, especially for businesses. Here’s why it’s special:
Great Performance, Less Power
Command A delivers strong results using less computing power. It has 111 billion parameters and a 256K context length but needs only two GPUs (such as A100s or H100s) to run. Compare that to DeepSeek V3, which needs eight GPUs for a 128K context length. This makes Command A powerful yet affordable for companies.
Super Fast
It’s 150% faster than Cohere’s earlier model, Command R+ (launched in August 2024). It can generate 156 tokens per second, beating models like OpenAI’s GPT-4o and DeepSeek V3 in speed and efficiency.
Built for Enterprise
Command A shines in the tasks companies need:
- Retrieval-Augmented Generation (RAG): It uses outside data efficiently, making it great for tasks like pulling financial records or answering questions from long files. Command A and GPT-4o were compared on enterprise RAG tasks, with trained annotators rating them blindly on fluency, accuracy, and usefulness.
- Tool Use and Agents: It works with tools like search engines or APIs and runs fast agents for demanding reasoning and research tasks.
- Multilingual: It supports 23 languages (including English, Spanish, Arabic, and Japanese), so it works for users worldwide and can translate too. In extensive human evaluations against DeepSeek V3, users strongly preferred Command A across most languages on a range of enterprise use cases.
Command A vs GPT-4o (Arabic)
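The RAG workflow described above, retrieve relevant context first and then ground the prompt in it, can be sketched in a few lines. This is a toy illustration with a naive keyword retriever and made-up documents; a real deployment would use a vector store and send the final prompt to Command A or another LLM.

```python
# Toy RAG sketch: retrieve supporting snippets, then build a grounded prompt.
DOCS = [
    "Q4 revenue grew 12% year over year, driven by enterprise contracts.",
    "The company opened a new office in Singapore in March.",
    "Gross margin improved to 68% after infrastructure cost cuts.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (stand-in for a vector search)."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt that grounds the model's answer in the retrieved snippets."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How did revenue change?", DOCS)
print(prompt)
```

The prompt built here would be sent to the model in place of the raw question, which is what lets RAG answer from financial records or long files the model never saw in training.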

Affordable
| Cohere API Pricing | Input Tokens | Output Tokens |
|---|---|---|
| Command A | $2.50 / 1M | $10.00 / 1M |
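Turning the table above into a quick cost estimate is straightforward. A minimal sketch; the token counts in the example are made up for illustration.

```python
# Estimate a Command A bill from the published rates:
# $2.50 per 1M input tokens, $10.00 per 1M output tokens.
INPUT_PER_M = 2.50
OUTPUT_PER_M = 10.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the API cost in USD for one request."""
    return (input_tokens / 1_000_000) * INPUT_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PER_M

# e.g. a long RAG request: 100K tokens in, 2K tokens out
cost = estimate_cost(100_000, 2_000)
print(f"${cost:.3f}")  # → $0.270
```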
How to Access Command A?
Cohere Platform
- Create an account on the Cohere Platform if you don’t already have one.
- Select the “Chat with Cohere” option.
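Beyond the chat UI, the model can also be reached programmatically through Cohere’s API. A minimal sketch of assembling the request; the model id `command-a-03-2025` is an assumption inferred from the Hugging Face checkpoint name, so check Cohere’s documentation before relying on it.

```python
# Build the chat payload for Cohere's v2 Chat endpoint.
# The model id below is an assumption; verify it against Cohere's docs.
def build_chat_request(prompt: str, model: str = "command-a-03-2025") -> dict:
    """Assemble the payload a chat call would send."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_chat_request("Summarize our Q3 revenue report.")
print(request["model"])

# With an API key set, this maps onto the official Python SDK (pip install cohere):
# import cohere
# co = cohere.ClientV2("<YOUR_API_KEY>")
# response = co.chat(**request)
```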

Hugging Face
Command A is also available for research use on Hugging Face. You can access it through the following steps:
Model Page: Visit the C4AI Command A model page on Hugging Face.
Installation: Install the transformers library from the source repository that includes the necessary changes for this model.

```shell
pip install transformers
```

Usage Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/c4ai-command-a-03-2025"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format the message with the c4ai-command-a-03-2025 chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
- Try It Out: You can also try Command A in the hosted Hugging Face Space before downloading the weights.
Note: Command A will soon be available on major cloud providers. Keep an eye on Cohere’s announcements for updates.
Let’s Try Command A
I’m going to try some of the features offered by Command A:
Task 1: Web Search
Prompt: “Plot the average 1-bedroom rental price in Feb 2025 for the 5 most expensive cities in India.“
Output:

Observation:
Command A performed decently but couldn’t find rental prices for New Delhi and Chennai, likely because it only checked 10 websites. For web searches, Google, Perplexity, or Kimi k1.5 are better options.
Also Read: OpenAI vs Google: Who Does Deep Research Better?
Task 2: Research
Prompt: “Plot the top 5 companies in the solar panel industry by revenue last year.”
Output:

Observation:
The research feature is average. It’s fine for quick answers, but for detailed reports or in-depth analysis, use OpenAI’s Deep Research option.
Task 3: Learning a New Topic
Prompt: “Plot sin(x) from -2pi to 2pi, using a step size of 0.05, and explain the math like I’m 5.“
Output:

The sine function, or sin(x), is a mathematical function that describes a smooth wave-like pattern. Imagine you’re on a swing, and as you swing back and forth, your height goes up and down in a smooth, repeating pattern. That’s similar to what the sine function looks like!
The sine function takes an input value, x, and gives you a value between -1 and 1. When you plot sin(x) on a graph, you get a wavy line that goes up and down, crossing the x-axis at regular intervals.
In the plot above, the x-axis represents the input values of x, ranging from -2π to 2π. The y-axis represents the corresponding sine values. The step size of 0.05 means that the graph is calculated and plotted at intervals of 0.05 along the x-axis, giving you a smooth and detailed representation of the sine wave.
Think of it like drawing a wavy line on a piece of paper, but instead of drawing it freehand, you’re using a ruler to make sure each point is exactly 0.05 units apart. The result is a beautiful, symmetrical wave that repeats itself as it goes from left to right.
Observation:
The output effectively meets both requirements: it correctly plots sin(x) from -2π to 2π with a 0.05 step size using accurate code, while explaining the concept in child-friendly language with a relatable swing analogy. The technical implementation and the simplified explanation work together seamlessly to fulfill the prompt’s dual goals.
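For comparison, the sampling step the prompt asks for can be reproduced in a few lines of plain Python. The matplotlib rendering is shown commented out so the sampling itself stays dependency-free.

```python
# Sample sin(x) from -2π to 2π at a step of 0.05, as in Task 3's prompt.
import math

step = 0.05
n = int((4 * math.pi) / step) + 1              # number of sample points
xs = [-2 * math.pi + i * step for i in range(n)]
ys = [math.sin(x) for x in xs]

print(len(xs), round(max(ys), 3), round(min(ys), 3))  # → 252 1.0 -1.0

# To draw it:
# import matplotlib.pyplot as plt
# plt.plot(xs, ys); plt.title("sin(x)"); plt.savefig("sine.png")
```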
Conclusion
Command A is definitely better than previous Cohere models, offering features that most other LLMs on the market offer today. Is it the best model out there? Probably not. Try it out yourself and let me know your experience in the comment section below!
If you want to learn more about how AI can fit into your work, join our Generative AI Way of Life program.
