Today, we're announcing the general availability of an additional 18 fully managed open weight models in Amazon Bedrock from Google, MiniMax AI, Mistral AI, Moonshot AI, NVIDIA, OpenAI, and Qwen, including the new Mistral Large 3 and Ministral 3 3B, 8B, and 14B models.
With this launch, Amazon Bedrock now offers nearly 100 serverless models, providing a broad and deep range of models from leading AI companies, so customers can choose the precise capabilities that best serve their unique needs. By closely monitoring both customer needs and technological advancements, we continually expand our curated selection of models to include promising new models alongside established industry favorites.
This ongoing expansion of high-performing and differentiated model offerings helps customers stay at the forefront of AI innovation. You can access these models on Amazon Bedrock through the unified API and evaluate, switch, and adopt new models without rewriting applications or changing infrastructure.
New Mistral AI models
These four Mistral AI models are now available first on Amazon Bedrock, each optimized for different performance and cost requirements:
- Mistral Large 3 – This open weight model is optimized for long-context, multimodal, and instruction reliability. It excels at long document understanding, agentic and tool use workflows, enterprise knowledge work, coding assistance, advanced workloads such as math and coding tasks, multilingual analysis and processing, and multimodal reasoning with vision.
- Ministral 3 3B – The smallest model in the Ministral 3 family is edge-optimized for single GPU deployment with strong language and vision capabilities. It shows robust performance in image captioning, text classification, real-time translation, data extraction, short content generation, and lightweight real-time applications on edge or low-resource devices.
- Ministral 3 8B – The best-in-class Ministral 3 model for text and vision is edge-optimized for single GPU deployment with high performance and minimal footprint. This model is ideal for chat interfaces in constrained environments, image and document description and understanding, specialized agentic use cases, and balanced performance for local or embedded systems.
- Ministral 3 14B – The most capable Ministral 3 model delivers state-of-the-art text and vision performance optimized for single GPU deployment. It supports advanced local agentic use cases and private AI deployments where advanced capabilities meet practical hardware constraints.
More open weight model options
You can use these open weight models for a wide range of use cases across industries:
| Model provider | Model name | Description | Use cases |
| --- | --- | --- | --- |
| Google | Gemma 3 4B | Efficient text and image model that runs locally on laptops. Multilingual support for on-device AI applications. | On-device AI for mobile and edge applications, privacy-sensitive local inference, multilingual chat assistants, image captioning and description, and lightweight content generation. |
| | Gemma 3 12B | Balanced text and image model for workstations. Multi-language understanding with local deployment for privacy-sensitive applications. | Workstation-based AI applications; local deployment for enterprises; multilingual document processing, image analysis and Q&A; and privacy-compliant AI assistants. |
| | Gemma 3 27B | Powerful text and image model for enterprise applications. Multi-language support with local deployment for privacy and control. | Enterprise local deployment, high-performance multimodal applications, advanced image understanding, multilingual customer service, and data-sensitive AI workflows. |
| Moonshot AI | Kimi K2 Thinking | Deep reasoning model that thinks while using tools. Handles research, coding, and complex workflows requiring hundreds of sequential actions. | Complex coding projects requiring planning, multistep workflows, data analysis and computation, and long-form content creation with research. |
| MiniMax AI | MiniMax M2 | Built for coding agents and automation. Excels at multi-file edits, terminal operations, and executing long tool-calling chains efficiently. | Coding agents and integrated development environment (IDE) integration, multi-file code editing, terminal automation and DevOps, long-chain tool orchestration, and agentic software development. |
| Mistral AI | Magistral Small 1.2 | Excels at math, coding, multilingual tasks, and multimodal reasoning with vision capabilities for efficient local deployment. | Math and coding tasks, multilingual analysis and processing, and multimodal reasoning with vision. |
| | Voxtral Mini 1.0 | Advanced audio understanding model with transcription, multilingual support, Q&A, and summarization. | Voice-controlled applications, fast speech-to-text conversion, and offline voice assistants. |
| | Voxtral Small 1.0 | Features state-of-the-art audio input with best-in-class text performance; excels at speech transcription, translation, and understanding. | Enterprise speech transcription, multilingual customer service, and audio content summarization. |
| NVIDIA | NVIDIA Nemotron Nano 2 9B | High-efficiency LLM with a hybrid Transformer-Mamba design, excelling in reasoning and agentic tasks. | Reasoning, tool calling, math, coding, and instruction following. |
| | NVIDIA Nemotron Nano 2 VL 12B | Advanced multimodal reasoning model for video understanding and document intelligence, powering Retrieval-Augmented Generation (RAG) and multimodal agentic applications. | Multi-image and video understanding, visual Q&A, and summarization. |
| OpenAI | gpt-oss-safeguard-20b | Content safety model that applies your custom policies. Classifies harmful content with explanations for trust and safety workflows. | Content moderation and safety classification, custom policy enforcement, user-generated content filtering, trust and safety workflows, and automated content triage. |
| | gpt-oss-safeguard-120b | Larger content safety model for complex moderation. Applies custom policies with detailed reasoning for enterprise trust and safety teams. | Enterprise content moderation at scale, complex policy interpretation, multilayered safety classification, regulatory compliance checking, and high-stakes content review. |
| Qwen | Qwen3-Next-80B-A3B | Fast inference with hybrid attention for ultra-long documents. Optimized for RAG pipelines, tool use, and agentic workflows with quick responses. | RAG pipelines with long documents, agentic workflows with tool calling, code generation and software development, multi-turn conversations with extended context, and multilingual content generation. |
| | Qwen3-VL-235B-A22B | Understands images and video. Extracts text from documents, converts screenshots to working code, and automates clicking through interfaces. | Extracting text from images and PDFs, converting UI designs or screenshots to working code, automating clicks and navigation in applications, video analysis and understanding, and reading charts and diagrams. |
When implementing publicly available models, give careful consideration to data privacy requirements in your production environments, check for bias in output, and monitor your results for data protection, responsible AI, and model evaluation.
You can access the enterprise-grade security features of Amazon Bedrock and implement safeguards customized to your application requirements and responsible AI policies with Amazon Bedrock Guardrails. You can also evaluate and compare models to identify the optimal models for your use cases by using Amazon Bedrock model evaluation tools.
To get started, you can quickly test these models with a few prompts in the playground of the Amazon Bedrock console or use any AWS SDKs to access the Bedrock InvokeModel and Converse APIs. You can also use these models with any agentic framework that supports Amazon Bedrock and deploy the agents using Amazon Bedrock AgentCore and Strands Agents. To learn more, visit Code examples for Amazon Bedrock using AWS SDKs in the Amazon Bedrock User Guide.
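As a minimal sketch of that SDK path, here is a single-turn call through the Converse API with boto3. The model ID passed by the caller is a hypothetical placeholder; look up the exact identifier for your Region in the Amazon Bedrock console model catalog.

```python
# Minimal sketch of calling a newly added model through the Amazon Bedrock
# Converse API using boto3 (the AWS SDK for Python).

def build_messages(prompt: str) -> list[dict]:
    """Build the single-turn message list expected by the Converse API."""
    return [{"role": "user", "content": [{"text": prompt}]}]


def converse_once(prompt: str, model_id: str) -> str:
    """Send a prompt to a Bedrock model and return the first text block."""
    import boto3  # imported here so build_messages() is usable without AWS

    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=model_id,  # for example a Mistral Large 3 ID from the console
        messages=build_messages(prompt),
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]
```

With AWS credentials configured, you would call `converse_once("Summarize this contract.", model_id)` using whatever model ID the console lists for your chosen model and Region.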
Now available
Check the full Region list for availability and future updates of new models, or search for your model name in the AWS CloudFormation resources tab of AWS Services by Region. To learn more, check out the Amazon Bedrock product page and the Amazon Bedrock pricing page.
Give these models a try in the Amazon Bedrock console today and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts.
— Channy
Updated on December 4: Amazon Bedrock now supports the Responses API on new OpenAI API-compatible service endpoints for the GPT OSS 20B and 120B models. To learn more, visit Generate responses using OpenAI APIs.
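As a hedged sketch of that compatibility path, the openai SDK can point at a Bedrock endpoint instead of OpenAI's. Both the base URL pattern and the model ID below are assumptions; confirm the exact endpoint and identifier in the Bedrock documentation for your Region.

```python
# Hedged sketch: calling GPT OSS through Bedrock's OpenAI API-compatible
# endpoint. URL pattern and model ID are assumptions, not confirmed values.

def endpoint_url(region: str) -> str:
    """Assumed Bedrock OpenAI-compatible base URL for a given Region."""
    return f"https://bedrock-runtime.{region}.amazonaws.com/openai/v1"


def responses_params(model: str, prompt: str) -> dict:
    """Assemble keyword arguments for client.responses.create()."""
    return {"model": model, "input": prompt}


# Usage (requires the openai package and a Bedrock API key):
#   from openai import OpenAI
#   client = OpenAI(base_url=endpoint_url("us-west-2"), api_key="<Bedrock API key>")
#   resp = client.responses.create(**responses_params("openai.gpt-oss-20b-1:0", "Hello"))
#   print(resp.output_text)
```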


